Wednesday, November 28, 2018

When Machines Face Moral Dilemmas

Many believe that driverless cars, or autonomous vehicles (AVs), are just around the corner. Trial runs have been conducted, but several hurdles remain before they can meet the specifications set by SAE (the Society of Automotive Engineers). SAE classifies driving automation into six levels, beginning with Level 0, where the human driver does everything, and ending with Level 5, where the human is just a passenger. According to the NHTSA (National Highway Traffic Safety Administration, USA) website, a Level 5 vehicle is one in which "an Automated Driving System (ADS) on the vehicle can do all the driving in all circumstances. The human occupants are just passengers and need never be involved in driving".

"Just passengers"? -  well that is interesting; I foresee a future when Driving Licence becomes redundant. But that  brings up  several intricate questions too.  True, AVs will be designed with super safety features.  But after all these are mechanical-electronic contraptions and hence breakdowns and/or  accidents cannot be eliminated altogether. How will the  insurance policy be formulated?  Who should be held responsible? Surely passengers can't be guilty.  Should a mandatory  AMC replace the insurance cover? If so, would the manufacturer be liable?   The answers are not yet  in place. 

AVs will have to share road space with human-driven vehicles, pedestrians, pets, stray animals, etc.; hence they should be equipped with enough artificial intelligence (AI) to meet every possible eventuality. Imagine an AV negotiating a busy market street. An accident is imminent and unavoidable; whether the vehicle stops or swerves, lives will be lost. How should the algorithm for resolving this dilemma be written: to save the lives of the few passengers within, or the many pedestrians on the road? To save the elderly over the young, the rich rather than the poor, females and not males? It is indeed a frightening task to write algorithms that define machine ethics. To write a moral code for an AI system, it is necessary to know how humans make moral judgements. Almost every component one can think of (social background, age, gender, education level, prosperity, cultural traits, and so on) influences an individual's thought process and the ethical choices he or she makes. For the human mind this is a dynamic process, not one confined within the commands of an algorithm. But is there a pattern? Can we ultimately define a global moral standard?
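To make the difficulty concrete, here is a minimal, purely hypothetical sketch of what such a decision rule might look like in code. Nothing here comes from any real AV system or from the paper discussed below; the categories and weights are invented for illustration, and the whole point is that the single line assigning the weights is exactly the ethical question.

```python
# Hypothetical sketch of a crude "moral" decision rule for an
# unavoidable accident. All categories and weights are invented for
# illustration; no real AV uses anything this simplistic.

def outcome_cost(people, weights):
    """Sum the (invented) moral cost of losing each person in an outcome."""
    return sum(weights.get(p, 1.0) for p in people)

def choose_action(outcomes, weights):
    """Pick the action whose casualties carry the lowest total cost."""
    return min(outcomes, key=lambda action: outcome_cost(outcomes[action], weights))

# Two possible actions, each listing the lives that would be lost.
outcomes = {
    "stay_course": ["pedestrian", "pedestrian", "pedestrian"],
    "swerve":      ["passenger"],
}

# An arbitrary weighting: who counts for how much? This one line is
# the moral code the post is talking about.
weights = {"pedestrian": 1.0, "passenger": 1.0}

print(choose_action(outcomes, weights))  # "swerve": fewer equally weighted lives lost
```

With equal weights the rule is simply "save the many"; change the weights and the decision changes, which is why fixing them in advance is so fraught.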

That is what Awad et al. set out to map. They floated an online questionnaire in ten languages. There was only one question but nine types of situations; each situation offered just two disasters to choose between. If an accident and the subsequent loss of lives are unavoidable, should one try to save (a) humans or pets, (b) passengers or pedestrians, (c) the young or the elderly, (d) the abled or the disabled, (e) males or females, (f) the few or the many, (g) the rich or the poor? Their results, titled "The Moral Machine Experiment", appear in a recent issue of Nature. The online survey generated close to 4 million responses spread over 233 countries, territories, and societies. In spite of overlaps and crossovers, Awad et al. could arrange the collage into three clusters: the Western, the Eastern, and the Southern. The Western cluster included North America and most European countries except France; the Eastern cluster spanned the geographical east from Japan to the Middle East; and the Southern cluster consisted of the Latin American countries, France and its old colonies, Hungary, Slovakia, the Czech Republic, etc.
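A toy sketch may help show how millions of such binary choices become a measurable "preference". This is not the authors' actual statistical method (the paper uses conjoint analysis); it only illustrates the basic idea that the share of respondents choosing each side of a dilemma gives a crude preference strength. The example data are invented.

```python
# Toy aggregation of binary dilemma responses into preference shares.
# NOT the method of Awad et al. (who used conjoint analysis); just a
# sketch: count how often each side of a pair was chosen.

from collections import Counter

def preference_share(responses):
    """Fraction of respondents choosing each option in a binary dilemma."""
    counts = Counter(responses)
    total = sum(counts.values())
    return {option: n / total for option, n in counts.items()}

# Invented example: choices in the "few vs many" dilemma.
responses = ["many", "many", "many", "few", "many"]
print(preference_share(responses))  # {'many': 0.8, 'few': 0.2}
```

Computing such shares per country, and then comparing the resulting preference profiles, is the kind of analysis that lets clusters of similar societies emerge.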

Awad et al. conclude that, despite the diversity of responses, they could detect three strong preferences across the clusters: "the preference to save humans; the preference to save more lives; the preference to save young lives."

REFERENCES: 
1. The Moral Machine Experiment: Awad et al., Nature 563, 59-64 (2018)
2. The social dilemma of autonomous vehicles: Bonnefon et al., Science 352, 1573-1576 (2016)
3. Cultural differences in moral judgement and behaviour, across and within societies: Graham et al., Curr. Opin. Psychol. 8, 125-130 (2016)

