The social dilemma of autonomous vehicles
Codes of conduct in autonomous vehicles
eLetters is an online forum for ongoing peer review. Submission of eLetters is open to all. eLetters are not edited, proofread, or indexed. Please read our Terms of Service before submitting your own eLetter.
RE: The social dilemma of autonomous vehicles
The main problem with the autonomous vehicles (AVs) being created today is not ethical; it is a mistake. The mistake is that all these AVs are being created for the existing environment of pedestrians and human-operated vehicles, when in fact the environment must be changed to suit AVs.
This was clearly demonstrated in the "Global Intelligent Transportation System" (GITS) concept, http://global-its.org, presented at the 17th ITS World Congress in Busan: https://trid.trb.org/view.aspx?id=1137431
The GITS concept is based on the postulate that human-driven vehicles and ships must not be used in an environment where automatically driven vehicles and ships are used, and vice versa. This does not mean that human-driven vehicles and ships do not need the development of control systems that will give existing human-driven transport greater safety and efficiency; it means only that the two cannot be used in the same environment. As proof, consider the crash over Lake Constance (the Bodensee) some years ago, which I call the "Lake Constance limit": the collision of two airplanes there showed that human priority was the final factor, meaning there is a limit to the automation of human-operated systems.
AVs must have their own environment, in which they do not face any ethical choice. Their environment has to exclude the choice between two evils by excluding pedestrians from it.
This article shows that GITS is the right solution for future AV transportation.
RE: The social dilemma of autonomous vehicles
Dear readers, here is a very ethical choice for you to evaluate. Autonomous vehicles should have a setup wizard or a control panel where, on each trip, the driver can provide settings of his or her choice regarding the vehicle's decision making. These data should be saved in some "box", and in case of an accident responsibility can then be assigned. This is ethical on the driver's side, and ethical also for the companies in the relationship between autonomous vehicle and client.
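To make the proposal concrete, here is a minimal sketch of what such a per-trip "settings box" record could look like. Everything here is an assumption for illustration: the field names (trip_id, protect_occupants_first, recorded_at) and the helper log_settings are hypothetical, not part of any real AV system.

```python
import json
from dataclasses import dataclass, asdict

@dataclass
class TripEthicsSettings:
    """Hypothetical record of the driver's per-trip choice (illustrative only)."""
    trip_id: str
    protect_occupants_first: bool  # the driver's declared preference for this trip
    recorded_at: str               # ISO timestamp when the choice was made

def log_settings(settings: TripEthicsSettings) -> str:
    """Serialize the choice so it can be kept in a tamper-evident log."""
    return json.dumps(asdict(settings), sort_keys=True)

# The driver declares, before departing, that pedestrians take priority.
record = log_settings(TripEthicsSettings("trip-001", False, "2016-07-01T08:30:00Z"))
```

The point of serializing the choice with a timestamp is exactly the letter's point: after an accident, the saved record shows what the driver decided in advance, so responsibility can be assigned.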
As a free society, this is also ethical, just as things are today: anybody can drive a car, with numerous possible outcomes (even though they must pass through a process to obtain a driver's licence).
For society and the common good, however, it is not ethical for a driver to decide to save himself or herself rather than the lives of pedestrians who are totally innocent.
This decision is not for companies to take but for drivers, who then bear the responsibility; it gives them the advantage of thinking in advance rather than acting in a rush at a difficult moment with limited time to respond.
Note, however, that in any case pedestrians bear no responsibility and should be protected while they are simply walking. To go for a walk and never come home is a total tragedy, and the responsibility recorded in the driver's saved-data "box" cannot bring back the life of an innocent pedestrian.
On the other hand, someone who decides to buy a car, autonomous or not, and decides that in any possible accident an innocent person should die instead of the driver, perhaps should not be allowed to operate such vehicles. This concerns the common good of society.
Regarding the absolute dilemma "the social dilemma of autonomous vehicles":
A person decides of his or her own free will to buy a car, whether manual or autonomous, and the same person decides to operate that vehicle. A pedestrian, whether or not he or she owns a vehicle, is innocent at any time so long as he or she is walking and not causing an accident; the pedestrian did not decide that the driver of the oncoming vehicle should buy, borrow, or rent and operate it. That was the driver's free choice. Any decision by a company, a regulation, or a driver that the AV should protect the driver over pedestrians is therefore absolutely not logical. This applies to private vehicles, whose passengers should be made aware of it.
In public transportation vehicles, the balance of possible casualties should be considered, but even in that scenario it is preferable to avoid pedestrians and steer in another direction in the way that best minimizes casualties.
Better still, public transportation can avoid accidents altogether through dedicated infrastructure: exclusive rights-of-way for transit services can minimize cost, perhaps reach zero accidents, reduce pollution and traffic, and be more effective; and the time saved is development.
RE: "Social dilemma" based on human "self-reports"
The authors pose a "social dilemma" for autonomous vehicles (AVs) with scenarios they describe as "unlikely", while relying on surveys (static self-reports) to predict human preferences regarding these AV decisions. But we have known for decades that self-reported human preferences often misalign with human behavior ([4];[9];[6]). For example, as reported in Science [3], 90% of female partners self-reported compliance with a drug regimen to prevent transmission from their HIV-positive mates, suggesting the drug had failed. But before the drug was rejected, blood samples collected at the same time as the self-reports showed that actual compliance was only 30%, giving new life to the drug. "There was a profound discordance between what they told us … and what we measured," infectious disease specialist Jeanne Marrazzo said.
As two other examples: Nate Silver, the renowned political forecaster [2], declared a crisis in polling last year after failing to predict the outcome of five national and international contests. And Tetlock and Gardner [8] claimed that "forecasting ... is a skill that can be cultivated." Their webpage, titled "Good Judgment," displayed the first question for their hand-picked superforecasters: "Will a majority of voters in Britain's upcoming referendum elect to remain in the European Union?" Giving only a 23% chance that the British would leave the EU [5], these superforecasters failed to predict Brexit in 2016.
At our AAAI symposium at Stanford in March [7], we constrained self-reported surveys with dynamic interdependence to tackle these more likely ethical scenarios: When four AVs approach an intersection and one AV is "aware" that its human driver is impaired, should the AVs coordinate with each other to protect their human occupants? Should we as a society allow the robot pilot of a team to take control when the robot becomes "aware" of an impending suicide by the airliner's human copilot? Should a robot take command of a U.S. Navy submarine prepared for rapid ascent to prevent the submarine from hitting a Japanese tour boat?
Respectfully,
W.F. Lawless, Augusta, GA 30901
References:
1. Bonnefon, J.F., Shariff, A. & Rahwan, I. (2016), The social dilemma of autonomous vehicles, Science, 352(6293): 1573-1576.
2. Byers, D. (2015, 5/8), "Nate Silver: Polls are failing us", Politico, from http://www.politico.com/blogs/media/2015/05/nate-silver-polls-are-failin...
3. Cohen, J. (2013), Human Nature Sinks HIV Prevention Trial, Science, 351: 1160, from http://www.sciencemag.org/news/2013/03/human-nature-sinks-hiv-prevention...
4. Kelley, H.H. (1991), Lewin, situations, and interdependence, Journal of Social Issues 47: 211-233.
5. Kennedy, S. (2016, 6/25), "Superforecasters See 23% Brexit Chance as Economy Wins Out", Bloomberg, from http://www.bloomberg.com/news/articles/2016-05-18/superforecasters-see-2...
6. Lawless, W.F. (2016), "Preventing (another) Lubitz: The thermodynamics of teams and emotion", in Harald Atmanspacher, Thomas Filk and Emmanuel Pothos (Eds.), Quantum Interactions. LNCS 9535, Springer International Switzerland, pp. 207-215.
7. Mittu, R., Taylor, G., Sofge, D. & Lawless, W.F. (2016), Organizers: AI and the mitigation of human error: Anomalies, team metrics and thermodynamics. AAAI-2016 Symposium at Stanford; see https://www.aaai.org/Symposia/Spring/sss16symposia.php#ss01.
8. Tetlock, P.E. & Gardner, D. (2015), Superforecasting: The Art and Science of Prediction, Crown.
9. Zell, E. & Krizan, Z. (2014), Do People Have Insight Into Their Abilities? A Metasynthesis, Perspectives on Psychological Science 9(2): 111-125.
Black boxes are not safe at all.
Before discussing the social dilemma of autonomous vehicles (1), we must remove all black boxes from every system, for security reasons.
The OBD-II specification has been mandatory for all cars sold in the United States since 1996; the European Union has made EOBD mandatory for all gasoline (petrol) vehicles sold there since 2001.
Both the OBD-II and EOBD specifications rely on black boxes that car manufacturers cannot fully test, and neither specification provides any security. In other words, for more than fifteen years we have been driving naked cars while neglecting these security problems.
In the age of autonomous cars, we must reconsider such insecure mandatory specifications. Why have we been forced to live with black-box testing without understanding the details of the black box? We all know that black-box testing is not suitable for identifying hardware or software defects inside the black box.
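The contrast the letter draws can be made concrete: even though OBD-II implementations are closed, the diagnostic data format itself is publicly specified. A minimal sketch, decoding the standard SAE J1979 engine-RPM parameter (mode 01, PID 0x0C); the helper name decode_rpm is my own, and the adapter I/O needed to obtain a real reply is omitted.

```python
def decode_rpm(response: bytes) -> float:
    """Decode 'engine RPM' from a mode-01 PID-0C reply of the form 41 0C A B."""
    if response[0] != 0x41 or response[1] != 0x0C:
        raise ValueError("not a mode-01 PID-0C response")
    a, b = response[2], response[3]
    # Formula defined by SAE J1979: RPM = (256*A + B) / 4
    return (256 * a + b) / 4.0

# Example reply 41 0C 0F A0 -> (256*0x0F + 0xA0) / 4 = 1000.0 RPM
rpm = decode_rpm(bytes([0x41, 0x0C, 0x0F, 0xA0]))
```

That anyone can write such a decoder from the public specification, while the firmware producing the bytes remains untestable, is exactly the asymmetry the letter objects to.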
However, open source is not automatically more secure than closed source (2). The difference is that with open source code you can verify for yourself (or pay someone to verify for you) whether the code is secure (2). With closed-source programs you must take it on faith that a piece of code works properly; open source allows the code to be tested and verified to work properly (2). Open source also allows anyone to fix broken code, while closed source can be fixed only by the vendor (2).
The open source hardware/software movement has been pointing us in a good direction: getting rid of all black boxes and enhancing both security and incremental innovation.
References:
1. Jean-François Bonnefon et al., The social dilemma of autonomous vehicles, Science 352(6293): 1573-1576 (24 June 2016).
2. http://www.infoworld.com/article/2985242/linux/why-is-open-source-softwa...
Autonomous Vehicles:
The advent of autonomous cars raises a different issue: the enjoyment of driving. Most luxury-car sales are based on driving pleasure, which will vanish when the car drives itself. So the biggest profit makers of the automobile industry will suffer most, yet these are the industry's change leaders. It will be interesting to see how this dilemma works out.