
Codes of conduct in autonomous vehicles

When it becomes possible to program decision-making based on moral principles into machines, will self-interest or the public good predominate? In a series of surveys, Bonnefon et al. found that even though participants approve of autonomous vehicles that might sacrifice passengers to save others, respondents would prefer not to ride in such vehicles (see the Perspective by Greene). Respondents would also not approve regulations mandating self-sacrifice, and such regulations would make them less willing to buy an autonomous vehicle.
Science, this issue p. 1573; see also p. 1514

Abstract

Autonomous vehicles (AVs) should reduce traffic accidents, but they will sometimes have to choose between two evils, such as running over pedestrians or sacrificing themselves and their passenger to save the pedestrians. Defining the algorithms that will help AVs make these moral decisions is a formidable challenge. We found that participants in six Amazon Mechanical Turk studies approved of utilitarian AVs (that is, AVs that sacrifice their passengers for the greater good) and would like others to buy them, but they would themselves prefer to ride in AVs that protect their passengers at all costs. The study participants disapprove of enforcing utilitarian regulations for AVs and would be less willing to buy such an AV. Accordingly, regulating for utilitarian algorithms may paradoxically increase casualties by postponing the adoption of a safer technology.

Supplementary Material

Summary

Materials and Methods
Supplementary Text
Fig. S1
Tables S1 to S8
Data Files S1 to S6

Resources

File (aaf2654_data_files.zip)
File (bonnefon.sm.pdf)



Published In

Science
Volume 352, Issue 6293, 24 June 2016
Pages: 1573 - 1576

History

Received: 15 January 2016
Accepted: 21 April 2016
Published in print: 24 June 2016


Authors

Affiliations

Jean-François Bonnefon
Toulouse School of Economics, Institute for Advanced Study in Toulouse, Center for Research in Management, CNRS, University of Toulouse Capitole, Toulouse, France.
Azim Shariff*
Department of Psychology, University of Oregon, Eugene, OR 97403, USA.
Iyad Rahwan†
The Media Lab, Massachusetts Institute of Technology, Cambridge, MA 02139, USA.

Notes

*Present address: Department of Psychology and Social Behavior, 4201 Social and Behavioral Sciences Gateway, University of California, Irvine, Irvine, CA 92697, USA.
†Corresponding author. Email: [email protected]

