Lethal Autonomous Weapons: Re-Examining the Law and Ethics of Robotic Warfare

Edited by Jai Galliott, Duncan MacIntosh, and Jens David Ohlin

Print publication date: 2021

Print ISBN-13: 9780197546048

Published to Oxford Scholarship Online: April 2021

DOI: 10.1093/oso/9780197546048.001.0001


May Machines Take Lives to Save Lives? Human Perceptions of Autonomous Robots (with the Capacity to Kill)

Chapter:
6 May Machines Take Lives to Save Lives? Human Perceptions of Autonomous Robots (with the Capacity to Kill)
Source:
Lethal Autonomous Weapons
Author(s):

Matthias Scheutz

Bertram F. Malle

Publisher:
Oxford University Press
DOI: 10.1093/oso/9780197546048.003.0007

In the future, artificial agents are likely to make life-and-death decisions about humans. Ordinary people are the likely arbiters of whether these decisions are morally acceptable. We summarize research on how ordinary people evaluate artificial agents (compared to human agents) that make life-and-death decisions. The results suggest that many people are inclined to morally evaluate artificial agents’ decisions and, when asked how artificial and human agents should decide, impose the same norms on both. However, when confronted with how the agents did in fact decide, people judge artificial agents’ decisions differently from those of human agents. This difference is best explained by the justifications people grant to human agents, whose experience of the decision situation they can imagine, but withhold from artificial agents, whose experience they cannot imagine. If people fail to infer the decision processes and justifications of artificial agents, these agents will have to communicate such justifications explicitly, so that people can understand and accept their decisions.

Keywords: moral psychology, moral dilemmas, human-robot interaction, blame, moral justifications
