Moral Machines: Teaching Robots Right from Wrong

Wendell Wallach and Colin Allen

Print publication date: 2009

Print ISBN-13: 9780195374049

Published to Oxford Scholarship Online: January 2009

DOI: 10.1093/acprof:oso/9780195374049.001.0001



DOES HUMANITY WANT COMPUTERS MAKING MORAL DECISIONS?

Chapter: Chapter 3 (p. 37), DOES HUMANITY WANT COMPUTERS MAKING MORAL DECISIONS?
Source: Moral Machines
Author(s): Wendell Wallach, Colin Allen
Publisher: Oxford University Press
DOI: 10.1093/acprof:oso/9780195374049.003.0004

The chapter begins with an overview of the philosophy of technology to provide a context for the specific concerns raised by the prospect of artificial moral agents. Some concerns, such as whether artificial moral agents will lead humans to abdicate responsibility to machines, seem particularly pressing. Other concerns, such as the prospect of humans becoming literally enslaved to machines, seem highly speculative. The unsolved problem of technology risk assessment is how heavily to weigh catastrophic possibilities against the advantages provided by new technologies. When should the precautionary principle be invoked? Historically, philosophers of technology have served as external critics, but increasingly philosophers are engaged in engineering activism, bringing sensitivity to human values into the design of systems. The human tendency to anthropomorphize robotic dolls, robopets, household robots, companion robots, sex toys, and even military robots raises the question of whether these artifacts dehumanize people and substitute impoverished relationships for real human interactions.

Keywords:   artificial moral agents, anthropomorphism, companion robot, engineering activism, household robot, military robot, precautionary principle, philosophy of technology, risk assessment, sex toys
