
Would You Do It?: Enacting Moral Dilemmas in Virtual Reality for Understanding Ethical Decision-Making

NIFORATOS, Evangelos, Adam PALMA, Roman GLUSZNY, Athanasios VOURVOPOULOS and Fotios LIAROKAPIS

Basic information

Original name

Would You Do It?: Enacting Moral Dilemmas in Virtual Reality for Understanding Ethical Decision-Making

Authors

NIFORATOS, Evangelos, Adam PALMA, Roman GLUSZNY, Athanasios VOURVOPOULOS and Fotios LIAROKAPIS (300 Greece, belonging to the institution)

Edition

Proceedings of the 2020 CHI Conference on Human Factors in Computing Systems, New York, NY, USA, p. 1-12, 12 pp., 2020

Publisher

ACM

Other information

Language

English

Type of outcome

Article in proceedings

Field of Study

10201 Computer sciences, information science, bioinformatics

Country of publisher

United States of America

Confidentiality degree

not subject to state or trade secrecy

Publication form

electronic version available online

RIV identification code

RIV/00216224:14330/20:00118590

Organization unit

Faculty of Informatics

ISBN

978-1-4503-6708-0

UT WoS

000696110400077

Keywords in English

decision-making; moral dilemmas; ethics; ethical AI; VR

Tags

International impact, Reviewed
Changed: 5/11/2021 15:05, RNDr. Pavel Šmerk, Ph.D.

Abstract

In the original

A moral dilemma is a decision-making paradox with no unambiguously acceptable or preferable option. This paper investigates whether and how the virtual enactment of two renowned moral dilemmas, the Trolley and the Mad Bomber, influences decision-making compared with mentally visualizing such situations. We conducted two user studies with two gender-balanced samples of 60 participants in total, comparing paper-based and virtual-reality (VR) conditions across 5 distinct scenarios for the Trolley dilemma and 4 storyline scenarios for the Mad Bomber dilemma. Our findings suggest that the VR enactment of moral dilemmas further fosters utilitarian decision-making, while amplifying biases such as sparing juveniles and seeking retribution. Ultimately, we theorize that the VR enactment of renowned moral dilemmas can yield ecologically valid data for training future Artificial Intelligence (AI) systems in ethical decision-making, and we elicit early design principles for the training of such systems.