2020
Would You Do It?: Enacting Moral Dilemmas in Virtual Reality for Understanding Ethical Decision-Making
NIFORATOS, Evangelos; Adam PALMA; Roman GLUSZNY; Athanasios VOURVOPOULOS and Fotios LIAROKAPIS
Basic information
Original title
Would You Do It?: Enacting Moral Dilemmas in Virtual Reality for Understanding Ethical Decision-Making
Authors
NIFORATOS, Evangelos; Adam PALMA; Roman GLUSZNY; Athanasios VOURVOPOULOS and Fotios LIAROKAPIS
Edition
New York, NY, USA, Proceedings of the 2020 CHI Conference on Human Factors in Computing Systems, pp. 1-12, 12 pp., 2020
Publisher
ACM
Other information
Language
English
Type of outcome
Proceedings paper
Field of study
10201 Computer sciences, information science, bioinformatics
Country of publisher
United States
Confidentiality
is not subject to a state or trade secret
Publication form
electronic version "online"
RIV code
RIV/00216224:14330/20:00118590
Organizational unit
Faculty of Informatics
ISBN
978-1-4503-6708-0
UT WoS
000696110400077
EID Scopus
2-s2.0-85091313894
Keywords in English
decision-making; moral dilemmas; ethics; ethical AI; VR
Tags
International impact, Reviewed
Changed: 5 November 2021, 15:05, RNDr. Pavel Šmerk, Ph.D.
Abstract
In the original
A moral dilemma is a decision-making paradox with no unambiguously acceptable or preferable option. This paper investigates whether and how virtually enacting two renowned moral dilemmas, the Trolley and the Mad Bomber, influences decision-making compared with mentally visualizing such situations. We conducted two user studies with two gender-balanced samples totaling 60 participants, comparing paper-based and virtual-reality (VR) conditions while simulating five distinct scenarios for the Trolley dilemma and four storyline scenarios for the Mad Bomber dilemma. Our findings suggest that the VR enactment of moral dilemmas further fosters utilitarian decision-making while amplifying biases such as sparing juveniles and seeking retribution. Ultimately, we theorize that the VR enactment of renowned moral dilemmas can yield ecologically valid data for training future Artificial Intelligence (AI) systems in ethical decision-making, and we elicit early design principles for the training of such systems.