Innocence over utilitarianism. Heightened moral standards for robots in rescue dilemmas
Result identifiers
Result code in IS VaVaI
RIV/67985955:_____/23:00572950 - isvavai.cz (https://www.isvavai.cz/riv?ss=detail&h=RIV%2F67985955%3A_____%2F23%3A00572950)
Alternative codes found
RIV/00216208:11310/23:10480010
Result on the web
https://doi.org/10.1002/ejsp.2936
DOI - Digital Object Identifier
<a href="http://dx.doi.org/10.1002/ejsp.2936" target="_blank" >10.1002/ejsp.2936</a>
Alternative languages
Result language
English
Title in original language
Innocence over utilitarianism. Heightened moral standards for robots in rescue dilemmas
Result description in original language
Research in moral psychology has found that robots, more than humans, are expected to make utilitarian decisions. This expectation is found specifically when contrasting utilitarian action to deontological inaction. In a series of eight experiments (total N = 3752), we compared judgments about robots’ and humans’ decisions in a rescue dilemma with no possibility of deontological inaction. A robot’s decision to rescue an innocent victim of an accident was judged more positively than the decision to rescue two people culpable for the accident (Studies 1–2b). This pattern repeated in a large-scale web survey (Study 3, N = ∼19,000) and reversed when all victims were equally culpable/innocent (Study 5). Differences in judgments about humans’ and robots’ decisions were largest for norm-violating decisions. In sum, robots are not always expected to make utilitarian decisions, and their decisions are judged differently from those of humans based on other moral standards as well.
Title in English
Innocence over utilitarianism. Heightened moral standards for robots in rescue dilemmas
Result description in English
Research in moral psychology has found that robots, more than humans, are expected to make utilitarian decisions. This expectation is found specifically when contrasting utilitarian action to deontological inaction. In a series of eight experiments (total N = 3752), we compared judgments about robots’ and humans’ decisions in a rescue dilemma with no possibility of deontological inaction. A robot’s decision to rescue an innocent victim of an accident was judged more positively than the decision to rescue two people culpable for the accident (Studies 1–2b). This pattern repeated in a large-scale web survey (Study 3, N = ∼19,000) and reversed when all victims were equally culpable/innocent (Study 5). Differences in judgments about humans’ and robots’ decisions were largest for norm-violating decisions. In sum, robots are not always expected to make utilitarian decisions, and their decisions are judged differently from those of humans based on other moral standards as well.
Classification
Type
Jimp - Article in a periodical indexed in the Web of Science database
CEP field
—
OECD FORD field
60301 - Philosophy, History and Philosophy of science and technology
Result linkages
Project
—
Linkages
I - Institutional support for the long-term conceptual development of a research organisation
Others
Year of implementation
2023
Data confidentiality code
S - Complete and true data on the project are not subject to protection under special legal regulations
Data specific to the result type
Periodical name
European Journal of Social Psychology
ISSN
0046-2772
e-ISSN
1099-0992
Volume of the periodical
53
Issue of the periodical within the volume
4
Country of the periodical publisher
US - United States of America
Number of pages
26
Pages from-to
779-804
UT WoS code of the article
000955799200001
EID of the result in the Scopus database
2-s2.0-85150948051