Good robots, bad robots: Morally valenced behavior effects on perceived mind, morality, and trust

Research output: Contribution to journal › Article › peer-review

Abstract

Both robots and humans can behave in ways that engender positive and negative evaluations of their behaviors and associated responsibility. However, extant scholarship on the link between agent evaluations and valenced behavior has generally treated moral behavior as a monolithic phenomenon and largely focused on moral deviations. In contrast, contemporary moral psychology increasingly considers moral judgments to unfold in relation to a number of moral foundations (care, fairness, authority, loyalty, purity, liberty) subject to both upholding and deviation. The present investigation seeks to discover whether social judgments of humans and robots emerge differently as a function of moral foundation-specific behaviors. This work is conducted in two studies: (1) an online survey in which agents deliver observed/mediated responses to moral dilemmas and (2) a smaller laboratory-based replication with agents delivering interactive/live responses. In each study, participants evaluate the go
Original language: English
Journal: International Journal of Social Robotics
State: Published - Sep 10, 2020

