Although current social machine technology cannot exhibit the hallmarks of human morality or agency, popular culture representations and emerging technology make it increasingly important to examine human interlocutors’ perceptions of social machines (e.g., digital assistants, chatbots, robots) as moral agents. To facilitate such scholarship, the notion of perceived moral agency (PMA) is proposed and defined, and a metric is developed and validated through two studies: (1) a large-scale online survey featuring potential scale items and concurrent validation metrics for both machine and human targets, and (2) a scale validation study with robots presented as variably agentic and moral. The PMA metric is shown to be reliable and valid and to exhibit predictive utility.
Journal: Computers in Human Behavior
State: Published - 2019