Accession Number:

AD1099109

Title:

Developing and Signaling Trust in Synthetic Autonomous Agents (SAAs)

Descriptive Note:

Technical Report, 26 Sep 2018 - 29 Sep 2019

Corporate Author:

Arizona State University Tempe United States

Personal Author(s):

Report Date:

2019-10-25

Pagination or Media Count:

10

Abstract:

Major Goals. Goal 1: The primary goal of this one-year research project was to draw on social psychological research to specify the morals and values of good drivers that may be available for programming SAAs to make decisions and behave with moral integrity. Goal 2: Our second goal was to begin testing the feasibility of programming value-governed parameters of SAAs in a newly developed four-wheel, skid-steer robotic car that resembles a 1:28-scale self-driving car, which we refer to as a Go-CHART. Goal 3: Our third goal was to identify the most efficacious signal of programmed moral integrity in order to garner appropriate trust from human operators and the general public.

Synthetic Autonomous Agents (SAAs), e.g., self-driving cars, unmanned search-and-rescue vehicles, and lethal autonomous weapons, can accomplish tasks too difficult or risky for humans, and we must not fail in preparing for this advancing technology. Yet opponents argue that SAAs should never be developed and that, instead, humans must maintain "meaningful human control" (Roff and Moyes, 2016) in every case, because SAAs may fall into enemy hands, become disconnected from their human counterparts, or initiate undesirable outcomes. One way to overcome this distrust of autonomous agents is to ensure that SAAs behave with moral integrity. Whether or not SAAs are deemed to be true moral agents, we contend they can be programmed to make decisions and to behave as responsible moral agents. To date, morality has generally been conceptualized as either deontological (following rules regardless of the outcome) or utilitarian (accomplishing a worthy goal). However, the two systems often conflict, require the programming of all possible rules or outcomes, and people rarely agree about which system is best (Awad et al., 2018; Conway and Gawronski, 2013). As one example, people agree that self-driving cars should never drive on sidewalks (deontological).

Subject Categories:

  • Cybernetics
  • Surface Transportation and Equipment
  • Psychology

Distribution Statement:

APPROVED FOR PUBLIC RELEASE