Accession Number:
AD1157334
Title:
Human Decisions on Targeted and Non-Targeted Adversarial Samples
Descriptive Note:
[Technical Report, Research Paper]
Corporate Author:
CARNEGIE-MELLON UNIV PITTSBURGH PA
Report Date:
2017-01-01
Pagination or Media Count:
6
Abstract:
In a world that relies increasingly on large amounts of data and on powerful Machine Learning (ML) models, the veracity of decisions made by these systems is essential. Adversarial samples are inputs that have been perturbed to mislead the interpretation of the ML model, and they are a dangerous vulnerability. Our research takes a first step into what can be an important innovation in cognitive science: we analyzed human judgments and decisions when confronted with targeted (inputs constructed to make an ML model purposely misclassify an input as something else) and non-targeted (noisy, perturbed inputs that try to trick the ML model) adversarial samples. Our findings suggest that although ML models that produce non-targeted adversarial samples can be more efficient than those producing targeted samples, non-targeted samples result in more incorrect human classifications than targeted samples do. In other words, non-targeted samples interfered more with human perception and categorization decisions than targeted samples.
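The targeted/non-targeted distinction in the abstract can be sketched with a toy example. This is not code from the report: it uses an assumed linear softmax classifier and an FGSM-style signed-gradient step (all names, sizes, and the budget `eps` are illustrative) to show that a non-targeted sample ascends the loss of the true class, while a targeted sample descends the loss of an attacker-chosen class.

```python
import numpy as np

rng = np.random.default_rng(0)
W = rng.normal(size=(3, 4))   # toy linear classifier: 3 classes, 4 features
x = rng.normal(size=4)        # clean input
y_true, y_target = 0, 2       # true label; attacker's chosen target label
eps = 0.1                     # perturbation budget (illustrative)

def softmax(z):
    z = z - z.max()           # shift for numerical stability
    e = np.exp(z)
    return e / e.sum()

def grad_ce_wrt_x(x, label):
    # Gradient of the cross-entropy loss for `label` w.r.t. the input x
    p = softmax(W @ x)
    onehot = np.eye(3)[label]
    return W.T @ (p - onehot)

# Non-targeted: increase the loss of the TRUE class, so the model
# predicts anything other than y_true.
x_nontargeted = x + eps * np.sign(grad_ce_wrt_x(x, y_true))

# Targeted: decrease the loss of the attacker's TARGET class, so the
# model predicts y_target specifically.
x_targeted = x - eps * np.sign(grad_ce_wrt_x(x, y_target))

print("clean prediction:       ", np.argmax(W @ x))
print("non-targeted prediction:", np.argmax(W @ x_nontargeted))
print("targeted prediction:    ", np.argmax(W @ x_targeted))
```

Both perturbations stay within the same per-feature budget `eps`; they differ only in which loss gradient drives the step, which is the distinction the study's two stimulus types rest on.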
Distribution Statement:
[A, Approved For Public Release]