IMPACT OF FEEDBACK ON ACCURACY OF CONFIDENCE LEVELS ASSIGNED BY INTERPRETERS
Technical research note
ARMY BEHAVIOR AND SYSTEMS RESEARCH LAB ARLINGTON VA
The study examined the utility of feedback, presented under simulated computerized conditions, in improving interpreters' ability to judge the value of their own identifications. Results supported previous findings that interpreters do not, as a rule, make dependable evaluations of their identifications. Confidence ratings made by interpreters in the high-performance subgroup were generally more accurate and complete than those made in the low-performance subgroup. Two feedback techniques in which interpreters received only data on previous rating performance (their own, or their own plus that of other classes) produced somewhat more accurate expressions of confidence than did the technique in which interpreters received their own corrected reports and the imagery they had previously interpreted. Confidence ratings reported by the interpreter group receiving no feedback were the least precise. It was concluded that interpreters' confidence ratings can be improved by practice in applying a knowledge-of-results frame of reference. The findings suggest, however, that more than two practice sessions are needed for an interpreter to reach an operationally useful level of accuracy in evaluating the information he provides.
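The accuracy of a confidence rating can be quantified by comparing the stated confidence with whether the identification was in fact correct. As a minimal illustrative sketch (not the scoring method used in the study), the Brier score measures this agreement; the data and function below are hypothetical:

```python
def brier_score(confidences, correct):
    """Mean squared difference between stated confidence (0-1) and
    the outcome (1 if the identification was correct, else 0).
    Lower scores indicate better-calibrated confidence ratings."""
    assert len(confidences) == len(correct)
    return sum((c - o) ** 2 for c, o in zip(confidences, correct)) / len(confidences)

# Interpreter A: confidence tracks correctness (well calibrated).
a = brier_score([0.9, 0.8, 0.2, 0.7], [1, 1, 0, 1])
# Interpreter B: uniformly high confidence regardless of outcome.
b = brier_score([0.9, 0.9, 0.9, 0.9], [1, 0, 0, 1])
# A's score is lower than B's, reflecting more dependable self-evaluation.
```

Under a knowledge-of-results procedure, an interpreter would see such a score after each practice session and adjust subsequent ratings accordingly.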
- Military Intelligence