University of New South Wales, Kensington, Australia
The research team sought to answer the question of how humans can come to trust systems capable of reacting to unforeseen circumstances in mission- and safety-critical situations. Trust becomes a critical issue for humans working with robots, especially when the robots can autonomously learn and adapt to new situations: by definition, the behavior of such machines cannot be formally verified in advance. This project studied how trust changes during a mixed-initiative task under varying degrees of transparency of the adaptation process. The two main research contributions are (1) the design and development of a robotic cognitive architecture that enables the robot to adapt autonomously to a change in the task environment, and (2) modelling and evaluating the evolving human-robot trust relationship as the robot learns on the job.

The project formalized a cognitive hierarchy (CH) that bears similarity to the NIST 4D/RCS framework. It integrates symbolic and sub-symbolic representations in a modular framework whose nodes are sub-tasks that maintain their own belief state and generate behavior. The CH formalization has been extended to include context and learning/adaptation. The CH can succinctly represent complex goal-directed behavior and has broad application in robotics.

Trust experiments with humans on a mixed-initiative task involving a Baxter robot show a significant increase in trust when the robot explains its intention after adapting. The investigation also pointed to potential ethical issues if robots are given a choice over their degree of participation. Machine learning theory implies that verification of learned behavior is possible in principle, but intractable in practice. These results have been published in peer-reviewed journals throughout the project.
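The node structure described above can be sketched minimally as follows. This is an illustrative assumption, not the project's actual implementation: the class name `CHNode`, the belief-smoothing update, and the pick-and-place example are invented for exposition; the only features taken from the text are that nodes represent sub-tasks, hold their own belief state, and generate behavior (either directly or by delegating down the hierarchy).

```python
from dataclasses import dataclass, field
from typing import Callable, Dict, List, Optional


@dataclass
class CHNode:
    """One sub-task node in a cognitive hierarchy.

    Each node keeps a private belief state and maps that state to a
    behavior, either by running its own policy or by delegating to a
    child node lower in the hierarchy.
    """
    name: str
    belief: Dict[str, float] = field(default_factory=dict)
    children: List["CHNode"] = field(default_factory=list)
    policy: Optional[Callable[[Dict[str, float]], str]] = None

    def update_belief(self, observation: Dict[str, float]) -> None:
        # Fold new observations into the node's private belief state
        # with simple exponential smoothing (illustrative choice).
        for key, value in observation.items():
            prior = self.belief.get(key, value)
            self.belief[key] = 0.5 * prior + 0.5 * value

    def act(self) -> str:
        # A node with a policy emits a primitive action; otherwise it
        # delegates to its first child, walking down the hierarchy.
        if self.policy is not None:
            return self.policy(self.belief)
        if self.children:
            return self.children[0].act()
        return f"{self.name}: idle"


# Usage: a two-level hierarchy for a hypothetical pick-and-place sub-task.
grasp = CHNode(
    "grasp",
    policy=lambda b: "close_gripper" if b.get("object_near", 0.0) > 0.5 else "approach",
)
root = CHNode("pick_and_place", children=[grasp])

grasp.update_belief({"object_near": 1.0})
print(root.act())  # -> close_gripper
```

The design point this sketch illustrates is modularity: each sub-task node can be developed, inspected, and adapted independently because its belief state is local, which is what makes the hierarchy amenable to representing complex goal-directed behavior.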