The Intersection of Robust Intelligence and Trust: Hybrid Teams, Firms, and Systems
Abstract:
We are developing a physics of interdependent uncertainty relations to efficiently and effectively control interdependence in autonomous hybrid teams (i.e., arbitrary combinations of humans, robots, and machines), which cannot be done at present. Uncertainty is created in states of interdependence between social objects. At one extreme, interdependence reduces to independent agents and certainty, but with asocial, low-power solutions that generate little meaning or understanding in social contexts. At the opposite extreme, the strength of interdependence increases across a group, deindividuating its members until individual identity dissolves (e.g., in cults, gangs, and well-run teams), increasing power, efficiency, and meaning, but also the chances of maladaptation (e.g., tragic mistakes). We focus on how interdependence increases the robust intelligence of a group by increasing its autonomy while decreasing its entropy, at the cost of requiring external control to be indirect. For humans, teamwork is an unsolved theoretical problem; solving it should generalize to the effective computational control of hybrid teams, offering a path forward for the users of a team to trust it to operate safely in hostile environments. Present theories of interdependence, such as game theory or those in social science, are inadequate for formulating strategies to control teams. Alternative theories such as machine learning can control swarms with pattern formations, but not interdependence in operations such as multi-tasking. While these alternative theories cannot model teams, decision-making, and social conflict (e.g., hostile mergers, checks and balances) at the same time, ours can.