Satisficing Trust in Human-Robot Teams

The big issue

The development of human-robot systems in many aspects of our lives, including security, defence, and law enforcement, promises enhanced safety and prosperity. It also raises many questions about the effectiveness, reliability, legality, and ethics of such systems. Is it possible to develop and measure the levels of trust needed in robots and humans, so that such teams behave in a trustworthy manner, both in terms of functionality and within legal, ethical, and operational constraints?

Our response

"Satisficing trust in human-robot teams" (HuRST) emerged from the EPSRC Robots for National Security Sandpit in May 2022. Here, ‘satisficing’ refers to achieving a sufficient trust level over time as the mission requirements and circumstances change, recognising that in practice, trust will not be a ‘black and white’ issue. The interdisciplinary team from Birmingham, UCL, Loughborough and Bristol, with expertise in robotics, machine learning, human factors, human-computer interaction, law and criminology – seeks to design humans into autonomous systems and enable human-robot teams to exceed the capabilities of humans or robots alone. HuRST will pursue these aims through system engineering, computer modelling, experiments and theory development.

We will implement metrics, appropriate for both humans and robots, to quantify their trust in their teammates, recognising that these metrics may vary over time. The HuRST team will explore effective configurations of human-robot systems through innovations in Allocation of Function and Co-Active Design. Concepts in satisficing trust will be explored in computer simulations, for example by considering suitable robot reward structures for Multi-Agent Reinforcement Learning and Safe Reinforcement Learning. The University of Bristol’s Fenswood Farm will host experiments with real-world human-robot testbed systems.
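As one hedged illustration of the reward-structure question, Safe Reinforcement Learning commonly treats safety as a cost kept separate from, or penalised within, the task reward. The sketch below folds a safety penalty and a trust shortfall into a single shaped reward; every name and weight here is hypothetical and does not describe HuRST's actual design.

```python
def shaped_reward(task_reward: float,
                  safety_violations: int,
                  trust_gap: float,
                  violation_penalty: float = 5.0,
                  trust_weight: float = 1.0) -> float:
    """Combine task progress, safety, and trust into one training signal.

    task_reward:       reward from the underlying search task
    safety_violations: count of constraint breaches this step
    trust_gap:         shortfall below the satisficing trust threshold
                       (0 when trust is sufficient)

    All parameters and weights are illustrative assumptions.
    """
    return (task_reward
            - violation_penalty * safety_violations
            - trust_weight * max(0.0, trust_gap))

# Example: good task progress, one safety breach, trust slightly below threshold.
print(shaped_reward(task_reward=10.0, safety_violations=1, trust_gap=0.1))
# -> 10.0 - 5.0 - 0.1 = 4.9
```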

The benefits

Our research will enable human-robot teams to conduct faster, safer, and more thorough and systematic searches of large areas than current practice allows, and will have broad implications for the development of trustworthy human-autonomous agent collectives.

What’s next

Six defined work packages will contribute to the research and the overall report. Bristol is leading on two. The first (Dr Milivojevic) defines the concepts of trust and accountability that inform the project: we seek to better understand the challenges and implications of autonomy and agency within the context of accountability, in order to design legally and ethically robust human-robot relationships. The second (Dr Hunt) will build and experiment with real human-robot teams, demonstrating working search functionality in buildings and the environments around them.

How is BDFI involved?

BDFI supported the research grant application. In addition, BDFI facilities such as the Neutral Lab and the Reality Emulator may be used in some phases of the project.

Researchers

Chris Baber (lead)
Mirco Musolesi
Patrick Waterson
Sanja Milivojevic
Edmund Hunt

Collaborators

University of Birmingham
University of Bristol
University College London
Loughborough University
