About

The Trustworthy Systems Laboratory (TSL) has been established to explore demonstrably trustworthy systems. The new lab will help to address the increasing global demand for design techniques and systems that are not only reliable but also secure and robust to failures.

TSL will focus on trustworthiness throughout the system stack, from hardware to software, addressing aspects such as safety and functional correctness, predictability, security and privacy, as well as traditional dependability, covering integrity, robustness, reliability and graceful degradation.

Confidence in a system’s trustworthiness can be gained in many different ways, including:

  • by design: systems that are simple are also understandable;
  • through transparency: systems that give us insight into how they make decisions, why they act in a certain way or how they use resources become understandable and perhaps even controllable; and
  • through verification and validation: rigorous proof, complemented by high-fidelity simulation and intelligent testing, can provide convincing evidence of a system’s trustworthiness.

The core members are experts in system design, analysis, verification and validation, machine learning and AI. All share an interest in using their expertise to address the intellectual challenges in Trustworthy Systems. Current research interests include the design of simple, understandable, reliable and efficient computing systems; explainable machine learning and AI; resource consumption analysis that enables time and energy transparency from hardware to software; and advanced V&V, including formal methods and intelligent testing.

The challenge of Trustworthy Systems reaches beyond engineering; it also requires engagement with experts from disciplines such as Psychology and Law. Combining our expertise will lead to novel solutions for gaining confidence in the trustworthiness of a system:

  • V&V is much easier for systems that are simple by design (we must understand how to design software and hardware for V&V);
  • resource consumption analysis is much easier for systems that are predictable (we must understand how to design systems that are predictable enough to analyse);
  • explanations could enable V&V of otherwise opaque machine learning (we must find ways to explain the outcomes of ML in order to reason about them); and
  • intelligent agents could mimic, during simulation, the intelligence that smart systems encounter in the real world (we must understand how to exploit AI to gain trust in a system).

In addition, understanding human behaviour and legislation can inform engineering decisions and ensure relevance and wide impact.