Seminar series

Attend a Seminar

To attend a seminar, please join our Teams channel.

There you will find a link to the meeting, which is publicly available for anyone to join.

Welcome to the Trustworthy Systems Lab Seminar Series!

Here you can find all the upcoming and previous seminars in the series. The focus of the seminars is on promoting discussion and debate around the topic of trustworthiness.

The format of the talks is 20-30 minutes of presentation and 30 minutes for discussion and questions. We usually hold these weekly on a Wednesday lunchtime at midday (12:00). This is an open group, so please share widely and get in contact if you wish to observe, join the debate, or give a presentation.

Please contact us to be added to the mailing list; each week you will be sent an invitation to the talk along with an update on upcoming talks.

Details of upcoming talks and speakers can be found below.

30th April 2025, Making Robots Explainable, Safe and Trustworthy

Autonomous robot systems operating in real-world environments are required to understand their surroundings, assess their capabilities, and explain what they have seen, what they have done, what they are planning to do, and why. These explanations need to be tailored to different stakeholders, including end-users, developers, and regulators. In this talk, I will discuss how to design, develop, and evaluate fundamental AI technologies in simulation and real-world applications to make robots explainable, safe, and trustworthy, and how this can help to overcome critical barriers that impede the current deployment of autonomous systems in economically and socially important areas.


Prof. Lars Kunze

Lars Kunze is a Professor in Safety for Robotics and Autonomous Systems at the Bristol Robotics Laboratory (BRL) at UWE Bristol. Before joining BRL, he was a Departmental Lecturer (Assistant Professor) in Robotics in the Oxford Robotics Institute (ORI) and the Department of Engineering Science at the University of Oxford (where he is now a Visiting Fellow). He is also the Technical Lead at the Responsible Technology Institute (RTI), an international centre of excellence at Oxford University.
Professor Kunze's areas of expertise lie in the fields of robotics and artificial intelligence (AI). He has a background in Cognitive Science (BSc, 2006) and Computer Science (MSc, 2008), which he studied at the University of Osnabrück and partly at the University of Edinburgh. He received his PhD (Dr. rer. nat.) from the Technical University of Munich in 2014.
Within the ORI, he leads the Cognitive Robotics Group (CRG), which performs research into scene understanding, causal cognition, and explainability for autonomous systems, motivated by applications in complex, real-world environments. From 2019 to 2022 he led the AAIP SAX project, which demonstrated explainability for autonomous vehicles in challenging real-world driving scenarios. From 2022 to 2024, he led the UKRI Trustworthy Autonomous Systems RAILS project, which investigated approaches for the responsible deployment of AI technologies in autonomous systems, including AVs, drones, and service robots. More recent highlights include the development and deployment of a novel sensor backpack to understand how road infrastructure and road user behaviour can affect the safety of cyclists (RobotCycle).

21st May 2025, AI Security: Language Models, Data Encryption, Software Verification

Neural networks are slowly being integrated into safety-critical systems. Unfortunately, we still lack a full suite of algorithms and tools to guarantee their safety. In this talk, I will present a few open challenges in AI safety and security: consistent behaviour in language models, machine learning over encrypted data, model compression with error guarantees, and bug-free floating-point software. I will claim that formal methods are the key to addressing these challenges, as long as we can settle on an unambiguous specification.


Dr. Edoardo Manino

Dr. Edoardo Manino is a Lecturer (Assistant Professor) in AI Security at The University of Manchester. He has a lifelong interest in AI algorithms, from symbolic AI to machine learning. He spent most of his research career at Russell Group institutions in the UK, funded by EPSRC and the Alan Turing Institute. His background is in Bayesian machine learning, the topic of his PhD, which he received from the University of Southampton in 2020. In recent years, he has been interested in all variations of provably safe machine learning, from pen-and-paper proofs on tractable models to automated testing and verification of deep neural networks and large language models. He has a strong record of cross-disciplinary publications, spanning human computation, software engineering, hardware design, signal processing, network science, and game theory.

28th May 2025, Why did you do that? Use actual causality to explain the problems (or the lack of problems) in your life

In this talk, I will survey the existing and suggested connections between the theory of actual causality and a number of applied areas in computer science, focusing on the explainability of black-box systems such as neural networks.


Prof. Hana Chockler

Professor Hana Chockler is a Professor in the Department of Informatics at King’s College London. In 2021-22, in parallel with her faculty appointment, she worked as a Principal Scientist at the start-up causaLens, whose mission is to introduce causal reasoning to support human decision-making in a variety of domains. Prior to joining KCL in 2013, Prof. Chockler worked at IBM Research in the formal verification and software engineering departments. Her research interests span a wide variety of topics, including formal verification and synthesis, causal reasoning and its applications, and, most recently, computational questions related to the regulation of AI systems. Prof. Chockler is a co-lead of the CHAI Hub (Causality in Healthcare AI systems), working on the explainability of AI systems in healthcare.
