Welcome to the Trustworthy Systems Lab Seminar Series! Here you can find all the upcoming and previous seminars in the series. The focus of the seminars is on promoting discussion and debate around the topic of trustworthiness. The format of the talks is 20-30 minutes of presentation followed by 30 minutes for discussion and questions. We usually hold these weekly on a Wednesday lunchtime at midday (12:00). This is an open group, so please share widely and get in contact if you wish to observe, join the debate or give a presentation. Please contact us to be added to the mailing list, and you will be sent an invitation to the talks each week along with an update on the next talks.

To attend the seminar, please join our Teams channel. There you will find a link to the meeting, which is publicly available for anyone to join.

15th June 2022 - Trustworthy Swarm Interaction


Robot swarms create interesting opportunities when it comes to storage and organisation solutions. The deployment of swarms could give small businesses access to efficient, automated storage without the need to purchase expensive, high-maintenance warehouses. To achieve this, users and operators of robot swarms will need to monitor the operations of swarms in a distributed way, without explicitly tracking every agent and without the need for significant infrastructure or set-up. Similarly, operators will need to be able to interact with and adjust swarm behaviour in an intuitive and simple manner. James will present his recent work exploring these ideas and how these concepts relate to trustworthiness.


James Wilson is a Research Associate at the University of Bristol, investigating trustworthiness within swarm robotics. He completed his PhD at the University of York, developing bio-inspired swarm behavioural controllers using virtual hormone systems. His recent work includes research on effective means of distributing user-relevant information across swarm agents and on effective means of interacting with swarms of robots for real-world use cases.

22nd June 2022 - IEEE 7001-2021: A new standard on Transparency of Autonomous Systems


IEEE standard 7001-2021 is a new standard on Transparency of Autonomous Systems [1]. Published on 4 March 2022, the standard sets out measurable, testable levels of transparency, so that autonomous systems can be objectively assessed and levels of compliance determined. One of an emerging set of new standards in robotics and AI [2], 7001 is, to the best of our knowledge, the world’s first technical standard on transparency. In this talk I will outline the thinking behind 7001, and its scope and structure [3]. I will introduce the five stakeholder groups addressed in 7001 and their different transparency needs, and illustrate how the standard can be applied in practice with worked examples. I will argue that transparency is an essential ingredient of responsible and trustworthy AI.


Alan Winfield is Professor of Robot Ethics at the University of the West of England, Bristol, visiting Professor at the University of York, and Associate Fellow of the Cambridge Centre for the Future of Intelligence. Winfield co-founded the Bristol Robotics Laboratory and his research is focussed on the science, engineering and ethics of intelligent robots. Winfield is an advocate for robot ethics; he sits on the executive of the IEEE Standards Association Global Initiative on Ethics of Autonomous and Intelligent Systems, and chairs Working Group P7001, drafting a new IEEE standard on Transparency of Autonomous Systems. Winfield also sits on the World Economic Forum’s Global AI Council. He has published over 250 works including Robotics: A Very Short Introduction (Oxford University Press, 2012).

References:

[1] "IEEE Standard for Transparency of Autonomous Systems," in IEEE Std 7001-2021, pp.1-54, 4 March 2022, doi: 10.1109/IEEESTD.2022.9726144.
[2] Winfield, A. (2019). Ethical Standards in Robotics and AI. Nat. Electron. 2, 46–48. doi:10.1038/s41928-019-0213-6
[3] Winfield AFT, Booth S, Dennis LA, Egawa T, Hastie H, Jacobs N, Muttram RI, Olszewska JI, Rajabiyazdi F, Theodorou A, Underwood MA, Wortham RH and Watson E (2021) IEEE P7001: A Proposed Standard on Transparency. Front. Robot. AI 8:665729. doi: 10.3389/frobt.2021.665729

 


Below are the previous talks. You can view slides from previous talks in the files section of our Teams channel.


7th June 2022 (1pm) - Rockets, Route-Analyzers, Rotorcraft, and Robonaut2: Intelligent, On-board Runtime Reasoning


Runtime Verification (RV) has become critical to the deployment of a wide range of systems, including aircraft, spacecraft, satellites, rovers, and robots, as well as the systems that control them, like air traffic control systems and space stations. The most useful, important, and safety-critical jobs will require these systems to operate both intelligently and autonomously, with the ability to sense and respond to both nominal and off-nominal conditions. It is essential that we enable reasoning sufficient to react to dynamic environments and detect critical failures on-board, in real time, to enable mitigation triggering. We are challenged by the constraints of real-life embedded operation that limit the system instrumentation, space, timing, power, weight, cost, and other operating conditions of on-board, runtime verification. While the research area of RV is vast, there is a dearth of RV tools that can operate within these constraints, and without violating, e.g., FAA or NASA rules for air and space flight certification.

The Realizable, Responsive, Unobtrusive Unit (R2U2) analyzes specifications that combine temporal logics with powerful reasoning to provide formal assurances during runtime, enabling self-assessment of critical systems. This presentation overviews recent algorithmic advances and the case studies they enabled, including embedding on-board the humanoid robot Robonaut2, a UTM (UAS Traffic Management) system, a CubeSat, and the NASA Lunar Gateway.
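As a flavour of the stream-based principle behind on-board monitors of this kind (a minimal generic sketch in C, assuming an invented property; this is not R2U2's specification language or observer algorithms), consider a constant-memory monitor for "every fault must have been preceded by a warning within the last three steps":

```c
#include <stdbool.h>
#include <stdio.h>

/* Online monitor for the past-time property:
 * "whenever `fault` holds, `warn` must have held within the last 3 steps".
 * State is constant-size: we only track how long ago `warn` last held. */
typedef struct {
    int since_warn;   /* steps since warn last held; a large value = never */
} monitor_t;

static void monitor_init(monitor_t *m) { m->since_warn = 1000000; }

/* Process one time step; returns false iff the property is violated now. */
static bool monitor_step(monitor_t *m, bool warn, bool fault)
{
    if (warn) m->since_warn = 0;
    else if (m->since_warn < 1000000) m->since_warn++;
    return !fault || m->since_warn <= 3;
}

int main(void)
{
    /* A small input trace: (warn, fault) per step. */
    bool warn [] = {false, true,  false, false, false, false};
    bool fault[] = {false, false, false, true,  false, true };

    monitor_t m;
    monitor_init(&m);
    for (int t = 0; t < 6; t++) {
        bool ok = monitor_step(&m, warn[t], fault[t]);
        printf("t=%d verdict=%s\n", t, ok ? "OK" : "VIOLATION");
    }
    return 0;
}
```

The point of the sketch is the constant-size state: it is this property that lets such monitors fit the instrumentation, memory, timing and power constraints described above.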

 

*** Please note this talk is at 1pm ***


Professor Kristin Yvonne Rozier heads the Laboratory for Temporal Logic in Aerospace Engineering at Iowa State University; previously she spent 14 years as a Research Scientist at NASA and three semesters as an Assistant Professor at the University of Cincinnati. She earned her Ph.D. from Rice University and B.S. and M.S. degrees from The College of William and Mary. Dr. Rozier's research focuses on automated techniques for the formal specification, validation, and verification of safety critical systems. Her primary research interests include: design-time checking of system logic and system requirements; runtime system health management; and safety and security analysis.

Her advances in computation for the aerospace domain earned her many awards including: the NSF CAREER Award; the NASA Early Career Faculty Award; American Helicopter Society's Howard Hughes Award; Women in Aerospace Inaugural Initiative-Inspiration-Impact Award; two NASA Group Achievement Awards; two NASA Superior Accomplishment Awards; Lockheed Martin Space Operations Lightning Award; AIAA's Intelligent Systems Distinguished Service Award. She holds an endowed position as Black & Veatch faculty fellow, is an Associate Fellow of AIAA, and is a Senior Member of IEEE, ACM, and SWE. Dr. Rozier has served on the NASA Formal Methods Symposium Steering Committee since working to found that conference in 2008.

1st June 2022 - The current state of verification, validation and safety of Autonomous Vehicles


Autonomous Vehicles (AVs) are still not widely used, though practical applications in some verticals (e.g. mining) are already starting. I will describe the main V&V challenges of AVs and why AV safety is considered such a hard problem. Then I will try to describe the current thinking on best practices, and what my company (Foretellix) is doing about it.


Yoav Hollander has been involved in the verification of complex systems for more years than he cares to remember. He invented the “e” verification language, and founded Verisity to commercialize the language and related VLSI verification methodology. Later, Yoav founded Foretellix, a company dedicated to verifying autonomous vehicles, where he plays the role of CTO.

25th May 2022 - Professor John A. McDermid on safety and cyber security of safety-critical and high-integrity systems

John's research interests are safety-critical systems, safety engineering, safety of autonomy and robotics, the impact of cyber security on safety, and software engineering for high-integrity systems.


Prof John McDermid received his undergraduate education at Cambridge University and his Ph.D. from the University of Birmingham, UK, in 1981. He worked for the UK Ministry of Defence as a research scientist and spent five years in the software industry before taking up the Chair in Software Engineering at the University of York in 1987. He was Head of the Department of Computer Science from 2006 to 2012 and from 2016 to 2017. He is author or editor of six books and has published over 370 papers. He was a Vice President of the BCS (British Computer Society) from 2000 to 2003 and a founding member of the United Kingdom Computing Research Committee (UK CRC). He is a member of the Defence Scientific Advisory Council and the Rolls-Royce Electrical and Controls Advisory Board. John was made an OBE, was elected a Fellow of the Royal Academy of Engineering, and is a Fellow of the British Computer Society.

4th May 2022 - Ethical AI in practice


Businesses deploying machine learning models into customer-facing platforms must have a rigorous process of model governance and risk analysis to ensure that ethical risks are taken into consideration and potential negative impacts on customers avoided. LV= has a mature data science team of over 50 and is leading the way in best practice for implementing machine learning into its systems and processes. In this talk, David will give an overview of the areas businesses should consider when it comes to AI ethics and governance, to ensure that ethics are baked into the design and development of machine learning models.


David Hopkinson is a Data Science Manager at LV= where a mature data science team has developed and deployed dozens of machine learning solutions across the business.  He also works closely with the University of Bristol Digital Futures Institute on novel research into AI ethics.  David has a PhD in Engineering from the University of Cambridge and has worked in data science for 4 years on a range of commercial challenges.

27th April 2022 - Automated Vehicles: Developments and Challenges


The Society of Motor Manufacturers and Traders (SMMT) is one of the largest and most influential trade associations in the UK. Its resources, reputation and unrivalled automotive data place it at the heart of the UK automotive industry. SMMT is the voice of the UK motor industry, supporting and promoting its members’ interests, at home and abroad, to government, stakeholders and the media. SMMT represents more than 800 automotive companies in the UK, providing them with a forum to voice their views on issues affecting the sector, helping to guide strategies and build positive relationships with government and regulatory authorities.


David Wong is Senior Technology and Innovation Manager at SMMT, the UK’s automotive industry body with more than 800 members including all major vehicle manufacturers, component suppliers, aftermarket businesses, technology and engineering firms, and mobility start-ups. David is SMMT’s lead on electric and fuel cell vehicles, connected and automated vehicles, autotech and future mobility innovation. In addition to automotive companies, David often works with stakeholders from technology, transport, telecoms, energy, legal, insurance, infrastructure and government on policy, strategy and market development issues. He also sits on the UK Automotive Council Technology Group and is a Non-Executive Director of Cenex. He was a member of the UK Department for Digital, Culture, Media and Sport’s Future Communications Challenge Group, which advised the UK Government on 5G strategy.

13th April 2022 - Collective transport of arbitrarily shaped objects using robot swarms


Out-of-the-box swarm solutions powering industrial logistics will need to adapt to the tasks at hand, coordinating in a distributed manner to transport objects of different sizes. This work designs and evaluates a collective transport strategy to move large and arbitrarily shaped objects in warehouse environments. The strategy uses a decentralised recruitment and decision-making process, ensuring that sufficient robots are in place for a coordinated, safe lift and transport of the object. Results show that robots with no prior knowledge of an object’s size and shape were able to transport it successfully in simulation. This work was recently published in the Springer Journal of Artificial Life and Robotics: https://link.springer.com/article/10.1007/s10015-022-00730-5
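As a rough, hypothetical sketch of the quorum idea behind decentralised recruitment (invented for illustration; the perimeter, spacing and robot counts are assumptions, and the published strategy is more sophisticated):

```c
#include <stdio.h>

/* Toy sketch of decentralised recruitment for collective transport:
 * the robot that discovers the object estimates how many carriers its
 * perimeter requires, recruits free peers one at a time, and the group
 * lifts only once that quorum is attached. */

int main(void)
{
    const double perimeter = 3.6;   /* metres, sensed locally (assumed)  */
    const double spacing   = 0.6;   /* metres of grip per robot (assumed) */
    const int swarm_free   = 10;    /* idle robots within range (assumed) */

    /* Required carriers: ceil(perimeter / spacing). */
    int required = (int)(perimeter / spacing);
    if (required * spacing < perimeter) required++;

    int attached = 1;               /* the discovering robot */
    while (attached < required && attached < swarm_free) {
        attached++;                 /* a recruited peer attaches */
        printf("robot %d attached (%d/%d)\n", attached, attached, required);
    }

    if (attached >= required)
        printf("quorum reached: coordinated lift with %d robots\n", attached);
    else
        printf("not enough robots: keep recruiting\n");
    return 0;
}
```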


Marius Jurt is a Research Engineer with a passion for cooperative multi-robot systems. He joined Toshiba BRIL in 2020, after completing an MSc in Robotics at the University of Bristol. With nearly eight years of experience in industry and applied research, Marius has a diverse background in automation and robotics, and in electrical engineering and information technology. His current research interest is in how mobile cyber-physical collectives can self-organise and work together most effectively, focusing on scalability, adaptability and reliability.

6th April 2022 - MISRA C


MISRA C is a set of software development guidelines for the C programming language, developed by the MISRA Consortium. Its aims are to facilitate code safety, security, portability and reliability in the context of embedded systems, specifically those programmed in ISO C / C90 / C99.
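To give a flavour of the guidelines (a small hedged example; the rule numbers below are recalled from MISRA C:2012 and paraphrased rather than quoted, so consult the standard for the authoritative text):

```c
#include <stdint.h>
#include <stdio.h>

/* A taste of the guideline style (paraphrased, not authoritative text):
 * - MISRA C:2012 Rule 15.6: bodies of if/loop statements shall be
 *   compound statements (always braced).
 * - Rule 17.7: the value returned by a non-void function shall be used
 *   (or explicitly discarded with a (void) cast).
 * - Directive 4.6 favours fixed-width types such as uint32_t.
 * Note: full compliance also restricts <stdio.h> itself (Rule 21.6);
 * printf appears here purely for demonstration. */

static uint32_t read_sensor(void)
{
    return 42u;   /* stand-in for a real device read */
}

int main(void)
{
    /* Non-compliant flavour a MISRA checker would flag:
     *     if (read_sensor() > 10u) puts("high");   -- unbraced body
     *     read_sensor();                           -- result discarded
     */

    /* Compliant flavour: value consumed, bodies always braced. */
    uint32_t level = read_sensor();
    if (level > 10u)
    {
        (void)printf("level high: %u\n", (unsigned)level);
    }
    else
    {
        (void)printf("level ok: %u\n", (unsigned)level);
    }
    return 0;
}
```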


Andrew Banks is a Technical Specialist at LDRA with more than 30 years' experience of high-integrity real-time/embedded software development. A Chartered Fellow of the British Computer Society, he graduated from the University of Lancaster in 1989, and has spent most of his career within the aerospace, defence and automotive sectors. Andrew is committed to standards development - he has been involved with MISRA since 2007 and has been Chairman of the MISRA C Working Group since early 2013; he is the Chairman of the BSI "Software Testing" Working Group; and an active participant in other BSI, ISO, IET and SCSC work, including the recent revision of ISO 26262.

30th March 2022 - Real-time Trajectory Planning for Autonomous Driving in Urban Areas


To achieve trustworthy autonomous driving on urban roads among other road users, not only safety but also smooth behaviour is essential. At the same time, planning must execute in real time on an embedded processor with limited computational capability. Drawing on the essence of dynamic programming, I developed an efficient optimisation-based trajectory planning method that ensures a safe distance to surrounding objects and smooth longitudinal and lateral behaviour in real time. Simulation and experimental results demonstrate its effectiveness in the real world. I will also introduce Nissan’s previous and current CAV activities in the UK and Japan.
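As a toy illustration of dynamic programming applied to speed planning (invented for exposition and not the method presented in the talk; the discretisation, cost weights and safety envelope are all assumptions), the sketch below picks a speed profile that tracks a desired speed smoothly while respecting a per-step safety limit:

```c
#include <stdio.h>
#include <limits.h>

/* Toy DP speed planner: speed is discretised into levels 0..4 over T
 * steps; vmax[t] encodes a per-step safety limit (e.g. derived from the
 * distance to surrounding objects); the cost trades tracking a desired
 * speed against abrupt speed changes (smoothness). */

#define T 8
#define V 5
#define BIG (INT_MAX / 4)

int main(void)
{
    const int vdes = 4;                           /* desired speed level */
    const int w = 3;                              /* smoothness weight   */
    const int vmax[T] = {4, 4, 2, 2, 2, 3, 4, 4}; /* safety envelope     */

    int cost[T][V], prev[T][V];

    /* t = 0: assume the vehicle starts at low speed (levels 0 or 1). */
    for (int v = 0; v < V; v++)
        cost[0][v] = (v <= vmax[0] && v <= 1) ? (v - vdes) * (v - vdes) : BIG;

    for (int t = 1; t < T; t++) {
        for (int v = 0; v < V; v++) {
            cost[t][v] = BIG;
            prev[t][v] = -1;
            if (v > vmax[t]) continue;            /* violates safety    */
            for (int u = v - 1; u <= v + 1; u++) { /* accel limit +-1   */
                if (u < 0 || u >= V || cost[t - 1][u] >= BIG) continue;
                int c = cost[t - 1][u]
                      + (v - vdes) * (v - vdes)   /* tracking cost      */
                      + w * (v - u) * (v - u);    /* smoothness cost    */
                if (c < cost[t][v]) { cost[t][v] = c; prev[t][v] = u; }
            }
        }
    }

    /* Backtrack the optimal speed profile. */
    int best = 0, plan[T];
    for (int v = 1; v < V; v++)
        if (cost[T - 1][v] < cost[T - 1][best]) best = v;
    for (int t = T - 1; t > 0; t--) { plan[t] = best; best = prev[t][best]; }
    plan[0] = best;

    printf("planned speed profile:");
    for (int t = 0; t < T; t++) printf(" %d", plan[t]);
    printf("\n");
    return 0;
}
```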


Akinobu Goto is a research engineer in autonomous driving on urban roads. He joined Nissan Motor Co., Ltd. in 2014, after completing an MSc in Control Engineering at Osaka University, Japan. With nearly seven years of experience in industry, he joined the Trustworthy Systems Laboratory in February 2022 as a visiting researcher for two years. His research interest is in how autonomous vehicles can gain the trust of other road users in complex urban situations that require negotiation and implicit communication.

23rd March 2022 - Efficient Evaluation of Perception Tasks in Simulation for Autonomous Vehicles


Characterising the erroneous behaviours of systems that contain deep learning models before deploying them into safety-critical scenarios is crucial, but presents many challenges. For tasks involving compute-intensive object detectors, large-scale testing of the model in simulation incurs a significant computational cost. I will present our recent results on an efficient low-fidelity simulation approach in the Carla driving simulator, which enables state-of-the-art LiDAR detectors to be tested at reduced computational expense using surrogate models. I will also outline the research on uncertainty for perception systems at Five, and its applications to the domain of autonomous vehicles. https://www.five.ai/


Dr Jonathan Sadeghi is a Research Engineer based in Bristol, one of the offices of Five AI, a startup focused on developing AI systems to help build driverless cars. He obtained his PhD in Engineering from the University of Liverpool (2020), focusing on uncertainty quantification and machine learning. His current research interests span the intersection of computer vision and probabilistic machine learning, with applications to autonomous vehicles. https://jcsadeghi.github.io/

9th June 2021 - The Inmos Transputer


I will talk about the formation of the microelectronics company Inmos and the development of the Inmos transputer, the first microcomputer designed for concurrent processing. The team at Inmos developed innovative design tools, the concurrent programming language occam and several versions of the transputer, including the floating-point transputer that was verified using formal methods. By 1990, the transputer user group included 5000 people in 40 countries.


David May is Emeritus Professor of Computer Science at Bristol University. He is known for numerous innovations in computer architecture including the Inmos transputer, the occam concurrent programming language, the ST Chameleon system-on-chip architecture and the Xmos multithreaded multicore processor. He is the author of over 100 papers and 50 patents. David was elected a Fellow of the Royal Society in 1990 for his contributions to computer architecture and parallel computing, and a Fellow of the Royal Academy of Engineering in 2010. His interests are in computer architecture; design and verification; mobile and wearable computing; robotics; and high-performance computing. He maintains active relationships with the technology industry and investors, and has acted as an advisor to several early-stage companies. Alongside this, he has advised on intellectual property issues and acted as an expert witness in litigation.

2nd June 2021 - Generic Pattern of Life and Behaviour Analysis


Despite being widely used within the field of intelligence generation, there is no formal definition for the concept of Pattern of Life (PoL). PoL analysis is applicable to the behaviours of human and non-human entities in a wide range of applications. Currently, humans generate PoL intelligence manually; this is a time-consuming task, often leading to humans being overloaded with data. This presentation reviews the field and defines a set of generic PoL concepts that provide a consistent set of terminology for use in PoL-based research. A generic PoL processing scheme is proposed, which can be the basis of systems that will automatically produce PoL intelligence from heterogeneous data, ready for further exploitation by the human operator.


Dr Rachel Craddock joined Thales UK in 1998, after working at the University of Reading as a Research Fellow in the Department of Cybernetics. With nearly 30 years of experience in Artificial Intelligence and machine learning, Rachel is one of Thales UK’s technical experts, specialising in applied Artificial Intelligence and machine learning, data fusion and Pattern of Life / behaviour analytics. She applies these specialisms to a variety of application areas, both civil and defence, including cyber security, crisis management, autonomous vehicles and command and control systems. Most of her work involves "the extraction of intelligence from sensor data", with the aims of providing meaningful information to operators and reducing operator overload. Rachel is also the neurodiversity lead for Thales UK, working on increasing inclusion of neurodivergent people into the workplace.

19th May 2021 - Doing good research in system safety


When you propose a method or tool for doing system safety work, it’s very hard to know if it makes things better or worse. After all, the outcomes we care about (serious accidents) are extremely rare, and the organisations that develop safety-critical systems tend to be large and complex. Given this uncertainty, it’s quite easy for a method or tool that looks and sounds great in theory to give no benefit in practice — or even to make things worse. To counteract this, good system safety research needs to be based on thorough empirical study of how safety is achieved by real organisations in the relevant domain. In this talk, I’ll go over some of the methods we plan to use at York to combat this uncertainty, and illustrate them with reference to a class of practices that we at York have advocated for a lot in the past — structured safety cases.


Dr Rob Alexander is a Lecturer in the Department of Computer Science at the University of York. He does a range of research in automated testing and system safety, with a particular interest in the safety and security validation of autonomous robots, but the thing that really gets him agitated is the rigorous study of system safety practices as they are carried out in the real world.

12th May 2021 - Testing of Autonomous Software Systems


Industrial collaborative robots are increasingly complex systems that embed ever more self-decision and autonomous planning capabilities. For that reason, it is crucial to thoroughly validate their software with appropriate software testing techniques. As it has become very difficult to predict their expected behaviours exactly and accurately in all situations, several Artificial Intelligence methods are being explored to facilitate the selection and scheduling of test cases and to predict expected test results. My talk will review some of these methods and how they are deployed to test robotic software systems.


Arnaud Gotlieb, chief research scientist at Simula Research Laboratory in Norway, has worked on the application of Artificial Intelligence to the validation of software-intensive systems and cyber-physical systems, including industrial robotics and autonomous systems. He completed his PhD on automatic test data generation using constraint logic programming in 2000 at the University of Nice-Sophia Antipolis and was habilitated (HDR) in December 2011 by the University of Rennes, France. Dr. Gotlieb has co-authored more than 120 publications in Artificial Intelligence and Software Engineering and developed several tools for testing safety-critical systems. He was the scientific coordinator of the French ANR-CAVERN project (2008-2011) for Inria, dedicated to the verification of software systems with abstraction-based methods, and he led the research-based innovation centre Certus, dedicated to software validation and verification (2011-2019), at Simula. He was awarded the prestigious RCN FRINATEK grant for the T-LARGO project on testing learning robots (2018-2022). He leads the industrial pilot experiments of the H2020 AI4EU project (2019-2021), dedicated to the creation of the European AI-on-demand platform. Dr. Gotlieb has served on many programme committees, including IJCAI, AAAI, CP, ICSE-SEIP, ICST and ISSRE, and co-chaired the scientific program of QSIC 2013, the SEIP track of ICSE 2014, and the “Testing and Verification” track of CP from 2016 to 2019. He co-chaired the first IEEE Artificial Intelligence Testing Conference in 2019 and is an associate editor of the Wiley Software Testing, Verification and Reliability journal. In 2021, he co-created RESIST, the first Inria-Simula associate team, dedicated to the development of resilient software systems.

5th May 2021 - Robot Accident Investigation: a case study in Responsible Robotics


Robot accidents are inevitable. Although rare, they have been happening since assembly-line robots were first introduced in the 1960s. But a new generation of social robots are now becoming commonplace. Unlike industrial robots, which are deployed in safety cages, social robots are designed to operate in human environments and interact closely with humans; the likelihood of robot accidents is therefore much greater for social robots than industrial robots. In this talk I will outline a draft framework for social robot accident investigation; a framework that proposes both the technology [1] and processes [2] that would allow social robot accidents to be investigated. I will position accident investigation within the practice of responsible robotics, and argue that social robotics without accident investigation would be no less irresponsible than aviation without air accident investigation.


Alan Winfield is Professor of Robot Ethics at the University of the West of England, Bristol, visiting Professor at the University of York, and Associate Fellow of the Cambridge Centre for the Future of Intelligence. Winfield co-founded the Bristol Robotics Laboratory and his research is focussed on the science, engineering and ethics of intelligent robots. Winfield is an advocate for robot ethics; he sits on the executive of the IEEE Standards Association Global Initiative on Ethics of Autonomous and Intelligent Systems, and chairs Working Group P7001, drafting a new IEEE standard on Transparency of Autonomous Systems. Winfield also sits on the World Economic Forum’s Global AI Council. He has published over 250 works including Robotics: A Very Short Introduction (Oxford University Press, 2012).

28th April 2021 - Identifying Heterogeneous Multi-Processing (HMP) configurations for performance and energy efficiency


Energy consumption is a critical component in designing applications and computing systems, for economic and environmental reasons. In modern battery-powered embedded systems, lower energy consumption may mean a better user experience, lower cost, or the capability to integrate more high-level features. Current architectures provide many control knobs to reduce an application's energy consumption, such as reducing the number of cores used or scaling down their frequency. However, choosing the right values for these knobs, tailored to an application's needs, is a complicated task in embedded systems, particularly on heterogeneous architectures. Many commercial devices in the embedded and mobile world, such as those based on ARM big.LITTLE, provide two different types of processing core, delivering a higher performance-energy ratio than homogeneous architectures. However, this implies exploring a larger configuration space (the number of cores and the frequency of each cluster), increasing the complexity of developing energy management solutions. In this context, this seminar will introduce a methodology to find configurations that yield optimal performance and energy consumption trade-offs for parallel applications on HMP systems. The method is intended to support the operating system in making scheduling decisions at runtime, according to the given application's performance and energy consumption requirements and the system it is running on.
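A minimal sketch of the underlying search problem, assuming an invented analytic performance/power model in place of the measured models used in the actual work: enumerate (big cores, LITTLE cores, frequency) configurations and keep only the Pareto-optimal (time, energy) trade-offs for the scheduler to choose among.

```c
#include <stdio.h>

/* Illustrative HMP configuration search with a toy model: big cores are
 * assumed 2x faster, speed scales with frequency, and power grows
 * superlinearly with frequency. Real methodologies would use measured
 * per-platform models instead. */

typedef struct { int big, little, freq; double time, energy; } cfg_t;

int main(void)
{
    cfg_t cfgs[4 * 4 * 3];
    int n = 0;

    for (int big = 0; big <= 3; big++)
      for (int little = 0; little <= 3; little++)
        for (int freq = 1; freq <= 3; freq++) {
            if (big + little == 0) continue;       /* need >= 1 core */
            double speed = (2.0 * big + 1.0 * little) * freq;
            double power = (3.0 * big + 1.0 * little) * freq * freq;
            double time  = 100.0 / speed;          /* toy workload   */
            cfgs[n++] = (cfg_t){ big, little, freq, time, power * time };
        }

    /* Keep only configurations not dominated in both time and energy. */
    for (int i = 0; i < n; i++) {
        int dominated = 0;
        for (int j = 0; j < n && !dominated; j++)
            dominated = (cfgs[j].time <= cfgs[i].time &&
                         cfgs[j].energy <= cfgs[i].energy &&
                         (cfgs[j].time < cfgs[i].time ||
                          cfgs[j].energy < cfgs[i].energy));
        if (!dominated)
            printf("big=%d little=%d freq=%d  time=%.1f energy=%.1f\n",
                   cfgs[i].big, cfgs[i].little, cfgs[i].freq,
                   cfgs[i].time, cfgs[i].energy);
    }
    return 0;
}
```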


Demetrios is a PhD student at the Federal University of Rio Grande do Norte (UFRN) in Brazil. He has been collaborating with the Trustworthy Systems Lab at Bristol since 2017 in the field of energy-efficient software, funded initially through a Royal Society Newton Advanced Fellowship. Since September 2019, he has been working as a visiting researcher at the University of Bristol, joining the TeamPlay team. Demetrios is investigating the performance and energy trade-offs of multi-threaded applications running on heterogeneous embedded devices. His research interests are high-performance and energy-efficient computing, parallel computing and their applications.

21st April 2021 - Assurance of Machine Learning in Autonomous Systems


Machine Learning (ML) is now used in a range of systems with results that are reported to exceed, under certain conditions, human performance. Many of these systems, in domains such as healthcare, automotive and manufacturing, exhibit high degrees of autonomy and are safety-critical. Establishing justified confidence in ML forms a core part of the safety case for these systems. In this talk, I will introduce a methodology for the Assurance of Machine Learning for use in Autonomous Systems (AMLAS). AMLAS comprises a set of safety case patterns and a process for (1) systematically integrating safety assurance into the development of ML components and (2) generating the evidence base for explicitly justifying the acceptable safety of these components when integrated into autonomous system applications. I will give examples from current case studies and experiments that are used to evaluate AMLAS.


Ibrahim Habli is a Senior Lecturer in Safety-Critical Systems at the University of York. He researches safety cases, software safety and autonomous systems. He teaches extensively on York's postgraduate programme in safety-critical systems engineering. He is also an academic lead on the Assuring Autonomy International Programme.

14th April 2021 - Dynamic Power and Energy Modelling in the TeamPlay Research Project


TeamPlay - Time, Energy and security Analysis for Multi/Many-core heterogeneous PLAtforms - is a three-year international research project funded by the EU Horizon 2020 programme. The primary objective is to develop a set of tools and techniques to analyse and optimise execution time, energy usage and security at the source code level. The UoB TSL group is a key partner in the project, leading the work package that involves the development of power and energy models for the target platforms and use cases. This talk will showcase the modelling methodologies developed by the team and includes a demonstration of the model generation software.

More information about the project is available at https://www.teamplay-h2020.eu/ 
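As a hedged illustration of one common ingredient of this kind of modelling (a guess at the general style, not the project's actual methodology or data), the sketch below fits a linear power model to measured samples by ordinary least squares and uses it to predict power at an unseen load:

```c
#include <stdio.h>

/* Minimal sketch of utilisation-based power modelling: fit
 * P = a + b * utilisation to measured samples (invented numbers here)
 * by ordinary least squares, then predict power for unseen loads. */

int main(void)
{
    /* (cpu utilisation, measured power in watts) training samples. */
    double u[] = {0.10, 0.25, 0.50, 0.75, 0.90};
    double p[] = {1.20, 1.60, 2.30, 3.00, 3.40};
    int n = 5;

    double su = 0, sp = 0, suu = 0, sup = 0;
    for (int i = 0; i < n; i++) {
        su += u[i]; sp += p[i];
        suu += u[i] * u[i]; sup += u[i] * p[i];
    }
    double b = (n * sup - su * sp) / (n * suu - su * su);  /* slope     */
    double a = (sp - b * su) / n;                          /* intercept */

    printf("model: P = %.3f + %.3f * utilisation\n", a, b);
    printf("predicted power at 60%% load: %.3f W\n", a + b * 0.60);
    return 0;
}
```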


Kris Nikov is a Senior Research Associate in SCEEM (School of Computer Science, Electrical and Electronic Engineering, and Engineering Maths) at the University of Bristol. His research interests include energy modelling and management as well as software and hardware performance optimisation with a focus on reconfigurable and embedded heterogeneous systems.

7th April 2021 - It's Good to Talk: On the Principles of Collective Learning in Distributed Autonomous Systems


In collective learning, a population of intelligent agents learns about the world by receiving direct evidence from the environment and through local interactions in which individuals share information and/or opinions. In this talk we discuss possible general principles for successful collective learning and illustrate them using agent-based simulations and robot experiments for a number of different types of learning problem.
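A minimal sketch of the flavour of problem studied (invented for illustration; the talk's own models and experiments are richer): agents estimate a hidden quantity by mixing direct noisy evidence with opinion pooling among random peers.

```c
#include <stdio.h>
#include <stdlib.h>

/* Toy collective learning: each round one agent either observes a noisy
 * binary sample of the hidden truth, or pools its belief with a random
 * peer by averaging. The population mean drifts towards the truth while
 * pooling reduces the spread of individual beliefs. */

#define N 50
#define ROUNDS 5000

int main(void)
{
    const double truth = 0.7;   /* hidden quantity to be learned (assumed) */
    double belief[N];

    srand(1);
    for (int i = 0; i < N; i++) belief[i] = 0.5;  /* uninformed prior */

    for (int r = 0; r < ROUNDS; r++) {
        int i = rand() % N;
        if (rand() % 2) {
            /* Direct evidence: a noisy binary observation of the world. */
            double obs = ((double)rand() / RAND_MAX < truth) ? 1.0 : 0.0;
            belief[i] = 0.9 * belief[i] + 0.1 * obs;
        } else {
            /* Local interaction: pool opinions with a random peer. */
            int j = rand() % N;
            double pooled = 0.5 * (belief[i] + belief[j]);
            belief[i] = belief[j] = pooled;
        }
    }

    double mean = 0;
    for (int i = 0; i < N; i++) mean += belief[i];
    printf("population mean belief: %.3f (ground truth %.2f)\n",
           mean / N, truth);
    return 0;
}
```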


Jonathan Lawry is Professor of Artificial Intelligence and Head of the Intelligent Systems Laboratory at the University of Bristol. His research focus is reasoning under uncertainty, including probabilistic models of vagueness (fuzziness) and imprecision, applied across application domains including environmental modelling, flood risk modelling, robotics, and multi-agent communications. In recent years he has been investigating the role of uncertainty representation in collective learning, where a population of agents learn from each other and also from direct evidence.

 

24th March 2021 - Programming Robots for Safety and Robustness


Robotic systems blend hardware and software in a holistic way that raises many crosscutting concerns (concurrency, time constraints, ...). For this reason, general-purpose languages often lead to a poor fit between language features and implementation requirements. In this talk I will argue for the use of Domain-Specific Languages (DSLs) to overcome this problem, enabling the programmer to quickly and precisely solve complex problems within the robotics domain, while opening up interesting opportunities for verification and validation. I will focus on the use of DSLs to abstract over safety and robustness, using industrial robots, mobile robots, and aerial robots as examples.


Ulrik Pagh Schultz is an Associate Professor at the Center for Unmanned Aerial Systems, University of Southern Denmark. His research interest is focused on the design and implementation of programming languages and other high-level software abstractions for aerial, mobile, and self-organizing robots with a focus on safety, reliability and robustness.

 

17th March 2021 - Energy-labeling web-based IT systems


Running ICT currently consumes 6% of the electricity worldwide. ICT has three main energy consumers: (1) user devices that connect to the Internet (42%), (2) networking equipment that transmits data between devices (27%), and (3) servers (and local networks) that keep data (31%). While hardware consumes energy, it can only be as efficient as the software running on it; thus, a reduction of the energy consumption directly attributable to the execution of software would reduce the overall energy consumption of ICT. Based on the fact that consumer-oriented energy labels have delivered substantial energy savings in other sectors, we hypothesise that providing consumer-oriented energy labels for software will promote more energy-efficient software. Over time, this is expected to deliver substantial energy savings. Our aim is to develop energy labels that enable website creators and consumers to opt for low-energy-consuming sites, thereby promoting energy-efficient software. In this talk we present the motivation, our approach and the challenges of delivering consumer-oriented energy labels for software.


Dr Maja H. Kirkeby is an assistant professor at Roskilde University. Maja's current and previous research relates to energy-aware computing, more specifically resource-consumption analyses based on user behaviour. Maja's research interests span both energy-aware computing and other program properties.

 

10th March 2021 - A voyage through recent advances in the modelling, verification and synthesis of stochastic systems


Stochastic modelling is a powerful tool for establishing performance, dependability and other key properties of systems during design, verification and runtime. However, the usefulness of this tool depends on the accuracy of the models being verified, on the efficiency of the verification, and on the ability to synthesise models corresponding to effective system architectures and configurations. This talk will describe how recent approaches to stochastic model learning, verification and synthesis address major challenges posed by these prerequisites, enabling the application of stochastic modelling to a broader range of systems, including autonomous systems.
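To make one of these verification tasks concrete, here is a self-contained sketch (a textbook fixed-point computation, not any particular tool's algorithm): computing the probability of eventually reaching a goal state in a small discrete-time Markov chain.

```c
#include <stdio.h>
#include <math.h>

/* Reachability in a DTMC by fixed-point iteration: x[s] converges to
 * Pr(eventually reach goal | start in s). Goal and fail are absorbing. */

#define S 4   /* states: 0=start, 1=retry, 2=goal, 3=fail */

int main(void)
{
    /* Transition matrix P[s][s'] (rows sum to 1). */
    double P[S][S] = {
        {0.0, 0.5, 0.4, 0.1},   /* start: retry 0.5, goal 0.4, fail 0.1 */
        {0.8, 0.0, 0.1, 0.1},   /* retry: back to start 0.8, ...        */
        {0.0, 0.0, 1.0, 0.0},   /* goal: absorbing                      */
        {0.0, 0.0, 0.0, 1.0},   /* fail: absorbing                      */
    };

    double x[S] = {0, 0, 1, 0};  /* seed: goal has probability 1 */

    for (int iter = 0; iter < 1000; iter++) {
        double nx[S];
        double delta = 0;
        for (int s = 0; s < S; s++) {
            if (s == 2 || s == 3) { nx[s] = x[s]; continue; } /* absorbing */
            nx[s] = 0;
            for (int t = 0; t < S; t++) nx[s] += P[s][t] * x[t];
            delta = fmax(delta, fabs(nx[s] - x[s]));
        }
        for (int s = 0; s < S; s++) x[s] = nx[s];
        if (delta < 1e-12) break;   /* converged */
    }

    printf("Pr(reach goal from start) = %.6f\n", x[0]);
    return 0;
}
```

For the chain above the fixed point yields Pr = 0.75 from the start state, which is the kind of quantitative verdict stochastic verification returns for properties such as "the probability of mission success".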


Radu Calinescu is Reader of Computer Science at the University of York, where he leads a research team developing mathematically based techniques and tools for the modelling, analysis, verification, and engineering of autonomous systems. He is the PI on the £3M UKRI Trustworthy Autonomous Systems Node in Resilience, and the Safety of AI Theme Lead on the £12M Assuring Autonomy International Programme. He is a Senior IEEE Member, a founding member of the working group developing the IEEE Guide for the Verification of Autonomous Systems, received a BCS Distinguished Dissertation Award for his University of Oxford PhD thesis, and has published over 120 research papers.

24th Feb 2021 - OmniCAV: A Simulation and Modelling System to enable “CAVs for All”


The CCAV-funded OmniCAV project is laying the foundations for the development of a comprehensive, robust and secure simulator, aimed at providing a certification tool for Connected Autonomous Vehicles (CAVs) that can be used by regulatory and accreditation bodies, insurers and manufacturers to accelerate the safe development of CAVs. To achieve this, OmniCAV is using highly detailed road maps, together with a powerful combination of traffic management, accident and CCTV data, to create a high-fidelity traffic and driving simulation environment to interact with the AV under test. Scenarios for testing are developed and randomised in a holistic way to avoid CAVs being trained to specific conditions. Critically, the simulator offers coverage of a representative element of the UK road network, encompassing rural, peri-urban and urban roads. The validity of the synthetic test environment compared to the real world is of particular importance, and OmniCAV will be tested and refined through an iterative approach involving real-world comparisons and working in conjunction with a CAV testbed. The presentation will give a top-level overview of progress in the project, which is due to conclude in summer 2021.


Mark Brackstone is Project Manager for the OmniCAV Project and has been R&D Project Manager and bid lead with Aimsun Ltd for 6 years. Mark leads Aimsun’s involvement in a range of Innovate UK and EU H2020 projects in the area of connected and autonomous vehicles that has included HumanDrive, CAPRI and Flourish. He has a background in simulation, ITS and driving behaviour based on both industrial experience and academic work, and has just achieved the milestone of having spent half his career in industry after spending the first half in academia.

17th Feb 2021 - Advanced stimulus for simulation-based design verification Pt.II


Simulation-based verification is an important and widely used technique in Design Verification, in which functional coverage is a key metric for measuring the progress of verification. However, with the increasing size of commercial digital designs, functional coverage closure is hard to achieve and requires tremendous domain expertise, making verification more and more time-consuming. This seminar will introduce several methodologies for applying machine learning algorithms in simulation-based design verification in order to accelerate the convergence of functional coverage.


Nyasha Masamba is a research student at the University of Bristol. After obtaining his BSc Computing degree, he spent six years in industry working in software engineering and analytics roles. Thereafter, Nyasha gained his MSc in Advanced Computing, during which he became involved in robotics verification. Nyasha is currently part of the Trustworthy Systems Laboratory. In his PhD he collaborates with Infineon Technologies, investigating the use of artificial intelligence to further automate coverage-driven test generation.

This is a 2-part talk over 2 consecutive weeks with different speakers.

10th Feb 2021 - Advanced stimulus for simulation-based design verification Pt.I


Simulation-based verification is an important and widely used technique in Design Verification, in which functional coverage is a key metric for measuring the progress of verification. However, with the increasing size of commercial digital designs, functional coverage closure is hard to achieve and requires tremendous domain expertise, making verification more and more time-consuming. This seminar will introduce several methodologies for applying machine learning algorithms in simulation-based design verification in order to accelerate the convergence of functional coverage.
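As a rough sketch of one such methodology, novelty-driven test selection (the general idea only, with invented feature vectors; not any specific industrial flow):

```c
#include <stdio.h>
#include <math.h>

/* Novelty-driven test selection: each candidate test is a feature
 * vector; tests far from everything already simulated are considered
 * novel and are simulated first, on the premise that they are more
 * likely to hit uncovered functional coverage. */

#define D 3          /* features per test */
#define NSEEN 3
#define NCAND 4

static double dist(const double *a, const double *b)
{
    double s = 0;
    for (int k = 0; k < D; k++) s += (a[k] - b[k]) * (a[k] - b[k]);
    return sqrt(s);
}

int main(void)
{
    double seen[NSEEN][D] = {{0,0,0}, {1,0,0}, {0,1,0}};     /* simulated */
    double cand[NCAND][D] = {{1,1,0}, {0,0,1}, {5,5,5}, {0.1,0,0}};

    int best = -1;
    double best_novelty = -1;
    for (int i = 0; i < NCAND; i++) {
        /* Novelty = distance to the nearest already-simulated test. */
        double nearest = 1e18;
        for (int j = 0; j < NSEEN; j++) {
            double d = dist(cand[i], seen[j]);
            if (d < nearest) nearest = d;
        }
        printf("candidate %d novelty = %.3f\n", i, nearest);
        if (nearest > best_novelty) { best_novelty = nearest; best = i; }
    }
    printf("simulate candidate %d next\n", best);
    return 0;
}
```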


Xuan Zheng has just started the first year of his PhD, in the field of automating Design Verification using Machine Learning. In late 2019, he worked with Infineon in Bristol, UK, where he helped demonstrate the effectiveness of a test selector based on Novelty Detection in accelerating the convergence of functional coverage.

This is a 2-part talk over 2 consecutive weeks with different speakers.

3rd Feb 2021 - Why do things go wrong (or right)?


In this talk, I will (briefly) introduce the theory of actual causality as defined by Halpern and Pearl. This theory turns out to be extremely useful in various areas of computer science due to a good match between the results it produces and our intuition. I will outline the evolution of the definitions of actual causality and the intuitive reasons for the many parameters in the definition, using examples from formal verification. I will also introduce the definition of responsibility, which quantifies the definition of causality. We will look in more detail at some applications of causality in the software engineering and deep learning domains. It is interesting to note that explanation of counter-examples using the definition of actual causality is implemented in an industrial tool and produces results that are usually consistent with users’ intuition; hence it is a popular and widely used feature of the tool. The talk is based on several papers, and is not limited to my own research. The talk is reasonably self-contained.
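As a tiny, heavily simplified taste of the difference between plain counterfactual ("but-for") reasoning and the Halpern-Pearl style of definition (one candidate cause, one contingency variable, minimality and other conditions omitted), using the classic overdetermined forest-fire example:

```c
#include <stdio.h>
#include <stdbool.h>

/* Toy rendering of the core of HP actual causality. Model:
 * fire = lightning OR match, with both set in the actual world.
 * But-for reasoning says neither is a cause (flipping one alone leaves
 * the fire burning); the HP definition recovers causality by allowing a
 * contingency that holds the other variable at a chosen value. */

static bool fire(bool lightning, bool match) { return lightning || match; }

int main(void)
{
    const bool actual_l = true, actual_m = true;

    /* But-for test: does flipping lightning alone change the outcome? */
    bool but_for = fire(actual_l, actual_m) != fire(!actual_l, actual_m);
    printf("but-for cause: %s\n", but_for ? "yes" : "no");

    /* HP-style test: is there a contingency value m for `match` so that
     * (a) under it, flipping lightning changes the outcome, and
     * (b) under it, the actual lightning value still yields the actual
     *     outcome?                                                     */
    bool hp_cause = false;
    for (int m = 0; m <= 1; m++) {
        bool flips    = fire(actual_l, m) != fire(!actual_l, m);
        bool faithful = fire(actual_l, m) == fire(actual_l, actual_m);
        if (flips && faithful) { hp_cause = true; break; }
    }
    printf("HP actual cause (with contingencies): %s\n",
           hp_cause ? "yes" : "no");
    return 0;
}
```

Flipping the lightning alone leaves the fire burning, so it is not a but-for cause; under the contingency that sets the match aside, the outcome does counterfactually depend on the lightning, matching the intuition that both events are causes.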


Dr Hana Chockler is a Reader in the Department of Informatics, King’s College London. Prior to joining KCL in 2013, Hana worked at IBM Research in the formal verification and software engineering departments. Hana’s research interests span a variety of topics, including formal verification of hardware and software, synthesis of hardware designs, learning, and applications of actual causality to deep learning.

You can view slides from previous talks in the files section of our Teams channel.
