TSL Seminar Series Archive, 2021-2022 Academic Year

This page contains information on seminars that took place during the 2021-2022 academic year. Details of upcoming talks, and how to attend, can be found on the main page: Seminar Series.

Attend a Seminar

To attend a seminar, please join our Teams channel.

There you will find a link to the meeting, which is publicly available for anyone to join.

22nd June 2022 - IEEE 7001-2021: A new standard on Transparency of Autonomous Systems


IEEE standard 7001-2021 is a new standard on Transparency of Autonomous Systems [1]. Published on 4 March 2022, the standard sets out measurable, testable levels of transparency, so that autonomous systems can be objectively assessed and levels of compliance determined. One of an emerging set of new standards in robotics and AI [2], 7001 is, to the best of our knowledge, the world’s first technical standard on transparency. In this talk I will outline the thinking behind 7001, its scope and structure [3]. I will introduce the five stakeholder groups addressed in 7001 and their different transparency needs, and illustrate how the standard can be applied in practice, with worked examples. I will argue that transparency is an essential ingredient of responsible and trustworthy AI.


Alan Winfield is Professor of Robot Ethics at the University of the West of England, Bristol, visiting Professor at the University of York, and Associate Fellow of the Cambridge Centre for the Future of Intelligence. Winfield co-founded the Bristol Robotics Laboratory and his research is focussed on the science, engineering and ethics of intelligent robots. Winfield is an advocate for robot ethics; he sits on the executive of the IEEE Standards Association Global Initiative on Ethics of Autonomous and Intelligent Systems, and chairs Working Group P7001, drafting a new IEEE standard on Transparency of Autonomous Systems. Winfield also sits on the World Economic Forum’s Global AI Council. He has published over 250 works including Robotics: A Very Short Introduction (Oxford University Press, 2012).

References:

[1] "IEEE Standard for Transparency of Autonomous Systems," in IEEE Std 7001-2021, pp.1-54, 4 March 2022, doi: 10.1109/IEEESTD.2022.9726144.
[2] Winfield, A. (2019). Ethical Standards in Robotics and AI. Nat. Electron. 2, 46–48. doi:10.1038/s41928-019-0213-6
[3] Winfield AFT, Booth S, Dennis LA, Egawa T, Hastie H, Jacobs N, Muttram RI, Olszewska JI, Rajabiyazdi F, Theodorou A, Underwood MA, Wortham RH and Watson E (2021) IEEE P7001: A Proposed Standard on Transparency. Front. Robot. AI 8:665729. doi: 10.3389/frobt.2021.665729

 

15th June 2022 - Trustworthy Swarm Interaction


Robot swarms create interesting opportunities when it comes to storage and organisation solutions. The deployment of swarms can potentially provide small businesses access to efficient, automated storage without the need to purchase expensive, high-maintenance warehouses. To achieve this, users and operators of robot swarms will need to monitor the operations of swarms in a distributed way, without explicitly tracking every agent, and without the need for significant infrastructure or setup. Similarly, operators will need to be able to interact with and adjust swarm behaviour in an intuitive and simple manner. James will present his recent work exploring these ideas and how these concepts can relate to trustworthiness.
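
The idea of monitoring a swarm without tracking every agent can be illustrated with a simple gossip-averaging sketch (not the method from the talk): each robot keeps only a local observation and repeatedly averages it with a randomly chosen peer, so any single robot converges towards a swarm-wide estimate that an operator can query directly. The robot count, observation values and pairing scheme below are invented for illustration.

```c
/* Minimal gossip-averaging sketch: every robot holds a local value and
 * repeatedly averages with a random peer. All values are hypothetical. */
#include <stdio.h>
#include <stdlib.h>

#define NUM_ROBOTS 10
#define ROUNDS     200

int main(void) {
    /* Local observations, e.g. items handled by each robot (made-up numbers). */
    double local[NUM_ROBOTS] = {4, 0, 7, 2, 9, 1, 3, 8, 5, 6};

    srand(42);
    for (int r = 0; r < ROUNDS; r++) {
        int i = rand() % NUM_ROBOTS;
        int j = rand() % NUM_ROBOTS;
        if (i == j) continue;
        /* Pairwise gossip: both robots adopt the average of their values. */
        double avg = (local[i] + local[j]) / 2.0;
        local[i] = avg;
        local[j] = avg;
    }

    /* Any single robot now approximates the swarm-wide mean (4.5 here),
     * so an operator can query one agent instead of tracking all of them. */
    printf("Robot 0 estimate of swarm mean: %.3f\n", local[0]);
    return 0;
}
```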


James Wilson is a Research Associate at the University of Bristol, investigating trustworthiness within swarm robotics. He completed his PhD at the University of York, developing bio-inspired swarm behavioural controllers using virtual hormone systems. His recent work includes research on effective means of distributing user-relevant information across swarm agents and on effective ways of interacting with swarms of robots in real-world use cases.

7th June 2022 (1pm) - Rockets, Route-Analyzers, Rotorcraft, and Robonaut2: Intelligent, On-board Runtime Reasoning


Runtime Verification (RV) has become critical to the deployment of a wide range of systems, including aircraft, spacecraft, satellites, rovers, and robots, as well as the systems that control them, like air traffic control systems and space stations. The most useful, important, and safety-critical jobs will require these systems to operate both intelligently and autonomously, with the ability to sense and respond to both nominal and off-nominal conditions. It is essential that we enable reasoning sufficient to react to dynamic environments and detect critical failures on board, in real time, so that mitigations can be triggered. We are challenged by the constraints of real-life embedded operation that limit the system instrumentation, space, timing, power, weight, cost, and other operating conditions of on-board, runtime verification. While the research area of RV is vast, there is a dearth of RV tools that can operate within these constraints, and without violating, e.g., FAA or NASA rules for air and space flight certification.

The Realizable, Responsive, Unobtrusive Unit (R2U2) analyzes specifications that combine temporal logics with powerful reasoning to provide formal assurances during runtime, enabling self-assessment of critical systems. This presentation overviews recent algorithmic advances and the case studies they enabled, including embedding on-board the humanoid robot Robonaut2, a UTM (UAS Traffic Management) system, a CubeSat, and the NASA Lunar Gateway.
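
As a rough, self-contained illustration of this kind of on-board reasoning (not R2U2's actual algorithms or specification language), the sketch below monitors a simple bounded past-time property over a stream of Boolean samples: whenever a fault flag is raised, a mitigation flag must have been seen within the last three ticks. The signal names, the trace and the bound are assumptions made for the example; they are chosen to show how a violation can be flagged in real time with constant memory.

```c
/* Sketch of an on-board runtime monitor for a bounded past-time property:
 * "whenever FAULT is observed, MITIGATION must have been seen within the
 *  last 3 ticks". Signal names, trace, and bound are illustrative only. */
#include <stdio.h>
#include <limits.h>

#define BOUND 3  /* mitigation must have occurred at most 3 ticks ago */

typedef struct {
    int ticks_since_mitigation;  /* effectively "infinity" until first mitigation */
} monitor_t;

static void monitor_init(monitor_t *m) {
    m->ticks_since_mitigation = INT_MAX;
}

/* Feed one sample per tick; returns 1 if the property holds at this tick. */
static int monitor_step(monitor_t *m, int fault, int mitigation) {
    if (mitigation)
        m->ticks_since_mitigation = 0;
    else if (m->ticks_since_mitigation < INT_MAX)
        m->ticks_since_mitigation++;

    /* Property: fault implies mitigation was seen within BOUND ticks. */
    return !fault || m->ticks_since_mitigation <= BOUND;
}

int main(void) {
    /* A made-up trace: mitigation fires at tick 2, faults at ticks 3 and 8. */
    int fault[]      = {0, 0, 0, 1, 0, 0, 0, 0, 1, 0};
    int mitigation[] = {0, 0, 1, 0, 0, 0, 0, 0, 0, 0};
    int n = sizeof fault / sizeof fault[0];

    monitor_t m;
    monitor_init(&m);
    for (int t = 0; t < n; t++) {
        int ok = monitor_step(&m, fault[t], mitigation[t]);
        printf("tick %d: %s\n", t, ok ? "OK" : "VIOLATION -> trigger mitigation");
    }
    return 0;
}
```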

 

*** Please note this talk is at 1pm ***


Professor Kristin Yvonne Rozier heads the Laboratory for Temporal Logic in Aerospace Engineering at Iowa State University; previously she spent 14 years as a Research Scientist at NASA and three semesters as an Assistant Professor at the University of Cincinnati. She earned her Ph.D. from Rice University and B.S. and M.S. degrees from The College of William and Mary. Dr. Rozier's research focuses on automated techniques for the formal specification, validation, and verification of safety-critical systems. Her primary research interests include: design-time checking of system logic and system requirements; runtime system health management; and safety and security analysis.

Her advances in computation for the aerospace domain have earned her many awards, including the NSF CAREER Award, the NASA Early Career Faculty Award, the American Helicopter Society's Howard Hughes Award, the Women in Aerospace Inaugural Initiative-Inspiration-Impact Award, two NASA Group Achievement Awards, two NASA Superior Accomplishment Awards, the Lockheed Martin Space Operations Lightning Award, and the AIAA Intelligent Systems Distinguished Service Award. She holds an endowed position as a Black & Veatch Faculty Fellow, is an Associate Fellow of AIAA, and is a Senior Member of IEEE, ACM, and SWE. Dr. Rozier has served on the NASA Formal Methods Symposium Steering Committee since working to found that conference in 2008.

1st June 2022 - The current state of verification, validation and safety of Autonomous Vehicles


Autonomous Vehicles (AVs) are still not widely used, though practical applications in some verticals (e.g. mining) are already starting. I will describe the main V&V challenges of AVs, and why AV safety is considered such a hard problem. Then I will try to describe the current ideas about best practices, and what my company (Foretellix) is doing about it.


Yoav Hollander has been involved in the verification of complex systems for more years than he cares to remember. He invented the “e” verification language, and founded Verisity to commercialize the language and the related VLSI verification methodology. Later, Yoav founded Foretellix, a company dedicated to verifying autonomous vehicles, where he plays the role of CTO.

4th May 2022 - Ethical AI in practice


Businesses deploying machine learning models into customer-facing platforms must have a rigorous process of model governance and risk analysis to ensure that ethical risks are taken into consideration and potential negative impacts on customers are avoided. LV= have a mature data science team of over 50 and are leading the way in best practice for implementing machine learning in their systems and processes. In this talk, David will give an overview of the areas businesses should consider when it comes to AI ethics and governance, to ensure that ethics are baked into the design and development of machine learning models.


David Hopkinson is a Data Science Manager at LV= where a mature data science team has developed and deployed dozens of machine learning solutions across the business.  He also works closely with the University of Bristol Digital Futures Institute on novel research into AI ethics.  David has a PhD in Engineering from the University of Cambridge and has worked in data science for 4 years on a range of commercial challenges.

27th April 2022 - Automated Vehicles: Developments and Challenges


The Society of Motor Manufacturers and Traders (SMMT) is one of the largest and most influential trade associations in the UK. Its resources, reputation and unrivalled automotive data place it at the heart of the UK automotive industry. SMMT is the voice of the UK motor industry, supporting and promoting its members’ interests, at home and abroad, to government, stakeholders and the media. SMMT represents more than 800 automotive companies in the UK, providing them with a forum to voice their views on issues affecting the sector, helping to guide strategies and build positive relationships with government and regulatory authorities.


David Wong is Senior Technology and Innovation Manager at SMMT, the UK’s automotive industry body with more than 800 members including all major vehicle manufacturers, component suppliers, aftermarket businesses, technology and engineering firms, and mobility start-ups. David is SMMT’s lead on electric and fuel cell vehicles, connected and automated vehicles, autotech and future mobility innovation. In addition to automotive companies, David often works with stakeholders from technology, transport, telecoms, energy, legal, insurance, infrastructure and government on policy, strategy and market development issues. He also sits on the UK Automotive Council Technology Group and is a Non-Executive Director of Cenex. He was a member of the UK Department for Digital, Culture, Media and Sport’s Future Communications Challenge Group, which advised the UK Government on 5G strategy.

13th April 2022 - Collective transport of arbitrarily shaped objects using robot swarms


Out-of-the-box swarm solutions powering industrial logistics will need to adapt to the tasks at hand, coordinating in a distributed manner to transport objects of different sizes. This work designs and evaluates a collective transport strategy to move large and arbitrarily shaped objects in warehouse environments. The strategy uses a decentralised recruitment and decision-making process, ensuring that sufficient robots are in place for a coordinated, safe lift and transport of the object. Results show that robots with no prior knowledge of an object’s size and shape were able to transport it successfully in simulation. This work was recently published in the Springer journal Artificial Life and Robotics: https://link.springer.com/article/10.1007/s10015-022-00730-5
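
To give a flavour of a decentralised recruitment rule (this is a toy sketch, not the published controller), the fragment below lets each robot estimate from a broadcast perimeter how many carriers a safe lift needs, and commit to lifting only once enough peers have responded. The perimeter-per-robot ratio and the example object are assumptions for illustration.

```c
/* Threshold-based recruitment sketch: lift only once enough robots have
 * committed for the object's estimated perimeter. Values are illustrative. */
#include <stdio.h>

#define PERIMETER_PER_ROBOT 0.5  /* metres of object edge one robot can support */

/* Number of carriers needed for an object with the given perimeter (metres). */
static int robots_required(double perimeter_m) {
    int n = (int)(perimeter_m / PERIMETER_PER_ROBOT);
    if (perimeter_m > n * PERIMETER_PER_ROBOT) n++;  /* round up */
    return n;
}

/* Decision each recruited robot can evaluate locally from broadcast counts. */
static int ready_to_lift(int committed_robots, double perimeter_m) {
    return committed_robots >= robots_required(perimeter_m);
}

int main(void) {
    double perimeter = 3.2;  /* made-up object perimeter, metres */
    for (int committed = 1; committed <= 8; committed++) {
        printf("%d robots committed: %s\n", committed,
               ready_to_lift(committed, perimeter) ? "lift" : "keep recruiting");
    }
    return 0;
}
```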


Marius Jurt is a Research Engineer with a passion for cooperative multi-robot systems. He joined Toshiba BRIL in 2020, after completing an MSc in Robotics at the University of Bristol. With nearly 8 years of experience in industry and applied research, Marius has a diverse background in automation and robotics, and in electrical engineering and information technology. His current research interest is in how mobile cyber-physical collectives can self-organise and work together most effectively, focusing on scalability, adaptability, and reliability.

6th April 2022 - MISRA C


MISRA C is a set of software development guidelines for the C programming language, developed by the MISRA Consortium. Its aims are to facilitate code safety, security, portability and reliability in the context of embedded systems, specifically those programmed in ISO C / C90 / C99.
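
To give a flavour of the kind of construct the guidelines address (rule numbers and exact wording belong to the MISRA C documents themselves), the fragment below contrasts a loosely written switch, relying on implicit fallthrough and a non-Boolean condition, with a more explicit, MISRA-style alternative that terminates every case and provides a default path.

```c
/* Illustrative only: contrasts a discouraged pattern with a MISRA-style
 * alternative (explicit comparison, no fallthrough, a default clause). */
#include <stdint.h>
#include <stdio.h>

/* Discouraged style: non-Boolean tested directly, switch with implicit
 * fallthrough and no default path. */
static uint8_t classify_loose(uint8_t code) {
    uint8_t level = 0U;
    if (code) {                 /* non-Boolean used as a condition        */
        switch (code) {
        case 1U:
        case 2U:
            level = 1U;         /* missing break below...                 */
        case 3U:                /* ...causes accidental fallthrough       */
            level = 2U;
            break;
        }                       /* no default: unexpected codes ignored   */
    }
    return level;
}

/* MISRA-style: explicit comparison, every case terminated, default present. */
static uint8_t classify_strict(uint8_t code) {
    uint8_t level;
    if (code != 0U) {
        switch (code) {
        case 1U:
        case 2U:
            level = 1U;
            break;
        case 3U:
            level = 2U;
            break;
        default:
            level = 0U;         /* defined behaviour for unexpected input */
            break;
        }
    } else {
        level = 0U;
    }
    return level;
}

int main(void) {
    /* The fallthrough bug shows up as diverging results for code 2. */
    printf("loose(2)=%u strict(2)=%u\n",
           (unsigned)classify_loose(2U), (unsigned)classify_strict(2U));
    return 0;
}
```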


Andrew Banks is a Technical Specialist at LDRA with more than 30 years' experience of high-integrity real-time/embedded software development. A Chartered Fellow of the British Computer Society, he graduated from the University of Lancaster in 1989, and has spent most of his career within the aerospace, defence and automotive sectors. Andrew is committed to standards development - he has been involved with MISRA since 2007 and has been Chairman of the MISRA C Working Group since early 2013; he is the Chairman of the BSI "Software Testing" Working Group; and an active participant in other BSI, ISO, IET and SCSC work, including the recent revision of ISO 26262.

30th March 2022 - Real-time Trajectory Planning for Autonomous Driving in Urban Areas


In order to achieve trustworthy autonomous driving on urban roads among other road users, smooth behaviour matters as much as safety. At the same time, planning must run in real time on an embedded processor with limited computational capability. Drawing on the essence of dynamic programming, I developed an efficient optimization-based trajectory planning method that maintains a safe distance to surrounding objects while producing smooth longitudinal and lateral behaviour in real time. Simulation and experimental results demonstrate its effectiveness in the real world. I will also introduce Nissan’s previous and current CAV activity in the UK and Japan.
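
The planner described in the talk is Nissan’s own; as a rough illustration of the dynamic-programming idea behind such methods, the sketch below selects a lateral offset at each longitudinal station on a small discretized grid, trading off deviation from the lane centre, steering smoothness and clearance from an obstacle. The grid, cost weights and obstacle position are invented for the example.

```c
/* Dynamic-programming sketch for lateral trajectory selection on a small
 * discretized grid. Costs, grid, and obstacle are illustrative only. */
#include <stdio.h>

#define STEPS 8     /* longitudinal stations                  */
#define LANES 5     /* candidate lateral offsets per station  */
#define BIG   1e9

static double lateral_of(int idx) { return (idx - 2) * 0.5; }  /* -1.0 .. +1.0 m */

int main(void) {
    const int obstacle_step = 4;  /* hypothetical obstacle blocks offsets >= 0 here */

    double cost[STEPS][LANES];
    int    prev[STEPS][LANES];

    for (int s = 0; s < STEPS; s++) {
        for (int j = 0; j < LANES; j++) {
            double y = lateral_of(j);
            /* Stage cost: stay near the lane centre, stay clear of the obstacle. */
            double stage = y * y;
            if (s == obstacle_step && y >= 0.0) stage += BIG;

            if (s == 0) {
                cost[s][j] = (j == 2) ? stage : BIG;  /* start at lane centre */
                prev[s][j] = -1;
                continue;
            }
            cost[s][j] = BIG;
            prev[s][j] = -1;
            for (int k = 0; k < LANES; k++) {
                double dy = lateral_of(j) - lateral_of(k);
                double c = cost[s - 1][k] + stage + 4.0 * dy * dy;  /* smoothness */
                if (c < cost[s][j]) { cost[s][j] = c; prev[s][j] = k; }
            }
        }
    }

    /* Pick the cheapest final state and trace the chosen offsets back. */
    int best = 0;
    for (int j = 1; j < LANES; j++)
        if (cost[STEPS - 1][j] < cost[STEPS - 1][best]) best = j;

    int path[STEPS];
    for (int s = STEPS - 1, j = best; s >= 0; j = prev[s][j], s--) path[s] = j;

    for (int s = 0; s < STEPS; s++)
        printf("station %d: lateral offset %+.1f m\n", s, lateral_of(path[s]));
    return 0;
}
```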


Akinobu Goto is a research engineer working on autonomous driving on urban roads. He joined Nissan Motor Co., Ltd. in 2014, after completing an MSc in Control Engineering at Osaka University, Japan. With nearly 7 years of experience in the industry, he joined the Trustworthy Systems Laboratory in February 2022 as a visiting researcher for a two-year placement. His research interest is in how an autonomous vehicle can gain the trust of other road users in complex urban situations that require negotiation and implicit communication.

23rd March 2022 - Efficient Evaluation of Perception Tasks in Simulation for Autonomous Vehicles


Characterising erroneous behaviours of systems which contain deep learning models before deploying them into safety-critical scenarios is crucial but presents many challenges. For tasks involving compute-intensive object detectors, large-scale testing of the model in simulation will incur a significant computational cost. I will present our recent results on an efficient low-fidelity simulation approach in the Carla driving simulator, which enables state-of-the-art LiDAR detectors to be tested at reduced computational expense using surrogate models. I will also outline the research on uncertainty for perception systems at Five, and its applications to the domain of autonomous vehicles. https://www.five.ai/
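
A minimal sketch of the surrogate-guided testing idea (not Five's actual pipeline) is shown below: a cheap surrogate scores each scenario's failure risk, and the expensive high-fidelity simulation is only spent on scenarios the surrogate flags. The scenario encoding, risk model and threshold are all hypothetical.

```c
/* Sketch of surrogate-guided testing: score many scenarios with a cheap
 * surrogate and spend the expensive high-fidelity runs only where the
 * surrogate predicts a likely detection failure. Everything here
 * (scenario encoding, scores, threshold) is invented for illustration. */
#include <stdio.h>

#define NUM_SCENARIOS  8
#define RISK_THRESHOLD 0.5

typedef struct { double range_m; double occlusion; } scenario_t;

/* Hypothetical cheap surrogate: predicted failure risk in [0, 1]. */
static double surrogate_risk(const scenario_t *s) {
    double r = 0.004 * s->range_m + 0.6 * s->occlusion;
    return (r > 1.0) ? 1.0 : r;
}

/* Stand-in for an expensive high-fidelity simulation + detector run. */
static int high_fidelity_detects(const scenario_t *s) {
    return (s->range_m < 60.0) && (s->occlusion < 0.7);
}

int main(void) {
    scenario_t scenarios[NUM_SCENARIOS] = {
        {20, 0.1}, {35, 0.8}, {50, 0.2}, {80, 0.1},
        {15, 0.9}, {65, 0.5}, {30, 0.3}, {90, 0.6},
    };

    int expensive_runs = 0;
    for (int i = 0; i < NUM_SCENARIOS; i++) {
        double risk = surrogate_risk(&scenarios[i]);
        if (risk < RISK_THRESHOLD) {
            printf("scenario %d: low predicted risk (%.2f), skipped\n", i, risk);
            continue;
        }
        expensive_runs++;
        printf("scenario %d: risk %.2f -> high-fidelity run: %s\n", i, risk,
               high_fidelity_detects(&scenarios[i]) ? "detected" : "MISSED");
    }
    printf("high-fidelity runs used: %d of %d\n", expensive_runs, NUM_SCENARIOS);
    return 0;
}
```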


Dr. Jonathan Sadeghi is a Research Engineer based in Bristol, one of the offices of Five AI, a startup focused on developing AI systems to help build driverless cars. He obtained his PhD in Engineering from the University of Liverpool (2020), focusing on uncertainty quantification and machine learning. His current research interests span the intersection of computer vision and probabilistic machine learning, with applications to autonomous vehicles. https://jcsadeghi.github.io/
