The Bristol Interactive AI Summer School (BIAS) 2024
The Interactive AI CDT hosted 'BIAS', an in-person summer school, between the 3rd and 5th of September 2024. Over three days, a range of experts from both industry and academia discussed the fundamentals of, and the latest progress in, key areas of AI. This year we focused on human-centred AI, with topics such as interpretable neural networks, human-in-the-loop knowledge-driven AI and interactive machine learning. We welcomed PhD students and early-career researchers in AI and neighbouring areas, as well as industry partners.
This year's event took place at Engineers' House, Clifton Down, Bristol.
Public Programme (subject to change)
Tuesday 3rd September, 2024
09:30-10:15: Registration
10:15: Welcome from the IAI CDT Director, Professor Peter Flach
10:30: Speaker: Professor Colin Gavaghan (Bristol Digital Futures Institute)
Talk title: "Oversight and control of AI"
Various rules and guidelines require that AI and algorithms used in important decisions must be subject to human oversight and control. This is thought to offer reassurance that human decision-makers will always remain 'in the loop'. But what does that mean in practice, and how can we ensure that such control is actually meaningful? In this talk, I will explain what the law says in this regard, and explore the possible pathways and obstacles to compliance.
11:45: Speaker: Dr Davide Moltisanti (University of Bath)
Talk title: "Labelling actions in videos: assessing how semantic and temporal ambiguity affect recognition in video models"
Abstract:
Action recognition is the task of recognising what human action is visible in a trimmed video using a video classifier such as a neural network. This is typically a supervised problem where models learn from labelled data. To label data human annotators decide how to semantically annotate each action (e.g., what verb and noun should describe the action) and how to temporally delimit the action (i.e., decide when the action starts and ends). These annotations are often ambiguous and somewhat arbitrary - for example the same action can be described with multiple equivalent verbs and nouns, and it is difficult to consistently pinpoint the beginning and the ending of actions when a dataset is annotated by multiple annotators. These issues are often overlooked, yet are problematic since most models expect no semantic overlap between classes and struggle to learn actions when these are not trimmed consistently. In this talk I will go through a series of papers that try to address these problems. I will also talk about an alternative type of temporal annotation (single timestamps) that alleviates temporal ambiguity whilst also lowering labelling burden.
14:00: Poster sessions from IAI CDT students (University of Bristol)
16:00: Speaker: Zeyad Sakr (Dyson)
Talk title: "Bridging the gap: navigating the AI buzz in industry"
Abstract:
The buzz around AI in industry is immense, and managing this excitement is crucial. How can you identify projects that truly benefit from machine learning? Bridging the gap between what is expected and what can be accomplished is a delicate balance. Join this talk to explore the intricacies of applying AI in industry, from filtering projects to achieving realistic goals during the hype.
Wednesday 4th September, 2024
09:00: Registration / 09:45: Speakers: Dr Oliver Ray (University of Bristol) and IAI CDT PhD student Oli Deane
Talk title: "Neuro-Symbolic Policy Learning via Expert-guided Acquisition of Soft-constraints"
Abstract:
This talk will introduce a neuro-symbolic approach that combines Interactive Inductive Logic Programming with Deep Q-Learning in order to facilitate the learning of domain knowledge from expert examples and user interaction.
10:45: Speaker: Dr Rahul Nair (IBM, University College Dublin)
Talk title: Trusted AI in industry applications
Abstract:
What does it take to trust machine-driven decisions? I first describe some open challenges in deploying AI systems in practice using examples in a few domains. I then present IBM’s open-source trusted AI toolkits focusing on explainability, fairness, and governance. I describe our recent work on human-compatible perspectives and principled approaches to assessing AI model safety. I conclude by describing AutoFair, our ongoing EU research effort aimed at trusted and human compatible automation.
Bio:
Rahul Nair is a Senior Research Scientist at IBM Research and Adjunct Associate Professor at the School of Computer Science, University College Dublin. His research interests are in trusted AI, model transparency, explainability, and technology for development. His expertise is in optimization and machine learning which he has applied across sectors particularly transportation, healthcare, and business computing. He holds a Ph.D. from University of Maryland College Park.
https://research.ibm.com/people/rahul-nair
12:00: Speaker: Dr Steve Moyle (Amplify Intelligence, UK)
Talk title: "A machine assisted logical approach to understanding cyber attacks"
Abstract:
Cyber security (or insecurity) is a natural corollary of the limits of computability; in a sense, we might call it "Turing's Curse". Just as evolution has found ways to survive infinity, so too we need to find better ways to survive cyber insecurity.
The good news is that the problem tends to be very logical, and can succumb to the rigours of logical forms of reasoning. This talk looks back on a few case studies of applying mechanised reasoning to specific classes of cyber attacks.
We then show some recently published techniques in which human cyber defenders, with their hunches, guide a logical AI to perform an intelligent search through vast intrusion detection system logs, arriving at a usable understanding of cyber behaviour. Perhaps, though, strictly logical approaches are not necessary, and the rise of generative systems will prevail.
Bio:
Steve Moyle is extremely fortunate to have studied under someone who regularly played chess with Alan Turing, and passed down Turing's passion for machine intelligence encompassing logic. Steve has an industrial engineering background, overlaid with a doctorate in Computer Science from Oxford. Inspired by joint work with one of his Oxford students he founded a cyber security product company which was acquired by Oracle. He has a long-time research relationship with the AI group at Bristol University.
Alas, having seen AI winters blow in and summers re-emerge, he feels he is on the losing side, with the rising success of connectionist approaches now dominating the logical ones.
13:00: Lunch
14:00: Speakers: Liam Wilkinson (Head of Applied AI, Incubator for AI) and Billy Crawford (Head of AI for Linked Data, MoJ)
Title: "Applying AI to Legislation in Government"
Abstract:
A practical guide to exploring and drafting UK legislation with AI. This talk will cover some of the fundamentals of legislation structure, semantic search, embedding-model fine-tuning, and prototyping in government.
Bio:
Liam Wilkinson is the Head of Applied AI at the Incubator for AI in DSIT. He has a background as a Number 10 Innovation Fellow, AI Architect at Microsoft and AI Engineer in UK + US Government. He is currently working to build AI systems for navigating Legislation and other Government applications.
Title: "Greater than the Sum of its Parts"
Abstract:
The existing structures of government, designed around specific policy areas, have historically operated in isolation, leading to fragmented data, digital, and analytical systems. This talk explores how data linking can bridge these gaps, fostering more integrated and effective governance. By connecting disparate datasets, new opportunities for artificial intelligence (AI) emerge, enabling better decision-making and improved public services. This presentation will highlight key case studies and outline the potential impacts of AI-driven data integration on policy and delivery.
Bio:
Billy Crawford is Head of Technology and Data for the Better Outcomes through Linked Data (BOLD) programme. A data scientist by training, he has worked on machine learning, modelling, and dashboarding in both the public and private sectors.
15:30: Demetra Brady and Antonia Paterson (Google DeepMind) - AI Ethics & Safety workshop
As part of the interactive segment of the Summer School, Google DeepMind will conduct an "AI Ethics & Safety Workshop", during which they will introduce the work of the Google DeepMind Responsibility team and outline the processes involved in evaluating the ethical and societal implications of AI systems. Participants will engage in thoughtful deliberation and discussion of appropriate responses to real-world scenarios, and will gain insights into the complexities of ethical decision-making in AI development.
17:00 - 18:30 Social Event: Drinks reception at Engineers' House (Piano room)
Thursday 5th September, 2024 - This student-centred day will be a mix of industry talks, panel discussions and training aimed at IAI CDT PhD students
09:00: Registration and breakfast
09.30: VOX Coaching - Personal Impact and Confident Networking
Aimed at IAI CDT PhD students, run by VOX Coaching with trainer Al Nedjari.
Course description: Important meetings, whether in person or on screen; encounters with high-powered individuals; interviews; networking at conferences: they all put your communication skills to the test. What you say matters, of course, but so does how you say it. If your communication style isn’t up to scratch, your message may be lost. This lively and involving course will provide you with practical approaches to making communication memorable. We’ll look at how you can engage with others, hold attention and speak so that people want to listen.
12:00 Panel session: Exploring Career Paths after a PhD
Join us for a panel discussion featuring former PhD students who have taken different career paths after finishing their PhD. We will have speakers working as a postdoc, at a startup, at a medium-sized company, and at a large corporation. Each panellist will share their unique experiences and perspective to help inform current PhD students about their potential next steps.
13:00 Industry Expo (including lunch)
Join us for a networking lunch connecting PhD students with industry professionals. Following the morning's Personal Impact and Confident Networking session, this event offers a practical opportunity to put those networking skills into action. Enjoy a buffet lunch while engaging with representatives from various industries. This is a great chance to explore potential internships, future collaborations, and career opportunities.
15:30-16:00: Student cafe
A relaxed and informal session where we will reflect on the day. This is an opportunity to discuss and process what you’ve learned about networking strategies, the experiences of former PhD students, and interactions with industry representatives. This will be breakout-room style, and we will share what our groups have discussed.