The Bristol Interactive AI Summer School (BIAS) 2022
The Interactive AI CDT was delighted to host 'BIAS', a summer school held from the 6th to the 8th of September 2022.
This unique event focused on machine learning and other forms of data-driven AI; intelligent reasoning and other forms of knowledge-intensive AI; human-AI interaction; and how to do all this in a responsible manner. Over three days, a range of experts discussed the fundamentals of, and latest progress in, these key areas of AI.
The Summer School was aimed at PhD students and early-career researchers in AI and neighbouring areas.
Public Programme
Tuesday 6th September, 2022
08.50-09.20: Registration
09.30: Welcome from the IAI CDT Director, Professor Peter Flach
10.00: James Cussens 'Algorithms for learning Bayesian networks'
Bayesian networks (BNs) are directed acyclic graphs (DAGs) with nodes representing random variables. Although there are a number of reasons for learning a Bayesian network from data, the most interesting one is *causal discovery*: (attempting to) learn a DAG which represents a (probabilistic) causal model.
In this talk I will avoid getting into the details of the (very many) existing algorithms for BN learning and instead will focus on the high-level ideas behind the main families of algorithms. I will also present empirical results produced using the "benchpress" benchmarking system for comparing BN learning systems.
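To give a concrete taste of one of the main families the talk covers, the sketch below is an illustrative toy example of *score-based* structure learning (my own example, not material from the talk): simulate binary data from a known causal model X → Y, then use the BIC score to compare the empty DAG against the DAG with the edge.

```python
import math
import random

random.seed(0)

# Toy data from a known causal model X -> Y (both variables binary)
data = []
for _ in range(2000):
    x = 1 if random.random() < 0.5 else 0
    y = 1 if random.random() < (0.9 if x else 0.1) else 0
    data.append((x, y))

xs = [d[0] for d in data]
ys = [d[1] for d in data]
n = len(data)

def loglik(vals):
    # maximum-likelihood log-likelihood of a multinomial sample
    return sum(vals.count(v) * math.log(vals.count(v) / len(vals))
               for v in set(vals))

def loglik_given(child, parent):
    # log-likelihood of `child` when `parent` is in its parent set
    total = 0.0
    for pv in set(parent):
        sub = [c for c, p in zip(child, parent) if p == pv]
        total += loglik(sub)
    return total

def bic(ll, k):
    # BIC: model fit minus a complexity penalty per free parameter
    return ll - 0.5 * k * math.log(n)

# Candidate 1: empty DAG (X and Y independent), 2 free parameters
score_empty = bic(loglik(xs) + loglik(ys), 2)
# Candidate 2: DAG with edge X -> Y, 3 free parameters
score_edge = bic(loglik(xs) + loglik_given(ys, xs), 3)

best = "X -> Y" if score_edge > score_empty else "empty"
print(best)
```

On this strongly dependent data the edge model wins despite its extra parameter. Note that a likelihood-based score cannot distinguish the Markov-equivalent DAGs X → Y and Y → X, which is one reason causal discovery is harder than plain structure learning.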
12.00: James Ladyman 'Attributing cognitive and affective states to AI systems'
The attribution of affective and cognitive states, and various kinds of agency, to nonhumans is liable to both false positives and false negatives. There are systematic reasons to expect errors of certain kinds. Some of these have to do with our nature or the nature of the technology, and some arise because the technology is developing in the context of massive inequalities of knowledge, power and wealth. Background factors make matters worse, but there are solutions to all these problems. It is possible to imagine very different kinds of AI and ways of implementing it in robots.
13.00: Lunch
14.15: Interactive AI CDT students: poster session
16.00: Daniel Bennett (currently a post-doctoral researcher in the Department of Computer Science at Aalto University, having just submitted his PhD in the BIG lab under the supervision of Oussama Metatla and Anne Roudaut):
Title: Complexity and Embodiment in Human Computer Interaction
HCI and AI have a long tradition of "embodied" approaches, which emphasise how human behaviour and cognition are deeply grounded in social and physical context, and in bodily movement. These ideas have been hugely influential on design and qualitative work in HCI. However, in certain ways they have proven difficult to integrate into traditional approaches to AI: they emphasise that behaviour is highly relational and emergent, features which have traditionally been difficult to model.
In this talk I introduce some key ideas of embodied interaction, discussing what they tell us about human technology use. I then discuss some new approaches from complexity science which are beginning to open windows onto emergent and relational features of human behaviour, making some of them accessible to computational interaction and AI.
Dan will give the first talk at 16.00 and will present remotely. (Email: db15237@bristol.ac.uk)
16.25: Dr Aisling O'Kane 'Artificial intelligence, personal health, and social care'
Wednesday 7th September, 2022
09.30: Keynote: Liz Sonenberg (University of Melbourne) 'Imperfectly rational, rationally imperfect, or perfectly irrational: Challenges for human-centred AI'
Automated decision aids are generally intended to support a human decision maker and improve the quality of decisions made. But the availability of such decision aids can foster automation bias, i.e. over-reliance on their advice, and the use of AI-generated explanations has been proposed to mitigate this effect.
I will reflect on the implications of human cognitive biases in AI-supported decision making, describe some investigations of the effect of explanations on automation bias, and discuss related considerations in the design of human-centred AI systems.
11.30: Dr Nirav Ajmeri 'Ethics and fairness in sociotechnical systems'
Ethics is inherently a multiagent concern---an amalgam of (1) one party's concern for another and (2) a notion of justice. To capture the multiagent conception, the first part of this talk will introduce ethics as a sociotechnical construct. Specifically, I will describe how ethics can be modelled and analysed, and requirements on ethics (value preferences) can be elicited, in a sociotechnical system (STS). An STS comprises autonomous social entities (principals, i.e., people and organizations), technical entities (agents, who help principals), and resources (e.g., data, services, sensors, and actuators). In the second part of the talk, I will discuss recent work on how to (1) specify a decentralized STS, representing the ethical postures of individual agents as well as the systemic (STS-level) ethical posture; (2) reason about ethics, including how individual agents can select actions that align with the ethical postures of all concerned principals; and (3) elicit the value preferences (which capture ethical requirements) of stakeholders.
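One way to picture point (2) is an agent scoring candidate actions against each principal's value preferences and choosing the action with the best worst-case alignment. The sketch below is purely illustrative (the principals, values, actions, and weights are all hypothetical, and this is not the speaker's model):

```python
# Each principal weights abstract values (higher = more important).
# All names and numbers here are hypothetical, for illustration only.
value_preferences = {
    "patient":   {"privacy": 0.9, "safety": 0.8, "convenience": 0.3},
    "clinician": {"privacy": 0.5, "safety": 0.9, "convenience": 0.6},
}

# How strongly each candidate action promotes each value (0..1)
actions = {
    "share_full_record":  {"privacy": 0.1, "safety": 0.9, "convenience": 0.9},
    "share_summary_only": {"privacy": 0.7, "safety": 0.7, "convenience": 0.6},
    "share_nothing":      {"privacy": 1.0, "safety": 0.2, "convenience": 0.2},
}

def alignment(effects, prefs):
    # weighted alignment of one action with one principal's preferences
    return sum(prefs[v] * effects[v] for v in prefs)

def choose(actions, value_preferences):
    # maximise the worst-case alignment, so no principal is ignored
    return max(actions, key=lambda a: min(
        alignment(actions[a], p) for p in value_preferences.values()))

print(choose(actions, value_preferences))
```

Under these numbers the compromise action wins: maximising the minimum alignment favours actions no principal strongly objects to, one simple way to make "align with all concerned principals" operational.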
13.00: Lunch
14.15: Interactive AI Worldbuilding Workshop: Vanessa Hanschke & Tashi Namgyal (Computer Science, University of Bristol)
In this creative collaborative workshop, we explore ideas of future AI worlds and how humanity has somehow steered away from self-destruction. What do you think the world will look like in 2045 if AI has had a transformative impact on our society? What part does your research play in this world? Choose your biases, answer some hard questions about wealth inequality, write newspaper headlines and paint a picture of the world in 2045. The best world will win a prize.
16.45: Impromptu talks
18.30: Summer BBQ
Thursday 8th September, 2022
09.30: Dr Raul Santos-Rodriguez 'A human-centric machine learning pipeline'
The talk will focus on how to extend the traditional machine learning pipeline to deal with complex real-world situations that involve addressing requirements such as sustainability, robustness, or safety. To this end, we will discuss how to leverage and embed human input at different stages of the pipeline. As an example, we will focus on situations where data comes with imperfect and weak labels. First, we will describe a general approach to such problems, diving into specific applications. Then, we will present an overview of interpretability techniques that we will use to measure and understand data and label quality.
11.30: Dr Oliver Ray & Dr Steve Moyle (Oxford) 'Explanatory and interactive knowledge-based machine learning with inductive logic programming'
AUTHORS: Oliver Ray (Bristol), Oliver Deane (Bristol) and Steve Moyle (Oxford)
ABSTRACT: This talk will introduce a cyber threat elucidation task in order to motivate the need for eXplanatory Interactive Relational Machine Learning (XIRML). We will briefly review a range of increasingly expressive existing machine learning paradigms in order to argue for the adoption of Interactive Inductive Logic Programming (i-ILP) as a pragmatic basis for XIRML; and we will highlight some recent progress demonstrated by a proof-of-principle prototype called Acuity. We will also outline some other recent work on explanation and interaction in classical ML and describe how it may be assimilated into our i-ILP framework.
13.00: Lunch
14.15: Peta Masters (KCL) 'Just Like That: The formalisation of seemingly non-rational beliefs by reference to magic...'
As we enter the era of human-AI teams, it becomes increasingly important for AI systems to understand how to interpret human behaviour and how their behaviour will be interpreted by the humans they encounter. “Just Like That” examines this problem by reference to magic and deception, focusing on aspects of human perception and cognition that professional deceivers exploit (e.g., Bowyer, 1982; Bell, 2003; Whaley, 1982; Kuhn, 2019), that behavioural economists characterise as processing errors (Kahneman, 2011) but which others insist are rational responses to an uncertain world (Kay & King, 2020). Importantly, there is general agreement that these apparent quirks and foibles are consistently observed. They may seem non-rational and idiosyncratic but they are predictable; they occur not only in the formation of false belief but in the formulation of beliefs in general. The distinction is critical. Behaviour that is genuinely idiosyncratic is unpredictable; behaviour that is predictable can be modelled. In this talk, I will show you some magic tricks - from YouTube, not performed by me! We will look at the principles behind the tricks and see how seemingly non-rational human responses can be formalised and incorporated into the reasoning of autonomous systems to enhance their interactive capabilities.
Peta Masters is a computer scientist with a background in theatre. She gained her doctorate at RMIT in Melbourne with a thesis on goal recognition and deception, and her first paper, with supervisor Sebastian Sardina, won the Pragnesh Jay Modi Best Student Paper Award at AAMAS 2017. She worked on Deceptive AI at the University of Melbourne with Liz Sonenberg and a multi-disciplinary team under the direction of Wally Smith, a psychologist and amateur magician, and is currently a Researcher with the Trustworthy Autonomous Systems (TAS) Hub at King’s College London.
Refs:
· J. Bowyer Bell. 2003. Toward a theory of deception. International Journal of Intelligence and Counterintelligence 16, 2 (2003), 244–279.
· J. Barton Bowyer. 1982. Cheating: Deception in War & Magic, Games & Sports. St Martin’s Press.
· Daniel Kahneman. 2011. Thinking, fast and slow. Farrar, Straus and Giroux, U.S.A.
· John Anderson Kay and Mervyn A. King. 2020. Radical Uncertainty: Decision-making Beyond the Numbers. Bridge Street Press.
· Gustav Kuhn. 2019. Experiencing the impossible: The science of magic. MIT Press.
· Barton Whaley. 1982. Toward a general theory of deception. The Journal of Strategic Studies 5, 1 (1982), 178–192.
15.45: Impromptu talks
16.30: Closing remarks