Ellie Prosser

General Profile:

I joined the Interactive AI CDT straight after my 5-year integrated Master’s degree (including a placement year) in Electronic Engineering with Computer Systems at the University of Surrey. The degree covered many areas that intersect with AI, such as speech and audio processing, computer vision, robotics, and a dedicated AI module. During my placement year I used machine learning to generate verification tests in the autonomous vehicle sector. These experiences developed my interest in AI, but none of them covered the ethics of such systems. An emphasis on human-computer interaction is one of the key elements of maintaining ethical practice in AI, which led me to pursue this particular CDT, as it encompasses many of the necessary ethical considerations. When brainstorming potential research ideas, I found that my key interests lie in psychology, cybersecurity, and, of course, interactive AI.

Research Project Summary:

The working title of my research project is ‘interactive agents for online child safety’. The sheer scale of the Internet makes keeping children safe online a very difficult task. Online grooming is one misuse of the Internet that threatens children, and it is impossible for law enforcement to monitor and prevent all of it themselves, which creates a need for solutions that do not rely on manpower alone. Cyberbullying is another risk to children using the Internet and, as with online grooming, it cannot be eradicated completely through human intervention alone. AI offers a promising route to keeping children safe online, as it can be used to automatically detect threatening behaviours in chats that pose a safety risk to children. In the context of online child grooming, this means detecting predatory behaviour in chats so that children can be given real-time protection. The next part of the problem lies in handling threatening conversations once they have been detected. Interventions that gently guide young people with advice, rather than imposing strict rules that threaten their autonomy and sense of agency, offer one possible way of keeping them safe. For online grooming, these interventions aim to disrupt the grooming process, ideally by preventing a relationship from forming between predator and victim.

My research aims to investigate both areas, the detection of threatening behaviour in chats and the interventions that follow, to provide a solution focused on teaching children how to keep themselves safe online. A practical aim of this research is to implement a framework that takes a real-time, ongoing online conversation, detects threatening behaviour that poses a risk to children, and generates relevant interventions throughout to keep children safe online.

For the 3-month summer project I focused on the issue of online child grooming and produced a preliminary framework consisting of a two-stage pipeline with a sexual predator identification block and an intervention block. The identification block took segments of a conversation, determined whether each segment was predatory or non-predatory using a language model and a binary classifier, and updated a predatory score that was maintained throughout the conversation. The intervention block used the predatory score and the message content to generate an advice message relevant to the conversational context.
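To make the data flow of such a two-stage pipeline concrete, the sketch below shows one way the pieces could be wired together. It is a minimal illustration only: the names (`process_segment`, `update_score`, `ConversationState`), the exponential-style score update, the 0.5 intervention threshold, and the toy classifier and advice generator are all assumptions made for the example, not the components of the actual summer-project framework.

```python
from dataclasses import dataclass, field
from typing import Callable, List, Optional


@dataclass
class ConversationState:
    """Running state maintained across the whole conversation."""
    predatory_score: float = 0.0          # cumulative risk estimate in [0, 1]
    messages: List[str] = field(default_factory=list)


def update_score(previous: float, segment_is_predatory: bool, weight: float = 0.3) -> float:
    """Blend the new segment's label into the running score.

    This exponential-style update is an assumption for illustration; the
    framework maintains *some* score across segments, but its exact form
    is not specified here.
    """
    target = 1.0 if segment_is_predatory else 0.0
    return (1 - weight) * previous + weight * target


def process_segment(
    segment: str,
    state: ConversationState,
    classify: Callable[[str], bool],
    generate_advice: Callable[[str, float], str],
    intervention_threshold: float = 0.5,
) -> Optional[str]:
    """Run one pass of the two-stage pipeline on a new conversation segment.

    `classify` stands in for the language-model + binary-classifier block,
    and `generate_advice` for the intervention block; both are hypothetical
    interfaces used only to sketch the data flow.
    """
    state.messages.append(segment)
    is_predatory = classify(segment)
    state.predatory_score = update_score(state.predatory_score, is_predatory)

    if state.predatory_score >= intervention_threshold:
        # Intervention block: advice conditioned on message content and score.
        return generate_advice(segment, state.predatory_score)
    return None


if __name__ == "__main__":
    # Toy stand-ins so the sketch runs end to end.
    def toy_classifier(text: str) -> bool:
        return "secret" in text.lower()

    def toy_advice(text: str, score: float) -> str:
        return (f"Heads up (risk {score:.2f}): someone asking you to keep "
                "secrets from trusted adults is a warning sign.")

    state = ConversationState()
    for msg in ["hey, how was school?",
                "this is our secret, ok?",
                "remember, keep it secret from your parents"]:
        advice = process_segment(msg, state, toy_classifier, toy_advice)
        if advice:
            print(advice)
```

One advantage of maintaining a running score rather than judging each message in isolation is that a single ambiguous message does not immediately trigger an intervention, while a sustained pattern of predatory segments does.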

My preliminary framework provides the basis for a larger research project, with many identified potential improvements to the pipeline and further work to be done. The task of detecting online child grooming suffers from a data-availability problem: current datasets are unrepresentative of the target domain, because the predatory conversations they contain took place between predators and adults posing as children rather than children themselves. In general, it is difficult to obtain children’s data for tasks that aim to improve their online safety. This data issue needs to be analysed further, and a potential contribution of this work would be either to find a way to improve the representativeness of the data or to create a new dataset. The key objectives of my research can therefore be summarised into three general areas:

  • Improving the availability of data for researching (and building preventative systems targeting) online child safety
  • Building on state-of-the-art approaches to harm classification problems (e.g. sexual predator detection) to enable real-time detection of harm
  • Designing an intervention framework that generates context- and age-appropriate advice for children online, providing effective situational guidance that focuses on forming safe online behaviours rather than strictly dictating children’s Internet use

 

Supervisors:

Website:
