Today, an open letter signed by leading cyber security experts across the country was published, highlighting the ‘alarming misunderstandings and misconceptions’ around the Online Safety Bill.
The Online Safety Bill’s provision on scanning messages shared through apps such as WhatsApp and Signal is the focus of intense debate because of its potential for large-scale impact on human rights. Scientists from the UK’s National Research Centre on Privacy, Harm Reduction and Adversarial Influence Online (REPHRAIN) have called on the Government and Parliament to study the independent scientific evaluation of the tools proposed to undertake such scanning, developed as part of the Government’s Safety Tech Challenge Fund.
With end-to-end encryption (E2EE), no third parties, including service providers such as WhatsApp and Signal, can read messages as they travel between the sender and the receiver.
The evaluation concluded that although none of the tools proposes to weaken or break the E2EE protocol itself, from a Human Rights perspective the confidentiality of E2EE service users’ communications cannot be guaranteed when all content intended to be sent privately within the E2EE service is monitored before it is encrypted.
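The point can be illustrated with a minimal sketch. This is not any real product’s code, and the blocklist, function names and toy cipher are all hypothetical; it simply shows that if content is inspected on the device before encryption, the encryption layer is untouched yet the message is no longer confidential:

```python
# Hypothetical sketch: pre-encryption ("client-side") scanning.
# The E2EE layer is unchanged, but the plaintext is inspected in the clear.
import hashlib

# Hypothetical blocklist of known-content hashes held on the device.
BLOCKLIST_HASHES = {hashlib.sha256(b"known-harmful-content").hexdigest()}

def client_side_scan(plaintext: bytes) -> bool:
    """Return True if the message matches a blocklisted hash."""
    return hashlib.sha256(plaintext).hexdigest() in BLOCKLIST_HASHES

def send_message(plaintext: bytes, encrypt):
    # The scan runs *before* encryption, so every message is examined
    # in plaintext; E2EE only protects the message once it is in transit.
    if client_side_scan(plaintext):
        return None  # flagged: the "private" content was read pre-encryption
    return encrypt(plaintext)

# Trivial stand-in for the real E2EE cipher, which is the same either way.
def toy_encrypt(message: bytes) -> bytes:
    return bytes(b ^ 0x5A for b in message)

assert send_message(b"known-harmful-content", toy_encrypt) is None
assert send_message(b"hello", toy_encrypt) is not None
```

The sketch makes the evaluators’ distinction concrete: nothing here weakens the cipher, but confidentiality is lost anyway, because the decision about the message is taken on the plaintext.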
The Home Secretary, writing in The Telegraph last week, noted that the programme had “demonstrated that it would be technically feasible to detect child sexual abuse in environments which utilise encryption.”
“The issue is that the technology being discussed is not fit as a solution,” said Awais Rashid, Professor of Cyber Security at the University of Bristol and Director of the REPHRAIN Centre, who has worked for 15 years on developing automated tools to detect child abuse material online and on engineering privacy into software systems.
“Our evaluation shows that the solutions under consideration will compromise privacy at large and have no built-in safeguards to stop repurposing of such technologies for monitoring any personal communications.
“Nor are there any mechanisms for ensuring transparency and accountability over who will receive this data and for what purposes it will be utilised.
“Parliament must take into account the independent scientific evidence in this regard. Otherwise, the Online Safety Bill risks providing carte blanche for monitoring personal communications and opens the potential for unfettered surveillance on a societal scale.”
The evaluation also highlighted that significant challenges stem from the absence of documented, ethically responsible benchmark datasets for developing and evaluating such tools, and from the lack of detailed experimental information owing to the tools’ proprietary nature.
Dr. Claudia Peersman, Research Fellow at the University of Bristol and the REPHRAIN Centre, who led the evaluation, said: “The REPHRAIN evaluation team, consisting of experts in online child protection, privacy and security, AI technologies and Human Rights, strongly supports the development of automated tools for tackling online child sexual abuse and exploitation. We have been collaborating with law enforcement globally for over 10 years to develop new solutions to support their online child protection investigations in non-private online environments, such as peer-to-peer networks and social media, and the offline analysis of suspects’ seized hardware.
“However, our recent assessment of the tools funded by the Safety Tech Challenge Fund showed that striking a fair balance between the rights of law-abiding users, potential CSAM victims and perceived perpetrators becomes a key issue when such tools are deployed in the context of large-scale monitoring of people’s private messages within end-to-end encrypted environments.
“Scientific debate is only just starting on how Responsible and Ethical AI principles, such as Human Rights implications, security, explainability, transparency, non-bias, accountability and disputability, should be implemented in a highly sensitive and important field such as online child protection. For example, none of the existing CSAM detection tools is currently able to detect victims of all ethnic, age and gender groups, due to the lack of diversity in current training datasets. Therefore, we argue that an agreed international standard for benchmark datasets and CSAM tool evaluation should be developed that takes into account such principles and that is shared by child protection organisations, the private sector and researchers, thus protecting all children and law-abiding users.”