Guaranteeing public sector adoption of trustworthy AI - a task that should not be left to procurement

The UK government's approach to AI regulation is lacking and, if left unaddressed, this could cause problems in the public sector.

About the research

In its March 2023 White Paper on the regulation of Artificial Intelligence (AI), the UK Government confirmed that it has no plans to enact new laws or create a new AI regulator. AI regulation is left to future guidance by existing regulators based on the five general principles of accountability, transparency, fairness, safety, and contestability.

However, there is currently no regulator directly tasked with monitoring the adoption of AI by the public sector, and not all cases of AI adoption fall under the regulatory remit of existing regulators such as the Information Commissioner’s Office, the Equality and Human Rights Commission or the Biometrics and Surveillance Camera Commissioner. Moreover, some of the existing rules constraining, for example, automated decision-making in the public sector, are at risk of being watered down under the Data Protection and Digital Information (No. 2) Bill.

The AI White Paper thus perpetuates the current deregulatory approach, whereby the adoption of AI by the public sector is solely left to public buyers asked to ‘confidently and responsibly procure AI technologies for the benefit of citizens’.

The research critically assessed this regulatory strategy and found that guaranteeing public sector adoption of trustworthy AI cannot be solely left to procurement.

This briefing makes recommendations that will safeguard individual rights and social interests.

Policy recommendations

Decision makers seeking to guarantee that the public sector only adopts trustworthy AI and that individual rights and social interests are not jeopardised should:

  • Acknowledge that public buyers are not well placed to ‘regulate by contract’ the adoption of AI in the public sector, and that the creation of an independent authority is urgently needed.
  • Create a new ‘AI in the Public Sector Authority’ (AIPSA), ensure its political and commercial independence, and resource it adequately, especially in digital skills and capabilities.
  • Task AIPSA with the verification and certification of industry-led standards prior to their use in public procurement procedures, and with the development of independent standards where no suitable standards can otherwise be certified.
  • Task AIPSA with granting permissions or licences for the public sector to use digital technologies, and in particular AI, prior to their deployment.
  • Invest in a major programme to build digital capabilities in the public sector, starting with specific financial commitments towards a talent attraction plan, to be developed by AIPSA.
  • Refrain from using public procurement as a tool of digital industrial policy and ensure the public sector only adopts sufficiently tested and assured digital technologies.

Key findings

Contrary to the position in the AI White Paper, public buyers are not adequately placed to ‘confidently and responsibly procure AI technologies for the benefit of citizens’:

  • Public buyers suffer from significant digital skills gaps.
  • Overreliance on external consultants to plug those gaps exacerbates regulatory risks.
  • Public buyers are also under a structural conflict of interest due to their operational interest in the deployment of AI, which can trump the adequate protection of individual rights and social interests in their ‘AI regulation by contract’.
  • Public buyers tasked with goals of digital industrial policy, such as under the newly launched Foundation Model Taskforce, face particularly acute conflicts of interest.

Public procurement does not offer the right tools for ‘AI regulation by contract’:

  • The complexity of the digital technologies to be procured, especially AI, makes it difficult to define specific and prescriptive requirements to be embedded in procurement documents.
  • Even where such requirements could be specified, procurement rules limit the technological prescriptiveness public buyers can exercise and require consideration of alternatives.
  • Evaluation of ‘standard’ and ‘alternative’ solutions is further complicated by the lack of generally accepted methodologies and metrics, especially in relation to key characteristics of trustworthy AI such as explainability or intelligibility.
  • Reliance on industry-led standards would privatise AI regulation and expose it to commercialisation, which is incompatible with the goals of ‘AI regulation by contract’.

It is therefore necessary to urgently relieve procurement of the regulatory role it has been assigned. This requires:

  • Creating external, independent oversight of the process of adoption of digital technologies, and AI in particular, by the public sector.
  • Generating mandatory requirements that take the public interest into consideration and adequately protect individual rights, to avoid reliance on industry-led standards unable to provide the necessary guarantees and guardrails.

Contact the researchers

Professor Albert Sanchez-Graells, Professor of Economic Law and Co-Director of the Centre for Global Law and Innovation, University of Bristol Law School: a.sanchez-graells@bristol.ac.uk

Author

Professor Albert Sanchez-Graells, University of Bristol