Think before you B(AI): structured decision-making for AI public buyers

How can we help local authorities to make informed and correct decisions when procuring AI?

The big issues

Public authorities are rapidly deploying AI and other forms of automation, procuring them from tech providers and consulting companies. This is happening under fragmented and insufficient regulation, with limited guidance and a significant digital skills gap in the public sector.

Existing tools, such as the Guidelines for AI Procurement, the Guide to Using AI in the Public Sector, and the more recent Introduction to AI Assurance, offer public buyers only limited support. Depending on the use case, public buyers also need to consider the ICO’s Guidance on AI and data protection. These guidelines, like those offered by private standard-setting organisations such as ISO or IEEE, are formulated at rather high levels of abstraction and are not easy to implement, and there is no single point of reference to facilitate structured decision-making before launching the procurement of AI or automation solutions.

Support mechanisms and (limited but) increasing resources are being put in place for central government buyers. Local authorities, however, face particularly challenging circumstances while under significant pressure to deploy these technologies in search of efficiencies that minimise cuts to local public service provision. This can trigger problematic procurement practices, create significant systemic risks, and result in local harms.

Our response 

Our purpose is to develop a proof of concept for a ‘plug and play’ decision-making tool that bridges this gap and supports, in particular, responsible AI procurement by local authorities. The tool will bring different strands of guidance into a single place and be tailored to support decisions on whether and how to procure AI in the local authority context.

We will develop the proof of concept for, and start to build, an impact assessment model that enables procurement professionals with general expertise to carry out risk assessments easily, surfacing the potential second- and third-order consequences of deploying complex algorithmic tools and systems.

The focus will be on promoting structured, evidence-based thinking about the implications of AI procurement and the associated risks and trade-offs, as well as on key decisions about how to engage with the market and how to design the future procurement process.

Project Team

  • Albert Sanchez-Graells (PI, Law School, UoB)
  • Rachel Coldicutt (CO-I, Careful Industries & Promising Trouble) 