Articulating the public interest in procurement law and policy

Earlier this year, I had some very interesting conversations with Bristol colleagues about the relationship between law, regulation, and the public interest. These conversations led to a series of blog posts that are being published on our Law School blog.

This prompted me to think in a bit more detail about how the public interest is articulated in public procurement law and policy. Eventually, I wrote a draft paper based on a review of the procurement goals embedded in the UK’s new Procurement Act 2023, which will enter into force next month (on 28 October 2023).

I presented the paper at the SLS conference today (the slides are available here) and received some positive initial feedback. I would be interested in additional feedback before I submit it for peer review.

As always, comments warmly welcome: a.sanchez-graells@bristol.ac.uk. The abstract is as follows:

In this paper, I explore the notion of public interest embedded in the Procurement Act 2023. I use this new piece of legislation as a contemporary example of the difficulty in designing a 'public interest centred' system of public procurement regulation. I show how a mix of explicit, referential, and implicit public interest objectives results in a situation where there are multiple sources of objectives that contracting authorities need to consider in their decision-making, but no prioritisation of sources or objectives. I also show that, despite this proliferation of sources and objectives, and due to the unavailability of effective means of judicial challenge or administrative oversight, contracting authorities retain almost unlimited discretion to shape the public interest and 'what it looks like' in relation to the award of each public contract. I conclude with a reflection on the need to reconsider the ways in which public procurement can foster the public interest, in light of its limitations as a regulatory tool.

Response to the UK’s March 2023 White Paper "A pro-innovation approach to AI regulation"

Together with colleagues at the Centre for Global Law and Innovation of the University of Bristol Law School, I submitted a response to the UK Government’s public consultation on its ‘pro-innovation’ approach to AI regulation. For an earlier assessment, see here.

The full submission is available at https://ssrn.com/abstract=4477368, and this is the executive summary:

The white paper ‘A pro-innovation approach to AI regulation’ (the ‘AI WP’) claims to advance a ‘pro-innovation, proportionate, trustworthy, adaptable, clear and collaborative’ model that leverages the capabilities and skills of existing regulators to foster AI innovation. This model, we are told, would be underpinned by a set of principles providing a clear, unified, and flexible framework improving upon the current ‘complex patchwork of legal requirements’ and striking ‘the right balance between responding to risks and maximising opportunities.’

In this submission, we challenge such claims in the AI WP. We argue that:

  • The AI WP does not advance a balanced and proportionate approach to AI regulation, but rather, an “innovation first” approach that caters to industry and sidelines the public. The AI WP primarily serves a digital industrial policy goal ‘to make the UK one of the top places in the world to build foundational AI companies’. The public interest is downgraded and building public trust is approached instrumentally as a mechanism to promote AI uptake. Such an approach risks breaching the UK’s international obligations to create a legal framework that effectively protects fundamental rights in the face of AI risks. Additionally, in the context of public administration, poorly regulated AI could breach due process rules, putting public funds at risk.

  • The AI WP does not embrace an agile regulatory approach, but active deregulation. The AI WP stresses that the UK ‘must act quickly to remove existing barriers to innovation’ without explaining how any of the existing safeguards are no longer required in view of identified heightened AI risks. Coupled with the “innovation first” mandate, this deregulatory approach risks eroding regulatory independence and the effectiveness of the regulatory regimes the AI WP claims to seek to leverage. A more nuanced regulatory approach that builds on, rather than threatens, regulatory independence is required.

  • The AI WP builds on shaky foundations, including the absence of a mapping of current regulatory remits and powers. This makes it near impossible to assess the effectiveness and comprehensiveness of the proposed approach, although there are clear indications that regulatory gaps will remain. The AI WP also presumes continuity in the legal framework, which ignores reforms currently promoted by Government and further reforms of the overarching legal regime repeatedly floated. It seems clear that some regulatory regimes will soon see their scope or stringency limited. The AI WP does not provide clear mechanisms to address these issues, which undermine its core claim that leveraging existing regulatory regimes suffices to address potential AI harms. This is perhaps particularly evident in the context of AI use for policing, which is affected by both the existence of regulatory gaps and limitations in existing legal safeguards.

  • The AI WP does not describe a full, workable regulatory model. The lack of detail on the institutional design needed to support the central function is a crucial omission. Important tasks are assigned to this central function without clarifying its institutional embedding, resourcing, accountability mechanisms, etc.

  • The AI WP foresees a government-dominated approach that further risks eroding regulatory independence, in particular given the “innovation first” criteria to be used in assessing the effectiveness of the proposed regime.

  • The principles-based approach to AI regulation suggested in the AI WP is undeliverable due to lack of detail on the meaning and regulatory implications of the principles, barriers to translation into enforceable requirements, and tensions with existing regulatory frameworks. The minimalistic legislative intervention entertained in the AI WP would not equip regulators to effectively enforce the general principles. Following the AI WP would also result in regulatory fragmentation and uncertainty and not resolve the identified problem of a ‘complex patchwork of legal requirements’.

  • The AI WP does not provide any route towards sufficiently addressing the digital capabilities gap, or towards mitigating new risks to capabilities, such as deskilling, which create significant constraints on the likely effectiveness of the proposed approach.

Full citation: A Charlesworth, K Fotheringham, C Gavaghan, A Sanchez-Graells and C Torrible, ‘Response to the UK’s March 2023 White Paper "A pro-innovation approach to AI regulation"’ (June 19, 2023). Available at SSRN: https://ssrn.com/abstract=4477368.