In a new draft chapter for my monograph, I explore how, within the broader process of public sector digitalisation, and embroiled in the general ‘race for AI’ and ‘race for AI regulation’, public procurement plays two roles. In this post, I summarise the main arguments (all sources, including for quoted materials, are available in the draft chapter).
This chapter frames the analysis in the rest of the book and will be fundamental in the review of the other drafts, so comments would be most welcome (a.sanchez-graells@bristol.ac.uk).
Public sector digitalisation is accelerating in a regulatory vacuum
Around the world, the public sector is quickly adopting digital technologies in virtually every area of its activity, including the delivery of public services. States are not solely seeking to digitalise their public sector and public services with a view to enhancing their operation (internal goal), but are also increasingly willing to use the public sector and the construction of public infrastructure as sources of funding and spaces for digital experimentation, to promote broader technological development and boost national industries in a new wave of (digital) industrial policy (external goal). For example, the European Commission clearly seeks to make the ‘public sector a trailblazer for using AI’. This mirrors similar strategic efforts around the globe. The process of public sector digitalisation is thus embroiled in the broader race for AI.
Although this dynamic of public sector digitalisation raises significant regulatory risks and challenges, well-known problems in managing uncertainty in technology regulation—ie the Collingridge dilemma or pacing problem (‘cannot effectively regulate early on, so will probably regulate too late’)—and different normative positions interact with industrial policy considerations to create regulatory hesitation and side-line anticipatory approaches. This creates a regulatory gap—or rather a laissez faire environment—whereby the public sector is allowed to experiment with the adoption of digital technologies without clear checks and balances. The current strategy is by and large one of ‘experiment first, regulate later’. And while there is little to no regulation, there is significant experimentation and digital technology adoption by the public sector.
Despite the emergence of a ‘race for AI regulation’, there are very few attempts to regulate AI use in the public sector—with the EU’s proposed AI Act offering a (partial) exception—and general mechanisms (such as judicial review) are proving slow to adapt. The regulatory gap is thus likely to remain, at least partially, in the foreseeable future—not least because the effective functioning of new rules such as the EU AI Act will not be immediate.
Procurement emerges as a regulatory gatekeeper to plug that gap
In this context, proposals have started to emerge to use public procurement as a tool of digital regulation, that is, to use the acquisition of digital technologies by the public sector as a gateway to the ‘regulation by contract’ of their use and governance. Think tanks, NGOs, and academics alike have stressed that the ‘rules governing the acquisition of algorithmic systems by governments and public agencies are an important point of intervention in ensuring their accountable use’, and that procurement ‘is a central policy tool governments can deploy to catalyse innovation and influence the development of solutions aligned with government policy and society’s underlying values’. Public procurement is thus increasingly expected to play a crucial gatekeeping role in the adoption of digital technologies for public governance and the delivery of public services.
Procurement is thus seen as a mechanism of ‘regulation by contract’ whereby the public buyer can impose requirements seeking to achieve broad goals of digital regulation, such as transparency, trustworthiness, or explainability, or to operationalise more general ‘AI ethics’ frameworks. In more detail, the Council of Europe has recommended using procurement to: (i) embed requirements of data governance to avoid violations of human rights norms and discrimination stemming from faulty datasets used in the design, development, or ongoing deployment of algorithmic systems; (ii) ‘ensure that algorithmic design, development and ongoing deployment processes incorporate safety, privacy, data protection and security safeguards by design’; (iii) require ‘public, consultative and independent evaluations of the lawfulness and legitimacy of the goal that the [procured algorithmic] system intends to achieve or optimise, and its possible effects in respect of human rights’; (iv) require the conduct of human rights impact assessments; or (v) promote transparency of the ‘use, design and basic processing criteria and methods of algorithmic systems’.
Given the absence of generally applicable mandatory requirements for the development and use of digital technologies by the public sector in relation to some or all of the stated regulatory goals, the gatekeeping role of procurement in digital ‘regulation by contract’ would mostly involve the creation of self-standing obligations in relation to those goals—or at least the enforcement of emerging non-binding norms, such as those developed by (voluntary) standardisation bodies or, more generally, by the technology industry. In addition to creating risks of regulatory capture and commercial determination, this approach may overshadow the difficulties in using procurement to deliver the expected regulatory goals. A closer look at some selected putative goals of digital regulation by contract sheds light on the issue.
Procurement is not at all suited to deliver incommensurable goals of digital regulation
Some of the putative goals of digital regulation by contract are incommensurable. This is the case in particular of ‘trustworthiness’ or ‘responsibility’ in AI use in the public sector. Trustworthiness or responsibility in the adoption of AI can have several meanings, and defining what is ‘trustworthy AI’ or ‘responsible AI’ is in itself contested. This creates a risk of imprecision or generality, which could turn ‘trustworthiness’ or ‘responsibility’ into mere buzzwords—as well as exacerbate the problem of AI ethics-washing. As the EU approach to ‘trustworthy AI’ evidences, the overarching goals need to be broken down to be made operational. In the EU case, ‘trustworthiness’ is intended to cover three requirements: that AI be lawful, ethical, and robust. And each of these breaks down into more detailed or operationalisable requirements.
In turn, some of the goals into which ‘trustworthiness’ or ‘responsibility’ breaks down are also incommensurable. This is notably the case of ‘explainability’ or ‘interpretability’. There is no such thing as ‘the explanation’ required in relation to an algorithmic system, as explanations are (technically and legally) meant to serve different purposes. Consequently, the design of the explainability of an AI deployment needs to take into account factors such as the timing of the explanation, its (primary) audience, the level of granularity (eg general or model-level, group-based, or individual explanations), or the level of risk generated by the use of the technical solution. Moreover, there are different (and emerging) approaches to AI explainability, and their suitability may well be contingent upon the specific intended use or function of the explanation. And there are attributes or properties influencing the interpretability of a model (eg clarity) for which there are no evaluation metrics (yet?). Similar issues arise with other putative goals, such as the implementation of a principle of AI minimisation in the public sector.
Given the way procurement works, with its reliance on requirements that can be clearly specified, evaluated, and verified, it is ill-suited for the delivery of incommensurable goals of digital regulation.
Procurement is not well suited to deliver other goals of digital regulation
There are other goals of digital regulation by contract that are seemingly better suited to delivery through procurement, such as those relating to ‘technical’ characteristics (eg neutrality, interoperability, openness, or cyber security) or to procurement-adjacent algorithmic transparency. However, the operationalisation of such requirements in a procurement context will be dependent on a range of considerations, such as judgements on the need to keep information confidential, judgements on the state of the art or on what constitutes a proportionate and economically justified requirement, the generation of systemic effects that are hard to evaluate within the limits of a procurement procedure, or trade-offs between competing considerations. The extent to which procurement will be able to operationalise the desired goals of digital regulation will depend on its institutional embeddedness and on the suitability of procurement tools to impose specific regulatory approaches. Additional analysis conducted elsewhere (see here and here) suggests that the emerging approach to AI ‘regulation by contract’ cannot work well in relation to these regulatory goals either.
Procurement digitalisation offers a valuable case study
The theoretical analysis of the use of procurement as a tool of digital ‘regulation by contract’ (above) can be enriched and further developed with an in-depth case study of its practical operation in a discrete area of public sector digitalisation. To that effect, it is important to identify an area of public sector digitalisation that is primarily or solely left to ‘regulation by contract’ through procurement—to isolate it from the interaction with other tools of digital regulation (such as data protection, or sectoral regulation). It is also important for the chosen area to demonstrate a sufficient level of experimentation with digitalisation, so that the analysis is not a mere concretisation of theoretical arguments but is rather grounded in empirical insights.
Public procurement is itself an area of public sector activity susceptible to digitalisation. The adoption of digital tools is seen as a potential source of improvement and efficiency in the expenditure of public funds through procurement, especially through digital technology solutions developed in the context of supply chain management and other business operations in the private sector (or ‘ProcureTech’), but also through digital tools tailored to the specific goals of procurement regulation, such as the prevention of corruption or collusion. There is emerging evidence of experimentation in procurement digitalisation, which is shedding light on regulatory risks and challenges.
In view of its strategic importance and the current pace of procurement digitalisation, it is submitted that procurement is an appropriate site of public sector experimentation in which to explore the shortcomings of the approach to AI ‘regulation by contract’. Procurement is an adequate case study because, being a ‘back-office’ function, it does not concern uses of AI or other digital technologies that are likely to be considered high-risk, and because it is an area where data protection regulation is unlikely to provide a comprehensive regulatory framework (eg for decision automation), as the primary interactions are between public buyers and corporate institutions.
Procurement therefore currently represents an unregulated digitalisation space in which to test and further explore the effectiveness of the ‘regulation by contract’ approach to governing the transition to a new model of digital public governance.
* * * * * *
The full draft is available on SSRN as: Albert Sanchez-Graells, ‘The two roles of procurement in the transition towards digital public governance: procurement as regulatory gatekeeper and as site for public sector experimentation’ (March 10, 2023): https://ssrn.com/abstract=4384037.