UK's 'pro-innovation approach' to AI regulation won't do, particularly for public sector digitalisation

Regulating artificial intelligence (AI) has become the challenge of the time. This is a crucial area of regulatory development and there are increasing calls—including from those driving the development of AI—for robust regulatory and governance systems. In this context, more details have now emerged on the UK’s approach to AI regulation.

Swimming against the tide, and seeking to diverge from the EU’s regulatory agenda and the EU AI Act, the UK announced a light-touch ‘pro-innovation approach’ in its July 2022 AI regulation policy paper. In March 2023, the same approach was supported by a Report of the Government Chief Scientific Adviser (the ‘GCSA Report’), and is now further developed in the White Paper ‘AI regulation: a pro-innovation approach’ (the ‘AI WP’). The UK Government has launched a public consultation that will run until 21 June 2023.

Given the relevance of the issue, it can be expected that the public consultation will attract a large volume of submissions, and that the ‘pro-innovation approach’ will be heavily criticised. Indeed, there is an on-going preparatory Parliamentary Inquiry on the Governance of AI that has already collected a wealth of evidence exploring the pros and cons of the regulatory approach outlined there. Moreover, initial reactions eg by the Public Law Project, the Ada Lovelace Institute, or the Royal Statistical Society have been (to different degrees) critical of the lack of regulatory ambition in the AI WP—while, as could be expected, think tanks closely linked to the development of the policy, such as the Alan Turing Institute, have expressed more positive views.

Whether the regulatory approach will shift as a result of the expected pushback is unclear. However, given that the AI WP follows the same deregulatory approach first suggested in 2018 and is strongly entrenched politically and in policy terms—the UK Government has self-assessed this approach as ‘world leading’ and claims it will ‘turbocharge economic growth’—it is doubtful that much will change as a result of the public consultation.

That does not mean we should not engage with the public consultation; quite the opposite. In the face of the UK Government’s dereliction of duty, or lack of ideas, it is more important than ever that there is robust pushback against the deregulatory approach being pursued. This is especially true in the context of public sector digitalisation and the adoption of AI by the public administration and in the provision of public services, where the Government (unsurprisingly) is unwilling to create regulatory safeguards to protect citizens from its own action.

In this blogpost, I sketch my main areas of concern with the ‘pro-innovation approach’ in the GCSA Report and AI WP, which I will further develop for submission to the public consultation, building on earlier views. Feedback and comments would be gratefully received: a.sanchez-graells@bristol.ac.uk.

The ‘pro-innovation approach’ in the GCSA Report — squaring the circle?

In addition to proposals on the intellectual property (IP) regulation of generative AI, the opening up of public sector data, transport-related, or cyber security interventions, the GCSA Report focuses on ‘core’ regulatory and governance issues. The report stresses that regulatory fragmentation is one of the key challenges, as is the difficulty for the public sector in ‘attracting and retaining individuals with relevant skills and talent in a competitive environment with the private sector, especially those with expertise in AI, data analytics, and responsible data governance’ (at 5). The report also further hints at the need to boost public sector digital capabilities by stressing that ‘the government and regulators should rapidly build capability and know-how to enable them to positively shape regulatory frameworks at the right time’ (at 13).

Although the rationale is not very clearly stated, to bridge regulatory fragmentation and facilitate the pooling of digital capabilities from across existing regulators, the report makes a central proposal to create a multi-regulator AI sandbox (at 6-8). The report suggests that it could be convened by the Digital Regulatory Cooperation Forum (DRCF)—which brings together four key regulators (the Information Commissioner’s Office (ICO), Office of Communications (Ofcom), the Competition and Markets Authority (CMA) and the Financial Conduct Authority (FCA))—and that DRCF should look at ways of ‘bringing in other relevant regulators to encourage join up’ (at 7).

The report recommends that the AI sandbox should operate on the basis of a ‘commitment from the participant regulators to make joined-up decisions on regulations or licences at the end of each sandbox process and a clear feedback loop to inform the design or reform of regulatory frameworks based on the insights gathered. Regulators should also collaborate with standards bodies to consider where standards could act as an alternative or underpin outcome-focused regulation’ (at 7).

Therefore, the AI sandbox would not only be multi-regulator, but would also encompass (in some way) standard-setting bodies (presumably UK ones only, though), without any consideration of the issues raised by public-private interaction in decision-making involving the exercise of regulatory public powers, or of regulatory capture and the risk of commercial determination. The report is in general extremely industry-orientated, eg in stressing in relation to the overarching pacing problem that ‘for emerging digital technologies, the industry view is clear: there is a greater risk from regulating too early’ (at 5), without this being in any way balanced with clear (non-industry) views that the biggest risk is actually in regulating too late, and that we are collectively frog-boiling into a ‘runaway AI’ fiasco.

Moreover, confusingly, despite the fact that the sandbox would be hosted by DRCF (of which the ICO is a leading member), the GCSA Report indicates that the AI sandbox ‘could link closely with the ICO sandbox on personal data applications’ (at 8). The fact that the report is itself unclear as to whether eg AI applications with data protection implications should be subjected to one or two sandboxes, or the extent to which the general AI sandbox would need to be integrated with sectoral sandboxes for non-AI regulatory experimentation, already indicates the complexity and dubious practical viability of the suggested approach.

It is also unclear why multiple sector regulators should be involved in any given iteration of a single AI sandbox where there may be no projects within their regulatory remit and expertise. The alternative approach of having an open or rolling AI sandbox mechanism led by a single AI authority, which would then draw expertise and work in collaboration with the relevant sector regulator as appropriate on a per-project basis, seems preferable. While some DRCF members could be expected to have to participate in a majority of sandbox projects (eg CMA and ICO), others would probably have a much less constant presence (eg Ofcom, or certainly the FCA).

Remarkably, despite this recognition of the functional need for a centralised regulatory approach and a single point of contact (primarily for industry’s convenience), the GCSA Report implicitly supports the 2022 AI regulation policy paper’s approach to not creating an overarching cross-sectoral AI regulator. The GCSA Report tries to create a ‘non-institutionalised centralised regulatory function’, nested under DRCF. In practice, however, implementing the recommendation for a single AI sandbox would create the need for the further development of the governance structures of the DRCF (especially if it was to grow by including many other sectoral regulators), or whichever institution ‘hosted it’, or else risk creating a non-institutional AI regulator with the related difficulties in ensuring accountability. This would add a layer of deregulation to the deregulatory effect that the sandbox itself creates (see eg Ranchordas (2021)).

The GCSA Report seems to try to square the circle of regulatory fragmentation by relying on cooperation as a centralising regulatory device, but it does this solely for the industry’s benefit and convenience, without paying any consideration to the future effectiveness of the regulatory framework. This is hard to understand, given the report’s identification of conflicting regulatory constraints, or in its terminology ‘incentives’: ‘The rewards for regulators to take risks and authorise new and innovative products and applications are not clear-cut, and regulators report that they can struggle to trade off the different objectives covered by their mandates. This can include delivery against safety, competition objectives, or consumer and environmental protection, and can lead to regulator behaviour and decisions that prioritise further minimising risk over supporting innovation and investment. There needs to be an appropriate balance between the assessment of risk and benefit’ (at 5).

This not only frames risk-minimisation as a negative regulatory outcome (and further feeds into the narrative that precautionary regulatory approaches are somehow not legitimate because they run against industry goals—which deserves strong pushback, see eg Kaminski (2022)), but also shows a main gap in the report’s proposal for the single AI sandbox. If each regulator has conflicting constraints, what evidence (if any) is there that collaborative decision-making will reduce, rather than exacerbate, such regulatory clashes? Are decisions meant to be arrived at by majority voting or in any other way expected to deactivate (some or most) regulatory requirements in view of (perceived) gains in relation to other regulatory goals? Why has there been no consideration of eg the problems encountered by concurrency mechanisms in the application of sectoral and competition rules (see eg Dunne (2014), (2020) and (2021)), as an obvious and immediate precedent of the same type of regulatory coordination problems?

The GCSA Report also seems to assume that collaboration through the AI sandbox would be resource neutral for participating regulators, whereas it seems reasonable to presume that this additional layer of regulation (even if not institutionalised) would require further resources. And, in any case, there does not seem to be much consideration of the viability of asking resource-strapped regulators to create an AI sandbox where they can (easily) be out-skilled and over-powered by industry participants.

In my view, the GCSA Report already points at a significant weakness: the resistance to creating any new authorities, despite the obvious functional need for centralised regulation—which is also one of the main weaknesses, if not the single biggest weakness, of the AI WP—as well as at the lack of strategic planning around public sector digital capabilities, despite well-recognised challenges (see eg Committee of Public Accounts (2021)).

The ‘pro-innovation approach’ in the AI WP — a regulatory black hole, privatisation of AI regulation, or both?

The AI WP envisages an ‘innovative approach to AI regulation [that] uses a principles-based framework for regulators to interpret and apply to AI within their remits’ (para 36). It expects the framework to be ‘pro-innovation, proportionate, trustworthy, adaptable, clear and collaborative’ (para 37). As will become clear, however, such ‘innovative approach’ amounts to little more than the formulation of high-level, broad, open-textured and incommensurable principles to inform a soft law push towards the development of regulatory practices aligned with those principles in a highly fragmented and incomplete regulatory landscape.

The regulatory framework would be built on four planks (para 38): [i] an AI definition (paras 39-42); [ii] a context-specific approach (ie a ‘use-based’, rather than a ‘technology-led’, approach, see paras 45-47); [iii] a set of cross-sectoral principles to guide regulator responses to AI risks and opportunities (paras 48-54); and [iv] new central functions to support regulators in delivering the AI regulatory framework (paras 70-73). In reality, though, there will be only two ‘pillars’ of the regulatory framework, and they do not involve any new institutions or rules. The AI WP vision thus largely seems to be that AI can be regulated in the UK in a world-leading manner without doing much at all.

AI Definition

The UK’s definition of AI will trigger substantive discussions, especially as it seeks to build it around ‘the two characteristics that generate the need for a bespoke regulatory response’: ‘adaptivity’ and ‘autonomy’ (para 39). Discussing the definitional issue is beyond the scope of this post but, on the specific identification of the ‘autonomy’ of AI, it is worth highlighting that this is an arguably flawed regulatory approach to AI (see Soh (2023)).

No new institutions

The AI WP makes clear that the UK Government has no plans to create any new AI regulator, either with a cross-sectoral (eg general AI authority) or sectoral remit (eg an ‘AI in the public sector authority’, as I advocate for). The Ministerial Foreword to the AI WP already stresses that ‘[t]o ensure our regulatory framework is effective, we will leverage the expertise of our world class regulators. They understand the risks in their sectors and are best placed to take a proportionate approach to regulating AI’ (at p2). The AI WP further stresses that ‘[c]reating a new AI-specific, cross-sector regulator would introduce complexity and confusion, undermining and likely conflicting with the work of our existing expert regulators’ (para 47). This however seems to presume that a new cross-sector AI regulator would be unable to coordinate with existing regulators, despite the institutional architecture of the regulatory framework foreseen in the AI WP entirely relying on inter-regulator collaboration (!).

No new rules

There will also not be new legislation underpinning regulatory activity, although the Government claims that the AI WP, ‘alongside empowering regulators to take a lead, [is] also setting expectations’ (at p3). The AI WP claims to develop a regulatory framework underpinned by five principles to guide and inform the responsible development and use of AI in all sectors of the economy: [i] Safety, security and robustness; [ii] Appropriate transparency and explainability; [iii] Fairness; [iv] Accountability and governance; and [v] Contestability and redress (para 10). However, they will not be put on a statutory footing (initially); ‘the principles will be issued on a non-statutory basis and implemented by existing regulators’ (para 11). While there is some detail on the intended meaning of these principles (see para 52 and Annex A), the principles necessarily lack precision and, worse, there is a conflation of the principles with other (existing) regulatory requirements.

For example, it is surprising that the AI WP describes fairness as implying that ‘AI systems should (sic) not undermine the legal rights of individuals or organisations, discriminate unfairly against individuals or create unfair market outcomes‘ (emphasis added), and stresses the expectation ‘that regulators’ interpretations of fairness will include consideration of compliance with relevant law and regulation’ (para 52). This encapsulates the risks that principles-based AI regulation ends up eroding compliance with and enforcement of current statutory obligations. A principle of AI fairness cannot modify or exclude existing legal obligations, and it should not risk doing so either.

Moreover, the AI WP suggests that, even if the principles are supported by a statutory duty for regulators to have regard to them, ‘while the duty to have due regard would require regulators to demonstrate that they had taken account of the principles, it may be the case that not every regulator will need to introduce measures to implement every principle’ (para 58). This conflates two issues. On the one hand, the need for activity subjected to regulatory supervision to comply with all principles and, on the other, the need for a regulator to take corrective action in relation to any of the principles. It should be clear that regulators have a duty to ensure that all principles are complied with in their regulatory remit, which does not seem to entirely or clearly follow from the weaker duty to have due regard to the principles.

Perpetuating regulatory gaps, in particular regarding public sector digitalisation

As a consequence of the decision not to create new regulators and the absence of new legislation, it is unclear whether the ‘regulatory strategy’ in the AI WP will have any real-world effects within existing regulatory frameworks, especially as the most ambitious intervention is to create ‘a statutory duty on regulators requiring them to have due regard to the principles’ (para 12)—but the Government may decide not to introduce it if ‘monitoring of the effectiveness of the initial, non-statutory framework suggests that a statutory duty is unnecessary’ (para 59).

However, what is already clear is that there is no new AI regulation on the horizon, despite the fact that the AI WP recognises that ‘some AI risks arise across, or in the gaps between, existing regulatory remits’ (para 27), that ‘there may be AI-related risks that do not clearly fall within the remits of the UK’s existing regulators’ (para 64), and the obvious and worrying existence of high risks to fundamental rights and values (para 4 and paras 22-25). The AI WP is naïve, to say the least, in setting out that ‘[w]here prioritised risks fall within a gap in the legal landscape, regulators will need to collaborate with government to identify potential actions. This may include identifying iterations to the framework such as changes to regulators’ remits, updates to the Regulators’ Code, or additional legislative intervention’ (para 65).

Hoping that such risk identification and gap analysis will take place without assigning specific responsibility for it—and seeking to exempt the Government from such responsibility—seems a bit too much to ask. In fact, this is at odds with the graphic depiction of how the AI WP expects the system to operate. As noted in (1) in the graph below, it is clear that the identification of risks that are cross-cutting or new (unregulated) risks that warrant intervention is assigned to a ‘central risk function’ (more below), not to the regulators. Importantly, the AI WP indicates that such central function ‘will be provided from within government’ (para 15 and below). This then raises two questions: (a) who, if anyone, will have the responsibility to proactively screen for such risks; and (b) how has the Government not already taken action to close the gaps it recognises exist in the current legal landscape?

AI WP Figure 2: Central risks function activities.

This perpetuates the current regulatory gaps, in particular in sectors without a regulator or with regulators with very narrow mandates—such as the public sector and, to a large extent, public services. Importantly, this approach does not create any prohibition of impermissible AI uses, nor does it set any (workable) minimum requirements for the deployment of AI in high-risk uses, especially in the public sector. The contrast with the EU AI Act could not be starker and, in this aspect in particular, UK citizens should be very worried that the UK Government is not committing to any safeguards on the way technology can be used in eg determining access to public services, or by the law enforcement and judicial system. More generally, it is very worrying that the AI WP does not foresee any safeguards in relation to the quickly accelerating digitalisation of the public sector.

Loose central coordination leading to AI regulation privatisation

Remarkably, and in a functional disconnect similar to that of the GCSA Report (above), the decision not to create any new regulator/s (para 15) is taken in the same breath as the AI WP recognises that the small coordination layer within the regulatory architecture proposed in the 2022 AI regulation policy paper (ie, largely, the approach underpinning the DRCF) has been heavily criticised (para 13). The AI WP recognises that ‘the DRCF was not created to support the delivery of all the functions we have identified or the implementation of our proposed regulatory framework for AI’ (para 74).

The AI WP also stresses how ‘[w]hile some regulators already work together to ensure regulatory coherence for AI through formal networks like the AI and digital regulations service in the health sector and the Digital Regulation Cooperation Forum (DRCF), other regulators have limited capacity and access to AI expertise. This creates the risk of inconsistent enforcement across regulators. There is also a risk that some regulators could begin to dominate and interpret the scope of their remit or role more broadly than may have been intended in order to fill perceived gaps in a way that increases incoherence and uncertainty’ (para 29), which points at a strong functional need for a centralised approach to AI regulation.

To try and mitigate those regulatory risks and shortcomings, the AI WP proposes the creation of ‘a number of central support functions’, such as [i] central monitoring of the overall regulatory framework’s effectiveness and of the implementation of the principles; [ii] central risk monitoring and assessment; [iii] horizon scanning; [iv] supporting testbeds and sandboxes; [v] advocacy, education and awareness-raising initiatives; or [vi] promoting interoperability with international regulatory frameworks (para 14, see also para 73). Cryptically, the AI WP indicates that ‘central support functions will initially be provided from within government but will leverage existing activities and expertise from across the broader economy’ (para 15). Quite how this can be effectively done outwith a clearly defined, adequately resourced and durable institutional framework is anybody’s guess. In fact, the AI WP recognises that this approach ‘needs to evolve’ and that Government needs to understand how ‘existing regulatory forums could be expanded to include the full range of regulators’, what ‘additional expertise government may need’, and the ‘most effective way to convene input from across industry and consumers to ensure a broad range of opinions’ (para 77).

While the creation of a regulator seems a rather obvious answer to all these questions, the AI WP has rejected it in unequivocal terms. Is the AI WP a U-turn waiting to happen? Is the mention that ‘[a]s we enter a new phase we will review the role of the AI Council and consider how best to engage expertise to support the implementation of the regulatory framework’ (para 78) a placeholder for an imminent project to rejig the AI Council and turn it into an AI regulator? What is the place and role of the Office for AI and the Centre for Data Ethics and Innovation in all this?

Moreover, the AI WP indicates that the ‘proposed framework is aligned with, and supplemented by, a variety of tools for trustworthy AI, such as assurance techniques, voluntary guidance and technical standards. Government will promote the use of such tools’ (para 16). Relatedly, the AI WP relies on those mechanisms to avoid addressing issues of accountability across the AI life cycle, indicating that ‘[t]ools for trustworthy AI like assurance techniques and technical standards can support supply chain risk management. These tools can also drive the uptake and adoption of AI by building justified trust in these systems, giving users confidence that key AI-related risks have been identified, addressed and mitigated across the supply chain’ (para 84). Those tools are discussed in much more detail in part 4 of the AI WP (paras 106 ff). Annex A also creates a backdoor for technical standards to directly become the operationalisation of the general principles on which the regulatory framework is based, by explicitly identifying standards regulators may want to consider ‘to clarify regulatory guidance and support the implementation of risk treatment measures’.

This approach to the offloading of tricky regulatory issues to the emergence of private-sector led standards is simply an exercise in the transfer of regulatory power to those setting such standards, guidance and assurance techniques and, ultimately, a privatisation of AI regulation.

A different approach to sandboxes and testbeds?

The Government will take forward the GCSA recommendation to establish a regulatory sandbox for AI, which ‘will bring together regulators to support innovators directly and help them get their products to market. The sandbox will also enable us to understand how regulation interacts with new technologies and refine this interaction where necessary’ (p2). This is thus bound to hardwire some of the issues mentioned above in relation to the GCSA proposal, and it is reflective of the general pro-industry approach of the AI WP, which is obvious in the framing that regulators are expected to ‘support innovators directly and help them get their products to market’. Industrial policy seems to be shoehorned and mainstreamed across all areas of regulatory activity, at least in relation to AI (but it can then easily bleed into non-AI-related regulatory activities).

While the AI WP indicates the commitment to implement the AI sandbox recommended in the GCSA Report, it is by no means clear that the implementation will follow the approach proposed in the report (ie a multi-regulator sandbox nested under the DRCF, with the expectation that it would develop a crucial coordination and regulatory centralisation effect). The AI WP indicates that the Government still has to explore ‘what service focus would be most useful to industry’ in relation to AI sandboxes (para 96), but it sets out the intention to ‘focus an initial pilot on a single sector, multiple regulator sandbox’ (para 97), which diverges from the GCSA Report’s proposal for a ‘multiple sectors, multiple regulators’ sandbox. While the public consultation intends to gather feedback on which industry sector is the most appropriate, I would bet that the financial services sector will be chosen and that the ‘regulatory innovation’ will simply result in some closer cooperation between the ICO and FCA.

Regulator capabilities — AI regulation on a shoestring?

The AI WP turns to the issue of regulator capabilities and stresses that ‘While our approach does not currently involve or anticipate extending any regulator’s remit, regulating AI uses effectively will require many of our regulators to acquire new skills and expertise’ (para 102), and that the Government has ‘identified potential capability gaps among many, but not all, regulators’ (para 103).

To try to (start to) address this fundamental issue in the context of a devolved and decentralised regulatory framework, the AI WP indicates that the Government will explore, for example, whether it is ‘appropriate to establish a common pool of expertise that could establish best practice for supporting innovation through regulatory approaches and make it easier for regulators to work with each other on common issues. An alternative approach would be to explore and facilitate collaborative initiatives between regulators – including, where appropriate, further supporting existing initiatives such as the DRCF – to share skills and expertise’ (para 105).

While the creation of ‘common regulatory capacity’ has been advocated by the Alan Turing Institute, and while this (or inter-regulator secondments, for example) could be a short-term fix, it seems that this tries to address the obvious challenge of adequately resourcing regulatory bodies without a medium- and long-term strategy to build up the digital capability of the public sector, and thus to perpetuate the current approach of AI regulation on a shoestring. The governance and organisational implications arising from the creation of a common pool of expertise need careful consideration, in particular as some of the likely dysfunctionalities are only marginally smaller than the current over-reliance on external consultants, or the ‘salami-slicing’ approach to regulatory and policy interventions that seems to bleed from the ‘agile’ management of technological projects into the realm of regulatory activity—which, however, requires institutional memory and the embedding of knowledge and expertise.

Digital procurement, PPDS and multi-speed datafication — some thoughts on the March 2023 PPDS Communication

The 2020 European strategy for data earmarked public procurement as a high-priority area for the development of common European data spaces for public administrations. The 2020 data strategy stressed that

Public procurement data are essential to improve transparency and accountability of public spending, fighting corruption and improving spending quality. Public procurement data is spread over several systems in the Member States, made available in different formats and is not easily possible to use for policy purposes in real-time. In many cases, the data quality needs to be improved.

To address those issues, the European Commission was planning to ‘Elaborate a data initiative for public procurement data covering both the EU dimension (EU datasets, such as TED) and the national ones’ by the end of 2020, which would be ‘complemented by a procurement data governance framework’ by mid 2021.

With a 2+ year delay, details for the creation of the public procurement data space (PPDS) were disclosed by the European Commission on 16 March 2023 in the PPDS Communication. The procurement data governance framework is now planned to be developed in the second half of 2023.

In this blog post, I offer some thoughts on the PPDS, its functional goals, likely effects, and the quickly closing window of opportunity for Member States to support its feasibility through an ambitious implementation of the new procurement eForms at domestic level (on which see earlier thoughts here).

1. The PPDS Communication and its goals

The PPDS Communication sets some lofty ambitions aligned with those of the closely-related process of procurement digitalisation, which the European Commission in its 2017 Making Procurement Work In and For Europe Communication already saw as not only an opportunity ‘to streamline and simplify the procurement process’, but also ‘to rethink fundamentally the way public procurement, and relevant parts of public administrations, are organised … [to seize] a unique chance to reshape the relevant systems and achieve a digital transformation’ (at 11-12).

Following the same rhetoric of transformation, the PPDS Communication now stresses that ‘Integrated data combined with the use of state-of-the-art and emerging analytics technologies will not only transform public procurement, but also give new and valuable insights to public buyers, policy-makers, businesses and interested citizens alike’ (at 2). It goes further to suggest that ‘given the high number of ecosystems concerned by public procurement and the amount of data to be analysed, the impact of AI in this field has a potential that we can only see a glimpse of so far’ (at 2).

The PPDS Communication claims that this data space ‘will revolutionise the access to and use of public procurement data:

  • It will create a platform at EU level to access for the first time public procurement data scattered so far at EU, national and regional level.

  • It will considerably improve data quality, availability and completeness, through close cooperation between the Commission and Member States and the introduction of the new eForms, which will allow public buyers to provide information in a more structured way.

  • This wealth of data will be combined with an analytics toolset including advanced technologies such as Artificial Intelligence (AI), for example in the form of Machine Learning (ML) and Natural Language Processing (NLP).’

A first observation is that this rhetoric of transformation and revolution not only creates excessive expectations about what the PPDS can realistically deliver, but can also further fuel the ‘policy irresistibility’ of procurement digitalisation and thus eg generate excessive experimentation with, or investment in, the deployment of digital technologies on the basis of such expectations around data access through the PPDS (for discussion, see here). Policy-makers would do well to hold off on any investments and pilot projects seeking to exploit the data presumptively pooled in the PPDS until after its implementation. A closer look at the PPDS and the significant roadblocks to its full implementation will shed further light on this issue.

2. What is the PPDS?

Put simply, the PPDS is a project to create a single data platform to bring into one place ‘all procurement data’ from across the EU—ie both data on above threshold contracts subjected to mandatory EU-wide publication through TED (via eForms from October 2023), and data on below threshold contracts, the publication of which may be required by the domestic laws of the Member States, or be entirely voluntary for contracting authorities.

Given that above threshold procurement data is already (in the process of being) captured at EU level, the PPDS is very much about data on procurement not covered by the EU rules—which represents 80% of all public procurement contracts. As the PPDS Communication stresses:

To unlock the full potential of public procurement, access to data and the ability to analyse it are essential. However, data from only 20% of all call for tenders as submitted by public buyers is available and searchable for analysis in one place [ie TED]. The remaining 80% are spread, in different formats, at national or regional level and difficult or impossible to re-use for policy, transparency and better spending purposes. In order (sic) words, public procurement is rich in data, but poor in making it work for taxpayers, policy makers and public buyers.

The PPDS thus intends to develop a ‘technical fix’ to gain a view on the below-threshold reality of procurement across the EU, by ‘pulling and pooling’ data from existing (and to be developed) domestic public contract registers and transparency portals. The PPDS is thus a mechanism for the aggregation of procurement data currently not available in (harmonised) machine-readable and structured formats (or at all).

As the PPDS Communication makes clear, it consists of four layers:

  • (1) a user interface layer (ie a website and/or app), underpinned by

  • (2) an analytics layer, which in turn is underpinned by

  • (3) an integration layer, which brings together and minimally quality-assures

  • (4) the data layer, sourced from TED, Member State public contract registers (including those at sub-national level), and data from other sources (eg data on beneficial ownership).

The two top layers condense all potential advantages of the PPDS, with the analytics layer seeking to develop a ‘toolset including emerging technologies (AI, ML and NLP)‘ to extract data insights for a multiplicity of purposes (see below 3), and the top user interface seeking to facilitate differential data access for different types of users and stakeholders (see below 4). The two bottom layers, and in particular the data layer, are the ones doing all the heavy lifting. Unavoidably, without data, the PPDS risks being little more than an empty shell. As always, ‘no data, no fun’ (see below 5).

Importantly, the top three layers are centralised and the European Commission has responsibility (and funding) for developing them, while the bottom data layer is decentralised, with each Member State retaining responsibility for digitalising its public procurement systems and connecting its data sources to the PPDS. Member States are also expected to bear their own costs, although there is EU funding available through different mechanisms. This allocation of responsibilities follows the limited competence of the EU in this area of inter-administrative cooperation, which unfortunately heightens the risks of the PPDS becoming little more than an empty shell, unless Member States really take the implementation of eForms and the collaborative approach to the construction of the PPDS seriously (see below 6).

The PPDS Communication foresees a progressive implementation of the PPDS, with the goal of having ‘the basic architecture and analytics toolkit in place and procurement data published at EU level available in the system by mid-2023. By the end of 2024, all participating national publication portals would be connected, historic data published at EU level integrated and the analytics toolkit expanded. As of 2025, the system could establish links with additional external data sources’ (at 2). It will most likely be delayed, but that is not very important in the long run—especially as the already accrued delays are the ones that pose a significant limitation on the adequate rollout of the PPDS (see below 6).

3. PPDS’ expected functionality

The PPDS Communication sets expectations around the functionality that could be extracted from the PPDS by different agents and stakeholders.

For public buyers, in addition to reducing the burden of complying with different types of (EU-mandated) reporting, the PPDS Communication expects that ‘insights gained from the PPDS will make it much easier for public buyers to

  • team up and buy in bulk to obtain better prices and higher quality;

  • generate more bids per call for tenders by making calls more attractive for bidders, especially for SMEs and start-ups;

  • fight collusion and corruption, as well as other criminal acts, by detecting suspicious patterns;

  • benchmark themselves more accurately against their peers and exchange knowledge, for instance with the aim of procuring more green, social and innovative products and services;

  • through the further digitalisation and emerging technologies that it brings about, automate tasks, bringing about considerable operational savings’ (at 2).

This largely maps onto my analysis of likely applications of digital technologies for procurement management, assuming the data is there (see here).

The PPDS Communication also expects that policy-makers will ‘gain a wealth of insights that will enable them to predict future trends’; that economic operators, and SMEs in particular, ‘will have an easy-to-use portal that gives them access to a much greater number of open call for tenders with better data quality’; and that ‘Citizens, civil society, taxpayers and other interested stakeholders will have access to much more public procurement data than before, thereby improving transparency and accountability of public spending’ (at 2).

Of all the expected benefits or functionalities, the most important ones are those attributed to public buyers and, in particular, the possibility of developing ‘category management’ insights (eg potential savings or benchmarking), systems of red flags in relation to corruption and collusion risks, and the automation of some tasks. However, unlocking most of these functionalities is not dependent on the PPDS, but rather on the existence of procurement data at the ‘right’ level.

For example, category management or benchmarking may be more relevant or adequate (as well as more feasible) at national than at supra-national level, and the development of systems of red flags can also take place at below-EU level, as can automation. Importantly, the development of such functionalities using pan-EU data, or data concerning more than one Member State, could bias the tools in a way that makes them less suited, or unsuitable, for deployment at national level (eg if the AI is trained on data concerning solely jurisdictions other than the one where it would be deployed).

In that regard, the expected functionalities arising from the PPDS require some further thought, and it may well be that, depending on implementation (in particular in relation to multi-speed datafication, as below 5), Member States are better off solely using domestic data rather than data coming from the PPDS. This is to say that the PPDS is not a solid reality, and that its enabling character will fluctuate with its implementation.

4. Differential procurement data access through PPDS

As mentioned above, the PPDS Communication stresses that ‘Citizens, civil society, taxpayers and other interested stakeholders will have access to much more public procurement data than before, thereby improving transparency and accountability of public spending’ (at 2). However, this does not mean that the PPDS will be (entirely) open data.

The Communication itself makes clear that ‘Different user categories (e.g. Member States, public buyers, businesses, citizens, NGOs, journalists and researchers) will have different access rights, distinguishing between public and non-public data and between participating Member States that share their data with the PPDS (PPDS members, …) and those that need more time to prepare’ (at 8). Relatedly, ‘PPDS members will have access to data which is available within the PPDS. However, even those Member States that are not yet ready to participate in the PPDS stand to benefit from implementing the principles below, due to their value for operational efficiency and preparing for a more evidence-based policy’ (at 9). This raises two issues.

First, and rightly, the Communication makes clear that the PPDS moves away from a model of ‘fully open’ or ‘open by default’ procurement data, and that access to the PPDS will require differential permissioning. This is the correct approach. Regardless of the future procurement data governance framework, it is clear that the emerging thicket of EU data governance rules ‘requires the careful management of a system of multi-tiered access to different types of information at different times, by different stakeholders and under different conditions’ (see here). This will however raise significant issues for the implementation of the PPDS, as it will generate some constraints on, or disincentives for, an ambitious implementation of eForms at national level (see below 6).
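To make the idea of multi-tiered access concrete, a toy sketch of differential permissioning might look as follows. The roles, field names, values and access rules are my own illustrative assumptions, not the PPDS specification:

```python
# Hypothetical procurement record; field names and values are invented.
RECORD = {
    'tender_id': '2023/S 001-000001',
    'buyer': 'Ministry X',
    'cpv_code': '72000000',
    'winning_price': 9_500_000.0,  # sensitive: not (immediately) public
}

# Different user categories see different subsets of the data.
VISIBLE_FIELDS = {
    'citizen': {'tender_id', 'buyer', 'cpv_code'},
    'ppds_member': {'tender_id', 'buyer', 'cpv_code', 'winning_price'},
    'commission': {'tender_id', 'buyer', 'cpv_code', 'winning_price'},
}

def view(record: dict, role: str) -> dict:
    """Return only the fields the given role is permitted to access."""
    allowed = VISIBLE_FIELDS.get(role, set())
    return {k: v for k, v in record.items() if k in allowed}

print('winning_price' in view(RECORD, 'citizen'))      # False
print('winning_price' in view(RECORD, 'ppds_member'))  # True
```

The sketch also shows why permissioning decisions are policy decisions: every entry in the visibility table encodes a judgement about who may see what, and when.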

Second, and less clearly, the PPDS Communication evidences that not all Member States will automatically have equal access to PPDS data. The design seems to be such that Member States that do not feed data into PPDS will not have access to it. While this could be conceived as an incentive for all Member States to join PPDS, this outcome is by no means guaranteed. As above (3), it is not clear that Member States will be better off—in terms of their ability to extract data insights or to deploy digital technologies—by having access to pan-EU data. The main benefit resulting from pan-EU data only accrues collectively and, primarily, by means of facilitating oversight and enforcement by the European Commission. From that perspective, the incentives for PPDS participation for any given Member State may be quite warped or internally contradictory.

Moreover, given that plugging into PPDS is not cost-free, a Member State that developed a data architecture not immediately compatible with PPDS may well wonder whether it made sense to shoulder the additional costs and risks. From that perspective, it can only be hoped that the existence of EU funding and technical support will be maximised by the European Commission to offload that burden from the (reluctant) Member States. However, even then, full PPDS participation by all Member States will still not dispel the risk of multi-speed datafication.

5. No data, no fun — and multi-speed datafication

Related to the risk that some EU Member States will become PPDS members and others not, there is a risk (or rather, a reality) that not all PPDS members will equally contribute data—thus creating multi-speed datafication, even within the Member States that opt in to the PPDS.

First, the PPDS Communication makes it clear that ‘Member States will remain in control over which data they wish to share with the PPDS (beyond the data that must be published on TED under the Public Procurement Directives)’ (at 7). It further specifies that ‘With the eForms, it will be possible for the first time to provide data in notices that should not be published, or not immediately. This is important to give assurance to public buyers that certain data is not made publicly available or not before a certain point in time (e.g. prices)’ (at 7, fn 17).

This means that each Member State will only have to plug whichever data it captures, and decides to share, into the PPDS. It is plain to see that this will result in different approaches to data capture, multiple levels of granularity, and varying approaches to restricting access to the data in the different Member States, especially bearing in mind that ‘eForms are not an “off the shelf” product that can be implemented only by IT developers. Instead, before developers start working, procurement policy decision-makers have to make a wide range of policy decisions on how eForms should be implemented’ in the different Member States (see eForms Implementation Handbook, at 9).
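A stylised example may help to visualise the resulting ‘multi-speed’ dataset. In this invented scenario, two notional Member States implement eForms with different optional fields, so completeness varies across the pooled data (all names and figures are hypothetical):

```python
# Notional national datasets reflecting different eForms implementation choices.
state_a = [
    {'buyer': 'A1', 'value_eur': 100_000.0, 'subcontracted': True},
    {'buyer': 'A2', 'value_eur': 250_000.0, 'subcontracted': False},
]
state_b = [
    {'buyer': 'B1'},  # value withheld; optional fields not captured at all
    {'buyer': 'B2'},
]

pooled = state_a + state_b

def completeness(records: list[dict], field: str) -> float:
    """Share of pooled records in which a given field is actually usable."""
    usable = sum(1 for r in records if r.get(field) is not None)
    return usable / len(records)

# Any pan-EU analytics tool inherits this unevenness.
print(completeness(pooled, 'buyer'))      # 1.0
print(completeness(pooled, 'value_eur'))  # 0.5
```

On these (invented) numbers, half the pooled records are unusable for any value-based analysis, which is precisely the sense in which uneven national implementation choices would degrade the PPDS as a whole.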

Second, the PPDS Communication is clear (in a footnote) that ‘One of the conditions for a successful establishment of the PPDS is that Member States put in place automatic data capture mechanisms, in a first step transmitting data from their national portals and contract registers’ (at 4, fn 10). This implies that Member States may need to move away from manually inputted information and that those seeking to create new mechanisms for automatic procurement data capture can take an incremental approach, which is very much baked into the PPDS design. This relates, for example, to the distinction between pre- and post-award procurement data, with pre-award data subjected to higher demands under EU law. It also relates to above and below threshold data, as only above threshold data is subjected to mandatory eForms compliance.

In the end, the extent to which a (willing) Member State will contribute data to the PPDS depends on its decisions on eForms implementation, which should be well underway given the October 2023 deadline for mandatory use (for above threshold contracts). Crucially, Member States contributing more data may feel let down when no comparable data is contributed to the PPDS by other Member States, which can well operate as a disincentive to contributing any further data, rather than as an incentive for the others to match that level of contribution.

6. Ambitious eForms implementation as the PPDS’ Achilles heel

As the analysis above has shown, the viability of the PPDS and its fitness for purpose (especially for EU-level oversight and enforcement purposes) crucially depend on the Member States deciding to take an ambitious approach to the implementation of eForms, not solely by maximising their flexibility for voluntary uses (as discussed here) but, crucially, by extending their mandatory use (under national law) to all below threshold procurement. It is now also clear that there is a need for as much homogeneity as possible in the implementation of eForms, in order to guarantee that the information plugged into the PPDS is comparable, which is an aspect of data quality that the PPDS Communication does not seem to have considered at all.

It seems that, due to competing timings, this poses a bit of a problem for the rollout of the PPDS. While eForms need to be fully implemented domestically by October 2023, the PPDS Communication suggests that the connection of national portals will be a matter for 2024, as the first part of the project will concern the top two layers and data connection will follow (or, at best, be developed in parallel). Somehow, it feels like the PPDS is being built without a strong enough foundation. It would be a shame (to put it mildly) if Member States having completed a transition to eForms by October 2023 were dissuaded from a second transition into a more ambitious eForms implementation in 2024 for the purposes of the PPDS.

Given that the most likely approach to eForms implementation is rather minimalistic, it can well be that the PPDS results in not much more than an empty shell with fancy digital analytics limited to very superficial uses. In that regard, the two-year delay in progressing the PPDS has created a very narrow (and quickly dwindling) window of opportunity for Member States to engage with an ambitious process of eForms implementation.

7. Final thoughts

It seems to me that limited and slow progress will be attained under the PPDS in the coming years. Given the undoubted value of harnessing procurement data, I sense that Member States will progress domestically, but primarily in specific settings such as that of their central purchasing bodies (see here). However, it seems less likely that they will be onboarded onto the PPDS as enthusiastic members.

The scenario seems to resemble limited voluntary cooperation in other areas (eg interoperability; for discussion see here). It may well be that the logic of EU competence allocation required this tentative step as a first move towards a more robust and proactive approach by the Commission in a few years, on grounds that the goal of creating the European data space could not be achieved through this less interventionist approach.

However, given the speed at which digital transformation could take place (and is taking place in some parts of the EU), and the rhetoric of transformation and revolution that keeps being used in this policy area, I can’t but feel let down by the approach in the PPDS Communication, which started with the decision to build the eForms on the existing regulatory framework, rather than more boldly seeking a reform of the EU procurement rules to facilitate their digital fitness.

Should FTAs open and close (or only open) international procurement markets?

I have recently had some interesting discussions on the role of Free Trade Agreements (FTAs) in liberalising procurement-related international trade. The standard analysis is that FTAs serve the purpose of reciprocally opening up procurement markets to the economic operators of the signatory parties, and that the parties negotiate them on the basis of reciprocal concessions so that the market access given by A to economic operators from B roughly equates that given by B to economic operators from A (or is offset by other concessions from B in other chapters of the FTA with an imbalance in A’s favour).

Implicitly, this analysis assumes that A’s and B’s markets are (relatively) closed to third party economic operators, and that they will remain as such. The more interesting question is, though, whether FTAs should also close procurement markets to non-signatory parties in order to (partially) protect the concessions mutually granted, as well as to put pressure for further procurement-related trade liberalisation.

Let’s imagine that A, a party with several existing FTAs with third countries covering procurement, manages to negotiate the first FTA signed by B liberalising the latter’s procurement markets. It could seem that economic operators from A would have preferential access to B’s markets over any other economic operators (other than B’s, of course).

However, it can well be that, in practice, once the protectionist boat has sailed, B decides to entertain tenders coming from economic operators from C, D … Z for, once B’s domestic industries are not protected, B’s public buyers may well want to browse through the entire catalogue offered by the world market—especially if A does not have the most advanced industry for a specific type of good, service or technology (and I have a hunch this may well be a future scenario concerning digital technologies and AI in particular).

A similar issue can well arise where B already has other FTAs covering procurement and this generates a situation where it is difficult or complex for B’s public buyers to assess whether an economic operator from X does or does not have guaranteed market access under the existing FTAs, which can well result in B’s public buyers granting access to economic operators of any origin to avoid the legal risks resulting from an incorrect analysis of the existing legal commitments (once open for some, de facto open for all).

I am sure there are more situations in which the apparent preferential access given by B to A in the notional FTA can be quickly eroded, despite assumptions on how international trade liberalisation operates under FTAs. This raises the question of whether A should include in its FTAs a clause binding B (and itself!) to unequal treatment (ie exclusion) of economic operators not covered by FTAs (either existing or future) or multilateral agreements. In that way, the concessions given by B to A may be more meaningful and long-lasting, or at least put pressure on third countries to participate in bilateral (and multilateral — looking at the WTO GPA) procurement-related liberalisation efforts.

In the EU’s specific case, the adoption of such requirements in its FTAs covering procurement would be aligned with the policy underlying the 2019 guidelines on third country access to procurement markets, the International Procurement Instrument, and the Foreign Subsidies Regulation.

It may be counter-intuitive that instruments of trade liberalisation should seek to close (or rather keep closed) some markets, but I think this is an issue where FTAs could be used more effectively not only to bilaterally liberalise trade, but also to generate further dynamics of trade liberalisation—or at least to avoid the erosion of bilateral commitments in situations of regulatory complexity or market dynamics pushing for ‘off-the-books’ liberalisation through the practical acceptance of tenders coming from anywhere.

This is an issue I would like to explore further after my digital tech procurement book, so I would be more than interested in thoughts and comments!

Revisiting the Fosen-Linjen Saga on threshold for procurement damages

I had the honour of being invited to contribute to a future publication to celebrate the EFTA Court’s 30th Anniversary in 2024. I was asked to revisit the Fosen-Linjen Saga on the EFTA Court’s interpretation of the threshold for liability in damages arising from breaches of EU/EEA procurement law.

The abstract of my chapter is as follows:

The 2017-2019 Fosen-Linjen Saga saw the EFTA Court issue diametrically opposed views on the threshold for damages liability arising from breaches of EEA/EU public procurement law. Despite the arguably clear position under EU law following the European Court of Justice’s 2010 Judgment in Spijker—ie that liability in damages under the Remedies Directive only arises when the breach is ‘sufficiently serious’—Fosen-Linjen I stated that a ‘simple breach of public procurement law is in itself sufficient to trigger the liability of the contracting authority’. Such an approach would have created divergence between EEA and EU procurement law and generated undesired effects on the administration of procurement procedures and excessive litigation. Moreover, Fosen-Linjen I showed significant internal and external inconsistencies, which rendered it an unsafe interpretation of the existing rules, tainted by judicial activism on the part of the EFTA Court under its then current composition. Taking the opportunity of a rare second referral, and under a different Court composition, Fosen-Linjen II U-turned and stated that the Remedies Directive ‘does not require that any breach of the rules governing public procurement in itself is sufficient to award damages’. This realigned EEA law with EU law in compliance with the uniform interpretation goal to foster legal homogeneity. This chapter revisits the Fosen-Linjen Saga and offers additional reflections on its implications, especially for a long-overdue review of the Remedies Directive.

The full chapter is available as: A Sanchez-Graells, ‘The Fosen-Linjen Saga: not so simple after all?’ in The EFTA Court and the EEA: 30 Years On (Oxford, Hart Publishing, forthcoming): https://ssrn.com/abstract=4388938.

Two roles of procurement in public sector digitalisation: gatekeeping and experimentation

In a new draft chapter for my monograph, I explore how, within the broader process of public sector digitalisation, and embroiled in the general ‘race for AI’ and ‘race for AI regulation’, public procurement has two roles. In this post, I summarise the main arguments (all sources, included for quoted materials, are available in the draft chapter).

This chapter frames the analysis in the rest of the book and will be fundamental in the review of the other drafts, so comments would be most welcome (a.sanchez-graells@bristol.ac.uk).

Public sector digitalisation is accelerating in a regulatory vacuum

Around the world, the public sector is quickly adopting digital technologies in virtually every area of its activity, including the delivery of public services. States are not solely seeking to digitalise their public sector and public services with a view to enhancing their operation (internal goal), but are also increasingly willing to use the public sector and the construction of public infrastructure as sources of funding and spaces for digital experimentation, to promote broader technological development and boost national industries in a new wave of (digital) industrial policy (external goal). For example, the European Commission clearly seeks to make the ‘public sector a trailblazer for using AI’. This mirrors similar strategic efforts around the globe. The process of public sector digitalisation is thus embroiled in the broader race for AI.

Although such a dynamic of public sector digitalisation raises significant regulatory risks and challenges, well-known problems in managing uncertainty in technology regulation—ie the Collingridge dilemma or pacing problem (‘cannot effectively regulate early on, so will probably regulate too late’)—and different normative positions interact with industrial policy considerations to create regulatory hesitation and to side-line anticipatory approaches. This creates a regulatory gap—or rather a laissez-faire environment—whereby the public sector is allowed to experiment with the adoption of digital technologies without clear checks and balances. The current strategy is by and large one of ‘experiment first, regulate later’. And while there is little to no regulation, there is significant experimentation and digital technology adoption by the public sector.

Despite the emergence of a ‘race for AI regulation’, there are very few attempts to regulate AI use in the public sector—with the EU’s proposed AI Act offering a (partial) exception—and general mechanisms (such as judicial review) are proving slow to adapt. The regulatory gap is thus likely to remain, at least partially, in the foreseeable future—not least as the effective functioning of new rules such as the EU AI Act will not be immediate.

Procurement emerges as a regulatory gatekeeper to plug that gap

In this context, proposals have started to emerge to use public procurement as a tool of digital regulation. Or, in other words, to use the acquisition of digital technologies by the public sector as a gateway to the ‘regulation by contract’ of their use and governance. Think tanks, NGOs, and academics alike have stressed that the ‘rules governing the acquisition of algorithmic systems by governments and public agencies are an important point of intervention in ensuring their accountable use’, and that procurement ‘is a central policy tool governments can deploy to catalyse innovation and influence the development of solutions aligned with government policy and society’s underlying values’. Public procurement is thus increasingly expected to play a crucial gatekeeping role in the adoption of digital technologies for public governance and the delivery of public services.

Procurement is thus seen as a mechanism of ‘regulation by contract’ whereby the public buyer can impose requirements seeking to achieve broad goals of digital regulation, such as transparency, trustworthiness, or explainability, or to operationalise more general ‘AI ethics’ frameworks. In more detail, the Council of Europe has recommended using procurement to: (i) embed requirements of data governance to avoid violations of human rights norms and discrimination stemming from faulty datasets used in the design, development, or ongoing deployment of algorithmic systems; (ii) ‘ensure that algorithmic design, development and ongoing deployment processes incorporate safety, privacy, data protection and security safeguards by design’; (iii) require ‘public, consultative and independent evaluations of the lawfulness and legitimacy of the goal that the [procured algorithmic] system intends to achieve or optimise, and its possible effects in respect of human rights’; (iv) require the conduct of human rights impact assessments; or (v) promote transparency of the ‘use, design and basic processing criteria and methods of algorithmic systems’.

Given the absence of generally applicable mandatory requirements in the development and use of digital technologies by the public sector in relation to some or all of the stated regulatory goals, the gatekeeping role of procurement in digital ‘regulation by contract’ would mostly involve the creation of such self-standing obligations—or at least the enforcement of emerging non-binding norms, such as those developed by (voluntary) standardisation bodies or, more generally, by the technology industry. In addition to creating risks of regulatory capture and commercial determination, this approach may overshadow the difficulties in using procurement for the delivery of the expected regulatory goals. A closer look at some selected putative goals of digital regulation by contract sheds light on the issue.

Procurement is not at all suited to deliver incommensurable goals of digital regulation

Some of the putative goals of digital regulation by contract are incommensurable. This is the case in particular of ‘trustworthiness’ or ‘responsibility’ in AI use in the public sector. Trustworthiness or responsibility in the adoption of AI can have several meanings, and defining what is ‘trustworthy AI’ or ‘responsible AI’ is in itself contested. This creates a risk of imprecision or generality, which could turn ‘trustworthiness’ or ‘responsibility’ into mere buzzwords—as well as exacerbate the problem of AI ethics-washing. As the EU approach to ‘trustworthy AI’ evidences, the overarching goals need to be broken down to be made operational. In the EU case, ‘trustworthiness’ is intended to cover three requirements: that AI be lawful, ethical, and robust. Each of these breaks down into more detailed or operationalisable requirements.

In turn, some of the goals into which ‘trustworthiness’ or ‘responsibility’ breaks down are also incommensurable. This is notably the case of ‘explainability’ or interpretability. There is no such thing as ‘the explanation’ that is required in relation to an algorithmic system, as explanations are (technically and legally) meant to serve different purposes and, consequently, the design of the explainability of an AI deployment needs to take into account factors such as the timing of the explanation, its (primary) audience, the level of granularity (eg general or model level, group-based, or individual explanations), or the level of risk generated by the use of the technical solution. Moreover, there are different (and emerging) approaches to AI explainability, and their suitability may well be contingent upon the specific intended use or function of the explanation. And there are attributes or properties influencing the interpretability of a model (eg clarity) for which there are no evaluation metrics (yet?). Similar issues arise with other putative goals, such as the implementation of a principle of AI minimisation in the public sector.

Given the way procurement works, it is ill-suited for the delivery of incommensurable goals of digital regulation.

Procurement is not well suited to deliver other goals of digital regulation

There are other goals of digital regulation by contract that seem better suited to delivery through procurement, such as those relating to ‘technical’ characteristics (neutrality, interoperability, openness, or cyber security) or to procurement-adjacent algorithmic transparency. However, the operationalisation of such requirements in a procurement context will depend on a range of considerations: judgements on the need to keep information confidential, judgements on the state of the art or on what constitutes a proportionate and economically justified requirement, the generation of systemic effects that are hard to evaluate within the limits of a procurement procedure, and trade-offs between competing considerations. The extent to which procurement can operationalise the desired goals of digital regulation will thus depend on its institutional embeddedness and on the suitability of procurement tools to impose specific regulatory approaches. Additional analysis conducted elsewhere (see here and here) suggests that, also in relation to these regulatory goals, the emerging approach to AI ‘regulation by contract’ cannot work well.

Procurement digitalisation offers a valuable case study

The theoretical analysis of the use of procurement as a tool of digital ‘regulation by contract’ (above) can be enriched and further developed with an in-depth case study of its practical operation in a discrete area of public sector digitalisation. To that effect, it is important to identify an area of public sector digitalisation which is primarily or solely left to ‘regulation by contract’ through procurement—to isolate it from the interaction with other tools of digital regulation (such as data protection, or sectoral regulation). It is also important for the chosen area to demonstrate a sufficient level of experimentation with digitalisation, so that the analysis is not a mere concretisation of theoretical arguments but rather grounded on empirical insights.

Public procurement is itself an area of public sector activity susceptible to digitalisation. The adoption of digital tools is seen as a potential source of improvement and efficiency in the expenditure of public funds through procurement, especially through the adoption of digital technology solutions developed in the context of supply chain management and other business operations in the private sector (or ‘ProcureTech’), but also through the adoption of digital tools tailored to the specific goals of procurement regulation, such as the prevention of corruption or collusion. There is emerging evidence of experimentation in procurement digitalisation, which is shedding light on regulatory risks and challenges.

In view of its strategic importance and the current pace of procurement digitalisation, it is submitted that procurement is an appropriate site of public sector experimentation in which to explore the shortcomings of the approach to AI ‘regulation by contract’. Procurement is an adequate case study because, being a ‘back-office’ function, it does not concern (likely) high-risk uses of AI or other digital technologies, and it is an area where data protection regulation is unlikely to provide a comprehensive regulatory framework (eg for decision automation) because the primary interactions are between public buyers and corporate institutions.

Procurement therefore currently represents an unregulated digitalisation space in which to test and further explore the effectiveness of the ‘regulation by contract’ approach to governing the transition to a new model of digital public governance.

* * * * * *

The full draft is available on SSRN as: Albert Sanchez-Graells, ‘The two roles of procurement in the transition towards digital public governance: procurement as regulatory gatekeeper and as site for public sector experimentation’ (March 10, 2023): https://ssrn.com/abstract=4384037.

Procurement centralisation, digital technologies and competition (new working paper)


I have just uploaded on SSRN the new working paper ‘Competition Implications of Procurement Digitalisation and the Procurement of Digital Technologies by Central Purchasing Bodies’, which I will present at the conference on ‘Centralization and new trends’ to be held at the University of Copenhagen on 25-26 April 2023 (there is still time to register!).

The paper builds on my ongoing research on digital technologies and procurement governance, and focuses on the interaction between the strategic goals of procurement centralisation and digitalisation set by the European Commission in its 2017 public procurement strategy.

The paper identifies different ways in which current trends of procurement digitalisation, and the challenges in procuring digital technologies, push for further procurement centralisation: in particular, to facilitate the extraction of insights from the big data held by central purchasing bodies (CPBs); to build public sector digital capabilities; and to boost procurement’s regulatory gatekeeping potential. The paper then explores the competition implications of this technology-driven push for further procurement centralisation, in both ‘standard’ and digital markets.

The paper concludes by stressing the need to bring CPBs within the remit of competition law (which I had already advocated, eg here), the opportunity to consider allocating CPB data management to a separate competent body under the Data Governance Act, and the related need to develop an effective system of mandatory requirements and external oversight of public sector digitalisation processes, especially to constrain CPBs’ (unbridled) digital regulatory power.

The full working paper reference is: Albert Sanchez-Graells, ‘Competition Implications of Procurement Digitalisation and the Procurement of Digital Technologies by Central Purchasing Bodies’ (March 2, 2023), available at SSRN: https://ssrn.com/abstract=4376037. As always, any feedback most welcome: a.sanchez-graells@bristol.ac.uk.