Procuring AI without understanding it. Way to go?

The UK’s Digital Regulation Cooperation Forum (DRCF) has published a report on Transparency in the procurement of algorithmic systems (for short, the ‘AI procurement report’). Some of DRCF’s findings in the AI procurement report are astonishing, and should attract significant attention. The one finding that should definitely not go unnoticed is that, according to DRCF, ‘Buyers can lack the technical expertise to effectively scrutinise the [algorithmic systems] they are procuring, whilst vendors may limit the information they share with buyers’ (at 9). While the finding itself may not be surprising, the ‘normality’ with which it is reported evidences the simple fact that, at least in the UK, it is accepted that the AI field is dominated by technology providers, that all institutional buyers are ‘AI consumers’, and that regulators do not seem to see a need to intervene to rebalance the situation.

The report is not specifically about public procurement of AI, but its content is relevant to assessing the conditions surrounding the acquisition of AI by the public sector. First, the report covers algorithmic systems other than AI—that is, automation based on simpler statistical techniques—but the issues it raises can only be more acute in relation to AI than in relation to simpler algorithmic systems (as the report itself highlights, at 9). Second, the report does not make explicit whether the mix of buyers from which it draws evidence includes public as well as private buyers. However, given the public sector’s digital skills gap, there is no reason to believe that the limited knowledge and asymmetries of information documented in the AI procurement report are less acute for public buyers than for private ones.

Moreover, the AI procurement report goes so far as to suggest that public sector procurement of AI is in a somewhat better position than private sector procurement because there are multiple guidelines focusing on public procurement (notably, the Guidelines for AI procurement). Given the shortcomings in those guidelines (see here for earlier analysis), this can hardly provide any comfort.

The AI procurement report evidences that UK (public and private) buyers are procuring AI they do not understand and cannot adequately monitor. This is extremely worrying. The AI procurement report presents evidence gathered by DRCF in two workshops with 23 vendors and buyers of algorithmic systems in Autumn 2022. The evidence base is qualitative and draws from a limited sample, so it may need to be approached with caution. However, its findings are sufficiently worrying as to require a much more robust policy intervention than the proposals in the recently released White Paper ‘AI regulation: a pro-innovation approach’ (for discussion, see here). In this blog post, I summarise the findings of the AI procurement report that I find most problematic, and link this evidence to the failing attempt at using public procurement to regulate the acquisition of AI by the public sector in the UK.

Misinformed buyers with limited knowledge and no ability to oversee

In its report, DRCF stresses that ‘some buyers lacked understanding of [algorithmic systems] and could struggle to recognise where an algorithmic process had been integrated into a system they were procuring’, and that ‘[t]his issue may be compounded where vendors fail to note that a solution includes AI or its subset, [machine learning]’ (at 9). The report goes on to stress that ‘[w]here buyers have insufficient information about the development or testing of an [algorithmic system], there is a risk that buyers could be deploying an [algorithmic system] that is unlawful or unethical. This risk is particularly acute for high-risk applications of [algorithmic systems], for example where an [algorithmic system] determines a person's access to employment or housing or where the application is in a highly regulated sector such as finance’ (at 10). Needless to say, however, this applies to a much larger set of public sector areas of activity, and the problems are not limited to high-risk applications involving individual rights, but extend to those that involve high stakes from a public governance perspective.

Similarly, DRCF stresses that while ‘vendors use a range of performance metrics and testing methods … without appropriate technical expertise or scrutiny, these metrics may give buyers an incomplete picture of the effectiveness of an [algorithmic system]’; ‘vendors [can] share performance metrics that overstate the effectiveness of their [algorithmic system], whilst omitting other metrics which indicate lower effectiveness in other areas. Some vendors raised concerns that their competitors choose the most favourable (i.e., the highest) performance metric to win procurement contracts’, while ‘not all buyers may have the technical knowledge to understand which performance metrics are most relevant to their procurement decision’ (at 10). This demolishes any hope that buyers facing this type of knowledge gap and asymmetry of information can compare algorithmic systems in a meaningful way.
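To illustrate the metric cherry-picking problem, consider a minimal sketch (my own illustration, not an example drawn from the report): on an imbalanced task, such as flagging the small share of cases that merit further review, a vendor could truthfully advertise 96% accuracy for a system that misses four out of five relevant cases.

```python
# Minimal illustration (hypothetical data): a single headline metric can
# overstate the effectiveness of an algorithmic system on an imbalanced task.
from sklearn.metrics import accuracy_score, precision_score, recall_score

# 1,000 ground-truth labels: 950 negatives and 50 positives (5% positive rate).
y_true = [0] * 950 + [1] * 50

# A hypothetical vendor model that almost always predicts 'negative':
# all 950 negatives are classified correctly, but 40 of 50 positives are missed.
y_pred = [0] * 950 + [1] * 10 + [0] * 40

print(f"accuracy:  {accuracy_score(y_true, y_pred):.2f}")   # 0.96 - the headline figure
print(f"precision: {precision_score(y_true, y_pred):.2f}")  # 1.00
print(f"recall:    {recall_score(y_true, y_pred):.2f}")     # 0.20 - easily left unreported
```

A buyer without the technical knowledge to ask for recall (or another task-appropriate metric) has no way of seeing past the headline figure, which is precisely the asymmetry the report documents.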

The issue is further compounded by the lack of standards and metrics, as the report stresses: ‘common or standard metrics do not yet exist within industry for the evaluation of [algorithmic systems]. For vendors, this can make it more challenging to provide useful information, and for buyers, this lack of consistency can make it difficult to compare different [algorithmic systems]. Buyers also told us that they would find more detail on the performance of the [algorithmic system] being procured helpful - including across a range of metrics. The development of more consistent performance metrics could also help regulators to better understand how accurate an [algorithmic system] is in a specific context’ (at 11).

Finally, the report also stresses that vendors have every incentive to withhold information from buyers, both because ‘sharing too much technical detail or knowledge could allow buyers to re-develop their product’ and because ‘they remain concerned about revealing commercially sensitive information to buyers’ (at 10). In that context, given the limited knowledge and understanding documented above, it can even be difficult for a buyer to ascertain which information it has not been given.

The DRCF AI procurement report then focuses on mechanisms that could alleviate some of the issues it identifies, such as standardisation, certification and audit mechanisms, as well as AI transparency registers. However, these mechanisms raise significant questions, not only in relation to their practical implementation, but also regarding the continued reliance on the AI industry (and thus, AI vendors) for the development of some of their foundational elements—crucially, standards and metrics. To a large extent, the AI industry would be setting the benchmark against which its own processes, practices and performance are to be measured. Even if a third party is to carry out such benchmarking or compliance analysis in the context of AI audits, the cards can already be stacked against buyers.

Not the way forward for the public sector (in the UK)

The DRCF AI procurement report should give pause to anyone hoping that (public) buyers can drive the process of development and adoption of these technologies. The AI procurement report clearly evidences that buyers with knowledge disadvantages and information asymmetries are at the mercy of technology providers—and/or third-party certifiers (in the future). The evidence in the report clearly suggests that this is a process driven by technology providers and, more worryingly, that (most) buyers are in no position to critically assess and discipline vendor behaviour.

The question arises why any buyer would acquire and deploy a technology it does not understand and is in no position to adequately assess. But the hype and hard-selling surrounding AI, coupled with its abstract potential to generate significant administrative and operational advantages, seem too hard to resist, both for private sector entities seeking to gain an edge (or at least not lag behind competitors) in their markets, and for public sector entities faced with AI’s policy irresistibility.

In the public procurement context, the insights from DRCF’s AI procurement report stress that the fundamental imbalance between buyers and vendors of digital technologies undermines the regulatory role that public procurement is expected to play. Only a buyer with equal or superior technical ability, and which managed to force full disclosure of the relevant information from the technology provider, would be in a position to (try to) dictate the terms of the acquisition and deployment of the technology—including through the critical assessment and, if needed, modification of emerging technical standards that could well fall short of the public interest embedded in the process of public sector digitalisation. Even then, such a buyer would face significant limitations.

This is an ideal to which most public buyers cannot aspire. In fact, in the UK, the position is the reverse and the current approach is to try to facilitate experimentation with digital technologies by public buyers with no knowledge or digital capability whatsoever—see the Crown Commercial Service’s Artificial Intelligence Dynamic Purchasing System (CCS AI DPS), explicitly targeting inexperienced and, to put it politely, digitally novice public buyers by stressing that ‘If you are new to AI you will be able to procure services through a discovery phase, to get an understanding of AI and how it can benefit your organisation’.

Given the evidence in the DRCF AI report, this approach can only inflate the number of public sector buyers at the mercy of technology providers. Especially because, while the CCS AI DPS tries to address some issues, such as ethical risks (though the effectiveness of this can also be queried), it makes clear that ‘quality, price and cultural fit (including social value) can be assessed based on individual customer requirements’. With ‘AI quality’ capturing all the problematic issues mentioned above (and, notably, AI performance), the CCS AI DPS is highly problematic.

If nothing else, the DRCF AI procurement report gives further credence to the need to change regulatory tack. Most importantly, the report evidences that there is a very real risk that public sector entities are currently buying AI they do not understand and are in no position to effectively control post-deployment. This risk needs to be addressed if the UK public is to trust the accelerating process of public sector digitalisation. As formulated elsewhere, this calls for a series of policy and regulatory interventions.

Ensuring that the adoption of AI in the public sector operates in the public interest and for the benefit of all citizens requires new legislation supported by a new mechanism of external oversight and enforcement. New legislation is required to impose specific minimum requirements of eg data governance, algorithmic impact assessment and related transparency across the public sector, in order to address the lack of standards and metrics without relying on their development by and within the AI industry. Primary legislation would then need to be complemented by statutory guidance of a much more detailed and actionable nature than eg the current Guidelines for AI procurement. These developed requirements could then be embedded into public contracts by reference, thus protecting public buyers from vendor standard cherry-picking and providing a clear benchmark against which to assess tenders.

Legislation would also be necessary to create an independent authority—eg an ‘AI in the Public Sector Authority’ (AIPSA)—with powers to enforce those minimum requirements across the public sector. AIPSA is necessary, as oversight of the use of AI in the public sector does not currently fall within the scope of any specific sectoral regulator and the general regulators (such as the Information Commissioner’s Office) lack procurement-specific knowledge. Moreover, units within Cabinet Office (such as the Office for AI or the Central Digital and Data Office) lack the required independence. The primary role of AIPSA would be to constrain the process of adoption of AI by the public sector, especially where the public buyer lacks digital capacity and is thus at risk of capture or overpowering by technological vendors.

In that regard, and until sufficient in-house capability is built to ensure adequate understanding of the technologies being procured (especially in the case of complex AI), and adequate ability to manage digital procurement governance requirements independently, AIPSA would have to approve all projects to develop, procure and deploy AI in the public sector to ensure that they meet the required legislative safeguards in terms of data governance, impact assessment, etc. This approach could progressively be relaxed through eg block exemption mechanisms, once there is sufficiently detailed understanding and guidance on specific AI use cases, and/or in relation to public sector entities that could demonstrate sufficient in-house capability, eg through a mechanism of independent certification in accordance with benchmarks set by AIPSA, or certification by AIPSA itself.

In parallel, it would also be necessary for the Government to develop a clear and sustainably funded strategy to build in-house capability in the public sector, including clear policies on the minimisation of expenditure directed at the engagement of external consultants and the development of guidance on how to ensure the capture and retention of the knowledge developed within outsourced projects (including, but not only, through detailed technical documentation).

None of this features in the recently released White Paper ‘AI regulation: a pro-innovation approach’. However, DRCF’s AI procurement report further evidences that these policy interventions are necessary. Else, the UK will be a jurisdiction where the public sector acquires and deploys technology it does not understand and cannot control. Surely, this is not the way to go.

UK's 'pro-innovation approach' to AI regulation won't do, particularly for public sector digitalisation

Regulating artificial intelligence (AI) has become the challenge of our time. This is a crucial area of regulatory development and there are increasing calls—including from those driving the development of AI—for robust regulatory and governance systems. In this context, more details have now emerged on the UK’s approach to AI regulation.

Swimming against the tide, and seeking to diverge from the EU’s regulatory agenda and the EU AI Act, the UK announced a light-touch ‘pro-innovation approach’ in its July 2022 AI regulation policy paper. In March 2023, the same approach was supported by a Report of the Government Chief Scientific Adviser (the ‘GCSA Report’), and is now further developed in the White Paper ‘AI regulation: a pro-innovation approach’ (the ‘AI WP’). The UK Government has launched a public consultation that will run until 21 June 2023.

Given the relevance of the issue, it can be expected that the public consultation will attract a large volume of submissions, and that the ‘pro-innovation approach’ will be heavily criticised. Indeed, there is an on-going preparatory Parliamentary Inquiry on the Governance of AI that has already collected a wealth of evidence exploring the pros and cons of the regulatory approach outlined there. Moreover, initial reactions eg by the Public Law Project, the Ada Lovelace Institute, or the Royal Statistical Society have been (to different degrees) critical of the lack of regulatory ambition in the AI WP—while, as could be expected, think tanks closely linked to the development of the policy, such as the Alan Turing Institute, have expressed more positive views.

Whether the regulatory approach will shift as a result of the expected pushback is unclear. However, given that the AI WP follows the same deregulatory approach first suggested in 2018 and is strongly politically/policy entrenched—for the UK Government has self-assessed this approach as ‘world leading’ and claims it will ‘turbocharge economic growth’—it is doubtful that much will change as a result of the public consultation.

That does not mean we should not engage with the public consultation; quite the opposite. In the face of the UK Government’s dereliction of duty, or lack of ideas, it is more important than ever that there is robust pushback against the deregulatory approach being pursued. Especially in the context of public sector digitalisation and the adoption of AI by the public administration and in the provision of public services, where the Government (unsurprisingly) is unwilling to create regulatory safeguards to protect citizens from its own action.

In this blogpost, I sketch my main areas of concern with the ‘pro-innovation approach’ in the GCSA Report and AI WP, which I will further develop for submission to the public consultation, building on earlier views. Feedback and comments would be gratefully received: a.sanchez-graells@bristol.ac.uk.

The ‘pro-innovation approach’ in the GCSA Report — squaring the circle?

In addition to proposals on the intellectual property (IP) regulation of generative AI, the opening up of public sector data, transport-related, or cyber security interventions, the GCSA Report focuses on ‘core’ regulatory and governance issues. The report stresses that regulatory fragmentation is one of the key challenges, as is the difficulty for the public sector in ‘attracting and retaining individuals with relevant skills and talent in a competitive environment with the private sector, especially those with expertise in AI, data analytics, and responsible data governance‘ (at 5). The report also further hints at the need to boost public sector digital capabilities by stressing that ‘the government and regulators should rapidly build capability and know-how to enable them to positively shape regulatory frameworks at the right time‘ (at 13).

Although the rationale is not very clearly stated, to bridge regulatory fragmentation and facilitate the pooling of digital capabilities from across existing regulators, the report makes a central proposal to create a multi-regulator AI sandbox (at 6-8). The report suggests that it could be convened by the Digital Regulatory Cooperation Forum (DRCF)—which brings together four key regulators (the Information Commissioner’s Office (ICO), Office of Communications (Ofcom), the Competition and Markets Authority (CMA) and the Financial Conduct Authority (FCA))—and that DRCF should look at ways of ‘bringing in other relevant regulators to encourage join up’ (at 7).

The report recommends that the AI sandbox should operate on the basis of a ‘commitment from the participant regulators to make joined-up decisions on regulations or licences at the end of each sandbox process and a clear feedback loop to inform the design or reform of regulatory frameworks based on the insights gathered. Regulators should also collaborate with standards bodies to consider where standards could act as an alternative or underpin outcome-focused regulation’ (at 7).

Therefore, the AI sandbox would not only be multi-regulator, but would also encompass (in some way) standard-setting bodies (presumably UK ones only, though), without any consideration of the issues of public-private interaction in decision-making implying the exercise of regulatory public powers, or of regulatory capture and the risks of commercial determination. The report in general is extremely industry-orientated, eg in stressing in relation to the overarching pacing problem that ‘for emerging digital technologies, the industry view is clear: there is a greater risk from regulating too early’ (at 5), without this being in any way balanced with clear (non-industry) views that the biggest risk is actually in regulating too late, and that we are collectively frog-boiling into a ‘runaway AI’ fiasco.

Moreover, confusingly, despite the fact that the sandbox would be hosted by DRCF (of which the ICO is a leading member), the GCSA Report indicates that the AI sandbox ‘could link closely with the ICO sandbox on personal data applications’ (at 8). The fact that the report is itself unclear as to whether eg AI applications with data protection implications should be subjected to one or two sandboxes, or the extent to which the general AI sandbox would need to be integrated with sectoral sandboxes for non-AI regulatory experimentation, already indicates the complexity and dubious practical viability of the suggested approach.

It is also unclear why multiple sector regulators should be involved in any given iteration of a single AI sandbox where there may be no projects within their regulatory remit and expertise. The alternative approach of having an open or rolling AI sandbox mechanism led by a single AI authority, which would then draw expertise and work in collaboration with the relevant sector regulator as appropriate on a per-project basis, seems preferable. While some DRCF members could be expected to have to participate in a majority of sandbox projects (eg CMA and ICO), others would probably have a much less constant presence (eg Ofcom, or certainly the FCA).

Remarkably, despite this recognition of the functional need for a centralised regulatory approach and a single point of contact (primarily for industry’s convenience), the GCSA Report implicitly supports the 2022 AI regulation policy paper’s approach of not creating an overarching cross-sectoral AI regulator. The GCSA Report instead seeks to create a ‘non-institutionalised centralised regulatory function’, nested under the DRCF. In practice, however, implementing the recommendation for a single AI sandbox would create the need for the further development of the governance structures of the DRCF (especially if it was to grow by including many other sectoral regulators), or of whichever institution ‘hosted’ it, or else risk creating a non-institutional AI regulator with the related difficulties in ensuring accountability. This would add a layer of deregulation to the deregulatory effect that the sandbox itself creates (see eg Ranchordas (2021)).

The GCSA Report seems to try to square the circle of regulatory fragmentation by relying on cooperation as a centralising regulatory device, but it does this solely for the industry’s benefit and convenience, without paying any consideration to the future effectiveness of the regulatory framework. This is hard to understand, given the report’s identification of conflicting regulatory constraints, or in its terminology ‘incentives’: ‘The rewards for regulators to take risks and authorise new and innovative products and applications are not clear-cut, and regulators report that they can struggle to trade off the different objectives covered by their mandates. This can include delivery against safety, competition objectives, or consumer and environmental protection, and can lead to regulator behaviour and decisions that prioritise further minimising risk over supporting innovation and investment. There needs to be an appropriate balance between the assessment of risk and benefit’ (at 5).

This not only frames risk-minimisation as a negative regulatory outcome (and further feeds into the narrative that precautionary regulatory approaches are somehow not legitimate because they run against industry goals—which deserves strong pushback, see eg Kaminski (2022)), but also shows a main gap in the report’s proposal for the single AI sandbox. If each regulator has conflicting constraints, what evidence (if any) is there that collaborative decision-making will reduce, rather than exacerbate, such regulatory clashes? Are decisions meant to be arrived at by majority voting or in any other way expected to deactivate (some or most) regulatory requirements in view of (perceived) gains in relation to other regulatory goals? Why has there been no consideration of eg the problems encountered by concurrency mechanisms in the application of sectoral and competition rules (see eg Dunne (2014), (2020) and (2021)), as an obvious and immediate precedent of the same type of regulatory coordination problems?

The GCSA Report also seems to assume that collaboration through the AI sandbox would be resource-neutral for participating regulators, whereas it seems reasonable to presume that this additional layer of regulation (even if not institutionalised) would require further resources. And, in any case, there does not seem to be much consideration of the viability of asking resource-strapped regulators to create an AI sandbox where they can (easily) be out-skilled and over-powered by industry participants.

In my view, the GCSA Report already points at the significant weaknesses that result from the resistance to creating any new authorities despite the obvious functional need for centralised regulation, which is one of the main weaknesses, if not the single biggest weakness, in the AI WP—as well as from the lack of strategic planning around public sector digital capabilities, despite well-recognised challenges (see eg Committee of Public Accounts (2021)).

The ‘pro-innovation approach’ in the AI WP — a regulatory black hole, privatisation of AI regulation, or both?

The AI WP envisages an ‘innovative approach to AI regulation [that] uses a principles-based framework for regulators to interpret and apply to AI within their remits’ (para 36). It expects the framework to be ‘pro-innovation, proportionate, trustworthy, adaptable, clear and collaborative’ (para 37). As will become clear, however, such ‘innovative approach’ solely amounts to the formulation of high-level, broad, open-textured and incommensurable principles to inform a soft law push towards the development of regulatory practices aligned with such principles in a highly fragmented and incomplete regulatory landscape.

The regulatory framework would be built on four planks (para 38): [i] an AI definition (paras 39-42); [ii] a context-specific approach (ie a ‘use-based’ approach, rather than a ‘technology-led’ approach, see paras 45-47); [iii] a set of cross-sectoral principles to guide regulator responses to AI risks and opportunities (paras 48-54); and [iv] new central functions to support regulators to deliver the AI regulatory framework (paras 70-73). In reality, though, there will be only two ‘pillars’ of the regulatory framework and they do not involve any new institutions or rules. The AI WP vision thus largely seems to be that AI can be regulated in the UK in a world-leading manner without doing anything much at all.

AI Definition

The UK’s definition of AI will trigger substantive discussions, especially as it seeks to build it around ‘the two characteristics that generate the need for a bespoke regulatory response’: ‘adaptivity’ and ‘autonomy’ (para 39). Discussing the definitional issue is beyond the scope of this post but, on the specific identification of the ‘autonomy’ of AI, it is worth highlighting that this is an arguably flawed regulatory approach to AI (see Soh (2023)).

No new institutions

The AI WP makes clear that the UK Government has no plans to create any new AI regulator, either with a cross-sectoral (eg general AI authority) or sectoral remit (eg an ‘AI in the public sector authority’, as I advocate for). The Ministerial Foreword to the AI WP already stresses that ‘[t]o ensure our regulatory framework is effective, we will leverage the expertise of our world class regulators. They understand the risks in their sectors and are best placed to take a proportionate approach to regulating AI’ (at p2). The AI WP further stresses that ‘[c]reating a new AI-specific, cross-sector regulator would introduce complexity and confusion, undermining and likely conflicting with the work of our existing expert regulators’ (para 47). This however seems to presume that a new cross-sector AI regulator would be unable to coordinate with existing regulators, despite the institutional architecture of the regulatory framework foreseen in the AI WP entirely relying on inter-regulator collaboration (!).

No new rules

There will also not be new legislation underpinning regulatory activity, although the Government claims that the AI WP, ‘alongside empowering regulators to take a lead, [is] also setting expectations‘ (at p3). The AI WP claims to develop a regulatory framework underpinned by five principles to guide and inform the responsible development and use of AI in all sectors of the economy: [i] Safety, security and robustness; [ii] Appropriate transparency and explainability; [iii] Fairness; [iv] Accountability and governance; and [v] Contestability and redress (para 10). However, they will not be put on a statutory footing (initially); ‘the principles will be issued on a non-statutory basis and implemented by existing regulators’ (para 11). While there is some detail on the intended meaning of these principles (see para 52 and Annex A), the principles necessarily lack precision and, worse, there is a conflation of the principles with other (existing) regulatory requirements.

For example, it is surprising that the AI WP describes fairness as implying that ‘AI systems should (sic) not undermine the legal rights of individuals or organisations, discriminate unfairly against individuals or create unfair market outcomes‘ (emphasis added), and stresses the expectation ‘that regulators’ interpretations of fairness will include consideration of compliance with relevant law and regulation’ (para 52). This encapsulates the risk that principles-based AI regulation ends up eroding compliance with and enforcement of current statutory obligations. A principle of AI fairness cannot modify or exclude existing legal obligations, and it should not risk doing so either.

Moreover, the AI WP suggests that, even if the principles are supported by a statutory duty for regulators to have regard to them, ‘while the duty to have due regard would require regulators to demonstrate that they had taken account of the principles, it may be the case that not every regulator will need to introduce measures to implement every principle’ (para 58). This conflates two issues. On the one hand, the need for activity subjected to regulatory supervision to comply with all principles and, on the other, the need for a regulator to take corrective action in relation to any of the principles. It should be clear that regulators have a duty to ensure that all principles are complied with in their regulatory remit, which does not seem to entirely or clearly follow from the weaker duty to have due regard to the principles.

Perpetuating regulatory gaps, in particular regarding public sector digitalisation

As a consequence of the decision not to create new regulators and the absence of new legislation, it is unclear whether the ‘regulatory strategy’ in the AI WP will have any real-world effects within existing regulatory frameworks, especially as the most ambitious intervention is to create ‘a statutory duty on regulators requiring them to have due regard to the principles’ (para 12)—but the Government may decide not to introduce it if ‘monitoring of the effectiveness of the initial, non-statutory framework suggests that a statutory duty is unnecessary‘ (para 59).

However, what is already clear is that there is no new AI regulation on the horizon, despite the fact that the AI WP recognises that ‘some AI risks arise across, or in the gaps between, existing regulatory remits‘ (para 27), that ‘there may be AI-related risks that do not clearly fall within the remits of the UK’s existing regulators’ (para 64), and the obvious and worrying existence of high risks to fundamental rights and values (para 4 and paras 22-25). The AI WP is naïve, to say the least, in setting out that ‘[w]here prioritised risks fall within a gap in the legal landscape, regulators will need to collaborate with government to identify potential actions. This may include identifying iterations to the framework such as changes to regulators’ remits, updates to the Regulators’ Code, or additional legislative intervention’ (para 65).

Hoping that such risk identification and gap analysis will take place without assigning specific responsibility for it—and seeking to exempt the Government from such responsibility—seems a bit too much to ask. In fact, this is at odds with the graphic depiction of how the AI WP expects the system to operate. As noted in (1) in the graph below, it is clear that the identification of cross-cutting or new (unregulated) risks that warrant intervention is assigned to a ‘central risk function’ (more below), not the regulators. Importantly, the AI WP indicates that such a central function ‘will be provided from within government’ (para 15 and below). Which then raises two questions: (a) who will have the responsibility to proactively screen for such risks, if anyone, and (b) how has the Government not already taken action to close the gaps it recognises exist in the current legal landscape?

AI WP Figure 2: Central risks function activities.

This perpetuates the current regulatory gaps, in particular in sectors without a regulator or with regulators with very narrow mandates—such as the public sector and, to a large extent, public services. Importantly, this approach does not create any prohibition of impermissible AI uses, nor does it establish any (workable) set of minimum requirements for the deployment of AI in high-risk uses, especially in the public sector. The contrast with the EU AI Act could not be starker and, in this aspect in particular, UK citizens should be very worried that the UK Government is not committing to any safeguards in the way technology can be used in eg determining access to public services, or by the law enforcement and judicial system. More generally, it is very worrying that the AI WP does not foresee any safeguards in relation to the quickly accelerating digitalisation of the public sector.

Loose central coordination leading to AI regulation privatisation

Remarkably, and in a similar functional disconnect to that of the GCSA Report (above), the decision not to create any new regulator/s (para 15) is taken in the same breath as the AI WP recognises that the small coordination layer within the regulatory architecture proposed in the 2022 AI regulation policy paper (ie, largely, the approach underpinning the DRCF) has been heavily criticised (para 13). The AI WP recognises that ‘the DRCF was not created to support the delivery of all the functions we have identified or the implementation of our proposed regulatory framework for AI’ (para 74).

The AI WP also stresses how ‘[w]hile some regulators already work together to ensure regulatory coherence for AI through formal networks like the AI and digital regulations service in the health sector and the Digital Regulation Cooperation Forum (DRCF), other regulators have limited capacity and access to AI expertise. This creates the risk of inconsistent enforcement across regulators. There is also a risk that some regulators could begin to dominate and interpret the scope of their remit or role more broadly than may have been intended in order to fill perceived gaps in a way that increases incoherence and uncertainty’ (para 29), which points at a strong functional need for a centralised approach to AI regulation.

To try and mitigate those regulatory risks and shortcomings, the AI WP proposes the creation of ‘a number of central support functions’, such as [i] a central monitoring function assessing the overall regulatory framework’s effectiveness and the implementation of the principles; [ii] central risk monitoring and assessment; [iii] horizon scanning; [iv] supporting testbeds and sandboxes; [v] advocacy, education and awareness-raising initiatives; or [vi] promoting interoperability with international regulatory frameworks (para 14, see also para 73). Cryptically, the AI WP indicates that ‘central support functions will initially be provided from within government but will leverage existing activities and expertise from across the broader economy’ (para 15). Quite how this can be effectively done outwith a clearly defined, adequately resourced and durable institutional framework is anybody’s guess. In fact, the AI WP recognises that this approach ‘needs to evolve’ and that Government needs to understand how ‘existing regulatory forums could be expanded to include the full range of regulators‘, what ‘additional expertise government may need’, and the ‘most effective way to convene input from across industry and consumers to ensure a broad range of opinions‘ (para 77).

While the creation of a regulator seems a rather obvious answer to all these questions, the AI WP has rejected it in unequivocal terms. Is the AI WP a U-turn waiting to happen? Is the mention that ‘[a]s we enter a new phase we will review the role of the AI Council and consider how best to engage expertise to support the implementation of the regulatory framework’ (para 78) a placeholder for an imminent project to rejig the AI Council and turn it into an AI regulator? What is the place and role of the Office for AI and the Centre for Data Ethics and Innovation in all this?

Moreover, the AI WP indicates that the ‘proposed framework is aligned with, and supplemented by, a variety of tools for trustworthy AI, such as assurance techniques, voluntary guidance and technical standards. Government will promote the use of such tools’ (para 16). Relatedly, the AI WP relies on those mechanisms to avoid addressing issues of accountability across the AI life cycle, indicating that ‘[t]ools for trustworthy AI like assurance techniques and technical standards can support supply chain risk management. These tools can also drive the uptake and adoption of AI by building justified trust in these systems, giving users confidence that key AI-related risks have been identified, addressed and mitigated across the supply chain’ (para 84). Those tools are discussed in much more detail in part 4 of the AI WP (paras 106 ff). Annex A also creates a backdoor for technical standards to directly become the operationalisation of the general principles on which the regulatory framework is based, by explicitly identifying standards regulators may want to consider ‘to clarify regulatory guidance and support the implementation of risk treatment measures’.

This approach to the offloading of tricky regulatory issues to the emergence of private-sector led standards is simply an exercise in the transfer of regulatory power to those setting such standards, guidance and assurance techniques and, ultimately, a privatisation of AI regulation.

A different approach to sandboxes and testbeds?

The Government will take forward the GCSA recommendation to establish a regulatory sandbox for AI, which ‘will bring together regulators to support innovators directly and help them get their products to market. The sandbox will also enable us to understand how regulation interacts with new technologies and refine this interaction where necessary’ (p2). This is thus bound to hardwire some of the issues mentioned above in relation to the GCSA proposal, as well as being reflective of the general pro-industry approach of the AI WP, which is obvious in the framing that the regulators are expected to ‘support innovators directly and help them get their products to market’. Industrial policy seems to be shoehorned and mainstreamed across all areas of regulatory activity, at least in relation to AI (but it can then easily bleed into non-AI-related regulatory activities).

While the AI WP indicates the commitment to implement the AI sandbox recommended in the GCSA Report, it is by no means clear that the implementation will be in the way proposed in the report (ie a multi-regulator sandbox nested under DRCF, with an expectation that it would develop a crucial coordination and regulatory centralisation effect). The AI WP indicates that the Government still has to explore ‘what service focus would be most useful to industry’ in relation to AI sandboxes (para 96), but it sets out the intention to ‘focus an initial pilot on a single sector, multiple regulator sandbox’ (para 97), which diverges from the GCSA Report’s proposal of a sandbox for multiple sectors and multiple regulators. While the public consultation intends to gather feedback on which industry sector is the most appropriate, I would bet that the financial services sector will be chosen and that the ‘regulatory innovation’ will simply result in some closer cooperation between the ICO and FCA.

Regulator capabilities — AI regulation on a shoestring?

The AI WP turns to the issue of regulator capabilities and stresses that ‘While our approach does not currently involve or anticipate extending any regulator’s remit, regulating AI uses effectively will require many of our regulators to acquire new skills and expertise’ (para 102), and that the Government has ‘identified potential capability gaps among many, but not all, regulators’ (para 103).

To try to (start to) address this fundamental issue in the context of a devolved and decentralised regulatory framework, the AI WP indicates that the Government will explore, for example, whether it is ‘appropriate to establish a common pool of expertise that could establish best practice for supporting innovation through regulatory approaches and make it easier for regulators to work with each other on common issues. An alternative approach would be to explore and facilitate collaborative initiatives between regulators – including, where appropriate, further supporting existing initiatives such as the DRCF – to share skills and expertise’ (para 105).

While the creation of ‘common regulatory capacity’ has been advocated by the Alan Turing Institute, and while this (or inter-regulator secondments, for example) could be a short-term fix, it seems that this tries to address the obvious challenge of adequately resourcing regulatory bodies without a medium- and long-term strategy to build up the digital capability of the public sector, and risks perpetuating the current approach of AI regulation on a shoestring. The governance and organisational implications arising from the creation of a common pool of expertise need careful consideration, in particular as some of the likely dysfunctionalities are only marginally smaller than the current over-reliance on external consultants, or the ‘salami-slicing’ approach to regulatory and policy interventions that seems to bleed from the ‘agile’ management of technological projects into the realm of regulatory activity, which however requires institutional memory and the embedding of knowledge and expertise.

Two roles of procurement in public sector digitalisation: gatekeeping and experimentation

In a new draft chapter for my monograph, I explore how, within the broader process of public sector digitalisation, and embroiled in the general ‘race for AI’ and ‘race for AI regulation’, public procurement has two roles. In this post, I summarise the main arguments (all sources, including for quoted materials, are available in the draft chapter).

This chapter frames the analysis in the rest of the book and will be fundamental in the review of the other drafts, so comments would be most welcome (a.sanchez-graells@bristol.ac.uk).

Public sector digitalisation is accelerating in a regulatory vacuum

Around the world, the public sector is quickly adopting digital technologies in virtually every area of its activity, including the delivery of public services. States are not solely seeking to digitalise their public sector and public services with a view to enhancing their operation (internal goal), but are also increasingly willing to use the public sector and the construction of public infrastructure as sources of funding and spaces for digital experimentation, to promote broader technological development and boost national industries in a new wave of (digital) industrial policy (external goal). For example, the European Commission clearly seeks to make the ‘public sector a trailblazer for using AI’. This mirrors similar strategic efforts around the globe. The process of public sector digitalisation is thus embroiled in the broader race for AI.

Despite the fact that such a dynamic of public sector digitalisation raises significant regulatory risks and challenges, well-known problems in managing uncertainty in technology regulation—ie the Collingridge dilemma or pacing problem (‘cannot effectively regulate early on, so will probably regulate too late’)—and different normative positions interact with industrial policy considerations to create regulatory hesitation and side-line anticipatory approaches. This creates a regulatory gap—or rather a laissez faire environment—whereby the public sector is allowed to experiment with the adoption of digital technologies without clear checks and balances. The current strategy is by and large one of ‘experiment first, regulate later’. And while there is little to no regulation, there is significant experimentation and digital technology adoption by the public sector.

Despite the emergence of a ‘race for AI regulation’, there are very few attempts to regulate AI use in the public sector—with the EU’s proposed EU AI Act offering a (partial) exception—and general mechanisms (such as judicial review) are proving slow to adapt. The regulatory gap is thus likely to remain, at least partially, in the foreseeable future—not least, as the effective functioning of new rules such as the EU AI Act will not be immediate.

Procurement emerges as a regulatory gatekeeper to plug that gap

In this context, proposals have started to emerge to use public procurement as a tool of digital regulation. Or, in other words, to use the acquisition of digital technologies by the public sector as a gateway to the ‘regulation by contract’ of their use and governance. Think tanks, NGOs, and academics alike have stressed that the ‘rules governing the acquisition of algorithmic systems by governments and public agencies are an important point of intervention in ensuring their accountable use’, and that procurement ‘is a central policy tool governments can deploy to catalyse innovation and influence the development of solutions aligned with government policy and society’s underlying values’. Public procurement is thus increasingly expected to play a crucial gatekeeping role in the adoption of digital technologies for public governance and the delivery of public services.

Procurement is thus seen as a mechanism of ‘regulation by contract’ whereby the public buyer can impose requirements seeking to achieve broad goals of digital regulation, such as transparency, trustworthiness, or explainability, or to operationalise more general ‘AI ethics’ frameworks. In more detail, the Council of Europe has recommended using procurement to: (i) embed requirements of data governance to avoid violations of human rights norms and discrimination stemming from faulty datasets used in the design, development, or ongoing deployment of algorithmic systems; (ii) ‘ensure that algorithmic design, development and ongoing deployment processes incorporate safety, privacy, data protection and security safeguards by design’; (iii) require ‘public, consultative and independent evaluations of the lawfulness and legitimacy of the goal that the [procured algorithmic] system intends to achieve or optimise, and its possible effects in respect of human rights’; (iv) require the conduct of human rights impact assessments; or (v) promote transparency of the ‘use, design and basic processing criteria and methods of algorithmic systems’.

Given the absence of generally applicable mandatory requirements in the development and use of digital technologies by the public sector in relation to some or all of the stated regulatory goals, the gatekeeping role of procurement in digital ‘regulation by contract’ would mostly involve the creation of such self-standing obligations—or at least the enforcement of emerging non-binding norms, such as those developed by (voluntary) standardisation bodies or, more generally, by the technology industry. In addition to creating risks of regulatory capture and commercial determination, this approach may overshadow the difficulties in using procurement for the delivery of the expected regulatory goals. A closer look at some selected putative goals of digital regulation by contract sheds light on the issue.

Procurement is not at all suited to deliver incommensurable goals of digital regulation

Some of the putative goals of digital regulation by contract are incommensurable. This is the case in particular of ‘trustworthiness’ or ‘responsibility’ in AI use in the public sector. Trustworthiness or responsibility in the adoption of AI can have several meanings, and defining what is ‘trustworthy AI’ or ‘responsible AI’ is in itself contested. This creates a risk of imprecision or generality, which could turn ‘trustworthiness’ or ‘responsibility’ into mere buzzwords—as well as exacerbate the problem of AI ethics-washing. As the EU approach to ‘trustworthy AI’ evidences, the overarching goals need to be broken down to be made operational. In the EU case, ‘trustworthiness’ is intended to cover three requirements: that AI be lawful, ethical, and robust. And each of these breaks down into more detailed or operationalisable requirements.

In turn, some of the goals into which ‘trustworthiness’ or ‘responsibility’ breaks down are also incommensurable. This is notably the case of ‘explainability’ or interpretability. There is no such thing as ‘the explanation’ that is required in relation to an algorithmic system, as explanations are (technically and legally) meant to serve different purposes and consequently, the design of the explainability of an AI deployment needs to take into account factors such as the timing of the explanation, its (primary) audience, the level of granularity (eg general or model level, group-based, or individual explanations), or the level of risk generated by the use of the technical solution. Moreover, there are different (and emerging) approaches to AI explainability, and their suitability may well be contingent upon the specific intended use or function of the explanation. And there are attributes or properties influencing the interpretability of a model (eg clarity) for which there are no evaluation metrics (yet?). Similar issues arise with other putative goals, such as the implementation of a principle of AI minimisation in the public sector.
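To make this concrete, the following minimal sketch (my own illustration on synthetic data, not an approach drawn from the cited literature) shows how a model-level (global) explanation and an individual (local) explanation are different artefacts, produced in different ways and aimed at different audiences:

```python
# Minimal illustration (synthetic data): 'explainability' is not a single
# artefact. A global explanation answers 'what drives the model overall?'
# (eg for an auditor), while a local explanation answers 'why this decision
# for this person?' (eg for an affected individual).
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.inspection import permutation_importance

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 3))                   # three synthetic features
y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(int)   # outcome ignores feature 2
model = LogisticRegression().fit(X, y)

# Global (model-level) explanation: average importance of each feature.
imp = permutation_importance(model, X, y, n_repeats=10, random_state=0)
print("global importances:", imp.importances_mean.round(2))

# Local (individual-level) explanation: for a linear model, the per-feature
# contribution to the log-odds of one decision is coefficient * feature value.
x = X[0]
print("local contributions:", (model.coef_[0] * x).round(2))
```

A tender that simply demands ‘explainability’ does not determine which of these artefacts (or the many others in the literature) the vendor must supply, to whom, at what point in time, or at what level of granularity, which is the sense in which the goal is incommensurable as a self-standing contractual requirement.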

Given the way procurement works, it is ill-suited for the delivery of incommensurable goals of digital regulation.

Procurement is not well suited to deliver other goals of digital regulation

There are other goals of digital regulation by contract that are seemingly better suited to delivery through procurement, such as those relating to ‘technical’ characteristics such as neutrality, interoperability, openness, or cyber security, or in relation to procurement-adjacent algorithmic transparency. However, the operationalisation of such requirements in a procurement context will be dependent on a range of considerations, such as judgements on the need to keep information confidential, judgements on the state of the art or what constitutes a proportionate and economically justified requirement, the generation of systemic effects that are hard to evaluate within the limits of a procurement procedure, or trade-offs between competing considerations. The extent to which procurement will be able to operationalise the desired goals of digital regulation will depend on its institutional embeddedness and on the suitability of procurement tools to impose specific regulatory approaches. Additional analysis conducted elsewhere (see here and here) suggests that, also in relation to these regulatory goals, the emerging approach to AI ‘regulation by contract’ cannot work well.

Procurement digitalisation offers a valuable case study

The theoretical analysis of the use of procurement as a tool of digital ‘regulation by contract’ (above) can be enriched and further developed with an in-depth case study of its practical operation in a discrete area of public sector digitalisation. To that effect, it is important to identify an area of public sector digitalisation which is primarily or solely left to ‘regulation by contract’ through procurement—to isolate it from the interaction with other tools of digital regulation (such as data protection, or sectoral regulation). It is also important for the chosen area to demonstrate a sufficient level of experimentation with digitalisation, so that the analysis is not a mere concretisation of theoretical arguments but rather grounded on empirical insights.

Public procurement is itself an area of public sector activity susceptible to digitalisation. The adoption of digital tools is seen as a potential source of improvement and efficiency in the expenditure of public funds through procurement, especially through the adoption of digital technology solutions developed in the context of supply chain management and other business operations in the private sector (or ‘ProcureTech’), but also through the adoption of digital tools tailored to the specific goals of procurement regulation, such as the prevention of corruption or collusion. There is emerging evidence of experimentation in procurement digitalisation, which is shedding light on regulatory risks and challenges.

In view of its strategic importance and the current pace of procurement digitalisation, it is submitted that procurement is an appropriate site of public sector experimentation in which to explore the shortcomings of the approach to AI ‘regulation by contract’. Procurement is an adequate case study because, being a ‘back-office’ function, it does not concern (likely) high-risk uses of AI or other digital technologies, and it is an area where data protection regulation is unlikely to provide a comprehensive regulatory framework (eg for decision automation) because the primary interactions are between public buyers and corporate institutions.

Procurement therefore currently represents an unregulated digitalisation space in which to test and further explore the effectiveness of the ‘regulation by contract’ approach to governing the transition to a new model of digital public governance.

* * * * * *

The full draft is available on SSRN as: Albert Sanchez-Graells, ‘The two roles of procurement in the transition towards digital public governance: procurement as regulatory gatekeeper and as site for public sector experimentation’ (March 10, 2023): https://ssrn.com/abstract=4384037.

Some further thoughts on setting procurement up to fail in 'AI regulation by contract'

The next bit of my research project concerns the leveraging of procurement to achieve ‘AI regulation by contract’ (ie to ensure, in the use of AI by the public sector: trustworthiness, safety, explainability, human rights compliance, legality especially in data protection terms, ethical use, etc), so I have been thinking about it for the last few weeks to build on my previous views (see here).

In this post, I summarise my further thoughts — which have been prompted by the rich submissions to the House of Commons Science and Technology Committee [ongoing] inquiry on the ‘Governance of Artificial Intelligence’.

Let’s do it via procurement

As a starting point, it is worth stressing that the (perhaps unsurprising) increasingly generalised position is that procurement has a key role to play in regulating the adoption of digital technologies (and AI in particular) by the public sector—which consolidates procurement’s gatekeeping role in this regulatory space (see here).

More precisely, the generalised view is not that procurement ought to play such a role, but that it can do so (effectively and meaningfully). ‘AI regulation by contract’ via procurement is seen as an (easily?) actionable policy and governance mechanism despite the more generalised reluctance and difficulties in regulating AI through general legislative and policy measures, and in creating adequate governance architectures (more below).

This is very clear in several submissions to the ongoing Parliamentary inquiry (above). Without seeking to be exhaustive (I have read most, but not all submissions yet), the following points have been made in written submissions (liberally grouped by topics):

Procurement as (soft) AI regulation by contract & ‘Market leadership’

  • ‘Procurement processes can act as a form of soft regulation … Government should use its purchasing power in the market to set procurement requirements that ensure private companies developing AI for the public sector address public standards’ (Committee on Standards in Public Life, at [25]-[26], emphasis added).

  • ‘For public sector AI projects, two specific strategies could be adopted [to regulate AI use]. The first … is the use of strategic procurement. This approach utilises government funding to drive change in how AI is built and implemented, which can lead to positive spill-over effects in the industry’ (Oxford Internet Institute, at 5, emphasis added).

  • ‘Responsible AI Licences (“RAILs”) utilise the well-established mechanisms of software and technology licensing to promote self-governance within the AI sector. RAILs allow developers, researchers, and companies to publish AI innovations while specifying restrictions on the use of source code, data, and models. These restrictions can refer to high-level restrictions (e.g., prohibiting uses that would discriminate against any individual) as well as application-specific restrictions (e.g., prohibiting the use of a facial recognition system without consent) … The adoption of such licenses for AI systems funded by public procurement and publicly-funded AI research will help support a pro-innovation culture that acknowledges the unique governance challenges posed by emerging AI technologies’ (Trustworthy Autonomous Systems Hub, at 4, emphasis added).

Procurement and AI explainability

  • ‘public bodies will need to consider explainability in the early stages of AI design and development, and during the procurement process, where requirements for transparency could be stipulated in tenders and contracts’ (Committee on Standards in Public Life, at [17], emphasis added).

  • ‘In the absence of strong regulations, the public sector may use strategic procurement to promote equitable and transparent AI … mandating various criteria in procurement announcements and specifying design criteria, including explainability and interpretability requirements. In addition, clear documentation on the function of a proposed AI system, the data used and an explanation of how it works can help. Beyond this, an approved vendor list for AI procurement in the public sector is useful, to which vendors that agree to meet the defined transparency and explainability requirements may be added’ (Oxford Internet Institute, at 2, referring to K McBride et al (2021) ‘Towards a Systematic Understanding on the Challenges of Procuring Artificial Intelligence in the Public Sector’, emphasis added).

Procurement and AI ethics

  • ‘For example, procurement processes should be designed so products and services that facilitate high standards are preferred and companies that prioritise ethical practices are rewarded. As part of the commissioning process, the government should set out the ethical principles expected of companies providing AI services to the public sector. Adherence to ethical standards should be given an appropriate weighting as part of the evaluation process, and companies that show a commitment to them should be scored more highly than those that do not’ (Committee on Standards in Public Life, at [26], emphasis added).

Procurement and algorithmic transparency

  • ‘… unlike public bodies, the private sector is not bound by the same safeguards – such as the Public Sector Equality Duty within the Equality Act 2010 (EA) – and is able to shield itself from criticisms regarding transparency behind the veil of ‘commercial sensitivity’. In addition to considering the private company’s purpose, AI governance itself must cover the private as well as public sphere, and be regulated to the same, if not a higher standard. This could include strict procurement rules – for example that private companies need to release certain information to the end user/public, and independent auditing of AI systems’ (Liberty, at [20]).

  • ‘… it is important that public sector agencies are duly empowered to inspect the technologies they’re procuring and are not prevented from doing so by the intellectual property rights. Public sector buyers should use their purchasing power to demand access to suppliers’ systems to test and prove their claims about, for example, accuracy and bias’ (BILETA, at 6).

Procurement and technical standards

  • ‘Standards hold an important role in any potential regulatory regime for AI. Standards have the potential to improve transparency and explainability of AI systems to detail data provenance and improve procurement requirements’ (Ada Lovelace Institute, at 10).

  • ‘The speed at which the technology can develop poses a challenge as it is often faster than the development of both regulation and standards. Few mature standards for autonomous systems exist and adoption of emerging standards need to be encouraged through mechanisms such as regulation and procurement, for example by including the requirement to meet certain standards in procurement specification’ (Royal Academy of Engineering, at 8).

Can procurement do it, though?

Implicit in most views about the possibility of using procurement to regulate public sector AI adoption (and to generate broader spillover effects through market-based propagation mechanisms) is an assumption that the public buyer does (or can get to) know and can (fully, or sufficiently) specify the required standards of explainability, transparency, ethical governance, and a myriad other technical requirements (on auditability, documentation, etc) for the use of AI to be in the public interest and fully legally compliant. Or, relatedly, an assumption that such standards can (and will) be developed and made readily available for the public buyer to refer to and incorporate into its public contracts.

This is a BIG implicit assumption, at least in relation to non-trivial/open-ended proceduralised requirements and in relation to most of the complex issues raised by (advanced) forms of AI deployment. A sobering and persuasive analysis has shown that, at least for some forms of AI (based on neural networks), ‘it appears unlikely that anyone will be able to develop standards to guide development and testing that give us sufficient confidence in the applications’ respect for health and fundamental rights. We can throw risk management systems, monitoring guidelines, and documentation requirements around all we like, but it will not change that simple fact. It may even risk giving us a false sense of confidence’ [H Pouget, ‘The EU’s AI Act Is Barreling Toward AI Standards That Do Not Exist’ (Lawfare.com, 12 Jan 2023)].

Even for less complex AI deployments, the development of standards will be contested and protracted. This not only creates a transient regulatory gap that forces public buyers to ‘figure it out’ by themselves in the meantime, but may well result in a permanent regulatory gap that leaves procurement as the only safeguard (on paper) in the process of AI adoption in the public sector. If more general and specialised standard-setting processes are unlikely to plug that gap quickly, or ever, how can public buyers be expected to succeed where they fail?

Seriously, can procurement do it?

Further, as I wrote in my own submission to the Parliamentary inquiry, ‘to effectively regulate by contract, it is at least necessary to have (i) clarity on the content of the obligations to be imposed, (ii) effective enforcement mechanisms, and (iii) public sector capacity to establish, monitor, and enforce those obligations. Given that the aim of regulation by contract would be to ensure that the public sector only adopts trustworthy AI solutions and deploys them in a way that promotes the public interest in compliance with existing standards of protection of fundamental and individual rights, exercising the expected gatekeeping role in this context requires a level of legal, ethical, and digital capability well beyond the requirements of earlier instances of regulation by contract to eg enforce labour standards’ (at [4]).

Even optimistically ignoring the issues above and presuming that standards will emerge or that the public buyer will (eventually) be able to figure it out (so we park requirement (i) for now), and also assuming that the public sector will be able to develop the required level of eg digital capability (so we also park requirement (iii), but see here), this does not overcome other obstacles to leveraging procurement for ‘AI regulation by contract’. In particular, it does not address the issue of whether there can be effective enforcement mechanisms within the contractual relationship resulting from a procurement process to impose compliance with the required standards (of explainability, transparency, ethical use, non-discrimination, etc).

I approach this issue as the challenge of enforcing not entirely measurable contractual obligations (ie obligations to comply with a contractual standard rather than a contractual rule), and the closest parallel that comes to my mind is the issue of enforcing quality requirements in public contracts, especially in the provision of outsourced or contracted-out public services. This is an issue on which there is a rich literature (on ‘regulation by contract’ or ‘government by contract’).

Quality-related enforcement problems stem from the difficulty of using contract law remedies to address quality shortcomings: other than perhaps price reductions or contractual penalties (where permissible), such remedies can do little to address the quality issues themselves. Major quality shortcomings could lead to eg contractual termination, but replacing contractors can be costly and difficult (especially in a technological setting affected by several sources of potential vendor and technology lock-in). Other mechanisms, such as leveraging past performance evaluations to eg bar access to future procurements, can also do too little too late to control quality within a specific contract.

An illuminating analysis of the ‘problem of quality’ concluded that the ‘structural problem here is that reliable assurance of quality in performance depends ultimately not on contract terms but on trust and non-legal relations. Relations of trust and powerful non-legal sanctions depend upon the establishment of long-term … relations … The need for a governance structure and detailed monitoring in order to achieve co-operation and quality seems to lead towards the creation of conflictual relations between government and external contractors’ [see H Collins, Regulating Contracts (OUP 1999) 314-15].

To me, this raises important questions about the extent to which procurement, and public contracts more generally, can effectively deliver the expected safeguards and operate as an adequate system of ‘AI regulation by contract’. It seems to me that price clawbacks or financial penalties, even debarment decisions, are unlikely to provide an acceptable safety net in some (or most) cases — eg high-risk uses of complex AI. Not least because procurement disputes can take a long time to settle and because the incentives will not always be there to ensure strict enforcement anyway.

More thoughts to come

It seems increasingly clear to me that the expectations around leveraging procurement to ‘regulate AI by contract’ need reassessing in view of the likely effectiveness of that approach. Such effectiveness is constrained by the rules on the design of tenders for the award of public contracts, by the rules governing those public contracts themselves, and by the mechanisms to resolve disputes emerging from either tenders or contracts. It is, of course, also constrained by public sector (digital) capability and by the broader difficulties in ascertaining the appropriate approach to (standards-based) AI regulation, which cannot so easily be set aside. I will keep thinking about all this in the process of writing my monograph. If this is of interest, keep an eye on this blog for further thoughts and analysis.

"Tech fixes for procurement problems?" [Recording]

The recording and slides for yesterday’s webinar on ‘Tech fixes for procurement problems?’ co-hosted by the University of Bristol Law School and the GW Law Government Procurement Programme are now available for catch up if you missed it.

I would like to thank once again Dean Jessica Tillipman (GW Law), Professor Sope Williams (Stellenbosch), and Eliza Niewiadomska (EBRD) for a really interesting discussion, and all participants for their questions. Comments most welcome, as always.

AI regulation by contract: submission to UK Parliament

In October 2022, the Science and Technology Committee of the House of Commons of the UK Parliament (STC Committee) launched an inquiry on the ‘Governance of Artificial Intelligence’. This inquiry follows the publication in July 2022 of the policy paper ‘Establishing a pro-innovation approach to regulating AI’, which outlined the UK Government’s plans for light-touch AI regulation. The inquiry seeks to examine the effectiveness of current AI governance in the UK, and the Government’s proposals that are expected to follow the policy paper and provide more detail. The STC Committee has published 98 pieces of written evidence, including submissions from UK regulators and academics that will make for interesting reading. Below is my submission, focusing on the UK’s approach to ‘AI regulation by contract’.

A. Introduction

01. This submission addresses two of the questions formulated by the House of Commons Science and Technology Committee in its inquiry on the ‘Governance of artificial intelligence (AI)’. In particular:

  • How should the use of AI be regulated, and which body or bodies should provide regulatory oversight?

  • To what extent is the legal framework for the use of AI, especially in making decisions, fit for purpose?

    • Is more legislation or better guidance required?

02. This submission focuses on the process of AI adoption in the public sector and, particularly, on the acquisition of AI solutions. It evidences how the UK is consolidating an inadequate approach to ‘AI regulation by contract’ through public procurement. Given the level of abstraction and generality of the current guidelines for AI procurement, major gaps in public sector digital capabilities, and potential structural conflicts of interest, procurement is currently an inadequate tool to govern the process of AI adoption in the public sector. Flanking initiatives, such as the pilot algorithmic transparency standard, are unable to address and mitigate governance risks. Contrary to the approach in the AI Regulation Policy Paper,[1] plugging the regulatory gap will require (i) new legislation supported by a new mechanism of external oversight and enforcement (an ‘AI in the Public Sector Authority’ (AIPSA)); (ii) a well-funded strategy to boost in-house public sector digital capabilities; and (iii) the introduction of a (temporary) mechanism of authorisation of AI deployment in the public sector. The Procurement Bill would not suffice to address the governance shortcomings identified in this submission.

B. ‘AI Regulation by Contract’ through Procurement

03. Unless the public sector develops AI solutions in-house, which is extremely rare, the adoption of AI technologies in the public sector requires a procurement procedure leading to their acquisition. This places procurement at the frontline of AI governance because the ‘rules governing the acquisition of algorithmic systems by governments and public agencies are an important point of intervention in ensuring their accountable use’.[2] In that vein, the Committee on Standards in Public Life stressed that the ‘Government should use its purchasing power in the market to set procurement requirements that ensure that private companies developing AI solutions for the public sector appropriately address public standards. This should be achieved by ensuring provisions for ethical standards are considered early in the procurement process and explicitly written into tenders and contractual arrangements’.[3] Procurement is thus erected as a public interest gatekeeper in the process of adoption of AI by the public sector.

04. However, to effectively regulate by contract, it is at least necessary to have (i) clarity on the content of the obligations to be imposed, (ii) effective enforcement mechanisms, and (iii) public sector capacity to establish, monitor, and enforce those obligations. Given that the aim of regulation by contract would be to ensure that the public sector only adopts trustworthy AI solutions and deploys them in a way that promotes the public interest in compliance with existing standards of protection of fundamental and individual rights, exercising the expected gatekeeping role in this context requires a level of legal, ethical, and digital capability well beyond the requirements of earlier instances of regulation by contract to eg enforce labour standards.

05. On a superficial reading, it could seem that the National AI Strategy tackled this by highlighting the importance of the public sector’s role as a buyer and stressing that the Government had already taken steps ‘to inform and empower buyers in the public sector, helping them to evaluate suppliers, then confidently and responsibly procure AI technologies for the benefit of citizens’.[4] The National AI Strategy referred, in particular, to the setting up of the Crown Commercial Service’s AI procurement framework (the ‘CCS AI Framework’),[5] and the adoption of the Guidelines for AI procurement (the ‘Guidelines’)[6] as enabling tools. However, a close look at these instruments will show their inadequacy to provide clarity on the content of procedural and contractual obligations aimed at ensuring the goals stated above (para 03), as well as their potential to widen the existing public sector digital capability gap. Ultimately, they do not enable procurement to carry out the expected gatekeeping role.

C. Guidelines and Framework for AI procurement

06. Despite setting out to ‘provide a set of guiding principles on how to buy AI technology, as well as insights on tackling challenges that may arise during procurement’, the Guidelines provide high-level recommendations that cannot be directly operationalised by inexperienced public buyers and/or those with limited digital capabilities. For example, the recommendation to ‘Try to address flaws and potential bias within your data before you go to market and/or have a plan for dealing with data issues if you cannot rectify them yourself’ (guideline 3) not only requires a thorough understanding of eg the Data Ethics Framework[7] and the Guide to using Artificial Intelligence in the public sector,[8] but also detailed insights on data hazards.[9] This leads the Guidelines to stress that it may be necessary ‘to seek out specific expertise to support this; data architects and data scientists should lead this process … to understand the complexities, completeness and limitations of the data … available’.

07. Relatedly, some of the recommendations are very open ended in areas without clear standards. For example, the effectiveness of the recommendation to ‘Conduct initial AI impact assessments at the start of the procurement process, and ensure that your interim findings inform the procurement. Be sure to revisit the assessments at key decision points’ (guideline 4) is dependent on the robustness of such impact assessments. However, the Guidelines provide no further detail on how to carry out such assessments, other than a list of some generic areas for consideration (eg ‘potential unintended consequences’) and a passing reference to emerging guidelines in other jurisdictions. This is problematic, as the development of algorithmic impact assessments is still at an experimental stage,[10] and emerging evidence shows vastly diverging approaches, eg to risk identification.[11] In the absence of clear standards, algorithmic impact assessments will lead to inconsistent approaches and varying levels of robustness. The absence of standards will also require access to specialist expertise to design and carry out the assessments.

08. Ultimately, understanding and operationalising the Guidelines requires advanced digital competency, including in areas where best practices and industry standards are still developing.[12] However, most procurement organisations lack such expertise, as a reflection of broader digital skills shortages across the public sector,[13] with recent reports placing vacancies for data and tech roles in the civil service alone at close to 4,000.[14] This not only reduces the practical value of the Guidelines to facilitate responsible AI procurement by inexperienced buyers with limited capabilities, but also highlights the role of the CCS AI Framework for AI adoption in the public sector.

09. The CCS AI Framework creates a procurement vehicle[15] to facilitate public buyers’ access to digital capabilities. CCS’ description for public buyers stresses that ‘If you are new to AI you will be able to procure services through a discovery phase, to get an understanding of AI and how it can benefit your organisation.’[16] The Framework thus seeks to enable contracting authorities, especially those lacking in-house expertise, to carry out AI procurement with the support of external providers. While this can foster the uptake of AI in the public sector in the short term, it is highly unlikely to result in adequate governance of AI procurement, as this approach focuses at most on the initial stages of AI adoption but can hardly be sustainable throughout the lifecycle of AI use in the public sector—and, crucially, would leave the enforcement of contractualised AI governance obligations in a particularly weak position (thus failing to meet the enforcement requirement at para 04). Moreover, it would generate a series of governance shortcomings, the avoidance of which requires an alternative approach.

D. Governance Shortcomings

10. Despite claims to the contrary in the National AI Strategy (above para 05), the approach currently followed by the Government does not empower public buyers to responsibly procure AI. The Guidelines are not susceptible of operationalisation by inexperienced public buyers with limited digital capabilities (above paras 06-08). At the same time, the Guidelines are too generic to support sophisticated approaches by more advanced digital buyers. The Guidelines do not reduce the uncertainty and complexity of procuring AI and do not include any guidance on eg how to design public contracts to perform the regulatory functions expected under the ‘AI regulation by contract’ approach.[17] This is despite existing recommendations on eg the development of ‘model contracts and framework agreements for public sector procurement to incorporate a set of minimum standards around ethical use of AI, with particular focus on expected levels [of] transparency and explainability, and ongoing testing for fairness’.[18] The Guidelines thus fail to address the first requirement for effective regulation by contract in relation to clarifying the relevant obligations (para 04).

11. The CCS Framework would also fail to ensure the development of public sector capacity to establish, monitor, and enforce AI governance obligations (para 04). Perhaps counterintuitively, the CCS AI Framework can generate a further disempowerment of public buyers seeking to rely on external capabilities to support AI adoption. There is evidence that reliance on outside providers and consultants to cover immediate needs further erodes public sector capability in the long term,[19] as well as creating risks of technical and intellectual debt in the deployment of AI solutions as consultants come and go and there is no capture of institutional knowledge and memory.[20] This can also exacerbate current trends of pilot AI graveyard spirals, where most projects do not reach full deployment, at least in part due to insufficient digital capabilities beyond the (outsourced) pilot phase. This tends to result in self-reinforcing institutional weaknesses that can limit the public sector’s ability to drive digitalisation, not least because technical debt quickly becomes a significant barrier.[21] It also runs counter to best practices towards building public sector digital maturity,[22] and to the growing consensus that public sector digitalisation first and foremost requires a prioritised investment in building up in-house capabilities.[23] On this point, it is important to note the large size of the CCS AI Framework, which was initially pre-advertised with a £90 mn value,[24] but this was then revised to £200 mn over 42 months.[25] Procuring AI consultancy services under the Framework can thus facilitate the funnelling of significant amounts of public funds to the private sector, rather than using those funds to build in-house capabilities. It can result in multiple public buyers entering contracts for the same expertise, which thus duplicates costs, as well as in a cumulative lack of institutional learning by the public sector because of atomised and uncoordinated contractual relationships.

12. Beyond the issue of institutional dependency on external capabilities, the cumulative effect of the Guidelines and the Framework would be to outsource the role of ‘AI regulation by contract’ to unaccountable private providers that can then introduce their own biases on the substantive and procedural obligations to be embedded in the relevant contracts—which would ultimately negate the effectiveness of the regulatory approach as a public interest safeguard. The lack of accountability of external providers would not only result from the weakness (or absolute inability) of the public buyer to control their activities and challenge important decisions—eg on data governance, or algorithmic impact assessments, as above (paras 06-07)—but also from the potential absence of effective and timely external checks. Market mechanisms are unlikely to deliver adequate checks due to market concentration and structural conflicts of interest affecting providers that sometimes offer consultancy services and at other times are involved in the development and deployment of AI solutions,[26] as well as insufficiently effective safeguards on conflicts of interest arising from quickly revolving doors. Equally, broader governance controls are unlikely to be facilitated by flanking initiatives, such as the pilot algorithmic transparency standard.

13. To try to foster accountability in the adoption of AI by the public sector, the UK is currently piloting an algorithmic transparency standard.[27] While the initial six examples of algorithmic disclosures published by the Government provide some details on emerging AI use cases and the data and types of algorithms used by publishing organisations, and while this information could in principle foster accountability, there are two primary shortcomings. First, completing the documentation requires resources and, in some respects, advanced digital capabilities. Organisations participating in the pilot are being supported by the Government, which makes it difficult to assess to what extent public buyers would generally be able to adequately prepare the documentation on their own. Moreover, the documentation also refers to some underlying requirements, such as algorithmic impact assessments, that are not yet standardised (para 07). In that, the pilot standard replicates the same shortcomings discussed above in relation to the Guidelines. Algorithmic disclosure will thus only be done by entities with high capabilities, or it will be outsourced to consultants (thus reducing the scope for the revelation of governance-relevant information).

14. Second, compliance with the standard is not mandatory—at least while the pilot is developed. If compliance with the algorithmic transparency standard remains voluntary, there are clear governance risks. It is easy to see how precisely the most problematic uses may not be the object of adequate disclosures under a voluntary self-reporting mechanism. More generally, even if the standard were made mandatory, it would be necessary to implement an external quality control mechanism to mitigate the problems with the quality of self-reported disclosures that are pervasive in other areas of information-based governance.[28] Whether the Central Digital and Data Office (currently in charge of the pilot) would have the capacity (and powers) to do so remains unclear, and it would in any case lack independence.

15. Finally, it should be stressed that the current approach to transparency disclosure following the adoption of AI (ex post) can be problematic where the implementation of the AI is difficult to undo and/or the effects of malicious or risky AI are high stakes or impossible to reverse. It is also problematic in that the current approach places the burden of scrutiny and accountability outside the public sector, rather than establishing internal, preventative (ex ante) controls on the deployment of AI technologies that could potentially be very harmful for fundamental and individual socio-economic rights—as evidenced by the inclusion of some fields of application of AI in the public sector as ‘high risk’ in the EU’s proposed EU AI Act.[29] Given the particular risks that AI deployment in the public sector poses to fundamental and individual rights, the minimalistic and reactive approach outlined in the AI Regulation Policy Paper is inadequate.

E. Conclusion: An Alternative Approach

16. Ensuring that the adoption of AI in the public sector operates in the public interest and for the benefit of all citizens will require new legislation supported by a new mechanism of external oversight and enforcement. New legislation is required to impose specific minimum requirements of eg data governance and algorithmic impact assessment and related transparency across the public sector. Such legislation would then need to be developed in statutory guidance of a much more detailed and actionable nature than the current Guidelines. These developed requirements can then be embedded into public contracts by reference. Without such clarification of the relevant substantive obligations, the approach to ‘AI regulation by contract’ can hardly be effective other than in exceptional cases.

17. Legislation would also be necessary to create an independent authority—eg an ‘AI in the Public Sector Authority’ (AIPSA)—with powers to enforce those minimum requirements across the public sector. AIPSA is necessary, as oversight of the use of AI in the public sector does not currently fall within the scope of any specific sectoral regulator and the general regulators (such as the Information Commissioner’s Office) lack procurement-specific knowledge. Moreover, units within Cabinet Office (such as the Office for AI or the Central Digital and Data Office) lack the required independence.

18. It would also be necessary to develop a clear and sustainably funded strategy to build in-house capability in the public sector, including clear policies on the minimisation of expenditure directed at the engagement of external consultants and the development of guidance on how to ensure the capture and retention of the knowledge developed within outsourced projects (including, but not only, through detailed technical documentation).

19. Until sufficient in-house capability is built to ensure adequate understanding and ability to manage digital procurement governance requirements independently, the current reactive approach should be abandoned, and AIPSA should have to approve all projects to develop, procure and deploy AI in the public sector to ensure that they meet the required legislative safeguards in terms of data governance, impact assessment, etc. This approach could progressively be relaxed through eg block exemption mechanisms, once there is sufficiently detailed understanding and guidance on specific AI use cases and/or in relation to public sector entities that could demonstrate sufficient in-house capability, eg through a mechanism of independent certification.

20. The new legislation and statutory guidance would need to be self-standing, as the Procurement Bill would not provide the required governance improvements. First, the Procurement Bill pays limited to no attention to artificial intelligence and the digitalisation of procurement.[30] An amendment (46) that would have created minimum requirements on automated decision-making and data ethics was not moved at the Lords Committee stage, and it seems unlikely to be taken up again at later stages of the legislative process. Second, even if the Procurement Bill created minimum substantive requirements, it would lack adequate enforcement mechanisms, not least due to the limited powers and lack of independence of the foreseen Procurement Review Unit (to also sit within Cabinet Office).

_______________________________________
Note: all websites last accessed on 25 October 2022.

[1] Department for Digital, Culture, Media and Sport, Establishing a pro-innovation approach to regulating AI. An overview of the UK’s emerging approach (CP 728, 2022).

[2] Ada Lovelace Institute, AI Now Institute and Open Government Partnership, Algorithmic Accountability for the Public Sector (August 2021) 33.

[3] Committee on Standards in Public Life, Artificial Intelligence and Public Standards (2020) 51.

[4] Department for Digital, Culture, Media and Sport, National AI Strategy (CP 525, 2021) 47.

[5] AI Dynamic Purchasing System < https://www.crowncommercial.gov.uk/agreements/RM6200 >.

[6] Office for Artificial Intelligence, Guidelines for AI Procurement (2020) < https://www.gov.uk/government/publications/guidelines-for-ai-procurement/guidelines-for-ai-procurement >.

[7] Central Digital and Data Office, Data Ethics Framework (Guidance) (2020) < https://www.gov.uk/government/publications/data-ethics-framework >.

[8] Central Digital and Data Office, A guide to using artificial intelligence in the public sector (2019) < https://www.gov.uk/government/collections/a-guide-to-using-artificial-intelligence-in-the-public-sector >.

[9] See eg < https://datahazards.com/index.html >.

[10] Ada Lovelace Institute, Algorithmic impact assessment: a case study in healthcare (2022) < https://www.adalovelaceinstitute.org/report/algorithmic-impact-assessment-case-study-healthcare/ >.

[11] A Sanchez-Graells, ‘Algorithmic Transparency: Some Thoughts On UK's First Four Published Disclosures and the Standards’ Usability’ (2022) < https://www.howtocrackanut.com/blog/2022/7/11/algorithmic-transparency-some-thoughts-on-uk-first-disclosures-and-usability >.

[12] A Sanchez-Graells, ‘“Experimental” WEF/UK Guidelines for AI Procurement: Some Comments’ (2019) < https://www.howtocrackanut.com/blog/2019/9/25/wef-guidelines-for-ai-procurement-and-uk-pilot-some-comments >.

[13] See eg Public Accounts Committee, Challenges in implementing digital change (HC 2021-22, 637).

[14] S Klovig Skelton, ‘Public sector aims to close digital skills gap with private sector’ (Computer Weekly, 4 Oct 2022) < https://www.computerweekly.com/news/252525692/Public-sector-aims-to-close-digital-skills-gap-with-private-sector >.

[15] It is a dynamic purchasing system, or a list of pre-screened potential vendors public buyers can use to carry out their own simplified mini-competitions for the award of AI-related contracts.

[16] Above (n 5).

[17] This contrasts with eg the EU project to develop standard contractual clauses for the procurement of AI by public organisations. See < https://living-in.eu/groups/solutions/ai-procurement >.

[18] Centre for Data Ethics and Innovation, Review into bias in algorithmic decision-making (2020) < https://www.gov.uk/government/publications/cdei-publishes-review-into-bias-in-algorithmic-decision-making/main-report-cdei-review-into-bias-in-algorithmic-decision-making >.

[19] V Weghmann and K Sankey, Hollowed out: The growing impact of consultancies in public administrations (2022) < https://www.epsu.org/sites/default/files/article/files/EPSU%20Report%20Outsourcing%20state_EN.pdf >.

[20] A Sanchez-Graells, ‘Identifying Emerging Risks in Digital Procurement Governance’ in idem, Digital Technologies and Public Procurement. Gatekeeping and experimentation in digital public governance (OUP, forthcoming) < https://ssrn.com/abstract=4254931 >.

[21] M E Nielsen and C Østergaard Madsen, ‘Stakeholder influence on technical debt management in the public sector: An embedded case study’ (2022) 39 Government Information Quarterly 101706.

[22] See eg Kevin C Desouza, ‘Artificial Intelligence in the Public Sector: A Maturity Model’ (2021) IBM Centre for the Business of Government < https://www.businessofgovernment.org/report/artificial-intelligence-public-sector-maturity-model >.

[23] A Clarke and S Boots, A Guide to Reforming Information Technology Procurement in the Government of Canada (2022) < https://govcanadacontracts.ca/it-procurement-guide/ >.

[24] < https://ted.europa.eu/udl?uri=TED:NOTICE:600328-2019:HTML:EN:HTML&tabId=1&tabLang=en >.

[25] < https://ted.europa.eu/udl?uri=TED:NOTICE:373610-2020:HTML:EN:HTML&tabId=1&tabLang=en >.

[26] See S Boots, ‘“Charbonneau Loops” and government IT contracting’ (2022) < https://sboots.ca/2022/10/12/charbonneau-loops-and-government-it-contracting/ >.

[27] Central Digital and Data Office, Algorithmic Transparency Standard (2022) < https://www.gov.uk/government/collections/algorithmic-transparency-standard >.

[28] Eg in the context of financial markets, there have been notorious ongoing problems with ensuring adequate quality in corporate and investor disclosures.

[29] < https://artificialintelligenceact.eu/ >.

[30] P Telles, ‘The lack of automation ideas in the UK Gov Green Paper on procurement reform’ (2021) < http://www.telles.eu/blog/2021/1/13/the-lack-of-automation-ideas-in-the-uk-gov-green-paper-on-procurement-reform >.

Digital procurement governance: drawing a feasibility boundary

In the current context of generalised quick adoption of digital technologies across the public sector and strategic steers to accelerate the digitalisation of public procurement, decision-makers can be captured by techno-hype and the ‘policy irresistibility’ that can ensue from it (as discussed in detail here, as well as here).

To moderate those pressures and guide experimentation towards the successful deployment of digital solutions, decision-makers must reassess the realistic potential of those technologies in the specific context of procurement governance. They must also consider which enabling factors must be put in place to harness the potential of the digital technologies—which primarily relate to an enabling big data architecture (see here). Combined, the data requirements and the contextualised potential of the technologies will help decision-makers draw a feasibility boundary for digital procurement governance, which should inform their decisions.

In a new draft chapter (num 7) for my book project, I draw such a technology-informed feasibility boundary for digital procurement governance. This post provides a summary of my main findings, on which I will welcome any comments: a.sanchez-graells@bristol.ac.uk. The full draft chapter is free to download: A Sanchez-Graells, ‘Revisiting the promise: A feasibility boundary for digital procurement governance’ to be included in A Sanchez-Graells, Digital Technologies and Public Procurement. Gatekeeping and experimentation in digital public governance (OUP, forthcoming). Available at SSRN: https://ssrn.com/abstract=4232973.

Data as the main constraint

It will hardly be surprising to stress again that high quality big data is a pre-requisite for the development and deployment of digital technologies. All digital technologies of potential adoption in procurement governance are data-dependent. Therefore, without adequate data, there is no prospect of successful adoption of the technologies. The difficulties in generating an enabling procurement data architecture are detailed here.

Moreover, new data rules only regulate the capture of data for the future. This means that it will take time for big data to accumulate. Accessing historical data would be a way of building up (big) data and speeding up the development of digital solutions. Moreover, in some contexts, such as in relation to very infrequent types of procurement, or in relation to decisions concerning previous investments and acquisitions, historical data will be particularly relevant (eg to deploy green policies seeking to extend the useful life of current assets through programmes of enhanced maintenance or refurbishment; see here). However, there are significant challenges linked to the creation of backward-looking digital databases, relating not only to the cost of digitisation of the information, but also to technical difficulties in ensuring the representativeness and adequate labelling of pre-existing information.

An additional issue to consider is that a number of governance-relevant insights can only be extracted from a combination of procurement and other types of data. This can include sources of data on potential conflicts of interest (eg family relations, or financial circumstances of individuals involved in decision-making), information on corporate activities and offerings, including detailed information on products, services and means of production (eg in relation to licensing or testing schemes), or information on levels of utilisation of public contracts and satisfaction with the outcomes by those meant to benefit from their implementation (eg users of a public service, or ‘internal’ users within the public administration).

To the extent that the outside sources of information are not digitised, or not in a way that is (easily) compatible or linkable with procurement information, some data-based procurement governance solutions will remain undeliverable. Some developments in digital procurement governance will thus be determined by progress in other policy areas. While there are initiatives to promote the availability of data in those settings (eg the EU’s Data Governance Act, the Guidelines on private sector data sharing, or the Open Data Directive), the voluntariness of many of those mechanisms raises important questions on the likely availability of data required to develop digital solutions.
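
To illustrate the linkage problem in the simplest terms, the sketch below (mine, not from the chapter; all data and field names are hypothetical) shows how the absence of a shared identifier between procurement records and an external register blocks the extraction of a combined governance insight:

import pandas as pd

# Hypothetical, simplified data: contract records keyed by a company number,
# and an external register (eg on conflicts of interest) keyed by name only.
contracts = pd.DataFrame({
    "contract_id": ["C-001", "C-002"],
    "supplier_company_no": ["01234567", "07654321"],
})
register = pd.DataFrame({
    "entity_name": ["Acme Analytics Ltd", "DataWorks plc"],
    "conflict_flag": [True, False],
})

# Without a shared identifier, the join fails and the governance-relevant
# insight cannot be linked to the procurement data.
try:
    contracts.merge(register, on="supplier_company_no")
except KeyError:
    print("no common key: external data not (easily) linkable")

Resolving this at scale requires either retrofitting common identifiers or error-prone fuzzy matching, which is part of why progress in other policy areas conditions digital procurement governance.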

Overall, there is no guarantee that the data required for the development of some (advanced) digital solutions will be available. A careful analysis of data requirements must thus be a point of concentration for any decision-maker from the very early stages of considering digitalisation projects.

Revised potential of selected digital technologies

Once (or rather, if) that major data hurdle is cleared, the possibilities realistically brought by the functionality of digital technologies need to be embedded in the procurement governance context, which results in the following feasibility boundary for the adoption of those technologies.

Robotic Process Automation (RPA)

RPA can reduce the administrative costs of managing pre-existing digitised and highly structured information in the context of entirely standardised and repetitive phases of the procurement process. RPA can reduce the time invested in gathering and cross-checking information and can thus serve as a basic element of decision-making support. However, RPA cannot increase the volume and type of information being considered (other than in cases where some available information was not being taken into consideration due to eg administrative capacity constraints), and it can hardly be successfully deployed in relation to open-ended or potentially contradictory information points. RPA will also not change or improve the processes themselves (unless they are redesigned with a view to deploying RPA).

This generates a clear feasibility boundary for RPA deployment, which will generally have as its purpose the optimisation of the time available to the procurement workforce to engage in information analysis rather than information sourcing and basic checks. While this can clearly bring operational advantages, it will hardly transform procurement governance.
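
By way of illustration only (the chapter contains no code, and all field names below are hypothetical), this is the kind of deterministic check over highly structured information that RPA-style automation can take over:

REQUIRED_FIELDS = {"supplier_id", "tax_clearance_ref", "signed_declaration"}

def check_submission(record: dict) -> list:
    """Return a list of issues; an empty list means the record passes."""
    issues = ["missing field: " + f for f in sorted(REQUIRED_FIELDS - record.keys())]
    # Basic cross-check: the declaration must reference the same supplier.
    if not issues and record["signed_declaration"] != record["supplier_id"]:
        issues.append("declaration does not match supplier id")
    return issues

submissions = [
    {"supplier_id": "S-1", "tax_clearance_ref": "T-9", "signed_declaration": "S-1"},
    {"supplier_id": "S-2", "tax_clearance_ref": "T-4"},  # incomplete record
]
for s in submissions:
    print(s.get("supplier_id"), check_submission(s) or "OK")

Note how nothing in the check expands the information being considered or copes with open-ended inputs; it merely frees up reviewer time, in line with the boundary drawn above.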

Machine Learning (ML)

Developing ML solutions will pose major challenges, not only in relation to the underlying data architecture (as above), but also in relation to regulatory and governance requirements specific to public procurement. Where the operational management of procurement does not diverge from the equivalent function in the (less regulated) private sector, it will be possible to see the adoption or adaptation of similar ML solutions (eg in relation to category spend management). However, where there are regulatory constraints on the conduct of procurement, the development of ML solutions will be challenging.

For example, the need to ensure the openness and technical neutrality of procurement procedures will limit the possibilities of developing recommender systems other than in pre-procured closed lists or environments based on framework agreements or dynamic purchasing systems underpinned by electronic catalogues. Similarly, the intended use of the recommender system may raise significant legal issues concerning eg the exercise of discretion, which can limit their deployment to areas of information exchange or to merely suggestion-based tasks that could hardly replace current processes and procedures. Given the limited utility (or acceptability) of collaborative filtering recommender solutions (which is the predominant type in consumer-facing private sector uses, such as Netflix or Amazon), there are also constraints on the generality of content-based recommender systems for procurement applications, both at tenderer and at product/service level. This raises a further feasibility issue, as the functional need to develop a multiplicity of different recommenders not only reopens the issue of data sufficiency and adequacy, but also raises questions of (economic and technical) viability. Recommender systems would mostly only be susceptible of feasible adoption in highly centralised procurement settings. This could create a push for further procurement centralisation that is not neutral from a governance perspective, and that can certainly generate significant competition issues of a similar nature, but perhaps a different order of magnitude, than procurement centralisation in a less digitally advanced setting. This should be carefully considered, as the knock-on effects of the implementation of some ML solutions may only emerge down the line.
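
As a purely illustrative sketch (not part of the chapter, with invented catalogue entries), a content-based recommender of the kind that could operate within a pre-procured electronic catalogue would match a requirement to items by text similarity:

from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

catalogue = [  # hypothetical items in a framework agreement catalogue
    "cloud hosting service with UK data residency",
    "case management software for social services",
    "desktop computers with standard office software",
]
requirement = ["secure cloud hosting with data stored in the UK"]

vectoriser = TfidfVectorizer()
item_vectors = vectoriser.fit_transform(catalogue)
query_vector = vectoriser.transform(requirement)

scores = cosine_similarity(query_vector, item_vectors)[0]
best = scores.argmax()
print("suggested item:", catalogue[best], "(score %.2f)" % scores[best])

Even this toy example suggests why a separate model (and dataset) would be needed per category of purchase, which reopens the data sufficiency and viability issues flagged above.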

Similarly, the development and deployment of chatbots is constrained by specific regulatory issues, such as the need to deploy closed domain chatbots (as opposed to open domain chatbots, ie chatbots connected to the Internet, such as virtual assistants built into smartphones), so that the information they draw from can be controlled and quality assured in line with duties of good administration and other legal requirements concerning the provision of information within tender procedures. Chatbots are suited only to high-volume, information-based types of query. They would have limited applicability in relation to the specific characteristics of any given procurement procedure, as preparing the specific information to be used by the chatbot would be a challenge—with the added functionality of the chatbot being marginal. Chatbots could facilitate access to pre-existing and curated simple information, but their functionality would quickly hit a ceiling as the complexity of the information progressed. Chatbots would only be able to perform at a higher level if they were plugged into a knowledge base created as an expert system. But then, again, in that case their added functionality would be marginal. Ultimately, the practical space for the development of chatbots is limited to low added value information access tasks. Again, while this can clearly bring operational advantages, it will hardly transform procurement governance.
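
A minimal sketch of a closed-domain chatbot (again, purely illustrative, with a hypothetical knowledge base) makes both the control over the information source and the functionality ceiling visible:

import difflib

KNOWLEDGE_BASE = {  # curated, quality-assured Q&A pairs (hypothetical)
    "what is the tender deadline": "Tenders close on the date set in section 1.3.",
    "how do i ask a clarification question": "Use the messaging tool in the portal.",
}

def answer(query: str) -> str:
    # Only ever answer from the curated base; refuse anything else.
    match = difflib.get_close_matches(query.lower(), KNOWLEDGE_BASE, n=1, cutoff=0.6)
    if match:
        return KNOWLEDGE_BASE[match[0]]
    return "No curated answer available; please contact the contracting authority."

print(answer("What is the tender deadline?"))
print(answer("Can you assess my draft bid?"))  # outside the closed domain

The refusal branch is what keeps the chatbot compliant with duties of good administration; it is also what caps its usefulness as queries become more specific to a given procedure.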

ML could facilitate the development and deployment of ‘advanced’ automated screens, or red flags, which could identify patterns of suspicious behaviour to then be assessed against the applicable rules (eg administrative and criminal law in case of corruption, or competition law, potentially including criminal law, in case of bid rigging) or policies (eg in relation to policy requirements to comply with specific targets in relation to a broad variety of goals). The trade-off in this type of implementation is between the potential (accuracy) of the algorithmic screening and legal requirements on the explainability of decision-making (as discussed in detail here). Where the screens were not used solely for policy analysis, but acting on the red flag carried legal consequences (eg fines, or even criminal sanctions), the suitability of specific types of ML solutions (eg unsupervised learning solutions tantamount to a ‘black box’) would be doubtful, challenging, or altogether excluded. In any case, the development of ML screens capable of significantly improving over RPA-based automation of current screens is particularly dependent on the existence of adequate data, which is still proving an insurmountable hurdle in many an intended implementation (as above).
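
A sketch of such a screen (illustrative only; the features, data, and the choice of an isolation forest are my own assumptions, not a documented implementation) also shows where the explainability trade-off bites:

import numpy as np
from sklearn.ensemble import IsolationForest

# Hypothetical per-tender features: number of bids, winner's margin over
# the runner-up (%), and overall price spread across bids (%).
X = np.array([
    [5, 2.0, 12.0],
    [6, 3.5, 15.0],
    [2, 0.1, 1.0],   # few bidders, near-identical prices: a classic red flag
    [4, 2.8, 10.0],
])

screen = IsolationForest(contamination=0.25, random_state=0).fit(X)
for features, flag in zip(X, screen.predict(X)):  # -1 marks an outlier
    print(features, "FLAG for human assessment" if flag == -1 else "ok")

The screen outputs a flag but no reasons, which is precisely why acting directly on it, rather than treating it as a prompt for assessment under the applicable rules, would be legally problematic.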

Distributed ledger technology (DLT) systems and smart contracts

Other procurement governance constraints limit the prospects of wholesale adoption of DLT (or blockchain) technologies, other than for relatively limited information management purposes. The public sector can hardly be expected to adopt DLT solutions that are not heavily permissioned, and that do not include significant safeguards to protect sensitive, commercially valuable, and other types of information that cannot be simply put in the public domain. This means that the public sector is only likely to implement highly centralised DLT solutions, with the public sector granting permissions to access and amend the relevant information. While this can still generate some (degrees of) tamper-evidence and permanence of the information management system, the net advantage is likely to be modest when compared to other types of secure information management systems. This can have an important bearing on decisions on whether DLT solutions meet the cost-effectiveness or similar value-for-money criteria controlling their piloting and deployment.

The value proposition of DLT solutions could increase if they enabled significant procurement automation through smart contracts. However, there are massive challenges in translating procurement procedures to a strict ‘if/when ... then’ programmable logic, smart contracts have limited capability that is not commensurate with the volumes and complexity of procurement information, and their development would only be justified in contexts where a given smart contract (ie a specific programme) could be used in a high number of procurement procedures. This limits their scope of applicability to standardised and simple procurement exercises, which creates a functional overlap with some RPA solutions. Even in those settings, smart contracts would pose structural problems in terms of their irrevocability or automaticity. Moreover, they would be unable to generate off-chain effects, and this would not be easily sorted out even with the inclusion of internet of things (IoT) solutions or software oracles. This largely restricts smart contracts to an information exchange mechanism, which does not significantly increase the value added by DLT plus smart contract solutions for procurement governance.
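
To show what the strict ‘if/when ... then’ encoding looks like in practice, here is an illustrative payment-release condition written in plain Python (a real smart contract would run on-chain, and all names here are hypothetical):

from dataclasses import dataclass

@dataclass
class DeliveryOracle:
    confirmed: bool  # stands in for an IoT feed or software oracle

def release_payment(oracle: DeliveryOracle, amount: int) -> str:
    # The whole contract must reduce to conditions of this kind; anything
    # requiring judgement (eg partial or defective delivery) does not fit,
    # and execution would be irrevocable once the condition is met.
    if oracle.confirmed:
        return "payment of %s released" % amount
    return "payment withheld"

print(release_payment(DeliveryOracle(confirmed=True), 10000))

Even then, the ‘release’ is only an on-chain record: the off-chain payment and delivery remain outside the contract’s reach, which is the limitation noted above.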

Conclusion

To conclude, there are significant and difficult-to-solve hurdles in generating an enabling data architecture, especially for digital technologies that require multiple sources of information or data points regarding several phases of the procurement process. Moreover, the realistic potential of most technologies primarily concerns the automation of tasks not involving data analysis or the exercise of procurement discretion, but rather relatively simple information cross-checks or exchanges. Linking back to the discussion in the earlier broader chapter (see here), the analysis above shows that a feasibility boundary emerges whereby the adoption of digital technologies for procurement governance can make contributions in relation to its information intensity, but not easily in relation to its information complexity, at least not in the short to medium term and not in the absence of a significant improvement of the required enabling data architecture. Perhaps in more direct terms, in the absence of a significant expansion in the collection and curation of data, digital technologies can allow procurement governance to do more of the same, or to do it quicker, but they cannot enable better procurement driven by data insights, except in relatively narrow settings. Such settings are characterised by centralisation. Therefore, the deployment of digital technologies can be a further source of pressure towards procurement centralisation, which is not a neutral development in governance terms.

This feasibility boundary should be taken into account in considering potential use cases, as well as serve to moderate the expectations that come with the technologies and that can fuel ‘policy irresistibility’. Further, it should be stressed that those potential advantages do not come without their own additional complexities in terms of new governance risks (eg data and data systems integrity, cybersecurity, skills gaps) and requirements for their mitigation. These will be explored in the next stage of my research project.

Public procurement governance as an information-intensive exercise, and the allure of digital technologies

I have just started a 12-month Mid-Career Fellowship funded by the British Academy with the purpose of writing up the monograph Digital Technologies and Public Procurement. Gatekeeping and experimentation in digital public governance (OUP, forthcoming).

In the process of writing up, I will be sharing some draft chapters and other thought pieces. I would warmly welcome feedback that can help me polish the final version. As always, please feel free to reach out: a.sanchez-graells@bristol.ac.uk.

In this first draft chapter (num 6), I explore the technological promise of digital governance and use public procurement as a case study of ‘policy irresistibility’. The main ideas in the chapter are as follows:

This Chapter takes a governance perspective to reflect on the process of horizon scanning and experimentation with digital technologies. The Chapter stresses how aspirations of digital transformation can drive policy agendas and make them vulnerable to technological hype, despite technological immaturity and in the face of evidence of the difficulty of rolling out such transformation programmes—eg regarding the still ongoing wave of transition to e-procurement. Delivering on procurement’s goals of integrity, efficiency and transparency requires facing challenges derived from the information intensity and complexity of procurement governance. Digital technologies promise to bring solutions to such informational burden and thus augment decision-makers’ ability to deal with that complexity and with related uncertainty. The allure of the potential benefits of deploying digital technologies generates ‘policy irresistibility’ that can capture decision-making by policymakers overly exposed to the promise of technological fixes to recalcitrant governance challenges. This can in turn result in excessive experimentation with digital technologies for procurement governance in the name of transformation. The Chapter largely focuses on the EU policy framework, but the insights derived from this analysis are easily exportable.

Another draft chapter (num 7) will follow soon with more detailed analysis of the feasibility boundary for the adoption of digital technologies for procurement governance purposes. The full details of this draft chapter are as follows: A Sanchez-Graells, ‘The technological promise of digital governance: procurement as a case study of “policy irresistibility”’ to be included in A Sanchez-Graells, Digital Technologies and Public Procurement. Gatekeeping and experimentation in digital public governance (OUP, forthcoming). Available at SSRN: https://ssrn.com/abstract=4216825.

Interesting legislative proposal to make procurement of AI conditional on external checks

Procurement is progressively being put in the position of regulating what types of artificial intelligence (AI) are deployed by the public sector (ie taking a gatekeeping function; see here and here). This implies that the procurement function should be able to verify that the intended AI (and its use/foreseeable misuse) will not cause harms—or, where harms are unavoidable, come up with a system to weigh and, if appropriate/possible, manage that risk. I am currently trying to understand the governance implications of this emerging gatekeeping role to assess whether procurement is best placed to carry it out.

In the context of this reflection, I found a very useful recent paper: M E Kaminski, ‘Regulating the Risks of AI’ (2023) 103 Boston University Law Review forthcoming. In addition to providing a useful critique of the treatment of AI harms as risk and of the implications in terms of the regulatory baggage that (different types of) risk regulation implies, Kaminski provides an overview of a very interesting legislative proposal: Washington State’s Bill SB 5116.

Bill SB 5116 is a proposal for new legislation ‘establishing guidelines for government procurement and use of automated decision systems in order to protect consumers, improve transparency, and create more market predictability’. The governance approach underpinning the Bill is interesting in two respects.

First, the Bill includes a ban on certain uses of AI in the public sector. As Kaminski summarises: ‘Sec. 4 of SB 5116 bans public agencies from engaging in (1) the use of an automated decision system that discriminates, (2) the use of an “automated final decision system” to “make a decision impacting the constitutional or legal rights… of any Washington resident” (3) the use of an “automated final decision system…to deploy or trigger any weapon;” (4) the installation in certain public places of equipment that enables AI-enabled profiling, (5) the use of AI-enabled profiling “to make decisions that produce legal effects or similarly significant effects concerning individuals’ (at 66, fn 398).

Second, the Bill subjects the procurement of the AI to approval by the director of the office of the chief information officer. As Kaminski clarifies: ‘The bill’s assessment process is thus more like a licensing scheme than many proposed impact assessments in that it envisions a central regulator serving a gatekeeping function (albeit probably not an intensive one, and not over private companies, which aren’t covered by the bill at all). In fact, the bill is more protective than the GDPR in that the state CIO must make the algorithmic accountability report public and invite public comment before approving it’ (at 66, references omitted).

What the Bill does, then, is to displace the gatekeeping role from the procurement function itself to an external regulator (the office of the state chief information officer). It also sets the specific substantive criteria the regulator has to apply in deciding whether to authorise the procurement of the AI.

Without getting into the detail of the Washington Bill, this governance approach seems to have two main strengths over the current emerging model of procurement self-regulation of the gatekeeping role (in the EU).

First, it facilitates a standardisation of the substantive criteria to be applied in assessing the potential harms resulting from AI adoption in the public sector, with a focus on the specific characteristics of decision-making in this context. Importantly, it creates a clear area of illegality. Some of it is in line with eg the prohibition of certain AI uses in the Draft EU AI Act (profiling), or in the GDPR (prohibition of solely automated individual decision-making, including profiling — although it may go beyond it). Moreover, such an approach would allow for an expansion of prohibited uses in the specific context of the public sector, which the EU AI Act mostly fails to tackle (see here). It would also allow for the specification of constraints applicable to the use of AI by the public sector, such as a heightened obligation to provide reasons (see M Fink & M Finck, ‘Reasoned A(I)dministration: Explanation Requirements in EU Law and the Automation of Public Administration’ (2022) 47(3) European Law Review 376-392).

Second, it introduces an element of external (independent) verification of the assessment of potential AI harms. I think this is a crucial governance point, because most proposals relying on the internal (self) assessment by the procurement team fail to consider the extent to which such an approach ensures (a) adequate resourcing (eg specialism and experience in the type of assessment) and (b) sufficient objectivity in the assessment. On the second point, with procurement teams often being told to ‘just go and procure what is needed’, moving to a position of gatekeeper or controller could be too big an ask (depending on institutional aspects that require closer consideration). Moreover, this would be different from other aspects of gatekeeping that procurement has progressively been asked to carry out (also excessively, in my view: see here).

When the procurement function is asked to screen for eg potential contractors’ social or environmental compliance track record, it is usually at arm’s length from those being reviewed (and the rules on conflicts of interest are there to strengthen that position). Conversely, when the procurement function is asked to screen for the likely impact on citizens and/or users of public services of an initiative promoted by the operational part of the organisation to which it belongs, things are much more complicated.

That is why some systems (like the US FAR) create elements of separation between the procurement team and those in charge of reviewing eg competition issues (by means of the competition advocate). This is a model reflected in the Washington Bill’s approach to requiring external (even if within the public administration) verification and approval of the AI impact assessment. If procurement is to become a properly functioning gatekeeper of the adoption of AI by the public sector, this regulatory approach (ie having an ‘AI Harms Controller’) seems promising. Definitely a model worth thinking about for a little longer.

Public procurement and [AI] source code transparency, a (downstream) competition issue (re C-796/18)

Two years ago, in its Judgment of 28 May 2020 in case C-796/18, Informatikgesellschaft für Software-Entwicklung, EU:C:2020:395 (the ‘ISE case’), the Court of Justice of the European Union (CJEU) answered a preliminary ruling that can have very significant impacts in the artificial intelligence (AI) space, despite it being concerned with ‘old school’ software. More generally, the ISE case set the requirements to ensure that a contracting authority does not artificially distort competition for public contracts concerning (downstream) software services generally, and I argue AI services in particular.

The case risks going unnoticed because it concerned a relatively under-discussed form of self-organisation by the public administration that is exempted from the procurement rules (i.e. public-public cooperation; on that dimension of the case, see W Janssen, ‘Article 12’ in R Caranta and A Sanchez-Graells, European Public Procurement. Commentary on Directive 2014/24/EU (EE 2021) 12.57 and ff). It is thus worth revisiting the case and considering how it squares with regulatory developments concerning the procurement of AI, such as the development of standard clauses under the auspices of the European Commission.

The relevant part of the ISE case

In the ISE case, one of the issues at stake concerned whether a contracting authority would be putting an economic operator (i.e. the software developer) in a position of advantage vis-à-vis its competitors by accepting the transfer of software free of charge from another contracting authority, conditional on undertaking to further develop that software and to share (also free of charge) those developments of the software with the entity from which it had received it.

The argument would be that by simply accepting the software, the receiving contracting authority would be advantaging the software publisher because ‘in practice, the contracts for the adaptation, maintenance and development of the base software are reserved exclusively for the software publisher since its development requires not only the source code for the software but also other knowledge relating to the development of the source code’ (C-796/18, para 73).

This is an important issue because it primarily concerns how to deal with incumbency (and IP) advantages in software-related procurement. The CJEU, in the context of the exemption for public-public cooperation regulated in Article 12 of Directive 2014/24/EU, established that

in order to ensure compliance with the principles of public procurement set out in Article 18 of Directive 2014/24 … first [the collaborating contracting authorities must] have the source code for the … software, second, that, in the event that they organise a public procurement procedure for the maintenance, adaptation or development of that software, those contracting authorities communicate that source code to potential candidates and tenderers and, third, that access to that source code is in itself a sufficient guarantee that economic operators interested in the award of the contract in question are treated in a transparent manner, equally and without discrimination (para 75).

Functionally, there is no reason to limit that three-pronged test to the specific context of public-public cooperation and, in my view, the CJEU position is generalisable as the relevant test to ensure that there is no artificial narrowing of competition in the tendering of software contracts due to incumbency advantage.

Implications of the ISE case

What this means is that, functionally, contracting authorities are under an obligation to ensure that they have access and dissemination rights over the source code, at the very least for the purposes of re-tendering the contract, or tendering ancillary contracts. More generally, they also need to have a sufficient understanding of the software — or technical documentation enabling that knowledge — so that they can share it with potential tenderers and in that manner ensure that competition is not artificially distorted.

All of this is highly relevant in the context of emerging practices of AI procurement. The debates around AI transparency are in large part driven by issues of commercial opacity/protection of business secrets, in particular of the source code, which makes it difficult both to justify the deployment of the AI in the public sector (for, let’s call them, due process and governance reasons demanding explainability) and to manage its procurement and its propagation within the public sector (e.g. as a result of initiatives such as ‘buy once, use many times’ or collaborative and joint approaches to the procurement of AI, which are seen as strategically significant).

While there is a movement towards requiring source code transparency (e.g., though not necessarily, by using open source solutions), this is not at all mainstreamed in policy-making. For example, the pilot UK algorithmic transparency standard does not mention source code. Short of future rules demanding source code transparency, which seem unlikely (see e.g. the approach in the proposed EU AI Act, Art 70), this issue will remain one for contractual regulation and negotiation. And contracts are likely to follow the approach of the general rules.

For example, in the proposal for standard contractual clauses for the procurement of AI by public organisations being developed under the auspices of the European Commission and on the basis of the experience of the City of Amsterdam, access to source code is presented as an optional contractual requirement on transparency (Art 6):

<optional> Without prejudice to Article 4, the obligations referred to in article 6.2 and article 6.3 [on assistance to explain an AI-generated decision] include the source code of the AI System, the technical specifications used in developing the AI System, the Data Sets, technical information on how the Data Sets used in developing the AI System were obtained and edited, information on the method of development used and the development process undertaken, substantiation of the choice for a particular model and its parameters, and information on the performance of the AI System.

For the reasons above, I would argue that a clause such as that one is not at all voluntary, but a basic requirement in the procurement of AI if the contracting authority is to be able to legally discharge its obligations under EU public procurement law going forward. And given the uncertainty on the future development, integration or replacement of AI solutions at the time of procuring them, this seems an unavoidable issue in all cases of AI procurement.

Let’s see if the CJEU is confronted with a similar issue, or with the need to ascertain the value of access to data as ‘pecuniary interest’ (which I think, on the basis of a different part of the ISE case, is clearly to be answered in the affirmative), any time soon.

The importance of procurement for public sector AI uptake

In case there was any question about the importance and central role of public procurement for the uptake of artificial intelligence (AI) by the public sector (there wasn’t, though), two recent policy reports confirm that this is the case, at the very least in the European context.

AI Watch’s must-read ‘European landscape on the use of Artificial Intelligence by the Public Sector’ (1 June 2022) makes the point very clearly by reference to the analysis of AI strategies adopted by 24 EU Member States: ‘the procurement of AI technologies or the increased collaboration with innovative private partners is seen as an important way to facilitate the introduction of AI within the public sector. Guidance on how to stimulate and organise AI procurement by civil servants should potentially be strengthened and shared among Member States’ (at 26). Concerning guidance, the report refers to the European Commission’s supported process of developing standard contractual clauses for the procurement of AI (see here), and there is also a twin AI Watch Handbook for the adoption of AI by the public sector (25 May 2022) that includes a recommendation on procurement guidance (‘Promote the development of multilingual guidelines, criteria and tools for public procurement of AI solutions in the public sector throughout Europe‘, recommendation 2.5, and details at 34-35).

The European landscape report provides some more interesting detail on national strategies considering AI procurement adaptations.

The need to work together with the private sector in this area is repeatedly stressed. However, strategies mention that historically it has been difficult for innovative companies to work together with government authorities due to cumbersome procurement regulations. In this area, several strategies (12, 50%) [though note the table below indicates 13, rather than 12 strategies] come up with new policy initiatives to improve the procurement processes. The Spanish strategy, for example, mentions that new innovative public procurement mechanisms will be introduced to help the procurement of new solutions from the market, while the Maltese government describes how existing public procurement processes will be changed to facilitate the procurement of emerging technologies such as AI. The Dutch and Czech strategies mention that hackathons for public sector AI will be introduced to assist in the procurement of AI. Civil servants will be given training and awareness in procurement to assist them in this process, something that is highlighted in the Estonian strategy. The French strategy stresses that current procurement regulation already provides a lot of freedom for innovative procurement but that because of risk aversion present within public administrations all possibilities are not taken into consideration (at 25-26, emphasis in the original).

Own elaboration, based on Table 7 in the AI Watch report.

There is also an interesting point on the need to create internal public sector AI capabilities: “Some strategies say that the public organisations should work more together with private organisations (where the missing skillsets are present), either through partnerships or by procurement. On the one hand, this is an extremely important and promising shift in the public sector that more and more must move towards a networking perspective. In fact, the complexity and variety of skills required by AI cannot be always completely internalised. On the other hand, such partnerships and procurement still require a baseline in expertise in AI within the public sector staff to avoid common mistakes or dependency on external parties” (at 23, emphasis added).

Given the strategic importance of procurement, as well as the need to upskill the procurement workforce and to build additional AI capacity in the public sector to manage procurement processes, this is an area of research and policy that will only increase in relevance in the near and longer term.

This same direction of travel is reflected in the UK Central Digital and Data Office’s recent ‘Transforming for a digital future: 2022 to 2025 roadmap for digital and data’ (9 June 2022). One of its main aspirations is to generate ‘Significant savings by leveraging government’s combined purchasing power and reducing duplicative procurement, to shift to a “buy once, use many times” approach to technology’. This should be achieved by the horizontal promotion of ‘a “buy once, use many times” approach to technology, including by making use of a common code, pattern and architecture repository for government’. Implicitly, this will also require a review of procurement policies and practices.

Importantly—and potentially problematically—it will also raise the stakes of AI procurement, in particular if the roll-out of the ‘bought once’ technology is rushed and its negative impacts or implications can only be identified once it has already been propagated, or in relation to some implementations only. Avoiding this will require very careful AI impact assessments, as well as piloting and scalability approaches with strong risk-management systems embedded by design.

As always, this will be an area fun to keep an eye on.

New paper on procurement corruption and AI

I have just uploaded a new paper on SSRN: ‘Procurement corruption and artificial intelligence: between the potential of enabling data architectures and the constraints of due process requirements’, to be published in S. Williams-Elegbe & J. Tillipman (eds), Routledge Handbook of Public Procurement Corruption (forthcoming). In this paper, I reflect on the potential improvements that using AI for anti-corruption purposes can practically have in the current (and foreseeable) context of AI development, (lack of) procurement and other data, and existing due process constraints on the automation or AI-support of corruption-related procurement decision-making (such as eg debarment/exclusion or the imposition of fines). The abstract is as follows:

This contribution argues that the expectations around the deployment of AI as an anti-corruption tool in procurement need to be tamed. It explores how the potential applications of AI replicate anti-corruption interventions by human officials and, as such, can only provide incremental improvements but not a significant transformation of anti-corruption oversight and enforcement architectures. It also stresses the constraints resulting from existing procurement data and the difficulties in creating better, unbiased datasets and algorithms in the future, which would also generate their own corruption risks. The contribution complements this technology-centred analysis with a critical assessment of the legal constraints based on due process rights applicable even when AI supports continued human intervention. This in turn requires a close consideration of the AI-human interaction, as well as a continuation of the controls applicable to human decision-making in corruption-prone activities. The contribution concludes, first, that prioritising improvements in procurement data capture, curation and interconnection is a necessary but insufficient step; and second, that investments in AI-based anti-corruption tools cannot substitute, but only complement, current anti-corruption approaches to procurement.

As always, feedback more than welcome. Not least, because I somehow managed to write this ahead of the submission deadline, so I would have time to adjust things ahead of publication. Thanks in advance: a.sanchez-graells@bristol.ac.uk.

Open Contracting: Where is the UK and What to Expect?

I had the pleasure of delivering a webinar on ‘Open Contracting Data: Where Are We & What Could We Expect?’ for the Gloucester branch of the Chartered Institute of Procurement & Supply. The webinar assessed the current state of development and implementation of open contracting data initiatives in the UK. It also considered the main principles and goals of open contracting, as well as its practical implementation, and the specific challenges posed by the disclosure of business-sensitive information. The webinar also mapped potential future developments and, more generally, reflected on the relevance of an adequate procurement data infrastructure for the deployment of digital technologies and, in particular, AI. The slides and the recording are both available via dropbox.

As always, feedback most welcome: a.sanchez-graells@bristol.ac.uk.

PS. For an update on recent EBRD/EU sponsored open contracting initiatives in Greece and Poland, see here.

3 priorities for policy-makers thinking of AI and machine learning for procurement governance


I find that carrying out research in the digital technologies and governance field can be overwhelming. And that is for an academic currently having the luxury of full-time research leave… so I can only imagine how much more overwhelming it must be for policy-makers thinking about the adoption of artificial intelligence (AI) and machine learning for procurement governance, to identify potential use cases and to establish viable deployment strategies.

Prioritisation seems particularly complicated, as managing such a significant change requires careful planning and paying attention to a wide variety of potential issues. However, getting prioritisation right is probably the best way of increasing the chances of success for the deployment of digital technologies for procurement governance — as well as in other areas of Regtech, such as financial supervision.

This interesting speech by James Proudman (Executive Director of UK Deposit Takers Supervision, Bank of England) on 'Managing Machines: the governance of artificial intelligence', precisely focuses on such issues. And I find the conclusions particularly enlightening:

First, the observation that the introduction of AI/ML poses significant challenges around the proper use of data, suggests that boards should attach priority to the governance of data – what data should be used; how should it be modelled and tested; and whether the outcomes derived from the data are correct.

Second, the observation that the introduction of AI/ML does not eliminate the role of human incentives in delivering good or bad outcomes, but transforms them, implies that boards should continue to focus on the oversight of human incentives and accountabilities within AI/ML-centric systems.

And third, the acceleration in the rate of introduction of AI/ML will create increased execution risks during the transition that need to be overseen. Boards should reflect on the range of skill sets and controls that are required to mitigate these risks both at senior level and throughout the organisation.

These seem to me directly transferable to the context of procurement governance and the design of strategies for the deployment of AI and machine learning, as well as other digital technologies.

First, it is necessary to create an enabling data architecture and to put significant thought into how to extract value from the increasingly available data. In that regard, there are two opportunities that should not be missed. One concerns the treatment of procurement datasets as high-value datasets for the purposes of the special regime of the Open Data Directive (for more details, see section 6 here), which will require careful consideration of the content and level of openness of procurement data in the context of the domestic transpositions that need to be in place by 17 July 2021. The other, related opportunity concerns the implementation of the new rules on eForms for procurement data publications, which Member States need to adopt by 14 November 2022. Building on the data architecture that will result from both sets of changes—which should be coordinated—will allow for the deployment of data analytics and machine learning techniques. The purposes and goals of such deployments also need to be considered carefully, as well as their potential implications.

Second, it seems clear that the changes in the management of procurement data and the quick development of analytics that can support procurement decision-making pile additional training and upskilling needs on the existing (and partially unaddressed?) challenges of full consolidation of eProcurement across the EU. Moreover, it should be clear that there is no such thing as an objective and value-neutral implementation of technological governance solutions, and that all levels of accountability need to be provided with adequate data skills and digital literacy upgrades in order to check what is being done at the technical level (for crystal-clear discussion, see van der Voort et al, 'Rationality and politics of algorithms. Will the promise of big data survive the dynamics of public decision making?' (2019) 36(1) Government Information Quarterly 27-38). Otherwise, governance mechanisms would be at risk of failure due to techno-capture and/or techno-blindness, whether intended or accidental.

Third, there is an increasing need to manage change and the risks that come with it. In a notoriously risk averse policy field such as procurement, this is no minor challenge. This should also prompt some rethinking of the way the procurement function is organised and its risk-management mechanisms.

Addressing these priorities will not be easy or cheap, but these are the fundamental building blocks required to enable the public procurement sector to reap the benefits of digital technologies as they mature. In consultancy jargon, these are the priorities to ‘future-proof’ procurement strategies. Will they be adopted?

Postscript

It is worth adding that the first and second issues, in particular, lend themselves to strong collaborations between policy-makers and academics. As rightly pointed out by Pencheva et al, 'Big Data and AI – A transformational shift for government: So, what next for research?' (2018) Public Policy and Administration, advance access at 16:

... governments should also support the efforts for knowledge creation and analysis by opening up their data further, collaborating with – and actively seeking inputs from – researchers to understand how Big Data can be utilised in the public sector. Ultimately, the supporting field of academic thought will only be as strong as the public administration practice allows it to be.

Digital technologies, public procurement and sustainability: some exploratory thoughts


** This post is based on the seminar given at the Law Department of Pompeu Fabra University in Barcelona, on 7 November 2019. The slides for the seminar are available here. Please note that some of the issues have been rearranged. I am thankful to participants for the interesting discussion, and to Dr Lela Mélon and Prof Carlos Gómez Ligüerre for the kind invitation to participate in this activity of their research group on patrimonial law. I am also grateful to Karolis Granickas for comments on an earlier draft. The standard disclaimer applies.**


1. Introductory detour

The use of public procurement as a tool to further sustainability goals is not a new topic, but rather the object of a long-running discussion embedded in the broader setting of the use of procurement for the pursuit of horizontal or secondary goals—currently labelled smart or strategic procurement. The instrumentalisation of procurement for (quasi)regulatory purposes gives rise to a number of issues, such as: regulatory transfer; the distortion of the very market mechanisms on which procurement rules rely as a result of added regulatory layers and constraints; legitimacy and accountability issues; complex regulatory impact assessments; professionalisation issues; etc.

Discussions in this field are heavily influenced by normative and policy positions, which are not always clearly spelled out but still drive most of the existing disagreement. My own view is that the use of procurement for horizontal policies is not per se desirable. The simple fact that public expenditure can act as a lever/incentive to affect private (market) behaviour does not mean that it should be used for that purpose at every opportunity and/or in an unconstrained manner. Procurement should not be used in lieu of legislation or administrative regulation where it is a second-best regulatory tool. Embedding regulatory elements that can also achieve horizontal goals in the procurement process should only take place where doing so has clear synergies with the main goal of procurement: the efficient satisfaction of public sector needs and/or needs in the public interest. This generates a spectrum of potential uses of procurement with differing degrees of desirability.

At one end, and at its least desirable, procurement can be and is used as a trade barrier for economic protectionism. In my view, this should not happen. At the other end of the spectrum, at its most desirable, procurement can be and (sometimes) is used in a manner that supports environmental sustainability and technical innovation. In my view, this should happen, and more than it currently does. In between these two ends, there are uses of procurement for the promotion of labour and social standards, as well as for the promotion of human rights. Controversial as this position is, in my view, the use of procurement for the pursuit of those goals should be subjected to strict proportionality analysis in order to make sure that the secondary goal does not prevent the main purpose of the efficient satisfaction of public sector needs and/or needs in the public interest.

From a normative perspective, thus, I think that there is a wide space of synergy between procurement and environmental sustainability—which goes beyond green procurement and extends to the use of procurement to support a more circular economy—and that this can be used more effectively than is currently the case, due to emerging innovative uses of digital technologies for procurement governance.

This is the topic on which I would like to concentrate, to formulate some exploratory thoughts. The following reflections are focused on the EU context, but hopefully they are of broader relevance. I first zoom in on the strategic priorities of fostering sustainability through procurement (2) and the digitalisation of procurement (3), and then critically assess the current state of development of digital technologies for procurement governance (4). I then look at the interaction between both strategic goals, in terms of the potential for sustainable digital procurement (5), which leads to specific discussion of the need for an enabling data architecture (6), the potential of AI for sustainable procurement (7), the potential for the implementation of blockchains for sustainable procurement (8) and the need to refocus the emerging guidelines on the procurement of digital technologies to stress their sustainability dimension (9). Some final thoughts conclude (10).

2. Public procurement and sustainability

As mentioned above, the use of public procurement to promote sustainability is not a new topic. However, it has been receiving increasing attention in recent policy-making and legislative efforts (see eg this recent update)—though these are yet to translate into the level of practical change required to make a relevant contribution to pressing challenges, such as the climate emergency (for a good critique, see this recent post by Lela Mélon).

Facilitating the inclusion of sustainability-related criteria in procurement was one of the drivers for the new rules in the 2014 EU Public Procurement Package, which create a fairly flexible regulatory framework. Most remaining problems are linked to the implementation of such a framework, not its regulatory design. Cost, complexity and institutional inertia are the main obstacles to a broader uptake of sustainable procurement.

The European Commission is alive to these challenges. In its procurement strategy ‘Making Procurement work in and for Europe’ [COM(2017) 572 final; for a critical assessment, see here], the Commission stressed the need to facilitate and to promote the further uptake of strategic procurement, including sustainable procurement.

However, most of its proposals are geared towards the publication of guidance (such as the Buying Green! Handbook), standardised solutions (such as the library of EU green public procurement criteria) and the sharing of good practices (such as in this library of use cases) and training materials (eg this training toolkit). While these are potentially useful interventions, the main difficulty remains in their adoption and implementation at Member State level.


While it is difficult to have a good view of the current situation (see eg the older studies available here, and the terrible methodology used for this 2015 PWC study for the Commission), it seems indisputable that there are massive differences across EU Member States in terms of sustainability-oriented innovation in procurement.

Taking as a proxy the differences that emerge from the Eco-Innovation Scoreboard, it seems clear that this very different level of adoption of sustainability-related eco-innovation is likely reflective of the different approaches followed by the contracting authorities of the different Member States.

Such disparities create difficulties for policy design and coordination, as the Commission acknowledges and as the limitations of its procurement strategy evidence. The main interventions thus depend on Member States (and their sub-units).

3. Public procurement digitalisation beyond e-Procurement

Similarly to the discussion above, the bidirectional relationship between the use of procurement as a tool to foster innovation, and the adaptation of procurement processes in light of technological innovations, is not a new issue. In fact, the transition to electronic procurement (eProcurement) was also one of the main drivers for the revision of the EU rules that resulted in the 2014 Public Procurement Package, as well as the flanking regulation of eInvoicing and the new rules on eForms. eProcurement (broadly understood) is thus an area where further changes will come to fruition within the next five years.


However, even a maximum implementation of the EU-level eProcurement rules would still fall short of creating a fully digitalised procurement system. There are, indeed, several aspects where current technological solutions can enable a more advanced and comprehensive eProcurement system. For example, it is possible to automate larger parts of the procurement process and to embed compliance checks (eg in solutions such as the Prozorro system developed in Ukraine). It is also possible to use the data automatically generated by the eProcurement system (or otherwise consolidated in a procurement register) to develop advanced data analytics to support procurement decision-making, monitoring, audit and the deployment of additional screens, such as on conflicts of interest or competition checks.
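To make this more concrete, the following is a minimal sketch of the kind of data-driven screens just mentioned, using pandas and entirely hypothetical data and column names; the red flags (single-bid procedures, supplier dominance) and thresholds are purely illustrative, and any real screen would need legally and statistically robust indicators:

```python
import pandas as pd

# Hypothetical tender-level data, as could be extracted from an
# eProcurement system or a procurement register.
tenders = pd.DataFrame({
    "tender_id": ["T1", "T2", "T3", "T4", "T5"],
    "buyer":     ["City A", "City A", "City B", "City B", "City B"],
    "winner":    ["Acme", "Acme", "Acme", "BuildCo", "Acme"],
    "num_bids":  [1, 1, 3, 2, 1],
    "value_eur": [120_000, 95_000, 400_000, 250_000, 80_000],
})

# Screen 1: single-bid procedures, a common competition red flag.
single_bid = tenders[tenders["num_bids"] == 1]
print(single_bid[["tender_id", "buyer", "winner"]])

# Screen 2: suppliers capturing a large share of a buyer's total spend.
spend = tenders.groupby(["buyer", "winner"])["value_eur"].sum()
share = spend / spend.groupby("buyer").transform("sum")
print(share[share > 0.5])  # illustrative 50% dominance threshold
```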

Progressing the national eProcurement systems to those higher levels of functionality would already represent progress beyond the mandatory eProcurement baseline in the 2014 EU Public Procurement Package and the flanking initiatives listed above; and, crucially, enabling more advanced data analytics is one of the effects sought with the new rules on eForms, which aim to significantly increase the availability of (better) procurement data for transparency purposes.

Although it is an avenue mainly explored in other jurisdictions, and currently in the US context, it is also possible to create public marketplaces akin to Amazon/eBay/etc to generate a more user-friendly interface for different types of catalogue-based eProcurement systems (see eg this recent piece by Chris Yukins).

Beyond that, the (further) digitalisation of procurement is another strategic priority for the European Commission; not only for procurement’s sake, but also in the context of the wider strategy to create an AI-friendly regulatory environment and to use procurement as a catalyst for innovations of broader application – along the lines of the entrepreneurial State (Mazzucato, 2013; see here for an adapted shorter version).

Indeed, the Commission has formulated a bold(er) vision for future procurement systems based on emerging digital technologies, in which it sees a transformative potential: “New technologies provide the possibility to rethink fundamentally the way public procurement, and relevant parts of public administrations, are organised. There is a unique chance to reshape the relevant systems and achieve a digital transformation” (COM(2017) 572 fin at 11).

Even though the Commission has not been explicit, it may be worth trying to map which of the currently emerging digital technologies could be of (more direct) application to procurement governance and practice. Based on the taxonomy included in a recent OECD report (2019a, Annex C), it is possible to identify the following types and specific technologies with potential procurement application:

AI solutions

  • Virtual Assistants (Chat bots or Voice bots): conversational, computer-generated characters that simulate a conversation to deliver voice- or text-based information to a user via a Web, kiosk or mobile interface. A VA incorporates natural-language processing, dialogue control, domain knowledge and a visual appearance (such as photos or animation) that changes according to the content and context of the dialogue. The primary interaction methods are text-to-text, text-to-speech, speech-to-text and speech-to-speech;

  • Natural language processing: technology that involves the ability to turn text or audio speech into encoded, structured information, based on an appropriate ontology. The structured data may be used simply to classify a document, as in “this report describes a laparoscopic cholecystectomy,” or it may be used to identify findings, procedures, medications, allergies and participants;

  • Machine Learning: the goal is to devise learning algorithms that do the learning automatically without human intervention or assistance;

  • Deep Learning: allows computational models that are composed of multiple processing layers to learn representations of data with multiple levels of abstraction;

  • Robotics: deals with the design, construction, operation, and use of robots, as well as computer systems for their control, sensory feedback, and information processing;

  • Recommender systems: a subclass of information filtering systems that seeks to predict the "rating" or "preference" that a user would give to an item;

  • Expert systems: a computer system that emulates the decision-making ability of a human expert;

Digital platforms

  • Distributed ledger technology (DLT): a consensus of replicated, shared, and synchronized digital data geographically spread across multiple sites, countries, or institutions. There is no central administrator or centralised data storage. A peer-to-peer network is required, as well as consensus algorithms to ensure replication across nodes; Blockchain is one of the most common implementations of DLT;

  • Smart contracts: a computer protocol intended to digitally facilitate, verify, or enforce the negotiation or performance of a contract;

  • IoT Platform: a platform on which to create and manage applications, to run analytics, and to store and secure data, in order to get value from the Internet of Things (IoT);

Not all technologies are equally relevant to procurement—and some of them are interrelated in a manner that requires concurrent development—but these seem to me to be those with a higher potential to support the procurement function in the future. Their development needs not take place solely, or primarily, in the context of procurement. Therefore, their assessment should be carried out in the broader setting of the adoption of digital technologies in the public sector.
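As a procurement-flavoured illustration of the natural language processing entry above, the following minimal sketch (with toy data) classifies free-text tender descriptions into broad categories; a real system would be trained on properly labelled notices (eg against CPV codes), and the categories and examples here are hypothetical:

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Toy training set: tender descriptions and hand-assigned categories.
descriptions = [
    "supply of electric buses for urban transport",
    "construction of a primary school building",
    "cloud hosting and data storage services",
    "road resurfacing and maintenance works",
]
categories = ["transport", "construction", "IT services", "construction"]

# TF-IDF features plus a linear classifier: a standard text-classification baseline.
model = make_pipeline(TfidfVectorizer(), LogisticRegression())
model.fit(descriptions, categories)

print(model.predict(["procurement of servers and backup storage"]))
# likely ['IT services'] on this toy data
```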

4. Digital technologies & the public sector, including procurement

The emergence of the above-mentioned digital technologies is now seen as a potential solution to complex public policy problems, such as the promotion of more sustainable public procurement. Keeping track of all the potential use cases in the public sector is difficult, and the hype around buzzwords such as AI, blockchain or the internet of things (IoT) generates inflated claims of potential solutions to even some of the most wicked public policy problems (eg corruption).

This is reflective of the same hype in private markets, and in particular in financial and consumer markets, where AI is supposed to revolutionise the way we live, almost beyond recognition. There also seems to be an emerging race to the top (or rather, a copy-cat effect) in policy-making circles, as more and more countries adopt AI strategies in the hope of harnessing the potential of these technologies to boost economic growth.

In my view, digital technologies are receiving excessive attention. These are immature technologies whose likely development and usefulness is difficult to grasp beyond a relatively abstract level of potentiality. As such, they may be attracting disproportionate levels of policy attention and, possibly, disproportionate levels of investment (diversion).

The implementation of digital technologies in the public sector faces a number of specific difficulties—not least, around data availability and data skills, as stressed in a recent OECD report (2019b). While it is probably beyond doubt that they will have an impact on public governance and the delivery of public services, it is more likely to be incremental rather than disruptive or revolutionary. Along these lines, another recent OECD report (2019c) stresses the need to take a critical look at the potential of artificial intelligence, in particular in relation to public sector use cases.

The OECD report (2019a) mentioned above shows how, despite these general strategies and the high levels of support at the top levels of policy-making, there is limited evidence of significant developments on the ground. This is the case, in particular, regarding the implementation of digital technologies in public procurement, where the OECD documents very limited developments.


Of course, this does not mean that we will not see more and more widespread developments in the coming years, but a note of caution is necessary if we are to embrace realistic expectations about the potential for significant changes resulting from procurement digitalisation. The following sections concentrate on the speculative analysis of such potential use of digital technologies to support sustainable procurement.

5. Sustainable digital procurement

Bringing together the scope for more sustainable public procurement (2), the progressive digitalisation of procurement (3), and the emergence of digital technologies susceptible of implementation in the public sector (4), the combined strategic goal (or ideal) would be to harness the potential of digital technologies to promote (more) sustainable procurement. This is a difficult exercise, surrounded by uncertainty, so the rest of this post is all speculation.

In my view, there are different ways in which digital technologies can be used for sustainability purposes. The contribution that each digital technology (DT) can make depends on its core functionality. In simple functional terms, my understanding is that:

  • AI is particularly apt for the massive processing of (big) data, as well as for the implementation of data-based machine learning (ML) solutions and the automation of some tasks (through so-called robotic process automation, RPA);

  • Blockchain is apt for the implementation of tamper-resistant/evident decentralised data management;

  • The internet of things (IoT) is apt to automate the generation of some data and (could be?) apt to breach the virtual/real frontier through oracle-enabled robotics.

The timeline that we could expect for the development of these solutions is also highly uncertain, although there are expectations for some technologies to mature within the next four years, whereas others may still take closer to ten years.

© Gartner, Aug 2018.

Each of the core functionalities or basic strengths of these digital technologies, as well as their rate of development, will determine a higher or lower likelihood of successful implementation in the area of procurement, which is a highly information/data-sensitive area of public policy and administration. Therefore, it seems unavoidable to first look at the need to create an enabling data architecture as a priority (and pre-condition) to the deployment of any digital technologies.

6. An enabling data architecture as a priority

The importance of the availability of good quality data in the context of digital technologies cannot be over-emphasised (see eg OECD, 2019b). This is also clear to the European Commission, as it has also included the need to improve the availability of good quality data as a strategic priority. Indeed, the Commission stressed that “Better and more accessible data on procurement should be made available as it opens a wide range of opportunities to assess better the performance of procurement policies, optimise the interaction between public procurement systems and shape future strategic decisions” (COM(2017) 572 fin at 10-11).

However, despite the launch of a set of initiatives that seek to improve the existing procurement data architecture, there are still significant difficulties in the generation of data [for discussion and further references, see A Sanchez-Graells, “Data-driven procurement governance: two well-known elephant tales” (2019) 24(4) Communications Law 157-170; idem, “Some public procurement challenges in supporting and delivering smart urban mobility: procurement data, discretion and expertise”, in M Finck, M Lamping, V Moscon & H Richter (eds), Smart Urban Mobility – Law, Regulation, and Policy, MPI Studies on Intellectual Property and Competition Law (Springer 2020) forthcoming; and idem, “EU Public Procurement Policy and the Fourth Industrial Revolution: Pushing and Pulling as One?”, Working Paper for the YEL Annual Conference 2019 ‘EU Law in the era of the Fourth Industrial Revolution’].

To be sure, there are impending advances in the availability of quality procurement data as a result of the increased uptake of the Open Contracting Data Standards (OCDS) developed by the Open Contracting Partnership (OCP); the new rules on eForms; the development of eGovernment Application Programming Interfaces (APIs); the 2019 Open Data Directive; the principles of business to government data sharing (B2G data sharing); etc. However, it seems to me that the European Commission needs to exercise clearer leadership in the development of an EU-wide procurement data architecture. There is, in particular, one measure that could be easily adopted and would make a big difference.

The 2019 Open Data Directive (Directive 2019/1024/EU, ODD) establishes a special regime for high-value datasets, which need to be available free of charge (subject to some exceptions); machine readable; provided via APIs; and provided as a bulk download, where relevant (Art 14(1) ODD). Those high-value datasets are yet to be identified by the European Commission through implementing acts aimed at specifying datasets within a list of thematic categories included in Annex I, which includes the following datasets: geospatial; Earth observation and environment; meteorological; statistics; companies and company ownership; and mobility. In my view, most relevant procurement data can clearly fit within the category of statistical information.

More importantly, the directive specifies that the ‘identification of specific high-value datasets … shall be based on the assessment of their potential to: (a) generate significant socioeconomic or environmental benefits and innovative services; (b) benefit a high number of users, in particular SMEs; (c) assist in generating revenues; and (d) be combined with other datasets’ (Art 14(2) ODD). Given the high potential of procurement data to unlock (a), (b) and (d), as well as, potentially, to generate savings analogous to (c), the inclusion of datasets of procurement information in the future list of high-value datasets for the purposes of the Open Data Directive seems like an obvious choice.

Of course, there will be issues to iron out, as not all procurement information is equally susceptible of generating those advantages and there is the unavoidable need to ensure an appropriate balance between the publication of the data and the protection of legitimate (commercial) interests, as recognised by the Directive itself (Art 2(d)(iii) ODD) [for extended discussion, see here]. However, this would be a good step in the direction of ensuring the creation of a forward-looking data architecture.

At any rate, this is not really a radical idea. At least half of the EU is already publishing some public procurement open data, and many Eastern Partnership countries publish procurement data in OCDS (eg Moldova, Ukraine, Georgia). The suggestion here would bring more order into this bottom-up development and would help Member States understand what is expected, where to get help from, etc, as well as ensure the desirable level of uniformity, interoperability and coordination in the publication of the relevant procurement data.
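For illustration, the following minimal sketch shows how OCDS data can be consumed programmatically. The field paths (releases[].buyer.name, releases[].tender.value.amount) follow the OCDS schema, but the file name is a placeholder, and real data would require currency handling and deduplication across releases:

```python
import json
from collections import defaultdict

# Placeholder file name: an OCDS release package downloaded from a publisher.
with open("release-package.json") as f:
    package = json.load(f)

# Sum estimated tender values per buyer.
totals = defaultdict(float)
for release in package.get("releases", []):
    buyer = release.get("buyer", {}).get("name", "unknown")
    amount = release.get("tender", {}).get("value", {}).get("amount")
    if amount is not None:
        totals[buyer] += amount  # NB: mixes currencies if not filtered first

for buyer, total in sorted(totals.items(), key=lambda kv: -kv[1]):
    print(f"{buyer}: {total:,.0f}")
```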

Beyond that, in my view, more needs to be done to also generate backward-looking databases that enable the public sector to design and implement adequate sustainability policies, eg in relation to the repair and re-use of existing assets.

Only when an adequate data architecture is in place will it be possible to deploy advanced digital technologies. Therefore, this should be given the highest priority by policy-makers.

7. Potential AI uses for sustainable public procurement

If/when sufficient data is available, there will be scope for the deployment of several specific implementations of artificial intelligence. It is possible to imagine the following potential uses:

  • Sustainability-oriented (big) data analytics: this should be relatively easy to achieve, and it would simply be the deployment of big data analytics to monitor the extent to which procurement expenditure is pursuing or achieving specified sustainability goals (see the sketch after this list). This could support the design and implementation of sustainability-oriented procurement policies and, where appropriate, it could generate public disclosure of that information in order to foster civic engagement and feed back into political processes.

  • Development of sustainability screens/indexes: this would be a slight variation of the former and could facilitate the generation of synthetic data visualisations that reduced the burden of understanding the data analytics.

  • Machine Learning-supported data analysis with sustainability goals: this could aim to train algorithms to establish eg the effectiveness of sustainability-oriented procurement policies and interventions, with the aim of streamlining existing policies and to update them at a pace and level of precision that would be difficult to achieve by other means.

  • Sustainability-oriented procurement planning: this would entail the deployment of algorithms aimed at predictive analytics that could improve procurement planning, in particular to maximise the sustainability impact of future procurements.
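A minimal sketch of the first of these uses, with hypothetical data and fields (eg a flag recording whether green award criteria were used), might look as follows:

```python
import pandas as pd

# Hypothetical contract-level data with a green-criteria flag.
contracts = pd.DataFrame({
    "buyer":          ["City A", "City A", "City B", "City B"],
    "value_eur":      [200_000, 150_000, 500_000, 100_000],
    "green_criteria": [True, False, True, True],
})

# Share of each buyer's spend awarded under green criteria.
green_share = (
    contracts.assign(green_spend=contracts["value_eur"] * contracts["green_criteria"])
             .groupby("buyer")[["value_eur", "green_spend"]].sum()
             .assign(share=lambda df: df["green_spend"] / df["value_eur"])
)
print(green_share)
```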

Moreover, where clear rules/policies are specified, there will be scope for:

  • Compliance automation: it is possible to structure procurement processes and authorisations in such a way that compliance with pre-specified requirements is ensured (within the eProcurement system; see the sketch after this list). This facilitates ex ante interventions that could minimise the risk of, and the need for, ex post contractual modifications or tender cancellations.

  • Recommender/expert systems: it would be possible to use machine learning to assist in the design and implementation of procurement processes in a way that supported the public buyer, in an instance of cognitive computing that could accelerate the gains that would otherwise require more significant investments in professionalisation and specialisation of the workforce.

  • Chatbot-enabled guidance: similarly to the two applications above, the use of procurement intelligence could underpin chatbot-enabled systems that supported the public buyers.
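By way of illustration, compliance automation of this kind can be as simple as a set of deterministic rules checked before a tender is published. The rules and figures below are purely illustrative, not the actual legal thresholds:

```python
def check_tender(tender: dict) -> list:
    """Return compliance issues blocking publication (illustrative rules only)."""
    issues = []
    if tender["estimated_value"] >= 140_000 and tender["procedure"] == "direct award":
        issues.append("Above-threshold contract requires a competitive procedure.")
    if not tender.get("sustainability_criteria"):
        issues.append("No sustainability criteria specified.")
    if tender["procedure"] == "open" and tender["deadline_days"] < 30:
        issues.append("Tender deadline too short for an open procedure.")
    return issues

draft = {"estimated_value": 250_000, "procedure": "direct award",
         "deadline_days": 15, "sustainability_criteria": None}
for issue in check_tender(draft):
    print("BLOCKED:", issue)
```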

A further open question is whether AI could ever autonomously generate new sustainability policies. I dare not engage in such an exercise in futurology…

8. Limited use of blockchain/DLTs for sustainable public procurement


By contrast with the potential for big data and the AI it can enable, the potential for blockchain applications in the context of procurement seems to me much more limited (for further details, see here, here and here). To put it simply, the core advantages of distributed ledger technologies/blockchain derive from their decentralised structure.

Whereas there are several different potential configurations of DLTs (see eg Rauchs et al, 2019 and Alessie et al, 2019), the configuration of the blockchain affects its functionalities—with the highest levels of functionality being created by open and permissionless blockchains.

However, such a structure is fundamentally uninteresting to the public sector, which is unlikely to give up control over the system. This has been repeatedly stressed and confirmed in an overview of recent implementations (OECD, 2019a:16; see also OECD, 2018).

Moreover, even beyond the issue of public sector control, it should be stressed that existing open and permissionless blockchains operate on the basis of a proof-of-work (PoW) consensus mechanism, which has a very high carbon footprint (in particular in the case of Bitcoin). This also makes such systems inapt for sustainable digital procurement implementations.

Therefore, sustainable blockchain solutions (ie private and permissioned, based on proof-of-stake (PoS) or similar consensus mechanisms) are likely to present very limited advantages for procurement implementation over advanced systems of database management—and, possibly, even more generally (see eg this interesting critical paper by Low & Mik, 2019).

Moreover, even if there were a way to work around those constraints and design a viable technical solution, that by itself would still not fix the underlying procurement policy complexity, which will necessarily impose constraints on technologies that require deterministic coding, eg

  • Tenders on a blockchain - the proposals to use blockchain for the implementation of the tender procedure itself are very limited, in my opinion, by the difficulty in structuring all requirements on the basis of IF/THEN statements (see here, and the sketch after this list).

  • Smart (public) contracts - the same constraints apply to smart contracts (see here and here).

  • Blockchain as an information exchange platform (Mélon, 2019, on file) - the proposals to use blockchain mechanisms to exchange information on best practices and tender documentation of successful projects could serve to address some of the confidentiality issues that could arise with ‘standard’ databases. However, regardless of the technical support to the exchange of information, the complexity in identifying best practices and in ensuring their replicability remains. This is evidenced by the European Commission’s Initiative for the exchange of information on the procurement of Large Infrastructure Projects (discussed here when it was announced), which has not been used at all in its first two years (as of 6 November 2019, there were no publicly-available files in the database).
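To illustrate the IF/THEN constraint mentioned in the first bullet above, the following hypothetical sketch contrasts requirements that can be coded deterministically with award criteria that cannot:

```python
def deterministic_checks(bid: dict) -> bool:
    # Quantitative requirements translate cleanly into code
    # (or into a smart contract): they are pure IF/THEN logic.
    return (
        bid["price"] <= 1_000_000
        and bid["delivery_days"] <= 90
        and bid["turnover"] >= 2_000_000
    )

def technical_merit(bid: dict) -> float:
    # "Quality of the proposed methodology", "aesthetic characteristics",
    # "innovation"... There is no IF/THEN statement for these: they need
    # evaluative human judgment, which is precisely what cannot be put
    # on-chain as deterministic code.
    raise NotImplementedError
```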

9. Sustainable procurement of digital technologies

A final issue to take into consideration is that the procurement of digital technologies needs itself to incorporate sustainability considerations. However, this does not seem to be happening amid the hype and over-excitement surrounding the experimentation with, and deployment of, these technologies.

Indeed, there are emerging guidelines on procurement of some digital technologies, such as AI (UK, 2019) (WEF, 2019) (see here for discussion). However, as could be expected, these guidelines are extremely technology-centric and their interaction with broader procurement policies is not necessarily straightforward.

I would argue that, in order for these technologies to enable more sustainable procurement, sustainability considerations need to be embedded not only in their application. They may well also require, for example, an earlier analysis of whether the life-cycle of existing solutions warrants their replacement, or of the long-term impacts of implementing digital technologies (eg in terms of life-cycle carbon footprint).

Pursuing technological development for its own sake can have significant environmental impacts that must be assessed.

10. Concluding thoughts

This (very long…) blog post has structured some of my thoughts on the interaction of sustainability and digitalisation in the context of public procurement. By way of conclusion, I will try to translate this into priorities for policy-making (and research). Overall, I believe that the main effort of policy-makers should now go into creating an enabling data architecture, and research in the short term can thus focus on its design and regulation. In the medium term, as use cases become clearer in the policy-making sphere, research should move towards the design of digital technology-enabled solutions (for sustainable public procurement, but not only) and their regulation, governance and social impacts. The long term is too difficult for me to foresee, as there is too much uncertainty. I can only guess that we will cross that bridge when/if we get there…

AI & sustainable procurement: the public sector should first learn what it already owns

ⓒ Christophe Benoit (Flickr).

[This post was first published at the University of Bristol Law School Blog on 14 October 2019].

While carrying out research on the impact of digital technologies on public procurement governance, I have realised that the deployment of artificial intelligence to promote sustainability through public procurement holds some promise. There are many ways in which machine learning can contribute to enhancing procurement sustainability.

For example, new analytics applied to open transport data can significantly improve procurement planning to support more sustainable urban mobility strategies, as well as the emergence of new models for the procurement of mobility as a service (MaaS). Machine learning can also be used to improve the logistics of public sector supply chains, as well as unlock new models of public ownership of eg cars. It can also support public buyers in identifying the green or sustainable public procurement criteria that will deliver the biggest improvements measured against any chosen key performance indicator, such as CO2 footprint, as well as support the development of robust methodologies for life-cycle costing.
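As a purely illustrative sketch of the last of these applications, the snippet below uses a random forest over invented historical contract data (all column names are my own assumptions) to rank green criteria by their association with a CO2 KPI; a real exercise would of course require far better data and careful causal analysis.

```python
# Hedged sketch: one way machine learning could rank green procurement
# criteria by their association with CO2 reductions. The dataset and all
# column names are hypothetical.
import pandas as pd
from sklearn.ensemble import RandomForestRegressor

# Hypothetical historical data: which sustainability criteria each past
# contract used (0/1 flags) and the measured CO2 footprint of the outcome.
df = pd.DataFrame({
    "recycled_materials": [1, 0, 1, 1, 0, 1, 0, 0],
    "low_emission_fleet": [0, 1, 1, 0, 1, 1, 0, 1],
    "energy_label_A":     [1, 1, 0, 0, 1, 0, 1, 0],
    "co2_tonnes":         [40, 35, 22, 48, 30, 18, 55, 42],
})

X, y = df.drop(columns="co2_tonnes"), df["co2_tonnes"]
model = RandomForestRegressor(n_estimators=200, random_state=0).fit(X, y)

# Feature importances offer a (crude) ranking of which criteria move the KPI.
for name, imp in sorted(zip(X.columns, model.feature_importances_),
                        key=lambda t: -t[1]):
    print(f"{name}: {imp:.2f}")
```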

However, it is also evident that artificial intelligence can only be effectively deployed where the public sector has an adequate data architecture. While advances in electronic procurement and digital contract registers are capable of generating that data architecture for the future, there is a significant problem concerning the digitalisation of information on the outcomes of past procurement exercises and the current stock of assets owned and used by the public sector. In this blog, I want to raise awareness about this gap in public sector information and to advocate for the public sector to invest in learning what it already owns as a potential major contribution to sustainability in procurement, in particular given the catalyst effect this could have for a more circular procurement economy.

Backward-looking data as a necessary evidence base

It is well known that the public sector’s management of procurement-related information is deficient. It is difficult enough to gain access to information on ‘live’ tender procedures. Accessing information on contract execution and any contractual modifications was nigh on impossible until the very recent implementation of the increased transparency requirements imposed by the EU’s 2014 Public Procurement Package. Moreover, even where that information can be identified, there are significant constraints on the disclosure of competition-sensitive information or business secrets, which can also restrict access. This can be compounded in the case of assets subject to outsourced maintenance contracts, or of assets procured under mechanisms that do not transfer ownership to the public sector.

Accessing information on the outcomes of past procurement exercises is thus a major challenge. Where the information is recorded, it is siloed and compartmentalised. And, in any case, this is not public information and it is oftentimes only held by the private firms that supplied the goods or provided the services—with information on public works more likely to be, at least partially, under public sector control. This raises complex issues of business to government (B2G) data sharing, which is only a nascent area of practice and where the guidance provided by the European Commission in 2018 leaves many questions unanswered.

I will not argue here that all that information should be automatically and unrestrictedly publicly disclosed, as that would require careful consideration of the implications of such disclosures. However, I submit that the public sector should invest in tracing back information on procurement outcomes for all of its existing stock of assets (either owned, or used under other contractual forms)—or, at least, for the main categories of buildings and real estate, transport systems, and IT and communications hardware. Such a database should then be made available to data scientists tasked with seeking all possible ways of optimising the value of that information for the design of sustainable procurement strategies.

In other words, in my opinion, if the public sector is to take procurement sustainability seriously, it should invest in creating a single, centralised database of the durable assets it owns as the necessary evidence base on which to seek to build more sustainable procurement policies. And it should then put that evidence base to good use.
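By way of illustration only, a minimal version of such an evidence base could start from a schema along the following lines; the table and column names are my own assumptions rather than any official data standard.

```python
# Illustrative sketch only: a minimal schema for a centralised register of
# durable public sector assets. Names are assumptions, not a standard.
import sqlite3

conn = sqlite3.connect(":memory:")   # a real register would be persistent
conn.executescript("""
CREATE TABLE asset (
    asset_id        TEXT PRIMARY KEY,
    category        TEXT NOT NULL,   -- eg 'building', 'vehicle', 'IT hardware'
    owner_entity    TEXT NOT NULL,   -- public body holding or using the asset
    tenure          TEXT NOT NULL,   -- 'owned', 'leased', 'PFI', ...
    procurement_ref TEXT,            -- link back to the originating tender
    acquired_on     DATE,
    expected_eol    DATE,            -- expected end of life
    annual_kwh      REAL             -- simple energy-use proxy
);
""")
conn.execute(
    "INSERT INTO asset VALUES (?,?,?,?,?,?,?,?)",
    ("IT-0001", "IT hardware", "Dept X", "owned", "TED-2018-123",
     "2018-05-01", "2023-05-01", 150.0),
)
# Example query: assets nearing end of life, as candidates for reuse/recycling.
for row in conn.execute(
        "SELECT asset_id, category FROM asset WHERE expected_eol < '2024-01-01'"):
    print(row)
```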

More circular procurement economy based on existing stocks

In my view, some of the main advantages of creating such a database in the short-, medium- and long-term would be as follows.

In the short term, having comprehensive data on existing public sector assets would allow for the deployment of different machine learning solutions to seek, for example, to identify redundant or obsolete assets that could be reassigned or disposed of, or to reassess the efficiency of existing investments, eg in terms of levels of use and the potential for increased sharing of assets, or in terms of the energy (in)efficiency derived from their use. It would also allow for a better understanding of potential additional improvements in eg maintenance strategies, as services could be designed taking the entirety of the relevant stock into consideration.
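A toy example of the first of those short-term uses, under the assumption that utilisation data were recorded in the database, might look as follows; the thresholds and fields are invented.

```python
# Minimal sketch: flagging assets whose recorded utilisation is anomalously
# low, as candidates for sharing, reassignment or disposal. Hypothetical data.
import pandas as pd

assets = pd.DataFrame({
    "asset_id":    ["V-01", "V-02", "V-03", "V-04", "V-05"],
    "category":    ["vehicle"] * 5,
    "hours_used":  [1200, 90, 1100, 40, 950],     # last 12 months
    "hours_avail": [2000, 2000, 2000, 2000, 2000],
})
assets["utilisation"] = assets["hours_used"] / assets["hours_avail"]

# Flag anything below half the median utilisation for its category.
median = assets.groupby("category")["utilisation"].transform("median")
assets["review_candidate"] = assets["utilisation"] < 0.5 * median
print(assets[["asset_id", "utilisation", "review_candidate"]])
```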

In the medium term, this would also provide better insights into the whole life cycle of the assets used by the public sector, including the possibility of deploying machine learning to plan for timely maintenance and replacement, as well as to improve life-cycle costing methodologies based on public sector-specific conditions. It would also facilitate the creation of a ‘public sector second-hand market’, where entities with lower performance requirements could acquire assets no longer fit for their original purpose; eg computers previously used for more demanding tasks but retaining sufficient capacity could be repurposed for routine administrative tasks. It would also allow for the planning and design of recycling facilities in ways that minimised the carbon footprint of disposal.

In the long run, in particular post-disposal, the existence of the database of assets could unlock a more circular procurement economy, as the materials of disposed assets could be reused for the building of other assets. In that regard, there seem to be some quick wins to be had in the construction sector, but having access to more and better information would probably also serve as a catalyst for similar approaches in other sectors.

Conclusion

Building a database on existing public sector-used assets as the outcome of earlier procurement exercises is not an easy or cheap task. However, in my view, it would have transformative potential and could generate sustainability gains aimed not only at reducing the carbon footprint of future public expenditure but, more importantly, at correcting or somehow compensating for the current environmental impacts of the way the public sector operates. This could make a major difference in accelerating emissions reductions, and engaging in this exercise should consequently be a matter of high priority for the public sector.

'Experimental' WEF/UK Guidelines for AI Procurement: some comments

ⓒ Scott Richard, Liquid painting (2015).

On 20 September 2019, and as part of its ‘Unlocking Public Sector Artificial Intelligence’ project, the World Economic Forum (WEF) published the White Paper Guidelines for AI Procurement (see also press release), with which it seeks to help governments accelerate efficiencies through the responsible use of artificial intelligence and prepare for future risks. WEF indicated that, over the next six months, governments around the world would test and pilot these guidelines (for now, there are indications of adoption in the UK, the United Arab Emirates and Colombia), and that further iterations would be published based on feedback learned on the ground.

Building on previous work on the Data Ethics Framework and the Guide to using AI in the Public Sector, the UK’s Office for Artificial Intelligence has decided to adopt its own draft version of the Guidelines for AI Procurement with substantially the same content, but with modified language and a narrower scope of some principles, in order to link them to the UK’s legislative and regulatory framework (and, in particular, the Data Ethics Framework). The UK will be the first country to trial the guidelines in pilot projects across several departments. The UK Government hopes that the new Guidelines for AI Procurement will help inform and empower buyers in the public sector, helping them to evaluate suppliers, then confidently and responsibly procure AI technologies for the benefit of citizens.

In this post, I offer some first thoughts about the Guidelines for AI Procurement, based on the WEF’s version, which is helpfully summarised in the table below.

Source: WEF, White Paper: ‘Guidelines for AI Procurement’ at 6.

Some Comments

Generally, it is worth being mindful that the ‘guidelines provide fundamental considerations that a government should address before acquiring and deploying AI solutions and services. They apply once it has been determined that the solution needed for a problem could be AI-driven’ (emphasis in original). As the UK’s version usefully stresses, many of the important decisions take place at the preparation and planning stages, before publishing a contract notice. Therefore, more than guidance for AI procurement, this is guidance on the design of a framework for the governance of the procurement of innovative digital technologies, including AI (but easily extendable to eg blockchain-based solutions), which will still require a second tier of (future/additional) guidance on the implementation of procurement procedures for the acquisition of AI-based solutions.

It is also worth stressing from the outset that the guidelines assume both the availability of, and a deep understanding by the contracting authority of, the data that can be used to train and deploy the AI solutions. This is perhaps not fully reflective of the existing difficulties concerning the availability and quality of procurement data, and of public sector data more generally [for discussion, see A Sanchez-Graells, 'Data-Driven and Digital Procurement Governance: Revisiting Two Well-Known Elephant Tales' (2019) Communications Law, forthcoming]. Where such knowledge is not readily available, the contracting authority may well require the prior engagement of data consultants to assess the data that is or could be available and its potential uses. This creates the need to roll back some of the considerations included in the guidelines to that earlier stage, much along the lines of the issues concerning preliminary market consultations and the neutralisation of any advantages or conflicts of interest of undertakings involved in pre-tender discussions, which are also common issues in non-AI procurement of innovation. This can be rather tricky, in particular if there is a significant imbalance in expertise around data science and/or a shortfall in those skills in the contracting authority. Therefore, perhaps as a prior recommendation (or an expansion of guideline 7), it is worth bearing in mind that the public sector needs to invest significant resources in hiring and retaining the necessary in-house capacities before engaging in the acquisition of complex (digital) technologies.

1. Use procurement processes that focus not on prescribing a specific solution, but rather on outlining problems and opportunities and allow room for iteration.

The fit of this recommendation with the existing regulation of procurement procedures seems to point towards either innovation partnerships (for new solutions) or dynamic purchasing systems (for existing or relatively off-the-shelf solutions). The reference to dynamic purchasing systems is slightly odd here, as AI solutions are unlikely to be amenable to automatic deployment in any given context.

Moreover, this may not be the only possible approach under EU law, and there seems to be significant scope to channel technology contests under the rules for design contests (Arts 78 and ff of Directive 2014/24/EU). The limited appetite of innovative start-ups for procurement systems that do not provide them with ‘market exposure’ (such as large framework agreements, but likely also dynamic purchasing systems) may also be relevant, depending on market conditions (see eg PUBLIC, Buying into the Future. How to Deliver Innovation through Public Procurement (2019) 23). This could create opportunities for broader calls for technological innovation, perhaps as a phase prior to conducting a more structured (and expensive) procurement procedure for an innovation partnership.

All in all, it would seem that—at least at UK level, or in any other jurisdiction seeking to pilot the guidance—it could be advisable to design a standard procurement procedure for AI-related market engagement, so as to avoid each willing contracting authority having to reinvent the wheel.

2. Define the public benefit of using AI while assessing risks.

As with many other aspects of the guidelines, one of the difficulties here is to establish actionable measures to deal with ‘unknown unknowns’ that may emerge only in the implementation phase, or well into the deployment of the solution. It would be naive to assume that the contracting authority—or the potential tenderers—can anticipate all possible risks and design adequate mitigating strategies. It would thus perhaps be wise to recommend the use of AI solutions for public sector / public service use cases that have a limited impact on individual rights, as a way to gain much-needed expertise and know-how before proceeding to deployment in more sensitive areas.

Moreover, this is perhaps the recommendation that is most difficult to implement in procurement terms (under the EU rules), as the assessment of ‘public benefit’ seems to be a matter for the contracting authority alone, which could eventually lead to a cancellation—with or without retendering—of the procurement. It is difficult to see how to design evaluation tools (in terms of both technical specifications and award criteria) capable of capturing the insight that ‘public benefit extends beyond value for money and also includes considerations about transparency of the decision-making process and other factors that are included in these guidelines’. This should thus likely be built into the procurement process through opportunities for the contracting authority to discontinue the project (with no or limited compensation), which also points towards the structure of the innovation partnership as the regulated procedure most likely to fit.

3. Aim to include your procurement within a strategy for AI adoption across government and learn from others.

This is mainly aimed at ensuring the cross-sharing of experiences and at aggregating demand for specific AI-based solutions, which makes sense. The difficulty will lie in the practical implementation of this in a quickly-changing setting, which could be facilitated by the creation of a mandatory (not necessarily public) centralised register of AI-based projects, as well as by the creation and mandatory involvement of a specialised administrative unit. This would be linked to the general comment on the need to invest in skills, but could alleviate the financial impact by making the resources available across Government rather than having each contracting authority create its own expert team.

4. Ensure that legislation and codes of practice are incorporated in the RFP.

Both aspects of this guideline are problematic to a lawyer’s eyes. It is not a matter of legal imperialism simply to point out that there have to be more general mechanisms to ensure that procurement procedures (not only those for digital technologies) are fully legally compliant.

The recommendation to carry out a comprehensive review of the legal system to identify all applicable rules and then ‘Incorporate those rules and norms into the RFP by referring to the originating laws and regulations’ does not make much sense. Inclusion in the RFP does not affect the enforceability of those rules, and it is practically impossible for a contracting authority to assess the entirety of the rules applicable to different tenderers, in particular if they are based in other jurisdictions. It would also create all sorts of problems in terms of potential claims of legitimate expectations by tenderers. Moreover, under EU law, there is case law (such as Pizzo and Connexxion Taxi Services) that creates conflicting incentives for the inclusion of specific references to rules and their interpretation in tender documents.

The recommendation on balancing trade secret protection and the public interest, including data privacy compliance, is simply insufficient and falls well short of the challenge of addressing these complex issues. The tension between general duties of administrative law and the opacity of algorithms (in particular where they are protected by IP or trade secret protections) is one of the most heated ongoing debates in legal and governance scholarship. The guideline also overlooks the need to distinguish between the different rules applicable to the data and to the algorithms, as well as the paramount relevance of the General Data Protection Regulation in this context (at least where EU data is concerned).

5. Articulate the technical feasibility and governance considerations of obtaining relevant data.

This is, in my view, the strongest part of the guidelines. The stress on the need to ensure access to data as a prerequisite for any AI project, and the emphasis on and detail put into the design of the relevant data governance structure ahead of the procurement, could not be clearer. The difficulty, however, will lie in getting most contracting authorities to this level of data-readiness. As mentioned above, the guidelines assume a level of competence that seems too advanced for most contracting authorities potentially interested in carrying out AI-based projects, or that could benefit from them.

6. Highlight the technical and ethical limitations of using the data to avoid issues such as bias.

This guideline is also premised on advanced knowledge and understanding of the data by the contracting authority, and thus creates the same challenges (as further discussed below).

7. Work with a diverse, multidisciplinary team.

Once again, this will be expensive and create some organisational challenges (as also discussed below).

8. Focus throughout the procurement process on mechanisms of accountability and transparency norms.

This is another rather naive and limited aspect of the guidelines, in particular the final point that ‘If an algorithm will be making decisions that affect people’s rights and public benefits, describe how the administrative process would preserve due process by enabling the contestability of automated decision-making in those circumstances’. This is another of the hotly debated issues surrounding the deployment of AI in the public sector, and it seems unlikely that a contracting authority will be able to provide the necessary answers to issues that are yet to be determined—eg the difficult interpretive issues surrounding solely automated processing of personal data under the General Data Protection Regulation, as discussed in eg M Finck, ‘Automated Decision-Making and Administrative Law’ (2019) Max Planck Institute for Innovation and Competition Research Paper No. 19-10.

9. Implement a process for the continued engagement of the AI provider with the acquiring entity for knowledge transfer and long-term risk assessment.

This is another area of general strength in the guidelines, which under EU procurement law should be channelled through stringent contract performance conditions (Art 70 Directive 2014/24/EU) or, perhaps even better, by creating secondary regulation on mandatory ongoing support and knowledge transfer for all AI-based implementations in the public sector.

The only aspect of this guideline that is problematic concerns the mention that, in relation to ethical considerations, ‘Bidders should be able not only to describe their approach to the above, but also to provide examples of projects, complete with client references, where these considerations have been followed.’ This would clearly be a problem for new entrants, as well as generate rather significant first-mover advantages for undertakings with prior experience (likely in the private sector). In my view, this should be removed from the guidelines.

10. Create the conditions for a level and fair playing field among AI solution providers.

This section includes significant challenges concerning issues related to the ownership of IP in AI-based solutions. Most of the recommendations seem rather complicated to implement in practice, such as the reference to the need to ‘Consider strategies to avoid vendor lock-in, particularly in relation to black-box algorithms. These practices could involve the use of open standards, royalty-free licensing and public domain publication terms’, or to ‘consider whether [the] department should own that IP and how it would control it [in particular in the context of evolution or new design of the algorithms]. The arrangements should be mutually beneficial and fair, and require royalty-free licensing when adopting a system that includes IP controlled by a vendor’. These are also extremely complex and debated issues and, once again, it seems unlikely that a contracting authority will be able to provide all the relevant answers.

Overall assessment

The main strength of the guidelines lies in their recommendations concerning the evaluation of data availability and quality, the need to create robust data governance frameworks, and the need to have deep insight into data limitations and biases (guidelines 5 and 6). There are also some useful, although rather self-explanatory, reminders of basic planning issues concerning the need to ensure the relevant skillset and the unavoidable multidisciplinarity of teams working on AI (guidelines 3 and 7). Similarly, the guidelines provide some very high-level indications on how to structure the procurement process (guidelines 1, 2 and 9), which will however require much more detailed (future/additional) guidance before they can be implemented by a contracting authority.

However, in all other aspects, the guidelines work as an issue-spotting instrument rather than as a guidance tool. This is clearly the case concerning the tensions between data privacy, good administration and proprietary protection of the IP and trade secrets underlying AI-based solutions (guidelines 4, 8 and 10). In my view, rather than taking the naive—and potentially misleading—approach of indicating the issues that contracting authorities need to address (in the RFP, or elsewhere) as if they were currently (easily, or at all) addressable at that level of administrative practice, the guidelines should provide sufficiently precise and goal-oriented recommendations on how to do so if they are to be useful. This is not an easy task, and much more work seems necessary before the document can provide useful support to contracting authorities seeking to implement procedures for the procurement of AI-based solutions. I thus wonder how much learning the guidelines can generate in the pilots to be conducted in the UK and elsewhere. For now, I would recommend that other governments wait and see before ‘adopting’ the guidelines or treating them as a useful policy tool, in particular if doing so discouraged them from carrying out their own efforts to develop actionable guidance on how to procure AI-based solutions.

Finally, it does not take much reading between the lines to realise that the challenges of developing an enabling data architecture and of upskilling the public sector (not solely the procurement workforce, and perhaps through specialised units, as a first step) so that it can identify the potential for AI-based solutions and adequately govern their design and implementation remain very likely stumbling blocks on the road towards the deployment of public sector AI. In that regard, general initiatives concerning the availability of quality procurement data and the necessary reform of public procurement teams to fill the data science and programming gaps that currently exist should remain the priority—at least in the EU, as discussed in A Sanchez-Graells, EU Public Procurement Policy and the Fourth Industrial Revolution: Pushing and Pulling as One? (2019) SSRN working paper, and in idem, 'Some public procurement challenges in supporting and delivering smart urban mobility: procurement data, discretion and expertise', in M Finck, M Lamping, V Moscon & H Richter (eds), Smart Urban Mobility – Law, Regulation, and Policy, MPI Studies on Intellectual Property and Competition Law (Berlin, Springer, 2020) forthcoming.

Legal text analytics: some thoughts on where (I think) things stand

Researching the area of artificial intelligence and the law (AI & Law) has recently taken me to the complexities of natural language processing (NLP) applied to legal texts (aka legal text analytics). Trying to understand the extent to which AI can be used to perform automated legal analysis—or, more modestly, to support humans in performing legal analysis—requires (at least) a view of the current possibilities for AI tools to (i) extract information from legal sources (or ‘understand’ them and their relationships), (ii) assess their relevance to a given legal problem and (iii) apply the legal source to provide a legal solution to the problem (or to suggest one for human validation).

Of course, this sets aside other issues, such as the need for AI to be able to understand the factual situation in order to formulate the relevant legal problem, to assess or rank different legal solutions where available, or to take into account additional aspects such as the likelihood of obtaining a remedy, etc—all of which could be tackled by fields of AI & Law other than legal text analytics. The above also ignores other aspects of ‘understanding’ documents, such as the ability of an algorithm to distinguish factual and legal issues within a legal document (ie a judgment) or to extract basic descriptive information (eg being able to create a citation based on the information in the judgment, or to cluster different types of provisions within a contract or across contracts)—some of which seems to be at hand, or soon to be developed, on the basis of the recently released Google ‘Document Understanding AI’ tool.

The latest issue of Artificial Intelligence and the Law luckily concentrates on ‘Natural Language Processing for Legal Texts’ and offers some help in trying to understand where things currently stand regarding issues (i) and (ii) above. In this post, I offer some reflections based on my understanding of two of the papers included in the special issue: Nanda et al (2019) and Chalkidis & Kampas (2019). I may have gotten the specific technical details wrong (although I hope not), but I think I got the functional insights.

Establishing relationships between legal sources

One of the problems that legal text analytics is trying to solve concerns establishing relationships between different legal sources—which can be a partial aspect of the need to ‘understand’ them (issue (i) above). This is the main problem discussed in Nanda et al, 'Unsupervised and supervised text similarity systems for automated identification of national implementing measures of European directives' (2019) 27(2) Artificial Intelligence and Law 199-225. In this piece of research, AI is used to establish whether a provision of a national implementing measure (NIM) transposes a specific article of an EU Directive or not. In extremely simplified terms, the researchers train different algorithms to perform text comparison. The researchers work on a closed list of 43 EU Directives and the corresponding Luxembourgian, Irish and Italian NIMs. The following figure plots their results.

Nanda et al (2019: 208, Figure 6).

The figure shows that the best AI solution developed by the researchers (the TF-IDF cosine) achieves levels of precision of around 83% for Luxembourg, 77% for Italy and 68% for Ireland. These seem like rather impressive results, but a qualitative analysis of their experiment indicates that the significantly better performance for Luxembourgian transposition over Italian or Irish transposition likely results from the fact that Luxembourg tends largely to ‘copy & paste’ EU Directives into national law, whereas the Italian and Irish legislators adopt a more complex approach to the integration of EU rules into their existing legal instruments.
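For readers unfamiliar with the technique, the snippet below sketches the general idea behind a TF-IDF cosine similarity system of the kind reported by Nanda et al; the texts and decision threshold are toy examples and do not reproduce the authors’ actual pipeline or data.

```python
# Toy sketch of TF-IDF cosine similarity matching between directive articles
# and national provisions. All texts and the threshold are invented.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

directive_articles = [
    "Member States shall ensure that contracting authorities publish notices.",
    "Member States shall provide for electronic submission of tenders.",
]
nim_provisions = [
    "Contracting authorities must publish contract notices in the journal.",
    "Taxpayers shall file returns electronically by the end of the year.",
]

vec = TfidfVectorizer().fit(directive_articles + nim_provisions)
sims = cosine_similarity(vec.transform(nim_provisions),
                         vec.transform(directive_articles))

THRESHOLD = 0.3   # hypothetical decision threshold
for i, row in enumerate(sims):
    j = row.argmax()
    match = row[j] >= THRESHOLD
    print(f"NIM {i} -> Directive art. {j} (sim={row[j]:.2f}, transposes={match})")
```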

Moreover, it should be noted that the algorithms are working on a very specific issue, as they are only assessing the correspondence between provisions of EU and NIM instruments that were related—that is, they are operating in a closed or walled dataset that does not include NIMs that do not transpose any of the 43 chosen Directives. Once these aspects of the research design are taken into account, a number of questions remain unanswered, such as the precision the algorithms would achieve if they had to compare entire NIMs against an open-ended list of EU Directives, or if they were used to screen for transposition rules. While the first issue could probably be answered simply by extending the experiment, the second would probably require a different type of AI design.

On the whole, my impression after reading this interesting piece of research is that AI is still relatively far from a situation where it can provide reliable answers to the issue of establishing relationships across legal sources, particularly if one thinks of relatively more complex relationships than transposition within the EU context, such as development, modification or repeal of a given set of rules by other (potentially dispersed) rules.

Establishing relationships between legal problems and legal sources

A separate but related issue requires AI to identify legal sources that could be relevant to solve a specific legal problem (issue (ii) above)—that is, the relevant relationship is not across legal sources (as above), but between a legal problem or question and relevant legal sources.

This is covered in part of the literature review included in Chalkidis & Kampas, ‘Deep learning in law: early adaptation and legal word embeddings trained on large corpora‘ (2019) 27(2) Artificial Intelligence and Law 171-198 (see esp 188-194), where they discuss some of the solutions given to the task of the Competition on Legal Information Extraction/Entailment (COLIEE) from 2014 to 2017, which focused ‘on two aspects related to a binary (yes/no) question answering as follows: Phase one of the legal question answering task involves reading a question Q and extract[ing] the legal articles of the Civil Code that are relevant to the question. In phase two the systems should return a yes or no answer if the retrieved articles from phase one entail or not the question Q’.

The paper covers four different attempts at solving the task. It reports that the AI solutions developed to address the two binary questions achieved the following levels of precision: 66.67% (Morimoto et al. (2017)); 63.87% (Kim et al. (2015)); 57.6% (Do et al. (2017)); 53.8% (Nanda et al. (2017)). Once again, these results are rather impressive, but some contextualisation may help to assess the extent to which this can be useful in legal practice.

The best AI solution was able to identify relevant provisions that entailed the relevant question two out of three times. However, the algorithms were once again working in a closed or walled field, because they solely had to search for relevant provisions in the Civil Code. One can thus wonder whether algorithms confronted with the entirety of a legal order would be able to reach anywhere near the same degree of accuracy.
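To illustrate what phase one of the task involves, here is a deliberately toy sketch of retrieval over a closed set of articles using averaged word embeddings, in the spirit of the legal word embeddings discussed by Chalkidis & Kampas; the three-dimensional vectors are invented for illustration, whereas real systems rely on embeddings trained on large legal corpora.

```python
# Toy sketch of phase one of the COLIEE task: retrieving Civil Code articles
# relevant to a question via averaged word embeddings. The vectors below are
# invented; real systems use embeddings trained on large legal corpora.
import numpy as np

emb = {  # hypothetical word vectors
    "lease":     np.array([0.9, 0.1, 0.0]),
    "rent":      np.array([0.8, 0.2, 0.1]),
    "terminate": np.array([0.1, 0.9, 0.0]),
    "contract":  np.array([0.4, 0.5, 0.1]),
    "gift":      np.array([0.0, 0.1, 0.9]),
}

def doc_vector(text: str) -> np.ndarray:
    # Average the vectors of known words (a real system would handle
    # tokenisation, punctuation and out-of-vocabulary words properly).
    vecs = [emb[w] for w in text.lower().split() if w in emb]
    return np.mean(vecs, axis=0)

def cosine(a: np.ndarray, b: np.ndarray) -> float:
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

articles = {
    "Art. 601": "lease rent contract",
    "Art. 549": "gift contract",
}
question = "Can a landlord terminate a lease agreement"
q = doc_vector(question)
ranked = sorted(articles, key=lambda k: cosine(q, doc_vector(articles[k])),
                reverse=True)
print(ranked)  # articles ordered by similarity to the question
```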

Some thoughts

Based on the current state of legal text analytics (as far as I can see it), it seems clear that AI is far from being able to perform independent/unsupervised legal analysis and provide automated solutions to legal problems (issue (iii) above) because there are still very significant shortcomings concerning issues of ‘understanding’ natural language legal texts (issue (i)) and adequately relating them to specific legal problems (issue (ii)). That should not be surprising.

However, what also seems clear is that AI is very far from being able to confront the vastness of a legal order and that, much like lawyers themselves, AI tools need to specialise and operate within the narrower boundaries of sub-domains or quite contained legal fields. Where that is the case, AI can achieve much higher degrees of precision—see the examples of information extraction precision above 90% in Chalkidis & Kampas (2019: 194-196), in projects concerning Chinese credit fraud judgments and Canadian immigration rules.

Therefore, the current state of legal text analytics seems to indicate that AI is (quickly?) reaching a point where algorithms can be used to extract legal information from natural language text sources within a specified legal field (which needs to be established through adequate supervision), in a way that allows them to provide fallible or incomplete lists of potentially relevant rules or materials for a given legal issue. However, this still requires legal experts to complement the relevant searches (to bridge any gaps) and to screen the proposed materials for actual relevance. In that regard, AI does hold the promise of much better results than previous expert systems and information retrieval systems and, where adequately trained, it can support and potentially improve legal research (ie cognitive computing, along the lines developed by Ashley (2017)). However, in my view, the prospects for ‘independent functionality’ of legaltech solutions remain extremely limited. I would happily hear arguments to the contrary, though!

New paper: ‘Screening for Cartels’ in Public Procurement: Cheating at Solitaire to Sell Fool’s Gold?

I have uploaded a new paper on SSRN, where I critically assess the bid rigging screening tool published by the UK’s Competition and Markets Authority in 2017. I will be presenting it in a few weeks at the V Annual meeting of the Spanish Academic Network for Competition Law. The abstract is as follows:

Despite growing global interest in the use of algorithmic behavioural screens, big data and machine learning to detect bid rigging in procurement markets, the UK’s Competition and Markets Authority (CMA) was under no obligation to undertake a project in this area, much less to publish a bid-rigging algorithmic screening tool and make it generally available. Yet, in 2017 and under self-imposed pressure, the CMA released ‘Screening for Cartels’ (SfC) as ‘a tool to help procurers screen their tender data for signs of illegal bid-rigging activity’ and has since been trying to raise its profile internationally. There is thus a possibility that the SfC tool is not only used by UK public buyers, but also disseminated and replicated in other jurisdictions seeking to implement ‘tried and tested’ solutions to screen for cartels. This paper argues that such a legal transplant would be undesirable.

In order to substantiate this main claim, and after critically assessing the tool, the paper tracks the origins of the indicators included in the SfC tool to show that its functionality is rather limited as compared with alternative models that were put to the CMA. The paper engages with the SfC tool’s creation process to show how it is the result of poor policy-making based on the material dismissal of the recommendations of the consultants involved in its development, and that this has resulted in the mere illusion that big data and algorithmic screens are being used to detect bid rigging in the UK. The paper also shows that, as a result of the ‘distributed model’ used by the CMA, the algorithms underlying the SfC tool cannot be improved through training; that the publication of the SfC tool lowers the likelihood of detecting some types of ‘easy to spot’ cases by signalling areas of ‘cartel sophistication’ that can bypass its tests; and that, on the whole, the tool is simply not fit for purpose. This situation is detrimental to the public interest because reliance on a defective screening tool can create a false perception of competition for public contracts, and because it leads to immobilism that delays (or prevents) a much-needed engagement with the extant difficulties in developing a suitable algorithmic screen based on proper big data analytics. The paper concludes that competition or procurement authorities willing to adopt the SfC tool would be buying fool’s gold and that the CMA was wrong to cheat at solitaire to expedite the deployment of a faulty tool.
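For readers curious about what an algorithmic screen looks like in practice, the snippet below sketches one classic price-variance screen from the broader literature on bid-rigging detection; it is emphatically not the CMA’s SfC tool, and the data and threshold are invented.

```python
# Illustrative sketch of one classic bid-rigging screen (a price-variance
# screen). This is NOT the CMA's SfC tool; the data and threshold are invented.
import pandas as pd

bids = pd.DataFrame({
    "tender": ["T1"] * 4 + ["T2"] * 4,
    "bidder": ["A", "B", "C", "D"] * 2,
    "price":  [100, 101, 100.5, 100.8,    # suspiciously tight spread
               100, 123, 98, 141],        # more 'normal' dispersion
})

# Coefficient of variation of bids per tender: abnormally low dispersion
# can be one (weak, fallible) indicator of cover pricing.
cv = bids.groupby("tender")["price"].apply(lambda p: p.std() / p.mean())
flags = cv < 0.02   # hypothetical threshold
print(pd.DataFrame({"cv": cv.round(4), "flag_low_variance": flags}))
```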

The full citation of the paper is: Sanchez-Graells, Albert, ‘Screening for Cartels’ in Public Procurement: Cheating at Solitaire to Sell Fool’s Gold? (May 3, 2019). Available at SSRN: https://ssrn.com/abstract=3382270