Innovation procurement under the Procurement Act 2023 -- changing procurement culture on the cheap?

On 13 November 2023, the UK Government published guidance setting out its ambitions for innovation procurement under the new Procurement Act 2023 (not yet in force; you can read a summary here). This further expands on the ambitions underpinning the Transforming Public Procurement project that started after Brexit. The Government’s expectation is that ‘the new legislation will allow public procurement to be done in more flexible and innovative ways’, and that this will ‘enable public sector organisations to embrace innovation more’.

The innovation procurement guidance bases its expectation that the Procurement Act will unlock more procurement of innovation and more innovative procurement on the ambition that this will be an actively supported policy by all relevant policy- and decision-makers and that there will be advocacy for the development of commercial expertise. A first hurdle here is that unless such advocacy comes with the investment of significant funds in developing skills (and this relates to both commercial and technical skills, especially where the innovation relates to digital technologies), such high-level political buy-in may not translate into any meaningful changes. The guidance itself acknowledges that the ‘overall culture, expertise and incentive structure of the public sector has led to relatively low appetite for risk and experimentation’. Therefore, that greater investment in expertise needs to be coupled with a culture change. And we know this is a process that is very difficult to push forward.

The guidance also indicates that ‘Greater transparency of procurement data will make it easier to see what approaches have been successful and encourage use of those approaches more widely across the public sector.’ This potentially points to another hurdle in unlocking this policy because generic data is not enough to support innovation procurement or the procurement of innovation. Being able to successfully replicate innovation procurement practices requires a detailed understanding of how things were done, and how they need to be adapted when replicated. However, the new transparency regime does not necessarily guarantee that such granular and detailed information will be available, especially as the practical level of transparency that will stem from the new obligations crucially hinges on the treatment of commercially sensitive information (which is exempted from disclosure in s.94 PA 2023). Unless there is clear guidance on disclosure / withholding of sensitive commercial information, it may well be that the new regime does not generate additional meaningful (publicly accessible) data to push the knowledge stock and support innovative procurement. This is an important issue that may require further discussion in a separate post.

The guidance indicates that the changes in the Procurement Act will help public buyers in three ways:

  • The new rules focus more on delivering outcomes (as opposed to ‘going through the motions’ of a rigid process). Contracting authorities will be able to design their own process, tailored to the unique circumstances of the requirement and, most importantly, those who are best placed to deliver the best solution.

  • There will be clearer rules overall and more flexibility for procurers to use their commercial skills to achieve the desired outcomes.

  • Procurers will be able to better communicate their particular problem to suppliers and work with them to come up with potential solutions. Using product demonstrations alongside written tenders will help buyers get a proper appreciation of solutions being offered by suppliers. That is particularly impactful for newer, more innovative solutions which the authority may not be familiar with.

Although the guidance document indicates that the ‘new measures include general obligations, options for preliminary market engagement, and an important new mechanism, the Competitive Flexible Procedure’, in practice there are limited changes to what was already allowed in terms of market consultation. The general obligations, such as the obligation to publish a pipeline notice (for contracting authorities with an annual spend over £100 million), or to ‘have regard to the fact that SMEs face barriers to participation and consider whether these barriers can be removed or reduced’, are also marginal changes (if changes at all) from the still current regime (see regs.48 and 46 PCR 2015). Therefore, it all boils down to the new ‘innovation-friendly procurement processes’ enabled by the flexible (under)regulation of the competitive flexible procedure (s.20 PA 2023).

The guidance stresses that the ‘objective is that the Competitive Flexible Procedure removes some of the existing barriers to procuring new and better solutions and gives contracting authorities freedom to enable them to achieve the best fit between the specific requirement and the best the market offers.’ The example in the guidance provides the skeleton structure of a three-phase procedure involving an initial ideas and feasibility phase 1, an R&D and prototype phase 2, and a final tendering leading to the award of a production/service contract (phase 3). At this level of generality, there is little to distinguish this from a competitive dialogue under the current rules (reg.30 PCR 2015). The devil will be in the detail.

Moreover, as repeatedly highlighted since the initial consultations, the under-regulation of the competitive flexible procedure will raise the information costs and risks of engaging with innovation procurement, as each new approach taken by a contracting authority will require a significant investment of time in its design, as well as an unavoidable risk of challenge. The incentives are not particularly geared towards facilitating risk-taking. And any more detailed guidance on ‘how to’ carry out an innovative competitive flexible procedure will simply replace regulation and become a de facto standard, through which contracting authorities may take the same ‘going through the motions’ approach as the process detailed in the guidance rigidifies.

The guidance acknowledges this, at least partially, when it stresses that ‘Behavioural changes will make the biggest difference’. Such behavioural changes will be supported through training, which the guidance document also describes (and there is more detail here). The training offered will consist of:

  • Knowledge drops (open to everyone): An on-demand, watchable resource up to a maximum of 45 minutes in total, providing an overview of all of the changes in legislation.

  • E-learning (for skilled practitioners within the public sector only): a learning & development self-guided course consisting of ‘10 1-hour modules and concludes with a skilled practitioner certification’.

  • Advanced course deep dives (for public sector expert practitioners only): ‘3-day, interactive, instructor-led course. It consists of virtual ‘deep dive’ webinars, which allow learners to engage with subject matter experts. This level of interaction allows a deeper insight across the full spectrum of the legislative change and support ‘hearts and minds’ change amongst the learner population (creating ‘superusers’)’.

  • Communities of practice (for skilled and expert practitioners only): ‘a system of collective critical inquiry and reflection into the regime changes. Supported by the central team and superusers, they will support individuals to embed what they have learned.’

As an educator, and based on my experience of training expert professionals in complex procurement, I am skeptical that this amount of training can lead to meaningful changes. The 45-minute resource can hardly cover the entirety of the changes in the Procurement Act, and even the 10-hour course for public buyers only will be quite limited in how far it can go. Three days of training are also insufficient to go much further than exploring a few examples in meaningful detail. And this is relevant because that training covers not only innovation procurement, but all types of ‘different’ procurement under the Procurement Act 2023 (ie green, social, more robustly anti-corruption, more focused on contract performance, etc). Shifting culture and practice would require a lot more than this.

It is also unclear why this (minimal) investment in public sector understanding of the procurement framework has not taken place earlier. As I already said in the consultation, all of this could have taken place years ago and a better understanding of the current regime would have led to improvements in the practice of innovative procurement in the UK.

All in all, it seems that the aspirations of more innovation procurement and more innovative procurement are pinned on a rather limited amount of training and on (largely voluntary, in addition to the day job) collaboration among super-user experienced practitioners (who will probably see their scarce skills in high demand). It is unclear to me how this will be a game changer, especially as most of this (and in particular collaboration and voluntary knowledge exchange) could already take place. It may be that more structure and coordination will bring better outcomes, but this would require adequate and sufficient resourcing.

Whether there will be more innovation procurement then depends on whether more money will be put into procurement structures and support. From where I stand, this is by no means a given. I guess we’ll have to wait and see.

Some thoughts on the US' Executive Order on the Safe, Secure, and Trustworthy Development and Use of AI

On 30 October 2023, President Biden adopted the Executive Order on the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence (the ‘AI Executive Order’, see also its Factsheet). The use of AI by the US Federal Government is an important focus of the AI Executive Order. It will be subject to a new governance regime detailed in the Draft Policy on the use of AI in the Federal Government (the ‘Draft AI in Government Policy’, see also its Factsheet), which is open for comment until 5 December 2023. Here, I reflect on these documents from the perspective of AI procurement as a major plank of this governance reform.

Procurement in the AI Executive Order

Section 2 of the AI Executive Order formulates eight guiding principles and priorities in advancing and governing the development and use of AI. Section 2(g) refers to AI risk management, and states that

It is important to manage the risks from the Federal Government’s own use of AI and increase its internal capacity to regulate, govern, and support responsible use of AI to deliver better results for Americans. These efforts start with people, our Nation’s greatest asset. My Administration will take steps to attract, retain, and develop public service-oriented AI professionals, including from underserved communities, across disciplines — including technology, policy, managerial, procurement, regulatory, ethical, governance, and legal fields — and ease AI professionals’ path into the Federal Government to help harness and govern AI. The Federal Government will work to ensure that all members of its workforce receive adequate training to understand the benefits, risks, and limitations of AI for their job functions, and to modernize Federal Government information technology infrastructure, remove bureaucratic obstacles, and ensure that safe and rights-respecting AI is adopted, deployed, and used.

Section 10 then establishes specific measures to advance Federal Government use of AI. Section 10.1(b) details a set of governance reforms to be implemented in view of the Director of the Office of Management and Budget (OMB)’s guidance to strengthen the effective and appropriate use of AI, advance AI innovation, and manage risks from AI in the Federal Government. Section 10.1(b) includes the following (emphases added):

The Director of OMB’s guidance shall specify, to the extent appropriate and consistent with applicable law:

(i) the requirement to designate at each agency within 60 days of the issuance of the guidance a Chief Artificial Intelligence Officer who shall hold primary responsibility in their agency, in coordination with other responsible officials, for coordinating their agency’s use of AI, promoting AI innovation in their agency, managing risks from their agency’s use of AI …;

(ii) the Chief Artificial Intelligence Officers’ roles, responsibilities, seniority, position, and reporting structures;

(iii) for [covered] agencies […], the creation of internal Artificial Intelligence Governance Boards, or other appropriate mechanisms, at each agency within 60 days of the issuance of the guidance to coordinate and govern AI issues through relevant senior leaders from across the agency;

(iv) required minimum risk-management practices for Government uses of AI that impact people’s rights or safety, including, where appropriate, the following practices derived from OSTP’s Blueprint for an AI Bill of Rights and the NIST AI Risk Management Framework: conducting public consultation; assessing data quality; assessing and mitigating disparate impacts and algorithmic discrimination; providing notice of the use of AI; continuously monitoring and evaluating deployed AI; and granting human consideration and remedies for adverse decisions made using AI;

(v) specific Federal Government uses of AI that are presumed by default to impact rights or safety;

(vi) recommendations to agencies to reduce barriers to the responsible use of AI, including barriers related to information technology infrastructure, data, workforce, budgetary restrictions, and cybersecurity processes;

(vii) requirements that [covered] agencies […] develop AI strategies and pursue high-impact AI use cases;

(viii) in consultation with the Secretary of Commerce, the Secretary of Homeland Security, and the heads of other appropriate agencies as determined by the Director of OMB, recommendations to agencies regarding:

(A) external testing for AI, including AI red-teaming for generative AI, to be developed in coordination with the Cybersecurity and Infrastructure Security Agency;

(B) testing and safeguards against discriminatory, misleading, inflammatory, unsafe, or deceptive outputs, as well as against producing child sexual abuse material and against producing non-consensual intimate imagery of real individuals (including intimate digital depictions of the body or body parts of an identifiable individual), for generative AI;

(C) reasonable steps to watermark or otherwise label output from generative AI;

(D) application of the mandatory minimum risk-management practices defined under subsection 10.1(b)(iv) of this section to procured AI;

(E) independent evaluation of vendors’ claims concerning both the effectiveness and risk mitigation of their AI offerings;

(F) documentation and oversight of procured AI;

(G) maximizing the value to agencies when relying on contractors to use and enrich Federal Government data for the purposes of AI development and operation;

(H) provision of incentives for the continuous improvement of procured AI; and

(I) training on AI in accordance with the principles set out in this order and in other references related to AI listed herein; and

(ix) requirements for public reporting on compliance with this guidance.

Section 10.1(b) of the AI Executive Order establishes two sets or types of requirements.

First, there are internal governance requirements, which revolve around the appointment of Chief Artificial Intelligence Officers (CAIOs) and AI Governance Boards, their roles, and support structures. This set of requirements seeks to strengthen the ability of Federal Agencies to understand AI and to provide effective safeguards in its governmental use. The crucial set of substantive protections from this internal perspective derives from the required minimum risk-management practices for Government uses of AI, responsibility for which is directly placed with the relevant CAIO.

Second, there are external (or relational) governance requirements that revolve around the agency’s ability to control and challenge tech providers. This involves the transfer (back to back) of minimum risk-management practices to AI contractors, but also includes commercial considerations. The tone of the Executive Order indicates that this set of requirements is meant to neutralise risks of commercial capture and commercial determination by imposing oversight and external verification. From an AI procurement governance perspective, the requirements in Section 10.1(b)(viii) are particularly relevant. As some of those requirements will need further development with a view to their operationalisation, Section 10.1(d)(ii) of the AI Executive Order requires the Director of OMB to develop an initial means to ensure that agency contracts for the acquisition of AI systems and services align with its Section 10.1(b) guidance.

Procurement in the Draft AI in Government Policy

The guidance required by Section 10.1(b) of the AI Executive Order has been formulated in the Draft AI in Government Policy, which offers more detail on the relevant governance mechanisms and the requirements for AI procurement. Section 5 on managing risks from the use of AI is particularly relevant from an AI procurement perspective. While Section 5(d) refers explicitly to managing risks in AI procurement, given that the primary substantive obligations will arise from the need to comply with the required minimum risk-management practices for Government uses of AI, this specific guidance needs to be read in the broader context of AI risk-management within Section 5 of the Draft AI in Government Policy.

Scope

The Draft AI in Government Policy relies on a tiered approach to AI risk by imposing specific obligations in relation to safety-impacting and rights-impacting AI only. This is an important element of the policy because these two categories are defined (in Section 6) and will in principle cover pre-established lists of AI uses, based on a set of presumptions (Section 5(b)(i) and (ii)). However, CAIOs will be able to waive the application of minimum requirements for specific AI uses where, ‘based upon a system-specific risk assessment, [it is shown] that fulfilling the requirement would increase risks to safety or rights overall or would create an unacceptable impediment to critical agency operations’ (Section 5(c)(iii)). Therefore, these are not closed lists and the specific scope of coverage of the policy will vary with such determinations. There are also some exclusions from the minimum requirements where the AI is used for narrow purposes (Section 5(c)(i)), notably the ‘Evaluation of a potential vendor, commercial capability, or freely available AI capability that is not otherwise used in agency operations, solely for the purpose of making a procurement or acquisition decision’; AI evaluation in the context of regulatory enforcement, law enforcement or national security action; or research and development.

The scope of the policy may be under-inclusive, or generate risks of under-inclusiveness at the boundary, in two respects. First, the way AI is defined for the purposes of the Draft AI in Government Policy excludes ‘robotic process automation or other systems whose behavior is defined only by human-defined rules or that learn solely by repeating an observed practice exactly as it was conducted’ (Section 6). This could be under-inclusive to the extent that the minimum risk-management practices for Government uses of AI create requirements that are not otherwise applicable to Government use of (non-AI) algorithms. There is a commonality of risks (eg discrimination, data governance risks) that would be better managed through a joined-up approach. Moreover, developing minimum practices in relation to those means of automation would serve to develop institutional capability that could then support the adoption of AI as defined in the policy. Second, the variability in coverage stemming from consideration of ‘unacceptable impediments to critical agency operations’ opens the door to potentially problematic waivers. While these are subject to disclosure and notification to OMB, it is not entirely clear on what grounds OMB could challenge those waivers. This is thus an area where the guidance may require further development.

extensions and waivers

In relation to covered safety-impacting or rights-impacting AI (as above), Section 5(a)(i) establishes the important principle that US Federal Government agencies have until 1 August 2024 to implement the minimum practices in Section 5(c), ‘or else stop using any AI that is not compliant with the minimum practices’. This type of sunset clause concerning the currently implicit authorisation for the use of AI is a potentially powerful mechanism. However, the Draft also establishes that such obligation to discontinue non-compliant AI use must be ‘consistent with the details and caveats in that section [5(c)]’, which includes the possibility, until 1 August 2024, for agencies to

request from OMB an extension of limited and defined duration for a particular use of AI that cannot feasibly meet the minimum requirements in this section by that date. The request must be accompanied by a detailed justification for why the agency cannot achieve compliance for the use case in question and what practices the agency has in place to mitigate the risks from noncompliance, as well as a plan for how the agency will come to implement the full set of required minimum practices from this section.

Again, the guidance does not detail on what grounds OMB would grant those extensions or how long they would be for. There is a clear interaction between the extension and waiver mechanisms. For example, an agency that saw its request for an extension declined could try to waive that particular AI use—or agencies could simply try to waive AI uses rather than applying for extensions, as the requirements for a waiver seem to be rather different from (and potentially less demanding than) those applicable to an extension. In that regard, it seems that waiver determinations are ‘all or nothing’, whereas the system could be more flexible (and protective) if waiver decisions not only needed to explain why meeting the minimum requirements would generate heightened overall risks or pose ‘unacceptable impediments to critical agency operations’, but also had to meet the lower burden of mitigation currently expected in extension applications: a detailed justification of the practices the agency has in place to mitigate the risks from noncompliance where they can be partly mitigated. In other words, it would be preferable to have a more continuous spectrum of mitigation measures in the context of waivers as well.

general minimum practices

In relation to both safety-impacting and rights-impacting AI uses, the Draft AI in Government Policy would require agencies to engage in risk management both before and while using AI.

Preventative measures include:

  • completing an AI Impact Assessment documenting the intended purpose of the AI and its expected benefit, the potential risks of using AI, and an analysis of the quality and appropriateness of the relevant data;

  • testing the AI for performance in a real-world context—that is, testing under conditions that ‘mirror as closely as possible the conditions in which the AI will be deployed’; and

  • independently evaluating the AI, with the particularly important requirement that ‘The independent reviewing authority must not have been directly involved in the system’s development.’ In my view, it would also be important for the independent reviewing authority not to be involved in the future use of the AI, as its (future) operational interest could also be a source of bias in the testing process and the analysis of its results.

In-use measures include:

  • conducting ongoing monitoring and establishing thresholds for periodic human review, with a focus on monitoring ‘degradation to the AI’s functionality and to detect changes in the AI’s impact on rights or safety’—‘human review, including renewed testing for performance of the AI in a real-world context, must be conducted at least annually, and after significant modifications to the AI or to the conditions or context in which the AI is used’;

  • mitigating emerging risks to rights and safety—crucially, ‘Where the AI’s risks to rights or safety exceed an acceptable level and where mitigation is not practicable, agencies must stop using the affected AI as soon as is practicable’. In that regard, the draft indicates that ‘Agencies are responsible for determining how to safely decommission AI that was already in use at the time of this memorandum’s release without significant disruptions to essential government functions’, but it would seem that this is also a process that would benefit from close oversight by OMB as it would otherwise jeopardise the effectiveness of the extension and waiver mechanisms discussed above—in which case additional detail in the guidance would be required;

  • ensuring adequate human training and assessment;

  • providing appropriate human consideration as part of decisions that pose a high risk to rights or safety; and

  • providing public notice and plain-language documentation through the AI use case inventory—however, this is subject to a large number of caveats (notice must be ‘consistent with applicable law and governmentwide guidance, including those concerning protection of privacy and of sensitive law enforcement, national security, and other protected information’) and more detailed guidance on how to assess these issues would be welcome (if it exists, a cross-reference in the draft policy would be helpful).

additional minimum practices for rights-impacting ai

In relation to rights-impacting AI only, the Draft AI in Government Policy would require agencies to take additional measures.

Preventative measures include:

  • taking steps to ensure that the AI will advance equity, dignity, and fairness—including proactively identifying and removing factors contributing to algorithmic discrimination or bias; assessing and mitigating disparate impacts; and using representative data; and

  • consulting and incorporating feedback from affected groups.

In-use measures include:

  • conducting ongoing monitoring and mitigation for AI-enabled discrimination;

  • notifying negatively affected individuals—this is an area where the draft guidance is rather woolly, as it includes a set of complex caveats: individual notice that ‘AI meaningfully influences the outcome of decisions specifically concerning them, such as the denial of benefits’ must only be given ‘[w]here practicable and consistent with applicable law and governmentwide guidance’. Moreover, the draft only indicates that ‘Agencies are also strongly encouraged to provide explanations for such decisions and actions’, but does not require them to. In my view, this tackles two of the most important implications for individuals of Government use of AI: the possibility to understand why decisions are made (reason-giving duties) and the burden of challenging automated decisions, which is increased if there is a lack of transparency on the automation. Therefore, on this point, the guidance seems too tepid—especially bearing in mind that this requirement only applies to ‘AI whose output serves as a basis for decision or action that has a legal, material, or similarly significant effect on an individual’s civil rights, civil liberties, or privacy; equal opportunities; or access to critical resources or services’. In these cases, it seems clear that notice and explainability requirements need to go further.

  • maintaining human consideration and remedy processes—including ‘potential remedy to the use of the AI by a fallback and escalation system in the event that an impacted individual would like to appeal or contest the AI’s negative impacts on them. In developing appropriate remedies, agencies should follow OMB guidance on calculating administrative burden and the remedy process should not place unnecessary burden on the impacted individual. When law or governmentwide guidance precludes disclosure of the use of AI or an opportunity for an individual appeal, agencies must create appropriate mechanisms for human oversight of rights-impacting AI’. This is another crucial area concerning rights not to be subjected to fully-automated decision-making where there is no meaningful remedy. This is also an area of the guidance that requires more detail, especially as to the adequate balance of burdens where eg the agency can automate the undoing of negative effects on individuals identified as a result of challenges by other individuals or in the context of the broader monitoring of the functioning and effects of the rights-impacting AI. In my view, this would be an opportunity to mandate automation of remediation in a meaningful way.

  • maintaining options to opt-out where practicable.

procurement-related practices

In addition to the need for agencies to be able to meet the above requirements in relation to procured AI—which will in itself create the need to cascade some of the requirements down to contractors, and which will be the object of future guidance on how to ensure that AI contracts align with the requirements—the Draft AI in Government Policy also requires that agencies procuring AI manage risks by:

  • aligning to National Values and Law by ensuring ‘that procured AI exhibits due respect for our Nation’s values, is consistent with the Constitution, and complies with all other applicable laws, regulations, and policies, including those addressing privacy, confidentiality, copyright, human and civil rights, and civil liberties’;

  • taking ‘steps to ensure transparency and adequate performance for their procured AI, including by: obtaining adequate documentation of procured AI, such as through the use of model, data, and system cards; regularly evaluating AI-performance claims made by Federal contractors, including in the particular environment where the agency expects to deploy the capability; and considering contracting provisions that incentivize the continuous improvement of procured AI’;

  • taking ‘appropriate steps to ensure that Federal AI procurement practices promote opportunities for competition among contractors and do not improperly entrench incumbents. Such steps may include promoting interoperability and ensuring that vendors do not inappropriately favor their own products at the expense of competitors’ offering’;

  • maximizing the value of data for AI; and

  • responsibly procuring Generative AI.

These high-level requirements are well targeted, and compliance with them would go a long way towards fostering ‘responsible AI procurement’ through adequate risk mitigation, in ways that still allow the procurement mechanism to harness market forces to generate value for money.

However, operationalising these requirements will be complex and the further OMB guidance should be rather detailed and practical.

Final thoughts

In my view, the AI Executive Order and the Draft AI in Government Policy lay the foundations for a significant strengthening of the governance of AI procurement with a view to embedding safeguards in public sector AI use. A crucially important characteristic in the design of these governance mechanisms is that they impose significant duties on the agencies seeking to procure and use AI, and explicitly seek to address risks of commercial capture and commercial determination. Another important characteristic is that, at least in principle, use of AI is made conditional on compliance with a rather comprehensive set of preventative and in-use risk mitigation measures. The general aspects of this governance approach thus offer a very valuable blueprint for other jurisdictions considering how to boost AI procurement governance.

However, as always, the devil is in the details. One of the crucial risks in this approach to AI governance concerns a lack of independence of the entities making the relevant assessments. In the Draft AI in Government Policy, there are some risks of under-inclusion and/or excessive waivers of compliance with the relevant requirements (both explicit and implicit, through protracted processes of decommissioning of non-compliant AI), as well as a risk that ‘practical considerations’ will push compliance with the risk mitigation requirements well past the (ambitious) 1 August 2024 deadline through long or rolling extensions.

To mitigate this, the guidance should be much clearer on the role of OMB in extension, waiver and decommissioning decisions, as well as in relation to the specific criteria and limits that should form part of those decisions. Only by ensuring adequate OMB intervention can a system of governance that still does not entirely (organisationally) separate procurement, use and oversight decisions reach the levels of independent verification required to neutralise not only commercial determination, but also operational dependency and the ‘policy irresistibility’ of digital technologies.

Thoughts on the AI Safety Summit from a public sector procurement & use of AI perspective

The UK Government hosted an AI Safety Summit on 1-2 November 2023. A summary of the targeted discussions in a set of 8 roundtables has been published for Day 1, as well as a set of Chair’s statements for Day 2, including considerations around safety testing, the state of the science, and a general summary of discussions. There is also, of course, the (flagship?) Bletchley Declaration, and an introduction to the announced AI Safety Institute (UK AISI).

In this post, I collect some of my thoughts on these outputs of the AI Safety Summit from the perspective of public sector procurement and use of AI.

What was said at the AI Safety Summit?

Although the summit was narrowly targeted to discussion of ‘frontier AI’ as particularly advanced AI systems, some of the discussions seem to have involved issues also applicable to less advanced (ie currently in existence) AI systems, and even to non-AI algorithms used by the public sector. As the general summary reflects, ‘There was also substantive discussion of the impact of AI upon wider societal issues, and suggestions that such risks may themselves pose an urgent threat to democracy, human rights, and equality. Participants expressed a range of views as to which risks should be prioritised, noting that addressing frontier risks is not mutually exclusive from addressing existing AI risks and harms.’ Crucially, ‘participants across both days noted a range of current AI risks and harmful impacts, and reiterated the need for them to be tackled with the same energy, cross-disciplinary expertise, and urgency as risks at the frontier.’ Hopefully, then, some of the rather far-fetched discussions of future existential risks can be conducive to taking action on current harms and risks arising from the procurement and use of less advanced systems.

There seemed to be some recognition of the need for more State intervention through regulation, for more regulatory control of standard-setting, and for more attention to be paid to testing and evaluation in the procurement context. For example, the summary of Day 1 discussions indicates that participants agreed that

  • ‘We should invest in basic research, including in governments’ own systems. Public procurement is an opportunity to put into practice how we will evaluate and use technology.’ (Roundtable 4)

  • ‘Company policies are just the baseline and don’t replace the need for governments to set standards and regulate. In particular, standardised benchmarks will be required from trusted external third parties such as the recently announced UK and US AI Safety Institutes.’ (Roundtable 5)

In Day 2, in the context of safety testing, participants agreed that

  • Governments have a responsibility for the overall framework for AI in their countries, including in relation to standard setting. Governments recognise their increasing role for seeing that external evaluations are undertaken for frontier AI models developed within their countries in accordance with their locally applicable legal frameworks, working in collaboration with other governments with aligned interests and relevant capabilities as appropriate, and taking into account, where possible, any established international standards.

  • Governments plan, depending on their circumstances, to invest in public sector capability for testing and other safety research, including advancing the science of evaluating frontier AI models, and to work in partnership with the private sector and other relevant sectors, and other governments as appropriate to this end.

  • Governments will plan to collaborate with one another and promote consistent approaches in this effort, and to share the outcomes of these evaluations, where sharing can be done safely, securely and appropriately, with other countries where the frontier AI model will be deployed.

This could be a basis on which to build an international consensus on the need for more robust and decisive regulation of AI development and testing, as well as a consensus on the sets of considerations and constraints that should be applicable to the procurement and use of AI by the public sector in a way that is compliant with individual (human) rights and social interests. The general summary reflects that ‘Participants welcomed the exchange of ideas and evidence on current and upcoming initiatives, including individual countries’ efforts to utilise AI in public service delivery and elsewhere to improve human wellbeing. They also affirmed the need for the benefits of AI to be made widely available’.

However, some statements seem at first sight contradictory or problematic. While the excerpt above stresses that ‘Governments have a responsibility for the overall framework for AI in their countries, including in relation to standard setting’ (emphasis added), the general summary also stresses that ‘The UK and others recognised the importance of a global digital standards ecosystem which is open, transparent, multi-stakeholder and consensus-based and many standards bodies were noted, including the International Standards Organisation (ISO), International Electrotechnical Commission (IEC), Institute of Electrical and Electronics Engineers (IEEE) and relevant study groups of the International Telecommunication Union (ITU).’ Quite how State responsibility for standard setting fits with industry-led standard setting by such organisations is not only difficult to fathom, but also one of the potentially most problematic issues due to the risk of regulatory tunnelling that delegation of standard setting without a verification or certification mechanism entails.

Moreover, there seemed to be insufficient agreement around crucial issues, which are summarised as ‘a set of more ambitious policies to be returned to in future sessions’, including:

‘1. Multiple participants suggested that existing voluntary commitments would need to be put on a legal or regulatory footing in due course. There was agreement about the need to set common international standards for safety, which should be scientifically measurable.

2. It was suggested that there might be certain circumstances in which governments should apply the principle that models must be proven to be safe before they are deployed, with a presumption that they are otherwise dangerous. This principle could be applied to the current generation of models, or applied when certain capability thresholds were met. This would create certain ‘gates’ that a model had to pass through before it could be deployed.

3. It was suggested that governments should have a role in testing models not just pre- and post-deployment, but earlier in the lifecycle of the model, including early in training runs. There was a discussion about the ability of governments and companies to develop new tools to forecast the capabilities of models before they are trained.

4. The approach to safety should also consider the propensity for accidents and mistakes; governments could set standards relating to how often the machine could be allowed to fail or surprise, measured in an observable and reproducible way.

5. There was a discussion about the need for safety testing not just in the development of models, but in their deployment, since some risks would be contextual. For example, any AI used in critical infrastructure, or equivalent use cases, should have an infallible off-switch.

8. Finally, the participants also discussed the question of equity, and the need to make sure that the broadest spectrum was able to benefit from AI and was shielded from its harms.’

All of these are crucial considerations in relation to the regulation of AI development, (procurement) and use. The lack of consensus around these issues already indicates that, while there was generic agreement that some regulation is necessary, there was much more limited agreement on what regulation is necessary. This is clearly reflected in what was actually agreed at the summit.

What was agreed at the AI Safety Summit?

Despite all the discussions, little was actually agreed at the AI Safety Summit. The Bletchley Declaration includes a lengthy (but rather uncontroversial?) description of the potential benefits and actual risks of (frontier) AI, some rather generic agreement that ‘something needs to be done’ (eg welcoming ‘the recognition that the protection of human rights, transparency and explainability, fairness, accountability, regulation, safety, appropriate human oversight, ethics, bias mitigation, privacy and data protection needs to be addressed’) and very limited and unspecific commitments.

Indeed, signatories only ‘committed’ to a joint agenda, comprising:

  • ‘identifying AI safety risks of shared concern, building a shared scientific and evidence-based understanding of these risks, and sustaining that understanding as capabilities continue to increase, in the context of a wider global approach to understanding the impact of AI in our societies.

  • building respective risk-based policies across our countries to ensure safety in light of such risks, collaborating as appropriate while recognising our approaches may differ based on national circumstances and applicable legal frameworks. This includes, alongside increased transparency by private actors developing frontier AI capabilities, appropriate evaluation metrics, tools for safety testing, and developing relevant public sector capability and scientific research’ (emphases added).

This does not amount to much that would not happen anyway and, given that one of the UK Government’s objectives for the Summit was to create mechanisms for global collaboration (‘a forward process for international collaboration on frontier AI safety, including how best to support national and international frameworks’), this agreement for each jurisdiction to do things as it sees fit, in accordance with its own circumstances, and to collaborate ‘as appropriate’ in view of those, seems like a very poor ‘win’.

In reality, there seems to be little coming out of the Summit other than a plan to continue the conversations in 2024. Given what had been said in one of the roundtables (num 5) in relation to the need to put adequate safeguards in place (‘this work is urgent, and must be put in place in months, not years’), it looks like the ‘to be continued’ approach won’t do or, at least, cannot be claimed to have made much of a difference.

What did the UK Government promise at the AI Safety Summit?

A more specific development announced on the occasion of the Summit (and overshadowed by the earlier US announcement) is that the UK will create the AI Safety Institute (UK AISI), a ‘state-backed organisation focused on advanced AI safety for the public interest. Its mission is to minimise surprise to the UK and humanity from rapid and unexpected advances in AI. It will work towards this by developing the sociotechnical infrastructure needed to understand the risks of advanced AI and enable its governance.’

Crucially, ‘The Institute will focus on the most advanced current AI capabilities and any future developments, aiming to ensure that the UK and the world are not caught off guard by progress at the frontier of AI in a field that is highly uncertain. It will consider open-source systems as well as those deployed with various forms of access controls. Both AI safety and security are in scope’ (emphasis added). This seems to carry forward the extremely narrow focus on ‘frontier AI’ and catastrophic risks that augured a failure of the Summit. It is also in clear contrast with the much more sensible, and repeated, assertions/consensus that other types of AI cause very significant risks, and with the acknowledgement of ‘a range of current AI risks and harmful impacts, and … the need for them to be tackled with the same energy, cross-disciplinary expertise, and urgency as risks at the frontier.’

Also crucially, UK AISI ‘is not a regulator and will not determine government regulation. It will collaborate with existing organisations within government, academia, civil society, and the private sector to avoid duplication, ensuring that activity is both informing and complementing the UK’s regulatory approach to AI as set out in the AI Regulation white paper’.

According to initial plans, UK AISI ‘will initially perform 3 core functions:

  • Develop and conduct evaluations on advanced AI systems, aiming to characterise safety-relevant capabilities, understand the safety and security of systems, and assess their societal impacts

  • Drive foundational AI safety research, including through launching a range of exploratory research projects and convening external researchers

  • Facilitate information exchange, including by establishing – on a voluntary basis and subject to existing privacy and data regulation – clear information-sharing channels between the Institute and other national and international actors, such as policymakers, international partners, private companies, academia, civil society, and the broader public’

It is also stated that ‘We see a key role for government in providing external evaluations independent of commercial pressures and supporting greater standardisation and promotion of best practice in evaluation more broadly.’ However, the extent to which UK AISI will be able to do that will hinge on issues that are not currently clear (or publicly disclosed), such as the membership of UK AISI or its institutional set up (as ‘state-backed organisation’ does not say much about this).

On that very point, it is somewhat problematic that the UK AISI ‘is an evolution of the UK’s Frontier AI Taskforce. The Frontier AI Taskforce was announced by the Prime Minister and Technology Secretary in April 2023’ (ahem, as the ‘Foundation Model Taskforce’—so this is the second rebranding of the same initiative in half a year). Equally problematic is that UK AISI ‘will continue the Taskforce’s safety research and evaluations. The other core parts of the Taskforce’s mission will remain in [the Department for Science, Innovation and Technology] as policy functions: identifying new uses for AI in the public sector; and strengthening the UK’s capabilities in AI.’ I find the retention of analysis pertaining to public sector AI use within government problematic, and a clear indication of the UK Government’s unwillingness to put meaningful mechanisms in place to monitor the process of public sector digitalisation. UK AISI very much sounds like a research institute with a focus on a very narrow set of AI systems, and with a remit that will hardly translate into relevant policymaking in areas in dire need of regulation. Finally, it is also very problematic that funding is not locked in: ‘The Institute will be backed with a continuation of the Taskforce’s 2024 to 2025 funding as an annual amount for the rest of this decade, subject to it demonstrating the continued requirement for that level of public funds.’ In reality, this means that the Institute’s continued existence will depend on the Government’s satisfaction with its work and the direction of travel of its activities and outputs. This is not at all conducive to independence, in my view.

So, all in all, there is very little new in the announcement of the creation of the UK AISI and, while there is a (theoretical) possibility for the Institute to make a positive contribution to regulating AI procurement and use (in the public sector), this seems extremely remote and potentially undermined by the Institute’s institutional set up. This is probably in stark contrast with the US approach the UK is trying to mimic (though more on the US approach in a future entry).

European Commission wants to see more AI procurement. Ok, but priorities need reordering

The European Commission recently published its 2023 State of the Digital Decade report. One of its key takeaways is that the Commission recommends that Member States step up innovation procurement investments in the digital sector.

The Commission has identified that ‘While the roll-out of digital public services is progressing steadily, investment in public procurement of innovative digital solutions (e.g. based on AI or big data) is insufficient and would need to increase substantially from EUR 188 billion to EUR 295 billion in order to reach full speed adoption of innovative digital solutions in public services’ (para 4.2, original emphasis).

The Commission has thus recommended that ‘Member States should step up investment and regulatory measures to develop and make available secure, sovereign and interoperable digital solutions for online public and government services’; and that ‘Member States should develop action plans in support of innovation procurement and step up efforts to increase public procurement investments in developing, testing and deploying innovative digital solutions’.

Tucked away in a different part of the report (which, frankly, has a rather odd structure), the Commission also recommends that ‘Member States should foster the availability of legal and technical support to procure and implement trustworthy and sovereign AI solutions across sectors.’

To my mind, the priorities for investment of public money need to be further clarified. Without a significant investment in an ambitious plan to quickly expand the public sector’s digital skills and capabilities, there can be no hope that increased procurement expenditure in digital technologies will bring adequate public sector digitalisation or foster the public interest more broadly.

Without a sophisticated public buyer that can adequately cut through the process of technological innovation, there is no hope that ‘throwing money at the problem’ will bring meaningful change. In my view, the focus and priority should be on upskilling the public sector before anything else—including ahead of the also recommended mobilisation of ‘public policies, including innovative procurement to foster the scaling up of start-ups, to facilitate the creation of spinoffs from universities and research centres, and to monitor progress in this area’ (para 3.2.3). Perhaps a substantial fraction of the 100+ billion EUR the Commission expects Member States to put into public sector digitalisation could go to building up the required capability… too much to ask?

G7 Guiding Principles and Code of Conduct on Artificial Intelligence -- some comments from a UK perspective

On 30 October 2023, G7 leaders published the Hiroshima Process International Guiding Principles for Advanced AI system (the G7 AI Principles), a non-exhaustive list of guiding principles formulated as a living document that builds on the OECD AI Principles to take account of recent developments in advanced AI systems. The G7 stresses that these principles should apply to all AI actors, when and as applicable to cover the design, development, deployment and use of advanced AI systems.

The G7 AI Principles are supported by a voluntary Code of Conduct for Advanced AI Systems (the G7 AI Code of Conduct), which is meant to provide guidance to help seize the benefits and address the risks and challenges brought by these technologies.

The G7 AI Principles and Code of Conduct came just two days before the start of the UK’s AI Safety Summit 2023. Given that the UK is part of the G7 and has endorsed the G7 Hiroshima Process and its outcomes, the interaction between the G7’s documents, the UK Government’s March 2023 ‘pro-innovation’ approach to AI and its aspirations for the AI Safety Summit deserves some comment.

G7 AI Principles and Code of Conduct

The G7 AI Principles aim ‘to promote safe, secure, and trustworthy AI worldwide and will provide guidance for organizations developing and using the most advanced AI systems, including the most advanced foundation models and generative AI systems.’ The principles are meant to be cross-cutting, as they target ‘among others, entities from academia, civil society, the private sector, and the public sector.’ Importantly, also, the G7 AI Principles are meant to be a stopgap solution, as G7 leaders ‘call on organizations in consultation with other relevant stakeholders to follow these [principles], in line with a risk-based approach, while governments develop more enduring and/or detailed governance and regulatory approaches.’

The principles include the reminder that ‘[w]hile harnessing the opportunities of innovation, organizations should respect the rule of law, human rights, due process, diversity, fairness and non-discrimination, democracy, and human-centricity, in the design, development and deployment of advanced AI system’, as well as a reminder that organizations developing and deploying AI should not undermine democratic values, harm individuals or communities, ‘facilitate terrorism, enable criminal misuse, or pose substantial risks to safety, security, and human rights’. States (as AI users) are reminded of their ‘obligations under international human rights law to promote that human rights are fully respected and protected’, and private sector actors are called to align their activities ‘with international frameworks such as the United Nations Guiding Principles on Business and Human Rights and the OECD Guidelines for Multinational Enterprises’.

These are all very high level declarations and aspirations that do not go much beyond pre-existing commitments and (soft) law norms, if at all.

The G7 AI Principles comprise a non-exhaustive list of 11 high-level regulatory goals that organizations should abide by ‘commensurate to the risks’—ie following the already mentioned risk-based approach—which introduces a first element of uncertainty because the document does not establish any methodology or explanation on how risks should be assessed and tiered (one of the primary, and debated, features of the proposed EU AI Act). The principles are the following, prefaced by my own labelling in square brackets:

  1. [risk identification, evaluation and mitigation] Take appropriate measures throughout the development of advanced AI systems, including prior to and throughout their deployment and placement on the market, to identify, evaluate, and mitigate risks across the AI lifecycle;

  2. [misuse monitoring] Identify and mitigate vulnerabilities, and, where appropriate, incidents and patterns of misuse, after deployment including placement on the market;

  3. [transparency and accountability] Publicly report advanced AI systems’ capabilities, limitations and domains of appropriate and inappropriate use, to support ensuring sufficient transparency, thereby contributing to increase accountability.

  4. [incident intelligence exchange] Work towards responsible information sharing and reporting of incidents among organizations developing advanced AI systems including with industry, governments, civil society, and academia.

  5. [risk management governance] Develop, implement and disclose AI governance and risk management policies, grounded in a risk-based approach – including privacy policies, and mitigation measures, in particular for organizations developing advanced AI systems.

  6. [(cyber) security] Invest in and implement robust security controls, including physical security, cybersecurity and insider threat safeguards across the AI lifecycle.

  7. [content authentication and watermarking] Develop and deploy reliable content authentication and provenance mechanisms, where technically feasible, such as watermarking or other techniques to enable users to identify AI-generated content.

  8. [risk mitigation priority] Prioritize research to mitigate societal, safety and security risks and prioritize investment in effective mitigation measures.

  9. [grand challenges priority] Prioritize the development of advanced AI systems to address the world’s greatest challenges, notably but not limited to the climate crisis, global health and education.

  10. [technical standardisation] Advance the development of and, where appropriate, adoption of international technical standards.

  11. [personal data and IP safeguards] Implement appropriate data input measures and protections for personal data and intellectual property.

Each of the principles is accompanied by additional guidance or precision, where possible, and this is further developed in the G7 Code of Conduct.

In my view, the list is a bit of a mixed bag.

There are some very general aspirations or steers that can hardly be considered principles of AI regulation, for example principle 9 setting a grand challenges priority and, possibly, principle 8 setting a risk mitigation priority beyond the ‘requirements’ of principle 1 on risk identification, evaluation and mitigation—which thus seems to boil down to the more specific steer in the G7 Code of Conduct for (private) organisations to ‘share research and best practices on risk mitigation’.

Quite how these principles could be complied with by current major AI developers seems rather difficult to foresee, especially in relation to principle 9. Most developers of generative AI or other AI applications linked to eg social media platforms will have a hard time demonstrating their engagement with this principle, unless we accept a general justification of ‘general purpose application’ or ‘dual use application’—which to me seems quite unpalatable. What is the purpose of this principle if eg it pushes organisations away from engaging with the rest of the G7 AI Principles? Or if organisations are allowed to gloss over it in any (future) disclosures linked to an eventual mechanism of commitment, chartering, or labelling associated with the principles? It seems like the sort of purely political aspiration that may have been better left aside.

Some other principles seem to push at an open door, such as principle 10 on the development of international technical standards. Again, the only meaningful detail seems to be in the G7 Code of Conduct, which specifies that ‘In particular, organizations also are encouraged to work to develop interoperable international technical standards and frameworks to help users distinguish content generated by AI from non-AI generated content.’ However, this is closely linked to principle 7 on content authentication and watermarking, so it is not clear how much that adds. Moreover, this comes to further embed the role of industry-led technical standards as a foundational element of AI regulation, with all the potential problems that arise from it (for some discussion from the perspective of regulatory tunnelling, see here and here).

Yet other principles present as relatively soft requirements or ‘noble’ commitments issues that are, in reality, legal requirements already binding on entities and States and that, in my view, should have been framed as hard obligations accompanied by a renewed commitment from G7 States to enforce them. These include principle 11 on personal data and IP safeguards, where the G7 Code of Conduct includes as an apparent afterthought that ‘Organizations should also comply with applicable legal frameworks’. In my view, this should be the starting point.

This reduces the list of AI Principles ‘proper’. But, even then, they can be further grouped and synthesised, in my view. For example, principles 1 and 5 are both about risk management, with the (outward-looking) governance layer of principle 5 seeking to give transparency to the (inward-looking) governance layer in principle 1. Principle 2 seems to simply seek to extend the need to engage with risk-based management post-market placement, which is also closely connected to the (inward-looking) governance layer in principle 1. All of them focus on the (undefined) risk-based approach to development and deployment of AI underpinning the G7’s AI Principles and Code of Conduct.

Some aspects of the incident intelligence exchange also relate to principle 1, while some other aspects relate to (cyber) security issues encapsulated in principle 6. However, given that this principle may be a placeholder for the development of some specific mechanisms of collaboration—either based on cyber security collaboration or other approaches, such as the much touted aviation industry’s—it may be treated separately.

Perhaps, then, the ‘core’ AI Principles arising from the G7 document could be trimmed down to:

  • Life-cycle risk-based management and governance, inclusive of principles 1, 2, and 5.

  • Transparency and accountability, principle 3.

  • Incident intelligence exchange, principle 4.

  • (Cyber) security, principle 6.

  • Content authentication and watermarking, principle 7 (though perhaps narrowly targeted to generative AI).

Most of the value in the G7 AI Principles and Code of Conduct thus arises from the pointers for collaboration, the more detailed self-regulatory measures, and the more specific potential commitments included in the latter. For example, in relation to the potential AI risks that are identified as potential targets for the risk assessments expected of AI developers (under guidance related to principle 1), or the desirable content of AI-related disclosures (under guidance related to principle 3).

It is however unclear how these principles will evolve when adopted at the national level, and to what extent they offer a sufficient blueprint to ensure international coherence in the development of the ‘more enduring and/or detailed governance and regulatory approaches’ envisaged by G7 leaders. It seems for example striking that both the EU and the UK have supported these principles, given that they have relatively opposing approaches to AI regulation—with the EU seeking to finalise the legislative negotiations on the first ‘golden standard’ of AI regulation and the UK taking an entirely deregulatory approach. Perhaps this is in itself an indication that, even at the level of detail achieved in the G7 AI Code of Conduct, the regulatory leeway is quite broad and still necessitates significant further concretisation for it to be meaningful in operational terms—as evidenced eg by the US President’s ‘Executive Order on Safe, Secure, and Trustworthy Artificial Intelligence’, which calls for that concretisation and provides a good example of the many areas for detailed work required to translate high level principles into actionable requirements (even if it leaves enforcement still undefined).

How do the G7 Principles compare to the UK’s ‘pro-innovation’ ones?

In March 2023, the UK Government published its white paper ‘A pro-innovation approach to AI regulation’ (the ‘UK AI White Paper’; for a critique, see here). The UK AI White Paper indicated (at para 10) that its ‘framework is underpinned by five principles to guide and inform the responsible development and use of AI in all sectors of the economy:

  • Safety, security and robustness

  • Appropriate transparency and explainability

  • Fairness

  • Accountability and governance

  • Contestability and redress’.

A comparison of the UK and the G7 principles shows a few things.

First, that there are some areas where there seems to be a clear correlation—in particular concerning (cyber) security as a self-standing challenge requiring a direct regulatory focus.

Second, that it is hard to decide at which level to place incommensurable aspects of AI regulation. Notably, the G7 principles do not directly refer to fairness—while the UK’s do. However, the G7 Principles do spend some time in the preamble addressing the issue of fairness and unacceptable AI use (though in a woolly manner). Whether placing this type of ‘requirement’ at one level or another makes any difference (at all) is highly debatable.

Third, that there are different ways of ‘packaging’ principles or (soft) obligations. Just as some of the G7 principles are closely connected or fold into each other (as above), so do the UK’s principles in relation to the G7’s. For example, the G7 packaged together transparency and accountability (principle 3), while the UK kept them separate. And while the UK explicitly mentioned the issue of AI explainability, this remains implicit in the G7 principles (also in principle 3).

Finally, in line with the considerations above, that distinct regulatory approaches only emerge or become clear once the ‘principles’ become specific (so they arguably stop being principles). For example, it seems clear that the G7 Principles aspire to higher levels of incident intelligence governance than the UK’s, and set a specific target of generative AI watermarking that the UK’s lack. However, whether the G7 or the UK principles are equally or more demanding on any other dimension of AI regulation is close to impossible to establish. In my view, this further supports the need for a much more detailed AI regulatory framework—else, technical standards will entirely occupy that regulatory space.

What do the G7 AI Principles tell us about the UK’s AI Safety Summit?

The Hiroshima Process that has led to the adoption of the G7 AI Principles and Code of Conduct emerged from the Ministerial Declaration of The G7 Digital and Tech Ministers’ Meeting of 30 April 2023, which explicitly stated that:

‘Given that generative AI technologies are increasingly prominent across countries and sectors, we recognise the need to take stock in the near term of the opportunities and challenges of these technologies and to continue promoting safety and trust as these technologies develop. We plan to convene future G7 discussions on generative AI which could include topics such as governance, how to safeguard intellectual property rights including copyright, promote transparency, address disinformation, including foreign information manipulation, and how to responsibly utilise these technologies’ (at para 47).

The UK Government’s ambitions for the AI Safety Summit largely focus on those same issues, albeit within the very narrow confines of ‘frontier AI’, which it has defined as ‘highly capable general-purpose AI models that can perform a wide variety of tasks and match or exceed the capabilities present in today’s most advanced models’. While the UK Government has published specific reports to focus discussion on (1) Capabilities and risks from frontier AI and (2) Emerging Processes for Frontier AI Safety, it is unclear how the level of detail of such a narrow approach could translate into broader international commitments.

The G7 AI Principles already claim to tackle ‘the most advanced AI systems, including the most advanced foundation models and generative AI systems (henceforth "advanced AI systems")’ within their scope. It seems unlikely that such an approach is based on a lack of knowledge or understanding of the detail the UK has condensed in those reports. It rather seems that the G7 was not ready to move quickly to a level of detail beyond that included in the G7 AI Code of Conduct. It is thus hard to see how significant further developments could emerge beyond the G7 AI Principles and Code of Conduct just two days after they were published.

Moreover, although the UK Government is downplaying the fact that eg Chinese participation in the AI Safety Summit is unclear and potentially rather marginal, it seems that, at best, the UK AI Safety Summit will be an opportunity for a continued conversation between G7 countries and a few others. It is also unclear whether significant progress will be made in a forum that seems rather clearly tilted towards industry voice and influence.

Let’s wait and see what the outcomes are, but I am not optimistic for significant progress other than, worryingly, a risk of further displacement of regulatory decision-making towards industry and industry-led (future) standards.

More model contractual AI clauses -- some comments on the SCL AI Clauses

Following the launch of the final version of the model contractual AI clauses sponsored by the European Commission earlier this month, the topic of how to develop and how to use contractual model clauses for AI procurement is getting hotter. As part of its AI Action Plan, New York City has announced that it is starting work to develop its own model clauses for AI procurement (to be completed in 2025). We can expect to see a proliferation of model AI clauses as more ‘AI legislation’ imposes constraints on contractual freedom and compliance obligations, and as different model clauses are revised to (hopefully) capture the learning from current experimentation in AI procurement.

Although not (closely) focused on procurement, a new set of interesting AI contractual clauses has been released by the Society for Computers & Law (SCL) AI Group (thanks to Gisele Waters for bringing them to my attention on LinkedIn!). In this post, I reflect on some aspects of the SCL AI clauses and try to answer Gisele’s question/challenge (below).

SCL AI Clauses

The SCL AI clauses have a clear commercial orientation and are meant as a starting point for supplier-customer negotiations, which is reflected in the fact that the proposed clauses contain two options: (1) a ‘pro-supplier’ drafting based on off-the-shelf provision, and (2) a ‘pro-customer’ drafting based on a bespoke arrangement. Following that commercial logic, most of the SCL AI clauses focus on an allocation of obligations (and thus costs and liability) between the parties (eg in relation to compliance with legal requirements).

The clauses include a few substantive requirements implicit in the allocation of the respective obligations (eg on data or third party licences), but mostly refer to detailed schedules for which there is no default proposal, or to industry standards (and thus share this limitation with eg the EU’s model AI clauses). The SCL AI clauses do contain some drafting notes that would help identify issues needing specific regulation in the relevant schedules, although this guidance necessarily remains rather abstract or generic.

This pro-supplier/pro-customer orientation prompted Gisele’s question/challenge, which is whether ‘there is EVER an opportunity for government (customer-buyer) to be better able to negotiate the final language with clauses like these in order to weigh the trade offs between interests?’, especially bearing in mind that the outcome of the negotiations could be strongly pro-supplier, strongly pro-customer, or anywhere in between (including balanced). I think that answering this question requires exploring what pro-supplier or pro-customer may mean in this specific context.

From a substantive regulation perspective, the SCL AI clauses include a few interesting elements, such as an obligation to establish a circuit-breaker capable of stopping the AI (aka an ‘off button’) and a roll-back obligation (to an earlier, non-faulty version of the AI solution) where the AI is malfunctioning or this is necessary to comply with applicable law. However, most of the substantive obligations are established by reference to ‘Good Industry Practice’, which requires some further unpacking.

SCL AI Clauses and ‘Good Industry Practice’

Most of the crucial proposed clauses refer to the benchmark of ‘Good Industry Practice’ as a primary qualifier for the relevant obligations. The proposed clause on explainability is a good example. The SCL AI clause (C1.15) reads as follows:

C1.15 The Supplier will ensure that the AI System is designed, developed and tested in a way which ensures that its operation is sufficiently transparent to enable the Customer to understand and use the AI System appropriately. In particular, the Supplier will produce to the Customer, on request, information which allows the Customer to understand:

C1.15.1 the logic behind an individual output from the AI System; and

C1.15.2 in respect of the AI System or any specific part thereof, which features contributed most to the output of the AI System, in each case, in accordance with Good Industry Practice.

A first observation is that the SCL AI clauses seem to presume that off-the-shelf AI solutions would not be (necessarily) explainable, as they include no clause under the ‘pro-supplier’ version.

Second, the ‘pro-customer’ version both limits the types of explanation that would be contractually owed and qualifies them in two important ways. The limitation is to a model-level or global explanation (under C1.15.2) and a limited decision-level or local explanation (under C1.15.1), which leaves out eg counterfactual explanations, and no specific requirements are set on how the explanation needs to be produced (eg is a ‘post hoc’ explanation acceptable and, if so, how should it be produced?). The two qualifications are that: (1) the overall requirement is that the AI system’s operation should be ‘sufficiently transparent’, with ‘sufficient’ creating a lot of potential issues here; and (2) the obligation is subject to ‘Good Industry Practice’ [more on this below].

The issue of transparency is similarly problematic in its more general treatment under another specific clause (C4.6), which also only has a ‘pro-customer’ version:

C4.6 The Supplier warrants that, so far as is possible [to achieve the intended use of the AI System / comply with the Specification], the AI System is transparent and interpretable [such that its output can be traced back to the input data].

The qualifier ‘so far as is possible’ is again potentially quite problematic here, as are the open-ended references to transparency and interpretability of the system (with a potential conflict between interpretability for the purposes of this clause and explainability under C1.15).

What I find interesting about this clause is that the drafting notes explain that:

… the purpose of this provision is to ensure that the Supplier has not used an overly-complex algorithm if this is unnecessary for the intended use of the AI System or to comply with its Specification. That said, effectiveness and accuracy are often trade-offs for transparency in AI models.

From this perspective, I think the clause should be retitled and entirely redrafted to make explicit that its purpose is to establish a principle of ‘AI minimisation’, in the sense of the supplier guaranteeing that the AI system is the least complex that can provide the desired functionality — which, of course, raises the tricky issues of trade-offs and of establishing the desired functionality in the first place (and which, in a procurement context, would have been dealt with pre-contract, eg through technical specifications and/or tender evaluation). Interestingly, this is another issue where reference could be made to ‘Good Industry Practice’, if one accepted that best practice should always be to use the most explainable/interpretable and simplest model available for a given task.

As mentioned, reference to ‘Good Industry Practice’ is used extensively in the SCL AI clauses, including on crucial issues such as: explainability (above), user manual/user training, preventing unlawful discrimination, security (which is inclusive of cybersecurity and some aspects of data protection/privacy), and quality standards. The drafting notes are clear that

… while parties often refer to ‘best practice’ or ‘good industry practice’, these standards can be difficult to apply in developing industry. Accordingly a clear Specification is required, …

This is why the SCL AI clauses foresee that ‘Good Industry Practice’ will be a defined contract term, whereby the parties will specify the relevant requirements and obligations. And here lies the catch.

Defining ‘Good Industry Practice’?

In the SCL AI clauses, all references to ‘Good Industry Practice’ are used as qualifiers in the pro-customer version of the clauses. It is possible that the same term would be of relevance to establishing whether the supplier had discharged its reasonable duties/best efforts under the pro-supplier version (where the term would be defined but not explicitly used). In both cases, the need to define ‘Good Industry Practice’ is the Achilles’ heel of the model clauses, as well as a potential Trojan horse for customers seeking a seemingly pro-customer contractual design.

The fact is that the extent of the substantive obligations arising from the contract will entirely depend on how the concept of ‘Good Industry Practice’ is defined and specified. This leaves even seemingly strongly ‘pro-customer’ contracts exposed to weak substantive protections. The biggest challenge for buyers/procurers of AI will be that (1) it will be hard to know how to define the term and what standards to refer to, and (2) it will be difficult to monitor compliance with those standards, especially where they establish eg mechanisms of self-assessment by the tech supplier as the primary or sole quality control mechanism.

So, my answer to Gisele’s question/challenge would be that the SCL AI clauses, much like the EU’s, do not (and cannot?) go far enough in ensuring that the contract for the procurement/purchase of AI embeds adequate substantive requirements. The model clauses are helpful in understanding who needs to do what and when, and thus who shoulders the relevant cost and risk. But they do not address the all-important question of how it needs to be done. And that is the crucial issue that will determine whether the contract (and the AI solution) really is in the public buyer’s interest and, ultimately, in the public interest.

In a context where tech providers (almost always) have the upper hand in negotiations, this foundational weakness is all-important, as suppliers could well ‘agree to pro-customer drafting’ and then immediately deactivate it through the more challenging and technical definition (and implementation) of ‘Good Industry Practice’.

That is why I think we need to address this regulatory tunnelling risk and this foundational shortcoming of ‘AI regulation by contract’ by creating clear and binding requirements on the how (ie the ‘good (industry) practice’ or technical standards). To me, the emergence of model AI contract clauses makes clear that efficient contract design requires reference to external benchmarks. Establishing adequate protections and an adequate balance of risks and benefits (from a social perspective) hinges on this. The contract can then deal with the apportionment of the burdens, obligations, costs and risks stemming from the already set requirements.

So I would suggest that the focus needs to be squarely on developing the regulatory architecture that will lead us to the development of such mandatory requirements and standards for the procurement and use of AI by the public sector — which may then become adequate good industry practice for strictly commercial or private contracts. My proposal in that regard is sketched out here.

Final EU model contractual AI Clauses available -- some thoughts on regulatory tunnelling

The European Commission has published the final version of the EU model contractual AI clauses to pilot in procurements of AI, which have been ‘developed for pilot use in the procurement of AI with the aim to establish responsibilities for trustworthy, transparent, and accountable development of AI technologies between the supplier and the public organisation.’

The model AI clauses have been developed by reference to the (future) obligations arising from the EU AI Act currently under advanced stages of negotiation. This regulatory technique simply seeks to allow public buyers to ensure compliance with the EU AI Act by cascading the relevant obligations and requirements down to tech providers (largely on a back-to-back basis). By the same regulatory logic, this technique will be a conveyor belt for the shortcomings of the EU AI Act, which will be embedded in public contracts using the clauses. It is thus important to understand the shortcomings inherent to this approach and to the model AI clauses, before assuming that their use will actually ensure the ‘trustworthy, transparent, and accountable development [and deployment] of AI technologies’. Much more is needed than mere reliance on the model AI clauses.

Two sets of model AI clauses

The EU AI Act will not be applicable to all types of AI use. Remarkably, most requirements will be limited to ‘high-risk AI uses’ as defined in its Article 6. This immediately translates into the generation of two sets of model AI clauses: one for ‘high-risk’ AI procurement, which embeds the requirements expected to arise from the EU AI Act once finalised, and another ‘light version’ for non-high-risk AI procurement, which would support the voluntary extension of some of those requirements to the procurement of AI for other uses, or even to the use of other types of algorithmic solutions not meeting the regulatory definition of AI.

A first observation is that the controversy surrounding the definition of ‘high-risk’ in the EU AI Act immediately carries over to the model AI clauses and to the choice of ‘demanding’ vs light version. While the original proposal of the EU AI Act contained a numerus clausus of high-risk uses (which was already arguably too limited, see here), the trilogue negotiations could well end up suppressing a pre-defined classification and leaving it to AI providers to (self)assess whether the use would be ‘high-risk’.

This has been heavily criticised in a recent open letter. If the final version of the EU AI Act ended up embedding such a self-assessment of what uses are bound to be high-risk, there would be clear risks of gaming of the self-assessment to avoid compliance with the heightened obligations under the Act (and it is unclear that the system of oversight and potential fines foreseen in the EU AI Act would suffice to prevent this). This would directly translate into a risk of gaming (or strategic opportunism) in the choice between ‘demanding’ vs light version of the model AI clauses by public buyers as well.

As things stand today, it seems that most procurement of AI will be subject to the light version of the model AI clauses, where contracting authorities will need to decide which clauses to use and which standards to refer to. Importantly, the light version does not include default options in relation to quality management, conformity assessments, corrective actions, inscription in an AI register, or compliance and audit (some of which are also optional under the ‘demanding’ model). This means that, unless public buyers are familiar with both sets of model AI clauses, taking the light version as a starting point already generates a risk of under-inclusiveness and under-regulation.

Limitations in the model AI clauses

The model AI clauses come with some additional ‘caveat emptor’ warnings. As the Commission has stressed in the press release accompanying the model AI clauses:

The EU model contractual AI clauses contain provisions specific to AI Systems and on matters covered by the proposed AI Act, thus excluding other obligations or requirements that may arise under relevant applicable legislation such as the General Data Protection Regulation. Furthermore, these EU model contractual AI clauses do not comprise a full contractual arrangement. They need to be customized to each specific contractual context. For example, EU model contractual AI clauses do not contain any conditions concerning intellectual property, acceptance, payment, delivery times, applicable law or liability. The EU model contractual AI clauses are drafted in such a way that they can be attached as a schedule to an agreement in which such matters have already been laid down.

This is an important warning, as the sole remit of the model AI clauses links back to the EU AI Act and, in the case of the light version, only partially.

The link between model AI clauses and standards

However, the most significant shortcoming of the model AI clauses is that, by design, they do not include any substantive or material constraints or requirements on the development and use of AI. All substantive obligations are meant to be incorporated by reference to the (harmonised) standards to be developed under the EU AI Act, other sets of standards or, more generally, the state-of-the-art. Plainly, there is no definition or requirement in the model AI clauses that establishes the meaning of eg trustworthiness—and there is thus no baseline safety net ensuring it. Similarly, most requirements are offloaded to (yet to emerge) standards or the technical and organisational measures devised by the parties. For example:

  • Obligations on record-keeping (Art 5 high-risk model) refer to capabilities conforming ‘to state of the art and, if available, recognised standards or common specifications. <Optional: add, if available, a specific standard>’.

  • Measures to ensure transparency (Art 6 high-risk model) are highly qualified: ‘The Supplier ensures that the AI System has been and shall be designed and developed in such a way that the operation of the AI System is sufficiently transparent to enable the Public Organisation to reasonably understand the system’s functioning’. Moreover, the detail of the technical and organisational measures that need to be implemented to reach those (qualified) goals is left entirely undefined in the relevant Annex (E) — thus leaving the option open for referral to emerging transparency standards.

  • Measures on human oversight (Art 7 high-risk model) are also highly qualified: ‘The Supplier ensures that the AI System has been and shall be designed and developed in such a way, including with appropriate human-machine interface tools, that it can be effectively overseen by natural persons as proportionate to the risks associated with the system’. Although there is some useful description of what ‘human oversight’ should mean as a minimum (Art 7(2)), the detail of the technical and organisational measures that need to be implemented to reach those (qualified) goals is also left entirely undefined in the relevant Annex (F) — thus leaving the option open for referral to emerging ‘human on the loop’ standards.

  • Measures on accuracy, robustness and cybersecurity (Art 8 high-risk model) follow the same pattern. Annexes G and H on levels of accuracy and on measures to ensure an appropriate level of robustness, safety and cybersecurity are also blank. While there can be mandatory obligations stemming from other sources of EU law (eg the NIS 2 Directive), only partial aspects of cybersecurity will be covered, and not in all cases.

  • Measures on the ‘explainability’ of the AI (Art 13 high-risk model) fall short of imposing an absolute requirement of intelligibility of the AI outputs, as the focus is on a technical explanation, rather than a contextual or intuitive explanation.

All in all, the model AI clauses are primarily an empty regulatory shell. Operationalising them will require reliance on (harmonised) standards—eg on transparency, human oversight, accuracy, explainability—or, most likely (at least until such standards are in place), significant additional concretisation by the public buyer seeking to rely on the model AI clauses.

For the reasons identified in my previous research, I think this is likely to generate regulatory tunnelling and to give the upper hand to AI providers in making sure they can comfortably live with requirements in any specific contract. The regulatory tunnelling stems from the fact that all meaningful requirements and constraints are offloaded to the (harmonised) standards to be developed. And it is no secret that the governance of the standardisation process falls well short of ensuring that the resulting standards will embed high levels of protection of the desired regulatory goals — some of which are very hard to define in ways that can be translated into procurement or contractual requirements anyway.

Moreover, public buyers with limited capabilities will struggle to use the model AI clauses in ways that meaningfully ‘establish responsibilities for trustworthy, transparent, and accountable development [and deployment] of AI technologies’—other than in relation to those standards. My intuition is that the content of the all too relevant schedules in the model AI clauses will either simply refer to emerging standards or, where there is no standard or the standard is for whatever reason considered inadequate, be left for negotiation with tech providers or made part of the tender evaluation (eg tenderers will be required to detail how they propose to regulate accuracy). Whichever way this goes, it puts the public buyer in the position of rule-taker.

Only very few, well-resourced, highly skilled public buyers (if any) would be able to meaningfully flesh out a comprehensive set of requirements in the relevant annexes to give the model AI clauses sufficient bite. And they would not benefit much from the model AI clauses, as their sophistication makes it likely they would have already come up with similar solutions. Therefore, at best, the contribution of the model AI clauses is rather marginal and, at worst, it comes with a significant risk of regulatory complacency.

Final thoughts

Indeed, given all of this, it is clear that the model AI clauses generate a risk if (non-sophisticated/most) public buyers think that relying on them will deal with the many and complex challenges inherent to the acquisition of AI. And an even bigger risk if we collectively think that the existence of such model AI clauses is all the regulation of AI procurement we need. This is not a criticism of the clauses in themselves, but rather of the technique of ‘regulation by contract’ that underlies them and of the broader approach followed by the European Commission and other regulators (including the UK’s)!

I have demonstrated how this is a flawed regulatory strategy in my forthcoming book Digital Technologies and Public Procurement. Gatekeeping and Experimentation in Digital Public Governance (OUP) and in many working papers resulting from the project with the same title. In my view, we need to do a lot more if we want to make sure that the public sector only procures and uses trustworthy AI technologies. We need to create a regulatory system that assigns to an independent authority both the permissioning of the procurement of AI and the certification of the standards underpinning such procurement. In the absence of such regulatory developments, we cannot meaningfully claim that the procurement of AI will be in line with the values and goals to be expected from ‘responsible’ AI use.

I will further explore these issues in a public lecture on 23 November 2023 at University College London. All welcome: Hybrid | Responsibly Buying Artificial Intelligence: A Regulatory Hallucination? | UCL Faculty of Laws - UCL – University College London.

Some thoughts on the need to rethink the right to good administration in the digital context

Colleagues at The Digital Constitutionalist have put together a really thought-provoking symposium on ‘Safeguarding the Right to Good Administration in the Age of AI’. I had the pleasure of contributing my own views on the need to extend and broaden good administration guarantees in the context of AI-assisted decision-making. I thoroughly recommend reading all contributions to the symposium, as this is an area of likely development in the EU Administrative Law space.

Purchasing uncertain or indefinite requirements – guest post by Șerban Filipon

I am delighted to present to How to Crack a Nut readers an outline of my recently published book Framework Agreements, Supplier Lists and Other Public Procurement Tools: Purchasing Uncertain or Indefinite Requirements (Hart Publishing, 2023). It is the result of years of doctoral research in public procurement law and policy at the University of Nottingham, and it incorporates my practical experience as well. After the end of the PhD, I updated and further developed the research for publication as a monograph.

Framework agreements, supplier lists, ID/IQ contracts, dynamic purchasing systems, and other tools of this kind are very widely used throughout the world, and tend to be quite complex in some respects. Paradoxically, the subject has so far received rather limited attention, particularly when it comes to analysing the phenomenon systematically across a variety of (very) different public procurement systems and/or international instruments. The book fills this gap, mainly through legal contextual analysis with comparative perspectives.

If in your professional or academic activity you come across questions involving matters of the kind presented below, then you are very likely to benefit from reading and studying this book.

Topics covered in the book

Given the complexity and multiple dimensions of the subject, I have structured some examples of possible questions/matters into a few categories, for illustration purposes, but please take an open and flexible view when going through them (as there is much more covered in the book!).

(A) Regulation and policy

  • aspects to consider when regulating (or seeking to improve the regulation of and policy regarding) tools for procurement of recurrent, uncertain, or indefinite requirements, in order to support the wider objectives of your relevant public procurement system, including (where applicable) how such regulation should be integrated with other existing regulation, for instance, with regulation mainly focused on ‘one-off’ purchases;

  •  what can be learnt from various procurement systems or international instruments, and how can (certain) approaches or elements in those systems become relevant when regulating your system, including through adaptation and conceptual streamlining;

  •  addressing legal review in relation to tools for procurement of recurrent, uncertain, or indefinite requirements.

(B) Regulatory interpretation and application

  • how can existing regulation on framework agreements, supplier lists, etc, be interpreted/applied in relation to areas where such regulation is contradictory, inconsistent, or silent;

  • to what extent is, or should, the general procurement regulation (usually relating to ‘one-off’ procurements) be applicable to framework agreements, supplier lists, etc, and how to address ‘grey’ or ambiguous areas in this interaction.

(C) Practice and operations

  • designing and planning the type of procurement arrangement (tool) that could be appropriate for specific circumstances, i.e., framework agreement or supplier list, and downstream, the sub-type/configuration of framework or supplier list, and its relevant features (choosing between various possible options), thus supporting procurement portfolio planning and implementation at the purchaser’s level; considerations on operating the designed arrangement;

  • what criteria and procedures could be used for awarding call-offs under a framework agreement without reopening competition at call-off stage (and using these in a balanced and appropriate way, depending on circumstances);

  • to what extent could a call-off under, say, a framework arrangement itself consist of a (secondary) framework, the conditions that should be taken into consideration for this approach, and the circumstances in which it can be useful.

(D) Research, education, and training 

  • conceptual realignment, redefining, and adjustment to facilitate understanding of the phenomenon across various public procurement systems that regulate, address and classify (very) differently the arrangements/tools for procurement of uncertain or indefinite requirements;

  • taxonomies of potential arrangements, and identifying potential arrangements currently not expressly provided for in regulation;

  • conceptual framework for analysing procurement of uncertain or indefinite requirements – across various procurement systems or international instruments, or using a 360-degree perspective concerning a specific system or tool, rather than a perspective confined to a specific procurement system.

Scope of the research

These types of questions give a flavour of what the book does and of its approach; certainly, the book covers much more and offers an in-depth appreciation of this vital topic across public procurement systems and legal instruments.

To achieve this, and to be of wide relevance throughout the world, this monograph analyses in depth seven different public procurement systems, using the same structure of analysis. The choice of systems and/or international legal instruments was carefully made to support such relevance, by taking into account a mixture of: legal and administrative traditions; experience with public procurement and public procurement regulation; specific experience in regulating and using procurement tools for recurrent, uncertain, or indefinite requirements; and objectives pursued through public procurement regulation.

The book thus looks specifically and in context at: the UNCITRAL Model Law on public procurement; the World Bank’s procurement rules and policy for investment project financing; the US federal procurement system; EU public procurement law and policy, and its transposition in two current EU member states – France and Romania; and the UK pre- and post-Brexit.

Systematic approach

By using the same structure for analysis both vertically (into each relevant tool under each procurement system or legal instrument investigated) and transversally (across all tools, systems, and legal instruments investigated), the book reveals a whole ‘universe’ of approaches (current and potential) towards procurement of recurrent, uncertain, or indefinite requirements. The book presents this ‘universe’ in a clear and orderly fashion that is meaningful for readers anywhere in the world and, on this basis, articulates a discipline (a conceptual framework) for analysing and addressing the regulation of and policy on procurement of recurrent, uncertain, or indefinite requirements.

The purpose of this newly articulated discipline is both to offer an understanding of the overall phenomenon investigated (within and across the systems and legal instruments analysed in the book), and to enable the design and development of bespoke solutions concerning the regulation, policy, and practice of procurement of recurrent, uncertain, or indefinite requirements. By bespoke solutions in this context, I mean solutions that respond to the specific features and objectives of the procurement system in question or of the specific procurement exercise in question. From this perspective, I consider the book is of interest both for readers working in the procurement systems specifically analysed by the monograph and for readers in many, many other procurement systems worldwide.

Main arguments and findings

Given the vast coverage, complexity, and variety of systems analysed, the arguments and findings of the book are multi-dimensional. The main ones are outlined here.

Firstly, I argue that whilst significant developments have occurred in the procurement of recurrent, uncertain, or indefinite requirements over recent decades, regulation in all the systems and legal instruments analysed continues to be, in various ways and to various degrees, a work in progress. To unleash the potential that these arrangements have for enhanced efficiency and effectiveness in public procurement, more balanced regulation is needed, as is more work on regulatory, policy, and implementation matters.

The systems and legal instruments researched by the monograph tend to leave aside various potential configurations of arrangements, either by way of prohibiting them or by not expressly providing for them. Thus, a second main argument I make is that wider categories/configurations (or ranges) of potential arrangements should be expressly permitted in regulation, but subject to further – specifically tailored – regulatory controls and conditions concerning their use. These include procedural and transparency measures (which can be facilitated nowadays thanks to electronic means), as well as legal review and oversight mechanisms designed (and provided for in the relevant legal instrument) to address the specific matters that may arise in preparing and operating arrangements for procurement of uncertain or indefinite requirements.

Certainly, any such expansion of coverage, as well as the specific safeguards referred to above, would be different (and differently approached) from system to system, so as to fit and respond to the relevant procurement context.

With a couple of notable exceptions, many of the systems and international instruments investigated in the book have shown reluctance toward recognising and permitting the general use of supplier-list-type arrangements (such as qualification systems in the EU utilities sector). The third argument I make here is that this approach is unjustified; in fact, it precludes purchasers from using a tool that can be particularly useful in certain situations if it is subject to appropriate procedural, transparency, and legal review measures, as discussed above.

Conversely – with the notable exception of the UNCITRAL Model Law on public procurement, which can be regarded as a benchmark in many respects concerning the regulation of framework arrangements – a rather lax approach seems to govern framework-type arrangements. Regulation in many of the systems investigated in the monograph tends to permit a rather liberal use of framework arrangements, with insufficient conditions and/or controls in various respects, which can affect their beneficial use and/or foster abuse. In other respects, however, the regulation can be too rigid. So, in addition to the need for more balanced regulation, my fourth argument relates to encouraging the use of framework arrangements for security of supply, and for planning for and responding to crises (catastrophic events), rather than mainly (just) for aggregation of (recurrent) demand, economies of scale, and administrative convenience.

Finally, I argue that all the above can be significantly supported by developing a specific area of public procurement regulation, to address – expressly, systematically, and directly – the complexities and features of procuring uncertain or indefinite requirements. In contrast, so far, the procurement systems / legal instruments analysed tend to address many issues arising from procurement of recurrent, uncertain, or indefinite requirements, indirectly through the lenses of ‘one-off’ procurements, by way of exception – or by implication – from the rules on ‘one-off’ procurements.

In my view, fundamental changes in approaching regulation and policy of framework arrangements and supplier lists in public procurement are strongly needed, as explained above. The sooner they occur, the better the chances for improvement in efficiency and effectiveness in and through public procurement.

For those wishing to deepen their understanding of this area, I am very pleased to attach here a voucher that provides a 20% discount on the book price. The book can be ordered using this link (inserting the relevant discount code shown in the voucher).

I wish you an enjoyable and, most importantly, useful reading!

Șerban Filipon

With over 20 years of international experience in public procurement professional consulting services, including procurement reform, capacity building, and implementation, as well as in procurement management and research, Șerban Filipon (MCIPS) holds a PhD in public procurement law from the University of Nottingham, UK (2018), and an MSc in Procurement Management awarded with distinction by the University of Strathclyde, UK (2006).

Șerban Filipon is a senior procurement consultant.

Digital technologies and public procurement -- new monograph available for pre-order

My forthcoming monograph Digital Technologies and Public Procurement: Gatekeeping and Experimentation in Digital Public Governance is now advertised and ready for pre-order from OUP’s website.

Here is the book description, in case of interest:

The digital transformation of the public sector has accelerated. States are experimenting with technology, seeking more streamlined and efficient digital government and public services. However, there are significant concerns about the risks and harms to individual and collective rights under new modes of digital public governance. Several jurisdictions are attempting to regulate digital technologies, especially artificial intelligence; however, regulatory efforts primarily concentrate on technology use by companies, not by governments. The regulatory gap underpinning public sector digitalisation is growing.

As it controls the acquisition of digital technologies, public procurement has emerged as a 'regulatory fix' to govern public sector digitalisation. It seeks to ensure through its contracts that public sector digitalisation is trustworthy, ethical, responsible, transparent, fair, and (cyber) safe.

However, in Digital Technologies and Public Procurement: Gatekeeping and Experimentation in Digital Public Governance, Albert Sanchez-Graells argues that procurement cannot perform this gatekeeping role effectively. Through a detailed case study of procurement digitalisation as a site of unregulated technological experimentation, he demonstrates that relying on 'regulation by contract' creates a false sense of security in governing the transition towards digital public governance. This leaves the public sector exposed to the 'policy irresistibility' that surrounds hyped digital technologies.

Bringing together insights from political economy, public policy, science, technology, and legal scholarship, this thought-provoking book proposes an alternative regulatory approach and contributes to broader debates of digital constitutionalism and digital technology regulation.

AI in the public sector: can procurement promote trustworthy AI and avoid commercial capture?

The recording and slides of the public lecture on ‘AI in the public sector: can procurement promote trustworthy AI and avoid commercial capture?’ I gave at the University of Bristol Law School on 4 July 2023 are now available. As always, any further comments most warmly received at: a.sanchez-graells@bristol.ac.uk.

This lecture brought my research project to an end. I will now focus on finalising the manuscript and sending it off to the publisher, and then take a break for the rest of the summer. I will share details of the forthcoming monograph in a few months. I hope to restart blogging in September. In the meantime, I wish all HTCaN friends all the best. Albert

Two policy briefings on digital technologies and procurement

Now that my research project ‘Digital technologies and public procurement. Gatekeeping and experimentation in digital public governance’ nears its end, some outputs start to emerge. In this post, I would like to highlight two policy briefings summarising some of my top-level policy recommendations, and providing links to more detailed analysis. All materials are available in the ‘Digital Procurement Governance’ tab.

Policy Briefing 1: ‘Guaranteeing public sector adoption of trustworthy AI - a task that should not be left to procurement’

What's the rush -- some thoughts on the UK's Foundation Model Taskforce and regulation by Twitter

I have been closely following developments on AI regulation in the UK as part of the background research for the joint submission to the public consultation closing on Wednesday (see here and here). Perhaps surprisingly, the biggest developments do not concern the regulation of AI under the devolved model described in the ‘pro-innovation’ white paper, but its displacement outside existing regulatory regimes—both in terms of funding and of practical power.

Most of the activity and investments are not channelled towards existing resource-strained regulators to support them in their task of issuing guidance on how to deal with AI risks and harms—which stems from the white paper—but towards digital industrial policy and R&D projects, including a new major research centre on responsible and trustworthy AI and a Foundation Model Taskforce. A first observation is that this type of investment can be worthwhile, but not at the expense of adequately resourcing regulators facing the tall order of AI regulation.

The UK’s Prime Minister is clearly making a move to use ‘world-leadership in AI safety’ as a major plank of his re-election bid in the coming Fall. I am not only sceptical about this move and its international reception, but also increasingly concerned about a tendency to ‘regulate by Twitter’ and about bullish approaches to regulatory and legal compliance that could well result in squandering a good part of the £100m set aside for the Taskforce.

In this blog, I offer some preliminary thoughts. Comments welcome!

Twitter announcements vs white paper?

During the preparation of our response to the AI public consultation, we had a moment of confusion. The Government published the white paper and an impact assessment supporting it, which primarily amount to doing nothing and maintaining the status quo (aka the AI regulatory gap) in the UK. However, there were increasing reports of the Prime Minister’s change of heart after the emergence of a ‘doomer’ narrative peddled by OpenAI’s CEO and others. At some point, the PM sent out a tweet that made us wonder if the Government was changing policy and abandoning the approach of the white paper even before the end of the public consultation. This was the tweet.

We could not locate any document describing the ‘Safe strategy of AI’, so the only conclusion we could reach was that the ‘strategy’ was the short Twitter thread that followed that first tweet.

It was not only surprising that there was no detail, but also that there was no reference to the white paper or to any other official policy document. We were probably not the only ones confused about it (or so we hope!), as it is in general very confusing to have social media messaging pointing towards regulatory interventions completely outside the existing frameworks—including live public consultations by the government!

It is also confusing to see multiple documents refer to different things, with later documents somehow reframing what previous documents meant.

For example, the announcement of the Foundation Model Taskforce came only a few weeks after the publication of the white paper, but there was no mention of it in the white paper itself. Is it possible that the Government had put together a significant funding package and related policy in under a month? The question is not so much whether that is possible, but why do things in this way? And how mature was the thinking behind the Taskforce?

Indeed, the initial announcement indicated that

The investment will build the UK’s ‘sovereign’ national capabilities so our public services can benefit from the transformational impact of this type of AI. The Taskforce will focus on opportunities to establish the UK as a world leader in foundation models and their applications across the economy, and acting as a global standard bearer for AI safety.

The funding will be invested by the Foundation Model Taskforce in foundation model infrastructure and public service procurement, to create opportunities for domestic innovation. The first pilots targeting public services are expected to launch in the next six months.

Less than two months later, the announcement of the appointment of the Taskforce chair (below) indicated that

… a key focus for the Taskforce in the coming months will be taking forward cutting-edge safety research in the run up to the first global summit on AI safety to be hosted in the UK later this year.

Bringing together expertise from government, industry and academia, the Taskforce will look at the risks surrounding AI. It will carry out research on AI safety and inform broader work on the development of international guardrails, such as shared safety and security standards and infrastructure, that could be put in place to address the risks.

Is it then a Taskforce and pot of money seeking to develop sovereign capabilities and to pilot public sector AI use, or a Taskforce seeking to develop R&D in AI safety? Can it be both? Is there money for both? Also, why steer the £100m Taskforce in this direction and simultaneously spend £31m in funding an academic-led research centre on ethical and trustworthy AI? Is the latter not encompassing issues of AI safety? How will all of these investments and initiatives be coordinated to avoid duplication of effort or replication of regulatory gaps in the disparate consideration of regulatory issues?

Funding and collaboration opportunities announced via Twitter?

Things can get even more confusing, or worrying (for me). Yesterday, the Government put out an official announcement and heavy Twitter-based PR to announce the appointment of the Chair of the Foundation Model Taskforce. This announcement raises a few questions. Why on a Sunday? What was the rush? Also, what process, if any, was used to select the Chair? I have no questions about the profile and suitability of the appointed Chair (which I have not looked at in detail), but I wonder… even if it is legally compliant to proceed without a formal process with an open call for expressions of interest, is this appropriate? Is the Government stretching the parallelism with the Vaccines Taskforce too far?

Relatedly, there has been no (or I have been unable to locate) official call for expressions of interest from those seeking to get involved with the Taskforce. However, once more, Twitter seems to have been the (pragmatic?) medium used by the newly appointed Chair of the Taskforce. On Sunday itself, this Twitter thread went out:

I find the last bit particularly shocking. A call for expressions of interest in participating in a project capable of spending up to £100m, via Google Forms! At the time of writing, the form is here and its content is as follows:

I find this approach to AI regulation rather concerning, and can see quite a few ways in which the emerging work approach could lead to breaches of procurement law, subsidy controls, or recruitment processes (depending on whether expressions of interest are corporate or individual). I also wonder what the rush is with all of this, and what sort of record-keeping will take place so that there is adequate accountability for this expenditure. What is the rush?

Or rather, I know that the rush is simply politically driven, and that this is another way in which public funds are put at risk for the wrong reasons. But for the entirely arbitrary deadline of the ‘world AI safety summit’ the PM wants to host in the UK in the Fall — preferably ahead of any general election, I would think — it is almost impossible to justify the change of gear between the ‘do nothing’ AI white paper and the ‘rush everything’ approach driving the Taskforce. I hope we will not end up with another set of enquiries and reports, such as those stemming from the PPE procurement scandal or the ventilator challenge, but it is hard to see how this can all be done in a legally compliant manner, and with the serenity, clarity of view, and long-term thinking required of regulatory design. Even in the field of AI. Unavoidably, more to follow.

Response to the UK’s March 2023 White Paper "A pro-innovation approach to AI regulation"

Together with colleagues at the Centre for Global Law and Innovation of the University of Bristol Law School, I submitted a response to the UK Government’s public consultation on its ‘pro-innovation’ approach to AI regulation. For an earlier assessment, see here.

The full submission is available at https://ssrn.com/abstract=4477368, and this is the executive summary:

The white paper ‘A pro-innovation approach to AI regulation’ (the ‘AI WP’) claims to advance a ‘pro-innovation, proportionate, trustworthy, adaptable, clear and collaborative’ model that leverages the capabilities and skills of existing regulators to foster AI innovation. This model, we are told, would be underpinned by a set of principles providing a clear, unified, and flexible framework improving upon the current ‘complex patchwork of legal requirements’ and striking ‘the right balance between responding to risks and maximising opportunities.’

In this submission, we challenge such claims in the AI WP. We argue that:

  • The AI WP does not advance a balanced and proportionate approach to AI regulation, but rather, an “innovation first” approach that caters to industry and sidelines the public. The AI WP primarily serves a digital industrial policy goal ‘to make the UK one of the top places in the world to build foundational AI companies’. The public interest is downgraded and building public trust is approached instrumentally as a mechanism to promote AI uptake. Such an approach risks breaching the UK’s international obligations to create a legal framework that effectively protects fundamental rights in the face of AI risks. Additionally, in the context of public administration, poorly regulated AI could breach due process rules, putting public funds at risk.

  • The AI WP does not embrace an agile regulatory approach, but active deregulation. The AI WP stresses that the UK ‘must act quickly to remove existing barriers to innovation’ without explaining how any of the existing safeguards are no longer required in view of identified heightened AI risks. Coupled with the “innovation first” mandate, this deregulatory approach risks eroding regulatory independence and the effectiveness of the regulatory regimes the AI WP claims to seek to leverage. A more nuanced regulatory approach that builds on, rather than threatens, regulatory independence is required.

  • The AI WP builds on shaky foundations, including the absence of a mapping of current regulatory remits and powers. This makes it near impossible to assess the effectiveness and comprehensiveness of the proposed approach, although there are clear indications that regulatory gaps will remain. The AI WP also presumes continuity in the legal framework, which ignores reforms currently promoted by Government and further reforms of the overarching legal regime repeatedly floated. It seems clear that some regulatory regimes will soon see their scope or stringency limited. The AI WP does not provide clear mechanisms to address these issues, which undermine its core claim that leveraging existing regulatory regimes suffices to address potential AI harms. This is perhaps particularly evident in the context of AI use for policing, which is affected by both the existence of regulatory gaps and limitations in existing legal safeguards.

  • The AI WP does not describe a full, workable regulatory model. Lack of detail on the institutional design to support the central function is a crucial omission. Crucial tasks are assigned to such central function without clarifying its institutional embedding, resourcing, accountability mechanisms, etc.

  • The AI WP foresees a government-dominated approach that further risks eroding regulatory independence, in particular given the “innovation first” criteria to be used in assessing the effectiveness of the proposed regime.

  • The principles-based approach to AI regulation suggested in the AI WP is undeliverable due to lack of detail on the meaning and regulatory implications of the principles, barriers to translation into enforceable requirements, and tensions with existing regulatory frameworks. The minimalistic legislative intervention entertained in the AI WP would not equip regulators to effectively enforce the general principles. Following the AI WP would also result in regulatory fragmentation and uncertainty and not resolve the identified problem of a ‘complex patchwork of legal requirements’.

  • The AI WP does not provide any route towards sufficiently addressing the digital capabilities gap, or towards mitigating new risks to capabilities, such as deskilling—which create significant constraints on the likely effectiveness of the proposed approach.

Full citation: A Charlesworth, K Fotheringham, C Gavaghan, A Sanchez-Graells and C Torrible, ‘Response to the UK’s March 2023 White Paper "A pro-innovation approach to AI regulation"’ (June 19, 2023). Available at SSRN: https://ssrn.com/abstract=4477368.

The challenges of researching digital technology regulation -- some quick thoughts (or a rant)

Keeping up with developments in digital technologies and their regulation is exhausting.

Whenever a technology becomes mainstream (looking at you, ChatGPT, but also looking at blockchain in the rear mirror, and web 2.0 slightly behind… etc) there is a (seemingly) steep learning curve for researchers interested in regulation to climb — sometimes to find little novelty in the regulatory challenges they pose.

It recently seems like those curves are coming closer and closer together, whichever route one takes to exploring tech regulation.

Yet, it is not only that debates and regulatory interventions shift rather quickly, but also that these are issues of such social importance that the (academic) literature around them has exploded. Any automated search will trigger daily alerts to new pieces of scholarship and analysis (of mixed quality and relevance). Not to mention news items, policy reports, etc. Sifting through them beyond a cursory look at the abstracts is a job in itself …

These elements of ‘moving tech targets’ and ‘exponentially available analysis’ make researching these areas rather challenging. And sometimes I wonder if it is even possible to do it (well).

Perhaps I am just a bit overwhelmed.

I am in the process of finalising the explanation of the methodology I have used for my monograph on procurement and digital technologies—as it is clear that I cannot simply get away with stating that it is a ‘technology-centred interdisciplinary legal method’ (which it is, though).

Chatting to colleagues about it (and with the UK’s REF-fuelled obsession with ‘4* rigour’ in the background), the question keeps coming up: how does one make sure to cover the relevant field? How is anyone’s choice of sources not *ahem* capricious or random? In other words, how do you make sure you have not missed anything important? (I’ll try to sleep tonight, cheers!).

I am not sure I have a persuasive, good answer. I am also not sure that ‘comprehensiveness’ is a reasonable expectation of a well-done literature review or piece of academic analysis (any more). If it is, then barring automated and highly formalised approaches to ‘scoping the field’, I fear we may quickly presume there is no possible method in the madness. But that does not sit right with me. And I also do not think it is a case of throwing out the ‘qualitative research’ label as a defence, as it means something different (and rigorous).

The challenge of expressing (and implementing) a defensible legal method in the face of such ‘moving tech targets’ and ‘exponentially available analysis’ is not minor.

And, on the other side of the coin, there is a lurking worry that whatever output results from this research will be lost in such an ocean of (electronic) academic papers and books —for, if everyone is struggling to sift through the materials and has ever-growing (Russian-doll-style) ‘to read’ folders as I do, will eyes ever be set on the research?

Perhaps method does not matter that much after all? (Not comforting, I know!).

Rant over.

"Can Procurement Be Used to Effectively Regulate AI?" [recording]

The recording and slides for yesterday’s webinar on ‘Can Procurement Be Used to Effectively Regulate AI?’ co-hosted by the University of Bristol Law School and the GW Law Government Procurement Programme are now available for catch up if you missed it.

I would like to thank once again Dean Jessica Tillipman (GW Law), Dr Aris Georgopoulos (Nottingham), Elizabeth "Liz" Chirico (Acquisition Innovation Lead at the Office of the Deputy Assistant Secretary of the Army - Procurement) and Scott Simpson (Digital Transformation Lead, Department of Homeland Security Office of the Chief Procurement Officer - Procurement Innovation Lab) for a really interesting discussion, and all participants for their questions. Comments most welcome, as always.

Testing the limits of ChatGPT’s procurement knowledge (and stubbornness) – guest post by Džeina Gaile

Following up on the discussion whether public sector use of ChatGPT should be banned, in this post, Džeina Gaile* shares an interesting (and at points unnerving) personal experiment with the tool. Džeina asked a few easy questions on the topic of her PhD research (tender clarifications).

The answers – including the ‘hallucinations’, that is, the glaring mistakes – and the tone are worth paying attention to. I find the part of the conversation on the very existence of Article 56, and on the content of Article 56(3) of Directive 2014/24/EU, particularly (ahem) illuminating. Happy reading!

PS. If you take Džeina up on her provocation and run your own procurement experiment on ChatGPT (or equivalent), I will be delighted to publish it here as well.

Liar, liar, pants on fire – what ChatGPT did not teach me
about my own PhD research topic

 DISCLAIMER: The views provided here are just the result of an experiment by some random procurement expert who is not a specialist in IT law or any other AI-related field of law.

If we consider law a form of art, then, as lawyers, words are our main instrument. Therefore, we have a special respect for language, as well as for the facts that our words represent. We know the liability that comes with the use of the wrong words. One problem with ChatGPT is that it doesn’t.

This brings us to an experiment that can be performed by anyone with at least basic knowledge of the internet and some in-depth knowledge of a specific field, or at least an idea of the information that could be tested on the web. What can you do? Ask ChatGPT (or an equivalent) some questions you already know the answers to. It helps if the (expected) answers include some facts, numbers, or people you can find on Google. Just remember to double-check everything. And see how it goes.

My experiment was performed on May 3rd, 4th and 17th, 2023, mostly in the midst of yet another evening spent trying to do something PhD related. (As you may know, the status of student upgrades your procrastination skills to a level you never even knew before, despite your age. That is how this article came about).

I asked ChatGPT a few questions on my research topic, for fun and possible insights. At the end of this article you can see quite long excerpts from our conversation, where you will find that you can sometimes get the right information (after being very persuasive with your questions!), but not always, as in the May 4th and 17th interactions. And you can get very many apologies along the way (if you are into that).[1]

However, such persuasion ought not to be necessary if the information is relatively easy to find, since, well, we have all used Google and it already knows how to find things. You could also call the answers given on May 4th and 17th misleading, or even pure lies. This, consequently, casts doubt on any information provided by this tool (at least, at this moment), if we follow the human logic that simpler things (such as finding the right article or paragraph in a law) are more easily done than complex things (such as giving an opinion on a difficult legal issue). As can be seen from the chat, we don't even know what ChatGPT's true sources are, or how it actually works when it tells you something that is not true (while still presenting it as fact).

Maybe some magic words like “as far as I know” or “prima facie” in the answers could have made me more sympathetic towards my chatty friend. The total certainty with which the information is provided gives further reason for concern. What if I am a normal human being and don't know the real answer, have forgotten or not noticed the disclaimer at the bottom of the chat (as happens with small print), or lack the persistence to check the information? I might include the answers in my homework, an essay, or even in my views on an issue at work, since, as you know, we are short of time and need everything done by yesterday. The path of least resistance is one of the most tempting. (And in the case of AI we should be aware of a human tendency called “anthropomorphising”, i.e., attributing human form or personality to things that are not human, which may lead us to trust something more, or more easily, than we should.)

The reliability of the information provided by State institutions, as well as by lawyers, has been one of the cornerstones of people's belief in the justice system. Therefore, it could be concluded that either I had bad luck, or one should be very careful when introducing AI in State institutions. Such use should be limited to cases where only factual information is provided (with the possibility to see and check the sources) until the credibility of AI opinions can be reviewed and verified. For now, you should believe the disclaimers of its creators, use AI resources with quite (legitimate) mistrust, and treat it somewhat like a child that has done something wrong but will not admit it, no matter how long you interrogate it. And don't take it for something it is not, even if it sounds like you should listen to it.**

May 3rd, 2023

[Reminder: Article 56(3) of the Directive 2014/24/EU: Where information or documentation to be submitted by economic operators is or appears to be incomplete or erroneous or where specific documents are missing, contracting authorities may, unless otherwise provided by the national law implementing this Directive, request the economic operators concerned to submit, supplement, clarify or complete the relevant information or documentation within an appropriate time limit, provided that such requests are made in full compliance with the principles of equal treatment and transparency.]

[...]

[… a quite lengthy discussion about the discretion of the contracting authority to ask for the information ...]

[The author did not get into a discussion about the opinion of ChatGPT on this issue, because that was not the aim of the chat, however, this could be done in some other conversation.]

[…]

[… long explanation ...]

[...]

May 4th, 2023

[Editor’s note: apologies that some of the screenshots appear in a small font…].

[…]

Both links that ChatGPT gave are correct:

https://eur-lex.europa.eu/legal-content/EN/TXT/?uri=CELEX%3A32014L0024

https://eur-lex.europa.eu/legal-content/EN/TXT/PDF/?uri=CELEX:32014L0024&from=EN

However, both citations are wrong.

May 17th, 2023

[As you will see, ChatGPT doesn't give links anymore, so it may have learned a bit within these few weeks].

[Editor’s note: apologies again that the remainder of the screenshots appear in a small font…].

[...]

[Not to be continued.]

DŽEINA GAILE

My name is Džeina Gaile and I am a doctoral student at the University of Latvia. My research focuses on clarification of a submitted tender, but I am interested in many aspects of public procurement. Therefore, I am supplementing my knowledge as often as I can and have a Master of Laws in Public Procurement Law and Policy with Distinction from the University of Nottingham. I also have been practicing procurement and am working as a lawyer for a contracting authority. In a few words, a bit of a “procurement geek”. In my free time, I enjoy walks with my dog, concerts, and social dancing.

________________

** This article was reviewed by Grammarly. Still, I hope it will not tell ChatGPT anything… [Editor’s note – the draft was then further reviewed by a human, yours truly].

[1] To be fair, I must stress that at the bottom of the chat page, there is a disclaimer: “Free Research Preview. ChatGPT may produce inaccurate information about people, places, or facts. ChatGPT May 3 Version” or “Free Research Preview. ChatGPT may produce inaccurate information about people, places, or facts. ChatGPT May 12 Version” later. And, when you join the tool, there are several announcements that this is a work in progress.


ChatGPT in the Public Sector -- should it be banned?

In ‘ChatGPT in the Public Sector – overhyped or overlooked?’ (24 Apr 2023), the Analysis and Research Team (ART) of the General Secretariat of the Council of the European Union provides a useful and accessible explanation of how ChatGPT works, as well as an interesting analysis of the risks and pitfalls of rushing to embed generative artificial intelligence (GenAI), and large language models (LLMs) in particular, in the functioning of the public administration.

The analysis stresses the risks stemming from ‘inaccurate, biased, or nonsensical’ GenAI outputs and, in particular, that ‘the key principles of public administration such as accountability, transparency, impartiality, or reliability need to be considered thoroughly in the [GenAI] integration process’.

The paper provides a helpful introduction to how LLMs work and their technical limitations. It then maps potential uses in the public administration, assesses the potential impact of their use on the European principles of public sector administration, and then suggests some measures to mitigate the relevant risks.

This analysis is helpful but, in my view, it is already captured by the presumption that LLMs are here to stay and that all regulators can do is try to minimise their potential negative impacts, which implies accepting that some impacts will remain unaddressed. By referring to general principles of public administration, rather than eg the right to good administration under the EU Charter of Fundamental Rights, the analysis is also unnecessarily lenient.

I find this type of discourse dangerous and troubling because it facilitates the adoption of digital technologies that cannot meet current legal requirements and guarantees of individual rights. This is clear from the paper itself, although the implications of part of the analysis are not sufficiently explored, in my view.

The paper has a final section where it explicitly recognises that, while some risks might be mitigated by technological advancements, other risks are of a more structural nature and cannot be fully corrected despite best efforts. The paper then lists a very worrying panoply of such structural issues (at 16):

  • ‘This is the case for detecting and removing biases in training data and model outputs. Efforts to sanitize datasets can even worsen biases’.

  • ‘Related to biases is the risk of a perpetuation of the status quo. LLMs mirror the values, habits and attitudes that are present in their training data, which does not leave much space for changing or underrepresented societal views. Relying on LLMs that have been trained with previously produced documents in a public administration severely limits the scope for improvement and innovation and risks leaving the public sector even less flexible than it is already perceived to be’.

  • ‘The ‘black box’ issue, where AI models arrive at conclusions or decisions without revealing the process of how they were reached is also primarily structural’.

  • ‘Regulating new technologies will remain a cat-and-mouse game. Acceleration risk (the emergence of a race to deploy new AI as quickly as possible at the expense of safety standards) is also an area of concern’.

  • ‘Finally […] a major structural risk lies in overreliance, which may be bolstered by rapid technological advances. This could lead to a lack of critical thinking skills needed to adequately assess and oversee the model’s output, especially amongst a younger generation entering a workforce where such models are already being used’.

In my view, beyond the paper’s suggestion that the way forward is to maintain human involvement to monitor the way LLMs (mal)function in the public sector, we should be discussing the imposition of a ban on the adoption of LLMs (and other digital technologies) by the public sector unless it can be positively proven that their deployment will not affect individual rights and more diffuse public interests, and that any residual risks are adequately mitigated.

The current state of affairs is unacceptable in that the lack of regulation allows for a quickly accelerating accumulation of digital deployments that generate risks to social and individual rights and goods. The need to reverse this situation underlies my proposal to permission the adoption of digital technologies by the public sector. Unless we take a robust approach to slowing down and carefully considering the implications of public sector digitalisation, we may be undermining public governance in ways that will be very difficult or impossible to undo. It is not too late, but it may be soon.


Free registration open for two events on procurement and artificial intelligence

Registration is now open for two free events on procurement and artificial intelligence (AI).

First, a webinar where I will be participating in discussions on the role of procurement in contributing to the public sector’s acquisition of trustworthy AI, and the associated challenges, from an EU and US perspective.

Second, a public lecture where I will present the findings of my research project on digital technologies and public procurement.

Please scroll down for details and links to registration pages. All welcome!

1. ‘Can Procurement Be Used to Effectively Regulate AI?’ | Free online webinar
30 May 2023 2pm BST / 3pm CET-SAST / 9am EST (90 mins)
Co-organised by University of Bristol Law School and George Washington University Law School.

Artificial Intelligence (“AI”) regulation and governance is a global challenge that is starting to generate different responses in the EU, US, and other jurisdictions. Such responses are, however, rather tentative and politically contested. A full regulatory system will take time to crystallise and be fully operational. In the meantime, despite this regulatory gap, the public sector is quickly adopting AI solutions for a wide range of activities and public services.

This process of accelerated AI adoption by the public sector places procurement as the (involuntary) gatekeeper, tasked with ‘AI regulation by contract’, at least for now. The procurement function is expected to design tender procedures and contracts capable of attaining goals of AI regulation (such as trustworthiness, explainability, or compliance with data protection and human and fundamental rights) that are so far eluding more general regulation.

This webinar will provide an opportunity to take a hard look at the likely effectiveness of AI regulation by contract through procurement and its implications for the commercialisation of public governance, focusing on key issues such as:

  • The interaction between tender design, technical standards, and negotiations.

  • The challenges of designing, monitoring, and enforcing contractual clauses capable of delivering effective ‘regulation by contract’ in the AI space.

  • The tension between the commercial value of tailored contractual design and the regulatory value of default clauses and standard terms.

  • The role of procurement disputes and litigation in shaping AI regulation by contract.

  • The alternative regulatory option of establishing mandatory prior approval by an independent regulator of projects involving AI adoption by the public sector.

This webinar will be of interest to those working on or researching the digitalisation of the public sector and AI regulation in general, as the discussion around procurement gatekeeping mirrors the main issues arising from broader trends.

I will have the great opportunity of discussing my research with Aris Georgopoulos (Nottingham), Scott Simpson (Digital Transformation Lead at U.S. Department of Homeland Security), and Liz Chirico (Acquisition Innovation Lead at Office of the Deputy Assistant Secretary of the Army). Jessica Tillipman (GW Law) will moderate the discussion and Q&A.

Registration: https://law-gwu-edu.zoom.us/webinar/register/WN_w_V9s_liSiKrLX9N-krrWQ.

2. ‘AI in the public sector: can procurement promote trustworthy AI and avoid commercial capture?’ | Free in-person public lecture
4 July 2023 2pm BST, Reception Room, Wills Memorial Building, University of Bristol
Organised by University of Bristol Law School, Centre for Global Law and Innovation

The public sector is quickly adopting artificial intelligence (AI) to manage its interactions with citizens and in the provision of public services – for example, using chatbots in official websites, automated processes and call-centres, or predictive algorithms.

There are inherent high stakes risks to this process of public governance digitalisation, such as bias and discrimination, unethical deployment, data and privacy risks, cyber security risks, or risks of technological debt and dependency on proprietary solutions developed by (big) tech companies.

However, as part of the UK Government’s ‘light touch’ ‘pro-innovation’ approach to digital technology regulation, the adoption of AI in the public sector remains largely unregulated. 

In this public lecture, I will present the findings of my research funded by the British Academy, analysing how, in this deregulatory context, the existing rules on public procurement fall short of protecting the public interest.

An alternative approach is required to create mechanisms of external independent oversight and mandatory standards to embed trustworthy AI requirements and to mitigate against commercial capture in the acquisition of AI solutions. 

Registration: https://www.eventbrite.co.uk/e/can-procurement-promote-trustworthy-ai-and-avoid-commercial-capture-tickets-601212712407.

External oversight and mandatory requirements for public sector digital technology adoption

© Mateo Mulder-Graells (2023).

I thought the time would never come, but the last piece of my book project puzzle is now more or less in place. After finding that procurement is not the right regulatory actor and does not have the best tools of ‘digital regulation by contract’, in this last draft chapter, I explore how to discharge procurement of the assigned digital regulation role to increase the likelihood of effective enforcement of desirable goals of public sector digital regulation.

I argue that this should be done through two inter-related regulatory interventions: developing (1) a regulator tasked with the external oversight of the adoption of digital technologies by the public sector, and (2) a suite of mandatory requirements binding both public entities seeking to adopt digital technologies and technology providers, covering both the digital technologies to be adopted by the public sector and the applicable governance framework.

Detailed analysis of these issues would require much more extensive treatment than this draft chapter can offer. The modest goal here is simply to stress the key attributes and functions that each of these two regulatory interventions should have to make a positive contribution to governing the transition towards a new model of public digital governance. In this blog post, I summarise the main arguments.

As ever, I would be most grateful for feedback: a.sanchez-graells@bristol.ac.uk. Especially as I will now turn my attention to seeing how the different pieces of the puzzle fit together, while I edit the manuscript for submission before end of July 2023.

Institutional deficit and risk of capture

In the absence of an alternative institutional architecture (or while it is put in place), procurement is expected to develop a regulatory gatekeeping role in relation to the adoption of digital technologies by the public sector, which is in turn expected to have norm-setting and market-shaping effects across the economy. This could be seen as a way of bypassing or postponing decisions on regulatory architecture.

However, earlier analysis has shown that the procurement function is not the right institution to which to assign a digital regulation role, as it cannot effectively discharge such a duty. This highlights the existence of an institutional deficit in the process of public sector digitalisation, as well as in relation to digital technology regulation more broadly. An alternative approach to institutional design is required, and it can be delivered through the creation of a notional ‘AI in Public Sector Authority’ (AIPSA).

Earlier analysis has also shown that there are pervasive risks of regulatory capture and commercial determination of the process of public sector digitalisation stemming from reliance on standards and benchmarks created by technology vendors or by bodies heavily influenced by the tech industry. AIPSA could safeguard against such risk through controls over the process of standard adoption. AIPSA could also guard against excessive experimentation with digital technologies by creating robust controls to counteract their policy irresistibility.

Overcoming the institutional deficit through AIPSA

The adoption of digital technologies in the process of public sector digitalisation creates regulatory challenges that require external oversight, as procurement is unable to effectively regulate this process. A particularly relevant issue concerns whether such oversight should be entrusted to a new regulator (broad approach), or whether it would suffice to assign new regulatory tasks to existing regulators (narrow approach).

I submit that the narrow approach is inadequate because it perpetuates regulatory fragmentation and can lead to undesirable spillovers or knock-on effects, whether the new regulatory tasks are assigned to data protection authorities, to (quasi)regulators with a ‘sufficiently close’ regulatory remit in relation to information and communications technologies (ICT) (such as eg the Agency for Digital Italy (AgID) or the Dutch Advisory Council on IT assessment (AcICT)), or to newly created centres of expertise in algorithmic regulation (eg the French PEReN). Such an ‘organic’ or ‘incremental’ approach to institutional development could overshadow important design considerations, as well as embed biases stemming from the institutional drivers of the existing (quasi)regulators.

To avoid these issues, I advocate a broader or more joined up approach in the proposal for AIPSA. AIPSA would be an independent authority with the statutory function of promoting overarching goals of digital regulation, and specifically tasked with regulating the adoption and use of digital technologies by the public sector, whether through in-house development or procurement from technology providers. AIPSA would also absorb regulatory functions in cognate areas, such as the governance of public sector data, and integrate work in areas such as cyber security. It would also serve a coordinating function with the data protection authority.

In the draft chapter, I stress three fundamental aspects of AIPSA’s institutional design: regulatory coherence, independence and expertise. Independence and expertise would be the two most crucial factors. AIPSA would need to be designed in a way that ensured both political and industry independence, with the issue of political independence having particular salience and requiring countervailing accountability mechanisms. Relatedly, the importance of digital capabilities to effectively exercise a digital regulation role cannot be overemphasised. It is not only important in relation to the active aspects of the regulatory role—such as control of standard setting or permissioning or licencing of digital technology use (below)—but also in relation to the passive aspects of the regulatory role and, in particular, in relation to reactive engagement with industry. High levels of digital capability would be essential to allow AIPSA to effectively scrutinise claims from those that sought to influence its operation and decision-making, as well as reduce AIPSA’s dependence on industry-provided information.

Safeguarding against regulatory capture and policy irresistibility

Regulating the adoption of digital technologies in the process of public sector digitalisation requires establishing the substantive requirements that such technology needs to meet, as well as the governance requirements needed to ensure its proper use. AIPSA’s role in setting mandatory requirements for public sector digitalisation would be twofold.

First, through an approval or certification mechanism, it would control the process of standardisation to neutralise risks of regulatory capture and commercial determination. Where no standards were susceptible of approval or certification, AIPSA would develop them.

Second, through a permissioning or licencing process, AIPSA would ensure that decisions on the adoption of digital technologies by the public sector are not driven by ‘policy irresistibility’, that they are supported by clear governance structures and draw on sufficient resources, and that adherence to the goals of digital regulation is sustained throughout the implementation and use of digital technologies by the public sector and subject to proactive transparency requirements.

The draft chapter provides more details on both issues.

If not AIPSA … then clearly not procurement

There can be many objections to the proposals developed in this draft chapter, which would still require further development. However, most of these objections would likely also apply to the use of procurement as a tool of digital regulation. The functions expected of AIPSA closely match those expected of the procurement function under the ‘digital regulation by contract’ approach. Challenges to AIPSA’s ability to discharge such functions would apply equally to any public buyer seeking to achieve the same goals. Likewise, challenges to AIPSA’s independence or accountability would apply equally to atomised decision-making by public buyers.

While the proposal is necessarily imperfect, I submit that it would improve upon the emerging status quo and that, in discharging procurement of the digital regulation role, it would make a positive contribution to the governance of the transition to a new model of digital public governance.

The draft chapter is available via SSRN: Albert Sanchez-Graells, ‘Discharging procurement of the digital regulation role: external oversight and mandatory requirements for public sector digital technology adoption’.