G7 Guiding Principles and Code of Conduct on Artificial Intelligence -- some comments from a UK perspective

On 30 October 2023, G7 leaders published the Hiroshima Process International Guiding Principles for Advanced AI system (the G7 AI Principles), a non-exhaustive list of guiding principles formulated as a living document that builds on the OECD AI Principles to take account of recent developments in advanced AI systems. The G7 stresses that these principles should apply to all AI actors, when and as applicable, to cover the design, development, deployment and use of advanced AI systems.

The G7 AI Principles are supported by a voluntary Code of Conduct for Advanced AI Systems (the G7 AI Code of Conduct), which is meant to provide guidance to help seize the benefits and address the risks and challenges brought by these technologies.

The G7 AI Principles and Code of Conduct came just two days before the start of the UK’s AI Safety Summit 2023. Given that the UK is part of the G7 and has endorsed the G7 Hiroshima Process and its outcomes, the interaction between the G7’s documents, the UK Government’s March 2023 ‘pro-innovation’ approach to AI and its aspirations for the AI Safety Summit deserves some comment.

G7 AI Principles and Code of Conduct

The G7 AI Principles aim ‘to promote safe, secure, and trustworthy AI worldwide and will provide guidance for organizations developing and using the most advanced AI systems, including the most advanced foundation models and generative AI systems.’ The principles are meant to be cross-cutting, as they target ‘among others, entities from academia, civil society, the private sector, and the public sector.’ Importantly, also, the G7 AI Principles are meant to be a stopgap solution, as G7 leaders ‘call on organizations in consultation with other relevant stakeholders to follow these [principles], in line with a risk-based approach, while governments develop more enduring and/or detailed governance and regulatory approaches.’

The principles include the reminder that ‘[w]hile harnessing the opportunities of innovation, organizations should respect the rule of law, human rights, due process, diversity, fairness and non-discrimination, democracy, and human-centricity, in the design, development and deployment of advanced AI system’, as well as a reminder that organizations developing and deploying AI should not undermine democratic values, harm individuals or communities, ‘facilitate terrorism, enable criminal misuse, or pose substantial risks to safety, security, and human rights’. States (as AI users) are reminded of their ‘obligations under international human rights law to promote that human rights are fully respected and protected’, and private sector actors are called on to align their activities ‘with international frameworks such as the United Nations Guiding Principles on Business and Human Rights and the OECD Guidelines for Multinational Enterprises’.

These are all very high level declarations and aspirations that do not go much beyond pre-existing commitments and (soft) law norms, if at all.

The G7 AI Principles comprise a non-exhaustive list of 11 high-level regulatory goals that organizations should abide by ‘commensurate to the risks’—ie following the already mentioned risk-based approach—which introduces a first element of uncertainty, because the document does not establish any methodology or explanation of how risks should be assessed and tiered (one of the primary, and debated, features of the proposed EU AI Act). The principles are the following, prefaced by my own labels in square brackets:

  1. [risk identification, evaluation and mitigation] Take appropriate measures throughout the development of advanced AI systems, including prior to and throughout their deployment and placement on the market, to identify, evaluate, and mitigate risks across the AI lifecycle;

  2. [misuse monitoring] Identify and mitigate vulnerabilities, and, where appropriate, incidents and patterns of misuse, after deployment including placement on the market;

  3. [transparency and accountability] Publicly report advanced AI systems’ capabilities, limitations and domains of appropriate and inappropriate use, to support ensuring sufficient transparency, thereby contributing to increase accountability.

  4. [incident intelligence exchange] Work towards responsible information sharing and reporting of incidents among organizations developing advanced AI systems including with industry, governments, civil society, and academia.

  5. [risk management governance] Develop, implement and disclose AI governance and risk management policies, grounded in a risk-based approach – including privacy policies, and mitigation measures, in particular for organizations developing advanced AI systems.

  6. [(cyber) security] Invest in and implement robust security controls, including physical security, cybersecurity and insider threat safeguards across the AI lifecycle.

  7. [content authentication and watermarking] Develop and deploy reliable content authentication and provenance mechanisms, where technically feasible, such as watermarking or other techniques to enable users to identify AI-generated content.

  8. [risk mitigation priority] Prioritize research to mitigate societal, safety and security risks and prioritize investment in effective mitigation measures.

  9. [grand challenges priority] Prioritize the development of advanced AI systems to address the world’s greatest challenges, notably but not limited to the climate crisis, global health and education.

  10. [technical standardisation] Advance the development of and, where appropriate, adoption of international technical standards.

  11. [personal data and IP safeguards] Implement appropriate data input measures and protections for personal data and intellectual property.

Each of the principles is accompanied by additional guidance or precision, where possible, and this is further developed in the G7 Code of Conduct.
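
To give a flavour of what principle 7’s reference to ‘content authentication and provenance mechanisms’ could involve at its simplest, the sketch below attaches a signed manifest (a hash of the generated content plus metadata) that lets a user check that a piece of content matches what a declared AI system produced. This is purely my own illustrative assumption, not a mechanism described in the G7 documents, and real-world schemes (such as statistical watermarking of model outputs) are considerably more involved.

```python
# Minimal, hypothetical sketch of a content provenance mechanism: the provider
# publishes a signed manifest alongside generated content; a user can later
# verify that the content is unchanged and attributable to the declared model.
import hashlib
import hmac
import json

SIGNING_KEY = b"provider-held secret key"  # illustrative assumption


def make_provenance_manifest(content: bytes, model_id: str) -> dict:
    digest = hashlib.sha256(content).hexdigest()
    payload = json.dumps({"model_id": model_id, "sha256": digest}, sort_keys=True)
    signature = hmac.new(SIGNING_KEY, payload.encode(), hashlib.sha256).hexdigest()
    return {"payload": payload, "signature": signature}


def verify_manifest(content: bytes, manifest: dict) -> bool:
    # Check the signature first, then that the content hash matches the manifest.
    expected_sig = hmac.new(
        SIGNING_KEY, manifest["payload"].encode(), hashlib.sha256
    ).hexdigest()
    sig_ok = hmac.compare_digest(manifest["signature"], expected_sig)
    declared = json.loads(manifest["payload"])
    return sig_ok and declared["sha256"] == hashlib.sha256(content).hexdigest()


manifest = make_provenance_manifest(b"AI-generated text", "model-x-1.0")
print(verify_manifest(b"AI-generated text", manifest))  # True
print(verify_manifest(b"edited text", manifest))        # False
```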

In my view, the list is a bit of a mixed bag.

There are some very general aspirations or steers that can hardly be considered principles of AI regulation, for example principle 9 setting a grand challenges priority and, possibly, principle 8 setting a risk mitigation priority beyond the ‘requirements’ of principle 1 on risk identification, evaluation and mitigation—which thus seems to boil down to the more specific steer in the G7 Code of Conduct for (private) organisations to ‘share research and best practices on risk mitigation’.

Quite how these principles could be complied with by current major AI developers seems rather difficult to foresee, especially in relation to principle 9. Most developers of generative AI or other AI applications linked to eg social media platforms will have a hard time demonstrating their engagement with this principle, unless we accept a general justification of ‘general purpose application’ or ‘dual use application’—which to me seems quite unpalatable. What is the purpose of this principle if eg it pushes organisations away from engaging with the rest of the G7 AI Principles? Or if organisations are allowed to gloss over it in any (future) disclosures linked to an eventual mechanism of commitment, chartering, or labelling associated with the principles? It seems like the sort of purely political aspiration that may have been better left aside.

Some other principles seem to push at an open door, such as principle 10 on the development of international technical standards. Again, the only meaningful detail seems to be in the G7 Code of Conduct, which specifies that ‘In particular, organizations also are encouraged to work to develop interoperable international technical standards and frameworks to help users distinguish content generated by AI from non-AI generated content.’ However, this is closely linked to principle 7 on content authentication and watermarking, so it is not clear how much it adds. Moreover, this further embeds the role of industry-led technical standards as a foundational element of AI regulation, with all the potential problems that arise from it (for some discussion from the perspective of regulatory tunnelling, see here and here).

Yet other principles present as relatively soft requirements or ‘noble’ commitments issues that are, in reality, legal requirements already binding on entities and States and that, in my view, should have been framed as hard obligations, accompanied by a renewed commitment from G7 States to enforce them. These include principle 11 on personal data and IP safeguards, where the G7 Code of Conduct includes as an apparent afterthought that ‘Organizations should also comply with applicable legal frameworks’. In my view, this should be the starting point.

This reduces the list of AI Principles ‘proper’. But, even then, they can be further grouped and synthesised, in my view. For example, principles 1 and 5 are both about risk management, with the (outward-looking) governance layer of principle 5 seeking to give transparency to the (inward-looking) governance layer in principle 1. Principle 2 seems to simply seek to extend the need to engage with risk-based management post-market placement, which is also closely connected to the (inward-looking) governance layer in principle 1. All of them focus on the (undefined) risk-based approach to development and deployment of AI underpinning the G7’s AI Principles and Code of Conduct.

Some aspects of the incident intelligence exchange (principle 4) also relate to principle 1, while other aspects relate to the (cyber) security issues encapsulated in principle 6. However, given that this principle may be a placeholder for the development of specific mechanisms of collaboration—whether based on cyber security collaboration or on other approaches, such as the much-touted approach of the aviation industry—it may be treated separately.

Perhaps, then, the ‘core’ AI Principles arising from the G7 document could be trimmed down to:

  • Life-cycle risk-based management and governance, inclusive of principles 1, 2, and 5.

  • Transparency and accountability, principle 3.

  • Incident intelligence exchange, principle 4.

  • (Cyber) security, principle 6.

  • Content authentication and watermarking, principle 7 (though perhaps narrowly targeted to generative AI).

Most of the value in the G7 AI Principles and Code of Conduct thus arises from the pointers for collaboration, the more detailed self-regulatory measures, and the more specific potential commitments included in the latter. For example, in relation to the potential AI risks that are identified as potential targets for the risk assessments expected of AI developers (under guidance related to principle 1), or the desirable content of AI-related disclosures (under guidance related to principle 3).

It is however unclear how these principles will evolve when adopted at the national level, and to what extent they offer a sufficient blueprint to ensure international coherence in the development of the ‘more enduring and/or detailed governance and regulatory approaches’ envisaged by G7 leaders. It seems for example striking that both the EU and the UK have supported these principles, given that they have relatively opposing approaches to AI regulation—with the EU seeking to finalise the legislative negotiations on the first ‘golden standard’ of AI regulation and the UK taking an entirely deregulatory approach. Perhaps this is in itself an indication that, even at the level of detail achieved in the G7 AI Code of Conduct, the regulatory leeway is quite broad and still necessitates significant further concretisation for it to be meaningful in operational terms—as evidenced eg by the US President’s ‘Executive Order on Safe, Secure, and Trustworthy Artificial Intelligence’, which calls for that concretisation and provides a good example of the many areas for detailed work required to translate high level principles into actionable requirements (even if it leaves enforcement still undefined).

How do the G7 Principles compare to the UK’s ‘pro-innovation’ ones?

In March 2023, the UK Government published its white paper ‘A pro-innovation approach to AI regulation’ (the ‘UK AI White Paper’; for a critique, see here). The UK AI White Paper indicated (at para 10) that its ‘framework is underpinned by five principles to guide and inform the responsible development and use of AI in all sectors of the economy:

  • Safety, security and robustness

  • Appropriate transparency and explainability

  • Fairness

  • Accountability and governance

  • Contestability and redress’.

A comparison of the UK and the G7 principles can show a few things.

First, that there are some areas where there seems to be a clear correlation—in particular concerning (cyber) security as a self-standing challenge requiring a direct regulatory focus.

Second, that it is hard to decide at which level to place incommensurable aspects of AI regulation. Notably, the G7 principles do not directly refer to fairness, while the UK’s do. However, the G7 Principles do spend some time in the preamble addressing the issue of fairness and unacceptable AI use (though in a woolly manner). Whether placing this type of ‘requirement’ at one level or another makes any difference is highly debatable.

Third, that there are different ways of ‘packaging’ principles or (soft) obligations. Just like some of the G7 principles are closely connected or fold into each other (as above), so do the UK’s principles in relation to the G7’s. For example, the G7 packaged together transparency and accountability (principle 3), while the UK had them separated. While the UK explicitly mentioned the issue of AI explainability, this remains implicit in the G7 principles (also in principle 3).

Finally, in line with the considerations above, that distinct regulatory approaches only emerge or become clear once the ‘principles’ become specific (so that they arguably stop being principles). For example, it seems clear that the G7 Principles aspire to higher levels of incident intelligence governance, and to a more specific target of generative AI watermarking, than the UK’s. However, whether the G7 or the UK principles are equally or more demanding on any other dimension of AI regulation is close to impossible to establish. In my view, this further supports the need for a much more detailed AI regulatory framework—else, technical standards will entirely occupy that regulatory space.

What do the G7 AI Principles tell us about the UK’s AI Safety Summit?

The Hiroshima Process that has led to the adoption of the G7 AI Principles and Code of Conduct emerged from the Ministerial Declaration of The G7 Digital and Tech Ministers’ Meeting of 30 April 2023, which explicitly stated that:

‘Given that generative AI technologies are increasingly prominent across countries and sectors, we recognise the need to take stock in the near term of the opportunities and challenges of these technologies and to continue promoting safety and trust as these technologies develop. We plan to convene future G7 discussions on generative AI which could include topics such as governance, how to safeguard intellectual property rights including copyright, promote transparency, address disinformation, including foreign information manipulation, and how to responsibly utilise these technologies’ (at para 47).

The UK Government’s ambitions for the AI Safety Summit largely focus on those same issues, albeit within the very narrow confines of ‘frontier AI’, which it has defined as ‘highly capable general-purpose AI models that can perform a wide variety of tasks and match or exceed the capabilities present in today’s most advanced models’. While the UK Government has published specific reports to focus discussion on (1) Capabilities and risks from frontier AI and (2) Emerging Processes for Frontier AI Safety, it is unclear how the level of detail of such a narrow approach could translate into broader international commitments.

The G7 AI Principles already claim to tackle ‘the most advanced AI systems, including the most advanced foundation models and generative AI systems (henceforth "advanced AI systems")’ within their scope. It seems unlikely that such an approach was based on a lack of knowledge or understanding of the detail the UK has condensed in those reports. Rather, it seems that the G7 was not ready to move quickly to a level of detail beyond that included in the G7 AI Code of Conduct. It is hard to see how significant further developments could be expected beyond the G7 AI Principles and Code of Conduct just two days after they were published.

Moreover, although the UK Government is downplaying the fact that eg Chinese participation in the AI Safety Summit is unclear and potentially rather marginal, it seems that, at best, the UK AI Safety Summit will be an opportunity for a continued conversation between G7 countries and a few others. It is also unclear whether significant progress will be made in a forum that seems rather clearly tilted towards industry voice and influence.

Let’s wait and see what the outcomes are, but I am not optimistic for significant progress other than, worryingly, a risk of further displacement of regulatory decision-making towards industry and industry-led (future) standards.

More model contractual AI clauses -- some comments on the SCL AI Clauses

Following the launch of the final version of the model contractual AI clauses sponsored by the European Commission earlier this month, the topic of how to develop and how to use contractual model clauses for AI procurement is getting hotter. As part of its AI Action Plan, New York City has announced that it is starting work to develop its own model clauses for AI procurement (to be completed in 2025). We can expect to see a proliferation of model AI clauses as more ‘AI legislation’ imposes constraints on contractual freedom and compliance obligations, and as different model clauses are revised to (hopefully) capture the learning from current experimentation in AI procurement.

Although not (closely) focused on procurement, a new set of interesting AI contractual clauses has been released by the Society for Computers & Law (SCL) AI Group (thanks to Gisele Waters for bringing them to my attention on LinkedIn!). In this post, I reflect on some aspects of the SCL AI clauses and try to answer Gisele’s question/challenge (below).

SCL AI Clauses

The SCL AI clauses have a clear commercial orientation and are meant as a starting point for supplier-customer negotiations, which is reflected in the fact that the proposed clauses contain two options: (1) a ‘pro-supplier’ drafting based on off-the-shelf provision, and (2) a ‘pro-customer’ drafting based on a bespoke arrangement. Following that commercial logic, most of the SCL AI clauses focus on the allocation of obligations (and thus costs and liability) between the parties (eg in relation to compliance with legal requirements).

The clauses include a few substantive requirements implicit in the allocation of the respective obligations (eg on data or third party licences), but mostly refer to detailed schedules for which there is no default proposal, or to industry standards (and thus share this limitation with eg the EU’s model AI clauses). The SCL AI clauses do contain some drafting notes that would help identify issues needing specific regulation in the relevant schedules, although this guidance necessarily remains rather abstract or generic.

This pro-supplier/pro-customer orientation prompted Gisele’s question/challenge: whether ‘there is EVER an opportunity for government (customer-buyer) to be better able to negotiate the final language with clauses like these in order to weigh the trade offs between interests?’, especially bearing in mind that the outcome of the negotiations could be strongly pro-supplier, strongly pro-customer, balanced, or anything in between. I think that answering this question requires exploring what pro-supplier or pro-customer may mean in this specific context.

From a substantive regulation perspective, the SCL AI clauses include a few interesting elements, such as an obligation to establish a circuit-breaker capable of stopping the AI (aka an ‘off button’) and a roll-back obligation (to an earlier, non-faulty version of the AI solution) where the AI is malfunctioning or this is necessary to comply with applicable law. However, most of the substantive obligations are established by reference to ‘Good Industry Practice’, which requires some further unpacking.
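
Before unpacking that benchmark, it may help to visualise the circuit-breaker and roll-back obligations just mentioned. The minimal sketch below is my own hypothetical illustration of what they could look like at the software level; the class and method names are assumptions of mine and are not drawn from the SCL AI clauses.

```python
# Hypothetical sketch of a deployment wrapper with an 'off button' (circuit
# breaker) and a roll-back to an earlier, presumed non-faulty, model version.
class ManagedAISystem:
    def __init__(self, versions):
        # versions: ordered list of deployable model versions (oldest first)
        self.versions = list(versions)
        self.active = self.versions[-1]
        self.enabled = True

    def circuit_break(self, reason: str):
        """'Off button': immediately stop the system from producing outputs."""
        self.enabled = False
        print(f"AI system disabled: {reason}")

    def roll_back(self):
        """Revert to the previous version and re-enable the system."""
        if len(self.versions) > 1:
            self.versions.pop()
            self.active = self.versions[-1]
        self.enabled = True

    def predict(self, x):
        if not self.enabled:
            raise RuntimeError("AI system is disabled by circuit breaker")
        return self.active(x)


# Usage: disable on malfunction, then roll back to the prior version.
system = ManagedAISystem(versions=[lambda x: x * 1.0, lambda x: x * 2.0])
system.circuit_break("malfunction detected")
system.roll_back()
print(system.predict(3))  # now served by the earlier version
```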

SCL AI Clauses and ‘Good Industry Practice’

Most of the crucial proposed clauses refer to the benchmark of ‘Good Industry Practice’ as a primary qualifier for the relevant obligations. The proposed clause on explainability is a good example. The SCL AI clause (C1.15) reads as follows:

C1.15 The Supplier will ensure that the AI System is designed, developed and tested in a way which ensures that its operation is sufficiently transparent to enable the Customer to understand and use the AI System appropriately. In particular, the Supplier will produce to the Customer, on request, information which allows the Customer to understand:

C1.15.1 the logic behind an individual output from the AI System; and

C1.15.2 in respect of the AI System or any specific part thereof, which features contributed most to the output of the AI System, in each case, in accordance with Good Industry Practice.

A first observation is that the SCL AI clauses seem to presume that off-the-shelf AI solutions would not be (necessarily) explainable, as they include no clause under the ‘pro-supplier’ version.

Second, the ‘pro-customer’ version limits the types of explanation that would be contractually owed: a model-level or global explanation under C1.15.2 and a limited decision-level or local explanation under C1.15.1, which leaves out eg a counterfactual explanation and sets no specific requirements on how the explanation is to be produced (eg is a ‘post hoc’ explanation acceptable and, if so, how should it be generated?). It also qualifies the obligation in two important ways: (1) the overall requirement is that the AI system’s operation should be ‘sufficiently transparent’, with ‘sufficient’ creating a lot of potential issues here; and (2) the reference to ‘Good Industry Practice’ [more on this below].
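
To make the distinction underlying C1.15 more tangible, the sketch below contrasts a global (model-level) explanation of which features contribute most (C1.15.2) with a local (decision-level) account of the logic behind one output (C1.15.1), using a toy scikit-learn dataset and a simple linear model. These are my own illustrative assumptions; the clause does not prescribe any particular method, which is precisely part of the problem.

```python
# Minimal sketch, under assumed toy data and a simple linear model, of the two
# explanation types contemplated in C1.15: global vs local explanations.
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.inspection import permutation_importance

X, y = make_classification(n_samples=500, n_features=5, random_state=0)
model = LogisticRegression().fit(X, y)

# Global explanation (cf. C1.15.2): which features contribute most to the
# model's behaviour overall, estimated here via permutation importance.
global_imp = permutation_importance(model, X, y, n_repeats=10, random_state=0)
print("Global feature importances:", global_imp.importances_mean.round(3))

# Local explanation (cf. C1.15.1): the 'logic behind an individual output',
# approximated here by each feature's signed contribution (coefficient x value)
# to the model's log-odds for a single instance.
instance = X[0]
local_contrib = model.coef_[0] * instance
print("Local contributions for instance 0:", local_contrib.round(3))
```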

The issue of transparency is similarly problematic in its more general treatment under another specific clause (C4.6), which also only has a ‘pro-customer’ version:

C4.6 The Supplier warrants that, so far as is possible [to achieve the intended use of the AI System / comply with the Specification], the AI System is transparent and interpretable [such that its output can be traced back to the input data].

The qualifier ‘so far as is possible’ is again potentially quite problematic here, as are the open-ended references to transparency and interpretability of the system (with a potential conflict between interpretability for the purposes of this clause and explainability under C1.15).

What I find interesting about this clause is that the drafting notes explain that:

… the purpose of this provision is to ensure that the Supplier has not used an overly-complex algorithm if this is unnecessary for the intended use of the AI System or to comply with its Specification. That said, effectiveness and accuracy are often trade-offs for transparency in AI models.

From this perspective, I think the clause should be retitled and entirely redrafted to make explicit that its purpose is to establish a principle of ‘AI minimisation’, in the sense of the supplier guaranteeing that the AI system is the least complex system that can provide the desired functionality. This, of course, leaves the tricky issues of the trade-off, and of establishing the desired functionality in the first place, to be worked around (and, in a procurement context, these would have been dealt with pre-contract, eg in technical specifications and/or tender evaluation). Interestingly, this is another issue where reference could be made to ‘Good Industry Practice’, if one accepted that it should be best practice to always use the most explainable/interpretable and simplest model available for a given task.

As mentioned, reference to ‘Good Industry Practice’ is used extensively in the SCL AI clauses, including in relation to crucial issues such as: explainability (above), user manual/user training, preventing unlawful discrimination, security (which is inclusive of cybersecurity and some aspects of data protection/privacy), or quality standards. The drafting notes are clear that

… while parties often refer to ‘best practice’ or ‘good industry practice’, these standards can be difficult to apply in developing industry. Accordingly a clear Specification is required, …

Which is the reason why the SCL AI clauses foresee that ‘Good Industry Practice’ will be a defined contract term, whereby the parties will specify the relevant requirements and obligations. And here lies the catch.

Defining ‘Good Industry Practice’?

In the SCL AI clauses, all references to ‘Good Industry Practice’ are used as qualifiers in the pro-customer version of the clauses. It is possible that the same term would be of relevance to establishing whether the supplier had discharged its reasonable duties/best efforts under the pro-supplier version (where the term would be defined but not explicitly used). In both cases, the need to define ‘Good Industry Practice’ is the Achilles heel of the model clauses, as well as a potential Trojan horse for customers seeking a seemingly pro-customer contractual design.

The fact is that the extent of the substantive obligations arising from the contract will entirely depend on how the concept of ‘Good Industry Practice’ is defined and specified. This leaves even seemingly strongly ‘pro-customer’ contracts exposed to weak substantive protections. The biggest challenge for buyers/procurers of AI will be that (1) it will be hard to know how to define the term and what standards to refer to, and (2) it will be difficult to monitor compliance with those standards, especially where they establish eg mechanisms of self-assessment by the tech supplier as the primary or sole quality control mechanism.

So, my answer to Gisele’s question/challenge would be that the SCL AI clauses, much like the EU’s, do not (and cannot?) go far enough in ensuring that the contract for the procurement/purchase of AI embeds adequate substantive requirements. The model clauses are helpful in understanding who needs to do what and when, and thus who shoulders the relevant cost and risk. But they do not address the all-important question of how it needs to be done. And that is the crucial issue that will determine whether the contract (and the AI solution) really is in the public buyer’s interest and, ultimately, in the public interest.

In a context where tech providers (almost always) have the upper hand in negotiations, this foundational weakness is all-important, as suppliers could well ‘agree to pro-customer drafting’ and then immediately deactivate it through the more challenging and technical definition (and implementation) of ‘Good Industry Practices’.

That is why I think we need to counter this regulatory tunnelling risk and this foundational shortcoming of ‘AI regulation by contract’ by creating clear and binding requirements on the how (ie the ‘good (industry) practice’ or technical standards). To me, the emergence of model AI contract clauses makes it clear that the most efficient contract design needs to refer to external benchmarks. Establishing adequate protections and an adequate balance of risks and benefits (from a social perspective) hinges on this. The contract can then deal with the apportionment of the burdens, obligations, costs and risks stemming from the already set requirements.

So I would suggest that the focus needs to be squarely on developing the regulatory architecture that will lead us to the development of such mandatory requirements and standards for the procurement and use of AI by the public sector — which may then become adequate good industry practice for strictly commercial or private contracts. My proposal in that regard is sketched out here.

Final EU model contractual AI Clauses available -- some thoughts on regulatory tunnelling

Source: https://tinyurl.com/mrx9sbz8.

The European Commission has published the final version of the EU model contractual AI clauses to pilot in procurements of AI, which have been ‘developed for pilot use in the procurement of AI with the aim to establish responsibilities for trustworthy, transparent, and accountable development of AI technologies between the supplier and the public organisation.’

The model AI clauses have been developed by reference to the (future) obligations arising from the EU AI Act currently under advanced stages of negotiation. This regulatory technique simply seeks to allow public buyers to ensure compliance with the EU AI Act by cascading the relevant obligations and requirements down to tech providers (largely on a back to back basis). By the same regulatory logic, this technique will be a conveyor belt for the shortcomings of the EU AI Act, which will be embedded in public contracts using the clauses. It is thus important to understand the shortcomings inherent to this approach and to the model AI clauses, before assuming that their use will actually ensure the ‘trustworthy, transparent, and accountable development [and deployment] of AI technologies’. Much more is needed than mere reliance on the model AI clauses.

Two sets of model AI clauses

The EU AI Act will not be applicable to all types of AI use. Remarkably, most requirements will be limited to ‘high-risk AI uses’ as defined in its Article 6. This immediately translates into the generation of two sets of model AI clauses: one for ‘high-risk’ AI procurement, which embeds the requirements expected to arise from the EU AI Act once finalised, and another ‘light version’ for non-high-risk AI procurement, which would support the voluntary extension of some of those requirements to the procurement of AI for other uses, or even to the use of other types of algorithmic solutions not meeting the regulatory definition of AI.

A first observation is that the controversy surrounding the definition of ‘high-risk’ in the EU AI Act immediately carries over to the model AI clauses and to the choice between the ‘demanding’ and light versions. While the original proposal of the EU AI Act contained a numerus clausus of high-risk uses (which was already arguably too limited, see here), the trilogue negotiations could well end up suppressing a pre-defined classification and leaving it to AI providers to (self-)assess whether a use would be ‘high-risk’.

This has been heavily criticised in a recent open letter. If the final version of the EU AI Act ended up embedding such a self-assessment of what uses are bound to be high-risk, there would be clear risks of gaming of the self-assessment to avoid compliance with the heightened obligations under the Act (and it is unclear that the system of oversight and potential fines foreseen in the EU AI Act would suffice to prevent this). This would directly translate into a risk of gaming (or strategic opportunism) in the choice between ‘demanding’ vs light version of the model AI clauses by public buyers as well.

As things stand today, it seems that most procurement of AI will be subject to the light version of the model AI clauses, where contracting authorities will need to decide which clauses to use and which standards to refer to. Importantly, the light version does not include default options in relation to quality management, conformity assessments, corrective actions, inscription in an AI register, or compliance and audit (some of which are also optional under the ‘demanding’ model). This means that, unless public buyers are familiar with both sets of model AI clauses, taking the light version as a starting point already generates a risk of under-inclusiveness and under-regulation.

Limitations in the model AI clauses

The model AI clauses come with some additional ‘caveat emptor’ warnings. As the Commission has stressed in the press release accompanying the model AI clauses:

The EU model contractual AI clauses contain provisions specific to AI Systems and on matters covered by the proposed AI Act, thus excluding other obligations or requirements that may arise under relevant applicable legislation such as the General Data Protection Regulation. Furthermore, these EU model contractual AI clauses do not comprise a full contractual arrangement. They need to be customized to each specific contractual context. For example, EU model contractual AI clauses do not contain any conditions concerning intellectual property, acceptance, payment, delivery times, applicable law or liability. The EU model contractual AI clauses are drafted in such a way that they can be attached as a schedule to an agreement in which such matters have already been laid down.

This is an important warning, as the sole remit of the model AI clauses links back to the EU AI Act and, in the case of the light version, only partially.

The link between model AI clauses and standards

However, the most significant shortcoming of the model AI clauses is that, by design, they do not include any substantive or material constraints or requirements on the development and use of AI. All substantive obligations are meant to be incorporated by reference to the (harmonised) standards to be developed under the EU AI Act, other sets of standards or, more generally, the state-of-the-art. Plainly, there is no definition or requirement in the model AI clauses that establishes the meaning of eg trustworthiness—and there is thus no baseline safety net ensuring it. Similarly, most requirements are offloaded to (yet to emerge) standards or the technical and organisational measures devised by the parties. For example,

  • Obligations on record-keeping (Art 5 high-risk model) refer to capabilities conforming ‘to state of the art and, if available, recognised standards or common specifications. <Optional: add, if available, a specific standard>’.

  • Measures to ensure transparency (Art 6 high-risk model) are highly qualified: ‘The Supplier ensures that the AI System has been and shall be designed and developed in such a way that the operation of the AI System is sufficiently transparent to enable the Public Organisation to reasonably understand the system’s functioning’. Moreover, the detail of the technical and organisational measures that need to be implemented to reach those (qualified) goals is left entirely undefined in the relevant Annex (E) — thus leaving the option open for referral to emerging transparency standards.

  • Measures on human oversight (Art 7 high-risk model) are also highly qualified: ‘The Supplier ensures that the AI System has been and shall be designed and developed in such a way, including with appropriate human-machine interface tools, that it can be effectively overseen by natural persons as proportionate to the risks associated with the system’. Although there is some useful description of what ‘human oversight’ should mean as a minimum (Art 7(2)), the detail of the technical and organisational measures that need to be implemented to reach those (qualified) goals is also left entirely undefined in the relevant Annex (F) — thus leaving the option open for referral to emerging ‘human on the loop’ standards.

  • Measures on accuracy, robustness and cybersecurity (Art 8 high-risk model) follow the same pattern. Annexes G and H on levels of accuracy and on measures to ensure an appropriate level of robustness, safety and cybersecurity are also blank. While there can be mandatory obligations stemming from other sources of EU law (eg the NIS 2 Directive), only partial aspects of cybersecurity will be covered, and not in all cases.

  • Measures on the ‘explainability’ of the AI (Art 13 high-risk model) fall short of imposing an absolute requirement of intelligibility of the AI outputs, as the focus is on a technical explanation, rather than a contextual or intuitive explanation.

All in all, the model AI clauses are primarily an empty regulatory shell. Operationalising them will require reliance on (harmonised) standards—eg on transparency, human oversight, accuracy or explainability—or, most likely (at least until such standards are in place), significant additional concretisation by the public buyer seeking to rely on the model AI clauses.
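
As a hypothetical illustration of the concretisation effort this implies, filling in something like the (currently blank) Annex G on levels of accuracy would require the public buyer to specify at least a metric, a threshold and an evaluation dataset, and then to verify the supplier’s system against them. The values and names in the sketch below are my own assumptions, not figures drawn from the model AI clauses.

```python
# Hypothetical sketch of a contractual acceptance test for an agreed level of
# accuracy (the kind of detail a public buyer would need to add to Annex G).
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

CONTRACTUAL_ACCURACY_THRESHOLD = 0.90  # assumed figure, for illustration only

# Stand-in for the evaluation dataset and supplier model the contract would name.
X, y = make_classification(n_samples=1000, random_state=1)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=1)
supplier_model = LogisticRegression(max_iter=1000).fit(X_train, y_train)

measured = accuracy_score(y_test, supplier_model.predict(X_test))
print(f"Measured accuracy: {measured:.3f}")
if measured < CONTRACTUAL_ACCURACY_THRESHOLD:
    print("Acceptance test failed: below the contractually agreed level of accuracy")
else:
    print("Acceptance test passed")
```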

For the reasons identified in my previous research, I think this is likely to generate regulatory tunnelling and to give the upper hand to AI providers in making sure they can comfortably live with requirements in any specific contract. The regulatory tunnelling stems from the fact that all meaningful requirements and constraints are offloaded to the (harmonised) standards to be developed. And it is no secret that the governance of the standardisation process falls well short of ensuring that the resulting standards will embed high levels of protection of the desired regulatory goals — some of which are very hard to define in ways that can be translated into procurement or contractual requirements anyway.

Moreover, public buyers with limited capabilities will struggle to use the model AI clauses in ways that meaningfully ‘establish responsibilities for trustworthy, transparent, and accountable development [and deployment] of AI technologies’—other than in relation to those standards. My intuition is that the content of the all too relevant schedules in the model AI clauses will either simply refer to emerging standards or, where there is no standard or the standard is for whatever reason considered inadequate, be left for negotiation with tech providers, or be made part of the evaluation (eg tenderers will be required to detail how they propose to address eg accuracy). Whichever way this goes, this puts the public buyer in the position of rule-taker.

Only very few, well-resourced, highly skilled public buyers (if any) would be able to meaningfully flesh out a comprehensive set of requirements in the relevant annexes to give the model AI clauses sufficient bite. And they would not benefit much from the model AI clauses, as it is likely that, given their sophistication, they would have already come up with similar solutions. Therefore, at best, the contribution of the model AI clauses is rather marginal and, at worst, it comes with a significant risk of regulatory complacency.

Final thoughts

Indeed, given all of this, it is clear that the model AI clauses generate a risk if (non-sophisticated/most) public buyers think that relying on them will deal with the many and complex challenges inherent to the acquisition of AI. And an even bigger risk if we collectively think that the existence of such model AI clauses is all the regulation of AI procurement we need. This is not a criticism of the clauses in themselves, but rather of the technique of ‘regulation by contract’ that underlies them and of the broader approach followed by the European Commission and other regulators (including the UK’s)!

I have demonstrated how this is a flawed regulatory strategy in my forthcoming book Digital Technologies and Public Procurement. Gatekeeping and Experimentation in Digital Public Governance (OUP) and in many working papers resulting from the project with the same title. In my view, we need to do a lot more if we want to make sure that the public sector only procures and uses trustworthy AI technologies. We need to create a regulatory system that assigns to an independent authority both the permissioning of the procurement of AI and the certification of the standards underpinning such procurement. In the absence of such regulatory developments, we cannot meaningfully claim that the procurement of AI will be in line with the values and goals to be expected from ‘responsible’ AI use.

I will further explore these issues in a public lecture on 23 November 2023 at University College London. All welcome: Hybrid | Responsibly Buying Artificial Intelligence: A Regulatory Hallucination? | UCL Faculty of Laws - UCL – University College London.

Source: https://public-buyers-community.ec.europa....

Some thoughts on the need to rethink the right to good administration in the digital context

Colleagues at The Digital Constitutionalist have put together a really thought-provoking symposium on ‘Safeguarding the Right to Good Administration in the Age of AI’. I had the pleasure of contributing my own views on the need to extend and broaden good administration guarantees in the context of AI-assisted decision-making. I thoroughly recommend reading all contributions to the symposium, as this is an area of likely development in the EU Administrative Law space.