Initial UK guidance on pro-innovation AI regulation: Much ado about nothing?

The UK Government’s Department for Science, Innovation and Technology (DSIT) has recently published its Initial Guidance for Regulators on Implementing the UK’s AI Regulatory Principles (Feb 2024) (the ‘AI guidance’). This follows from the Government’s response to the public consultation on its ‘pro-innovation approach’ to AI regulation (see here).

The AI guidance is meant to support regulators in developing tailored guidance for the implementation of the five principles underpinning the pro-innovation approach to AI regulation, that is: (i) Safety, security & robustness; (ii) Appropriate transparency and explainability; (iii) Fairness; (iv) Accountability and governance; and (v) Contestability and redress.

Voluntary approach and timeline for implementation

A first, perhaps surprising, element of the AI guidance comes from the way in which engagement with the principles by current regulators is framed as voluntary. The white paper describing the pro-innovation approach to AI regulation (the ‘AI white paper’) had indicated that, initially, ‘the principles will be issued on a non-statutory basis and implemented by existing regulators’, with a clear expectation for regulators to make use of their ‘domain-specific expertise to tailor the implementation of the principles to the specific context in which AI is used’.

The AI white paper made it clear that a failure by regulators to implement the principles would lead the government to introduce ‘a statutory duty on regulators requiring them to have due regard to the principles’, which would still ‘allow regulators the flexibility to exercise judgement when applying the principles in particular contexts, while also strengthening their mandate to implement them’. There seemed to be little room for regulators to decide whether to engage with the principles, even if they were expected to exercise discretion on how to implement them.

By contrast, the initial AI guidance indicates that it ‘is not intended to be a prescriptive guide on implementation as the principles are voluntary and how they are considered is ultimately at regulators’ discretion’. The response to the public consultation also makes clear that the introduction of a statutory duty is not on the immediate legislative horizon. Moreover, the absence of a pre-determined date for assessing whether the principles have been ‘sufficiently implemented’ on a voluntary basis (for example, in two years’ time) will make it very hard to press for such a legislative proposal (depending on the policy direction of the Government at the time).

This seems to follow from the Government’s position that ‘acknowledge[s] concerns from respondents that rushing the implementation of a duty to regard could cause disruption to responsible AI innovation. We will not rush to legislate’. At the same time, however, the response to the public consultation indicates that DSIT has asked a number of regulators to publish updates on their strategic approaches to AI by 30 April 2024. This creates an expectation that regulators will in fact engage—or have defined plans to engage—with the principles in the very short term. It is hard to fathom, though, how this does not amount to a ‘rush to implement’, or how putting the duty to consider the principles on a statutory footing would alter any of this.

An iterative, phased approach

The very tentative approach to the issuing of guidance is also clear in the fact that the Government is taking an iterative, phased approach to the production of AI regulation guidance, with three phases foreseen: phase one consists of the publication of the AI guidance in February 2024; phase two comprises an iteration and development of the guidance in the summer of 2024; and phase three (with no timeline) involves further developments in cooperation with regulators—eg to ‘encourage multi-regulator guidance’. Given the short time between phases one and two, questions arise as to how much practical experience can be accumulated in the coming 4-6 months, and whether there is much value in the high-level guidance provided in phase one, as it only goes slightly beyond the tentative steer included in the AI white paper—which already contained some indication of ‘factors that government believes regulators may wish to consider when providing guidance/implementing each principle’ (Annex A).

Indeed, the AI guidance is still rather high-level and does not provide much substantive interpretation of what the different principles mean. It is very much a ‘how to develop guidance’ document, rather than a document setting out core considerations and requirements for regulators to embed within their respective remits. A significant part of the document provides guidance on ‘interpreting and applying the AI regulatory framework’ (pp 7-12), but this is really ‘meta-guidance’ on issues such as potential collaboration between regulators for the issuance of joint guidance/tools, or an encouragement of benchmarking and the avoidance of duplicated guidance where relevant. General recommendations, such as the value of publishing the guidance and keeping it updated, seem superfluous in a context where the regulatory approach is premised on ‘the expertise of [UK] world class regulators’.

The core of the AI guidance is limited to the section on ‘applying individual principles’ (pp 13-22), which sets out a series of questions to consider in relation to each of the five principles. The guidance offers no answers and very limited steer for their formulation, which is entirely left to regulators. We will probably have to wait (at least) for the summer iteration to get more detail on the substantive requirements attached to each of the principles. However, the AI guidance already raises some issues worthy of careful consideration, in particular in relation to the tunnelling of regulatory power and the unbalanced approach to the different principles that follows from its reliance on existing (and soon to emerge) technical standards.

Technical standards and interpretation of the regulatory principles

Regulatory tunnelling

As we said in our response to the public consultation on the AI white paper,

The principles-based approach to AI regulation suggested in the AI [white paper] is undeliverable, not only due to lack of detail on the meaning and regulatory implications of each of the principles, but also due to barriers to translation into enforceable requirements, and tensions with existing regulatory frameworks. The AI [white paper] indicates in Annex A that each regulator should consider issuing guidance on the interpretation of the principles within its regulatory remit, and suggests that in doing so they may want to rely on emerging technical standards (such as ISO or IEEE standards). This presumes both the adequacy of those standards and their sufficiency to translate general principles into operationalizable and enforceable requirements. This is by no means straightforward, and it is hard to see how regulators with significantly limited capabilities … can undertake that task effectively. There is a clear risk that regulators may simply rely on emerging industry-led standards. However, it has already been pointed out that this creates a privatisation of AI regulation and generates significant implicit risks (at para 27).

The AI guidance, in sticking to the same approach, confirms this risk of regulatory tunnelling. The guidance encourages regulators to explicitly and directly refer to technical standards ‘to support AI developers and AI deployers’—while at the same time stressing that ‘this guidance is not an endorsement of any specific standard. It is for regulators to consider standards and their suitability in a given situation (and/or encourage those they regulate to do so likewise).’ This does not seem to be the best approach. Leaving it to each of the regulators to assess the suitability of existing (and emerging) standards creates duplication of effort, as well as a risk of conflicting views and guidance. It would seem that it is precisely the role of centralised AI guidance to carry out that assessment and identify the technical standards that are aligned with the overarching regulatory principles for implementation by sectoral regulators. In failing to do that and pushing the responsibility down to each regulator, the AI guidance abdicates responsibility for the provision of meaningful policy implementation guidelines.

Additionally, the strong steer to rely on references to technical standards creates an almost default position for regulators to follow—especially for those with less capability to scrutinise the implications of those standards and to formulate complementary or alternative approaches in their guidance. It can be expected that regulators will tend to refer to those technical standards in their guidance and to take them as the baseline or starting point. This effectively transfers regulatory power to standard-setting organisations and further dilutes the regulatory approach followed in the UK, which will in fact be limited to industry self-regulation despite the appearance of regulatory intervention and oversight.

Unbalanced approach

The second implication of this approach is that some principles are likely to be more developed than others in regulatory guidance, as they already are in the initial AI guidance. The series of questions and considerations are more developed in relation to principles for which there are technical standards—ie ‘safety, security & robustness’, and ‘accountability and governance’—and in relation to those aspects of other principles for which standards exist. For example, in relation to ‘appropriate transparency and explainability’, there is more of an emphasis on explainability than on transparency, and there is no indication of how to gauge ‘appropriateness’ in relation to either of them. Given that transparency, in the sense of publication of details on AI use, raises a few difficult questions on the interaction with freedom of information legislation and the protection of trade secrets, the passing reference to the algorithmic transparency recording standard will not be sufficient to support regulators in developing nuanced and pragmatic approaches.

Similarly, in relation to ‘fairness’, the AI guidance only provides some references to AI ethics and bias, in both cases by pointing to existing standards. The document falls awfully short of any meaningful consideration of the implications and requirements of the (arguably) most important principle in AI regulation. The AI guidance solely indicates that

Tools and guidance could also consider relevant law, regulation, technical standards and assurance techniques. These should be applied and interpreted similarly by different regulators where possible. For example, regulators need to consider their responsibilities under the Equality Act 2010 and the Human Rights Act 1998. Regulators may also need to understand how AI might exacerbate vulnerabilities or create new ones and provide tools and guidance accordingly.

This is unhelpful in many ways. First, ensuring that AI development and deployment comply with existing law and regulation should not be presented as a possibility, but as an absolute minimum requirement. Second, the duties of the regulators under the EA 2010 and HRA 1998 are likely to play a very small role here. What is crucial is to ensure that the development and use of AI are compliant with them, especially where the use is by public sector entities (for which there is no general regulator—and in relation to which a passing reference to the EHRC guidance on AI use in the public sector will not be sufficient to support regulators in developing nuanced and pragmatic approaches). In failing to explicitly acknowledge the existence of approaches to the assessment of AI and algorithmic impacts on fundamental and human rights, the guidance creates obfuscation by omission.

‘Contestability and redress’ is the most underdeveloped principle in the AI guidance, perhaps because no technical standard addresses this issue.

Final thoughts

In my view, the AI guidance does little to support regulators, especially those with less capability and fewer resources, in their (voluntary? short-term?) task of issuing guidance within their respective remits. Meaningful AI guidance needs to provide much clearer explanations of what is expected and required for the correct implementation of the five regulatory principles. It needs to address in a centralised and unified manner the assessment of existing and emerging technical standards against the regulatory benchmark. It also needs to synthesise the multiple guidance documents issued (and to be issued) by regulators—which it currently simply lists in Annex 1—to avoid a multiplication of the effort required to assess their (in)compatibility and duplications. By leaving all these tasks to the regulators, the AI guidance (and the centralised function from which it originates) does little to nothing to move the regulatory needle beyond industry-led self-regulation, and fails to relieve regulators of the burden of issuing AI guidance.

High hopes but little movement for public sector AI use regulation through procurement in the UK Government's 'Pro-innovation Approach' response

The UK Government has recently published its official response (the ‘response’) to the public consultation of March 2023 on its ‘pro-innovation approach’ to AI regulation (for an initial discussion, see here). The response shows very little movement from the original approach and proposals and, despite claiming that significant developments have already taken place, it mainly provides a self-congratulatory governmental narrative and limited high-level details of a regulatory architecture still very much ‘under construction’. The publication of the response was coupled with that of the Initial Guidance for Regulators on Implementing the UK’s AI Regulatory Principles (Feb 2024), which I will comment on in a subsequent post.

A section of particular interest in the response refers to ‘Ensuring AI best practice in the public sector’ (at 21-22), which makes direct reference to the use of public procurement and the exercise of public sector buying power as a regulatory lever.

This section describes some measures being put in place or planned to seize ‘the opportunities presented by AI to deliver better public services including health, education, and transport’, such as:

  • ‘tripling the number of technical AI engineers and developers within the Cabinet Office to create a new AI Incubator for the government’ (para 41).
    This is an interesting commitment to building in-house capability. It would however be interesting to know whether these are new or reassigned roles, as well as how the process of recruitment and retention is faring, given the massive difficulties evidenced in the recent analysis by the National Audit Office, Digital transformation in government: addressing the barriers to efficiency (10 Mar 2023, HC 2022-23, 1171).

  • ‘The government is also using the procurement power of the public sector to drive responsible and safe AI innovation. The Central Digital and Data Office (CDDO) has published guidance on the procurement and use of generative AI for the UK government. Later this year, DSIT will launch the AI Management Essentials scheme, setting a minimum good practice standard for companies selling AI products and services. We will consult on introducing this as a mandatory requirement for public sector procurement, using purchasing power to drive responsible innovation in the broader economy’ (para 43).
    This is also an interesting aspiration, for several reasons. First, the GenAI guidance is very generic and solely highlights pre-existing (also very generic) guidance on how to carry out procurement of AI (see screenshot below). This can hardly be seen as a meaningful development of the existing regulatory framework. Second, the announced ‘AI Management Essentials’ scheme seems to be modelled on the ‘Cyber Essentials’ scheme in the area of cyber security, despite significant differences and the much higher level of complexity that can be expected from an ‘all-encompassing’ scheme for the management of the myriad risks generated by the deployment of AI.

Screenshot of the webpage https://www.gov.uk/government/publications/generative-ai-framework-for-hmg/generative-ai-framework-for-hmg-html (accessed 22 February 2024), where this information is available in accessible format.

  • ‘This builds on the Algorithmic Transparency Recording Standard (ATRS), which established a standardised way for public sector organisations to proactively publish information about how and why they are using algorithmic methods in decision-making. Following a successful pilot of the standard, and publication of an approved cross-government version last year, we will now be making use of the ATRS a requirement for all government departments and plan to expand this across the broader public sector over time’ (para 44).
    This is also interesting in that the ‘success’ attributed to the development of the ATRS is very clearly undermined by the almost absolute lack of use other than in relation to the pilot projects (see screenshot below). It is also interesting that the ATRS allows public sector AI deployers to fill in but not publish the relevant documents, as a form of self-reflective/evaluative exercise. I wonder how many publications we will see in the coming months, even if ‘use of the ATRS’ becomes a requirement.

Screenshot of the list of published transparency disclosures at Algorithmic Transparency Reports - GOV.UK (www.gov.uk) (accessed 22 February 2024), where this information is available in accessible format.

Overall, I think the response to the ‘pro-innovation’ AI regulation consultation does little to back up the high expectations placed on public procurement as a mechanism of regulation by contract. I will update the analysis in this UK-focused paper on the use of procurement to regulate public sector AI use before final publication, but there will be little to change. The broader analysis in my recent monograph also remains applicable (phew): Digital Technologies and Public Procurement. Gatekeeping and experimentation in digital public governance (OUP 2024).

The principle of competition is dead. Long live the principle of competition (Free webinar)

Free webinar: 22 March 2024 *revised time* 1pm UK / 2pm CET / 3pm EET. Registration here.

The role of competition in public procurement regulation continues to be debated. While it is generally accepted that the proper functioning of procurement markets requires some level of competition – and the European Court of Auditors has recently pointed out that current levels of competition for public contracts in the EU are not satisfactory – the 'legal ranking' and normative weight of competition concerns are much less settled.

This has been evidenced in a recent wave of academic discussion on whether there is a general principle of competition at all in Directive 2014/24/EU, what its normative status is and how it ranks vis-à-vis sustainability and environmental considerations, and what its practical implications are for the interpretation and application of EU public procurement law.

Bringing together voices representing a wide range of views, this webinar will explore these issues and provide a space for reflective discussion on competition and public procurement. The webinar won't settle the debate, but hopefully it will allow us to take stock and outline thoughts for the next wave of discussion. It will also provide an opportunity for an interactive Q&A.

Speakers:

  • Prof Roberto Caranta, Full Professor of Administrative Law, University of Turin.

  • Mr Trygve Harlem Losnedahl, PhD researcher, University of Oslo.

  • Dr Dagne Sabockis, Senior Associate, Vinge law firm; Stockholm School of Economics.

  • Prof Albert Sanchez-Graells, Professor of Economic Law, University of Bristol.

Pre- or post-reading:

Centralised procurement for the health care sector — bang for your pound or siphoning off scarce resources?

The National Health Service (NHS) has been running a centralised model for health care procurement in England for a few years now. The current system resulted from a redesign of the NHS supply chain that has been operational since 2019 [for details, see A Sanchez-Graells, ‘Centralisation of procurement and supply chain management in the English NHS: some governance and compliance challenges’ (2019) 70(1) NILQ 53-75.]

Given that the main driver for the implementation and redesign of the system was to obtain efficiencies (aka savings) through the exercise of the NHS’ buying power, both the UK’s National Audit Office (NAO) and the House of Commons’ Public Accounts Committee (PAC) are scrutinising the operation of the system in its first few years.

The NAO published a scathing report on 12 January 2024. Among many other concerning issues, the report highlighted how, despite the fundamental importance of measuring savings, ‘NHS Supply Chain has used different methods to report savings to different audiences, which could cause confusion.’ This triggered a clear risk of recounting (and thus exaggeration) of claimed savings, as detailed below.

In my submission of written evidence to the PAC Inquiry ‘NHS Supply Chain and efficiencies in procurement’, I look in detail at the potential implications of the use of different savings reporting methods for the (mis)management of scarce NHS resources, should the recounting of savings have allowed private subcontractors to also overclaim savings in order to boost the financial return under their contracts. The full text of my submission is reproduced below, in case of interest.

NAO’s findings on recounting of savings

The crucial findings in the NAO’s report concerning the use of different (and potentially problematic) savings reporting methods are as follows:

DHSC [the Department of Health and Social Care] set Supply Chain a cumulative target of making £2.4 billion savings by 2023-24. Supply Chain told us that it had exceeded this target by the end of 2022-23 although we have not validated this saving. The method for calculating this re-counted savings from each year since 2015-16. Supply Chain calculated its reported savings against the £2.4 billion target by using 2015-16 prices as its baseline. Even if prices had not reduced in any year compared with the year before, a saving was reported as long as prices were lower than that of the baseline year. This method then accumulated savings each year, by adding the difference in price as at the baseline year, for each year. This accumulation continued to re-count savings made in earlier years and did not take inflation into account. For example, if a product cost £10 in 2015-16 and reduced to £9 in 2016-17, Supply Chain would report a saving of £1. If it remained at £9 in 2017-18, Supply Chain would report a total saving of £2 (re-counting the £1 saved in 2016-17). If it then reduced to £8 in 2018-19, Supply Chain would report a total saving of £4 (re-counting the £1 saved in each of 2016-17 and 2017-18 and saving a further £2 in 2018-19) […]. DHSC could not provide us with any original sign-off or agreement that this was how Supply Chain should calculate its savings figure (para 2.4, emphasis added).

Supply Chain has used other methods for calculating savings which could cause confusion. It has used different methods for different audiences, for example, to government, trusts and suppliers (see Figure 5). When reporting progress against its £2.4 billion target it used a baseline from 2015-16 and accumulated the amount each year. To help show the savings that trusts have made individually, it also calculates in-year savings each trust has made using prices paid the previous year as the baseline. In this example, if a trust paid £10 for an item in 2015-16, and then procured it for £9 from Supply Chain in 2016-17 and 2017-18, Supply Chain would report a saving of £1 in the first year and no saving in the second year. These different methods have evolved since Supply Chain was established and there is a rationale for each. Having several methods to calculate savings has the potential to cause confusion (para 2.6, emphasis added).
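To make the difference between the two quoted methods concrete, here is a minimal sketch in Python that reproduces the NAO’s own worked examples (the function names are mine, not the report’s):

```python
# The two savings-calculation methods described by the NAO (paras 2.4 and 2.6),
# applied to the report's own example prices.

def cumulative_baseline_savings(prices):
    """Method used against the £2.4bn target: each year's saving is measured
    against the 2015-16 baseline price, and earlier years' savings are
    re-counted in the running total (with no adjustment for inflation)."""
    baseline = prices[0]
    return sum(baseline - price for price in prices[1:])

def in_year_savings(prices):
    """Method used to show trusts their individual savings: each year's saving
    is measured against the price actually paid the previous year."""
    return sum(previous - current for previous, current in zip(prices, prices[1:]))

# NAO example: a product costing £10 in 2015-16, £9 in 2016-17 and 2017-18,
# and £8 in 2018-19.
prices = [10, 9, 9, 8]

print(cumulative_baseline_savings(prices))  # 4 -> the £4 'total saving' of para 2.4
print(in_year_savings(prices))              # 2 -> £1 in 2016-17 plus £1 in 2018-19
```

The same price history thus yields a headline figure twice as large under the cumulative-baseline method, which is the crux of the recounting problem.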

When I read the report, I thought that the difference between the methods was not only problematic in itself, but also showed that the ‘main method’ used by NHS Supply Chain and government to claim savings, in allowing the recounting of savings, was likely to have allowed for excessive claims. This is not only a technical or political problem, but also a clear risk of siphoning off scarce NHS budgetary resources, for the reasons detailed below.

Submission to the PAC Inquiry

00. This brief written submission responds to the call for evidence issued by the Public Accounts Committee in relation to its Inquiry “NHS Supply Chain and efficiencies in procurement”. It focuses on the specific point of ‘Progress in delivering savings for the NHS’. This submission provides further details on the structure and functioning of NHS Supply Chain than those included in the National Audit Office’s report “NHS Supply Chain and efficiencies in procurement” (2023-24, HC 390). The purpose of this further detail is to highlight the broader implications that the potential overclaim of savings generated by NHS Supply Chain may have had in relation to payments made to private providers to whom some of the supply chain functions have been outsourced. It raises some questions that the Committee may want to explore in the context of its Inquiry.

1. NHS Supply Chain operating structure

01. The NAO report analyses the functioning and performance of NHS Supply Chain and Supply Chain Coordination Limited (SCCL) in a holistic manner, without considering details of the complex structure of outsourced functions that underpins the model. This can obscure some of the practical impacts of the NAO’s findings, in particular in relation to the potential overclaim of savings generated by NHS Supply Chain (paras 2.4, 2.6 and Figure 5 in the report). Approaching the analysis at a deeper level of detail on NHS Supply Chain’s operating structure can shed light on problems with the methods for calculating NHS Supply Chain savings beyond the confusion caused by the use of multiple methods, and on the potential overclaim of savings in relation to the original target set by DHSC.

02. NHS Supply Chain does not operate as a single entity and SCCL is not the only relevant actor in the operating structure.[1] Crucially, the operating model consists of a complex network of outsourcing contracts around what are called ‘category towers’ of products and services. SCCL coordinates a series of ‘Category Tower Service Providers’ (CTSPs), as listed in the graph below. CTSPs have an active role in developing category management strategies (that is, the ‘go to market approach’ at product level) and heavily influence the procurement strategy for the relevant category, subject to SCCL approval.

03. CTSPs are incentivised to reduce total cost in the system, not just the unit prices of the goods and services covered by the relevant category. They hold Guaranteed Maximum Price Target Cost (GMPTC) contracts, under which CTSPs are paid the operational costs incurred in performing the services against an annual target set out in the contract, but only make a profit when savings are delivered, on a gainshare basis that is capped.

Source: NHS Supply Chain - New operating model (2018).[2]

04. There are very limited public details on how the relevant financial targets have been set and managed throughout the operation of the system. However, it is clear that CTSPs have financial incentives tied to the generation of savings for SCCL. Given that SCCL does not carry out procurement activities without CTSP involvement, it seems plausible that SCCL’s own targets and claimed savings would (primarily) have been the result of the simple aggregation of those of the CTSPs. If that is correct, the issues identified in the NAO report may have resulted in financial advantages to CTSPs, if they were allowed to overclaim the savings generated.

05. NHS Supply Chain has publicly stated that[3]:

  • ‘Savings are contractual to the CTSPs. As part of the procurement, bidders were asked to provide contractual savings targets for each year. These were assessed and challenged through the process and are core to the commercial model. CTSPs cannot attain their target margins (i.e. profit) unless they are able to achieve contractual savings.’

  • ‘The CTSPs financial reward mechanism [is] based upon a gain share from the delivery of savings. The model includes savings generated across the total system, not just the price of the product. The level of gain share is directly proportional to the level of savings delivered.’

06. In view of this, if CTSPs were allowed to use a method of savings calculation that re-counted savings in the way the NAO details at para 2.4 of its report, it is likely that their financial compensation will have been higher than it would have been under alternative models of savings calculation that did not allow for such re-counting. Given the volumes of savings claimed throughout the period covered by the report, any potential overcompensation could have been significant. As any such overcompensation would have been covered by NHS funding, the Committee may want to include its consideration within its Inquiry and in its evidence-gathering efforts.
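A minimal numerical sketch may help illustrate the mechanism. The gainshare rate, cap and savings figures below are all invented, as the actual GMPTC contract terms are not public; only the structure of a payment proportional to savings and subject to a cap is taken from NHS Supply Chain’s statements quoted above:

```python
# Hypothetical illustration of how the choice of savings-calculation method
# feeds through to CTSP remuneration. All numbers are invented; only the
# 'proportional to savings delivered, subject to a cap' structure reflects
# NHS Supply Chain's public statements.

def gainshare_payment(claimed_savings, rate=0.10, cap=30_000_000):
    """Gainshare proportional to claimed savings, capped at an agreed maximum."""
    return min(claimed_savings * rate, cap)

# Suppose a price history that yields £100m of savings under the in-year
# method, but £200m once earlier years' savings are re-counted (para 2.4).
in_year_claim = 100_000_000
recounted_claim = 200_000_000

print(gainshare_payment(in_year_claim))    # 10000000.0 (£10m)
print(gainshare_payment(recounted_claim))  # 20000000.0 (£20m) -> double the payment
```

Under these purely illustrative assumptions, the same underlying price history would generate twice the gainshare payment, with the difference funded from NHS resources.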

__________________________________

[1] For a detailed account, see A Sanchez-Graells, “Centralisation of procurement and supply chain management in the English NHS: some governance and compliance challenges” (2019) 70(1) Northern Ireland Legal Quarterly 53-75.

[2] Available at https://wwwmedia.supplychain.nhs.uk/media/Customer_FAQ_November_2018.pdf (last accessed 12 January 2024).

[3] Ibid, FAQs 24 and 25.