Happy holidays and all the best for 2023

Dear HTCaN friends,

The last few months have required intense work to make progress on the digital technologies and procurement governance research project. And more remains to be done before the final deadline in July 2023.

Knowing that you are there and that the draft chapters and posts are being read is a source of constant motivation. Receiving some useful feedback is always a gift. Thank you for your continued support and engagement with my scholarship during 2022.

I will take a break now, and I hope you will all also be able to disconnect, recharge and enjoy yourselves over the coming weeks. See you in the new year.

Season’s greetings and all best wishes,
Albert

"Tech fixes for procurement problems?" [Recording]

The recording and slides for yesterday’s webinar on ‘Tech fixes for procurement problems?’, co-hosted by the University of Bristol Law School and the GW Law Government Procurement Programme, are now available in case you missed it.

I would like to thank once again Dean Jessica Tillipman (GW Law), Professor Sope Williams (Stellenbosch), and Eliza Niewiadomska (EBRD) for a really interesting discussion, and all participants for their questions. Comments most welcome, as always.

AI regulation by contract: submission to UK Parliament

In October 2022, the Science and Technology Committee of the House of Commons of the UK Parliament (STC Committee) launched an inquiry on the ‘Governance of Artificial Intelligence’. This inquiry follows the publication in July 2022 of the policy paper ‘Establishing a pro-innovation approach to regulating AI’, which outlined the UK Government’s plans for light-touch AI regulation. The inquiry seeks to examine the effectiveness of current AI governance in the UK, and the Government’s proposals that are expected to follow the policy paper and provide more detail. The STC Committee has published 98 pieces of written evidence, including submissions from UK regulators and academics that will make for interesting reading. Below is my submission, focusing on the UK’s approach to ‘AI regulation by contract’.

A. Introduction

01. This submission addresses two of the questions formulated by the House of Commons Science and Technology Committee in its inquiry on the ‘Governance of artificial intelligence (AI)’. In particular:

  • How should the use of AI be regulated, and which body or bodies should provide regulatory oversight?

  • To what extent is the legal framework for the use of AI, especially in making decisions, fit for purpose?

    • Is more legislation or better guidance required?

02. This submission focuses on the process of AI adoption in the public sector and, particularly, on the acquisition of AI solutions. It evidences how the UK is consolidating an inadequate approach to ‘AI regulation by contract’ through public procurement. Given the level of abstraction and generality of the current guidelines for AI procurement, major gaps in public sector digital capabilities, and potential structural conflicts of interest, procurement is currently an inadequate tool to govern the process of AI adoption in the public sector. Flanking initiatives, such as the pilot algorithmic transparency standard, are unable to address and mitigate governance risks. Contrary to the approach in the AI Regulation Policy Paper,[1] plugging the regulatory gap will require (i) new legislation supported by a new mechanism of external oversight and enforcement (an ‘AI in the Public Sector Authority’ (AIPSA)); (ii) a well-funded strategy to boost in-house public sector digital capabilities; and (iii) the introduction of a (temporary) mechanism of authorisation of AI deployment in the public sector. The Procurement Bill would not suffice to address the governance shortcomings identified in this submission.

B. ‘AI Regulation by Contract’ through Procurement

03. Unless the public sector develops AI solutions in-house, which is extremely rare, the adoption of AI technologies in the public sector requires a procurement procedure leading to their acquisition. This places procurement at the frontline of AI governance because the ‘rules governing the acquisition of algorithmic systems by governments and public agencies are an important point of intervention in ensuring their accountable use’.[2] In that vein, the Committee on Standards in Public Life stressed that the ‘Government should use its purchasing power in the market to set procurement requirements that ensure that private companies developing AI solutions for the public sector appropriately address public standards. This should be achieved by ensuring provisions for ethical standards are considered early in the procurement process and explicitly written into tenders and contractual arrangements’.[3] Procurement is thus erected as a public interest gatekeeper in the process of adoption of AI by the public sector.

04. However, to effectively regulate by contract, it is at least necessary to have (i) clarity on the content of the obligations to be imposed, (ii) effective enforcement mechanisms, and (iii) public sector capacity to establish, monitor, and enforce those obligations. Given that the aim of regulation by contract would be to ensure that the public sector only adopts trustworthy AI solutions and deploys them in a way that promotes the public interest in compliance with existing standards of protection of fundamental and individual rights, exercising the expected gatekeeping role in this context requires a level of legal, ethical, and digital capability well beyond the requirements of earlier instances of regulation by contract to eg enforce labour standards.

05. On a superficial reading, it could seem that the National AI Strategy tackled this by highlighting the importance of the public sector’s role as a buyer and stressing that the Government had already taken steps ‘to inform and empower buyers in the public sector, helping them to evaluate suppliers, then confidently and responsibly procure AI technologies for the benefit of citizens’.[4] The National AI Strategy referred, in particular, to the setting up of the Crown Commercial Service’s AI procurement framework (the ‘CCS AI Framework’),[5] and the adoption of the Guidelines for AI procurement (the ‘Guidelines’)[6] as enabling tools. However, a close look at these instruments shows that they are inadequate to provide clarity on the content of procedural and contractual obligations aimed at ensuring the goals stated above (para 03), and that they have the potential to widen the existing public sector digital capability gap. Ultimately, they do not enable procurement to carry out the expected gatekeeping role.

C. Guidelines and Framework for AI procurement

06. Despite setting out to ‘provide a set of guiding principles on how to buy AI technology, as well as insights on tackling challenges that may arise during procurement’, the Guidelines offer only high-level recommendations that cannot be directly operationalised by inexperienced public buyers and/or those with limited digital capabilities. For example, the recommendation to ‘Try to address flaws and potential bias within your data before you go to market and/or have a plan for dealing with data issues if you cannot rectify them yourself’ (guideline 3) not only requires a thorough understanding of eg the Data Ethics Framework[7] and the Guide to using Artificial Intelligence in the public sector,[8] but also detailed insights into data hazards.[9] This leads the Guidelines to stress that it may be necessary ‘to seek out specific expertise to support this; data architects and data scientists should lead this process … to understand the complexities, completeness and limitations of the data … available’.

07. Relatedly, some of the recommendations are very open-ended in areas without clear standards. For example, the effectiveness of the recommendation to ‘Conduct initial AI impact assessments at the start of the procurement process, and ensure that your interim findings inform the procurement. Be sure to revisit the assessments at key decision points’ (guideline 4) is dependent on the robustness of such impact assessments. However, the Guidelines provide no further detail on how to carry out such assessments, other than a list of some generic areas for consideration (eg ‘potential unintended consequences’) and a passing reference to emerging guidelines in other jurisdictions. This is problematic, as the development of algorithmic impact assessments is still at an experimental stage,[10] and emerging evidence shows vastly diverging approaches, eg to risk identification.[11] In the absence of clear standards, algorithmic impact assessments will lead to inconsistent approaches and varying levels of robustness. The absence of standards will also require access to specialist expertise to design and carry out the assessments.

08. Ultimately, understanding and operationalising the Guidelines requires advanced digital competency, including in areas where best practices and industry standards are still developing.[12] However, most procurement organisations lack such expertise, as a reflection of broader digital skills shortages across the public sector,[13] with recent reports placing vacancies for data and tech roles across the civil service alone at close to 4,000.[14] This not only reduces the practical value of the Guidelines in facilitating responsible AI procurement by inexperienced buyers with limited capabilities, but also highlights the role of the CCS AI Framework for AI adoption in the public sector.

09. The CCS AI Framework creates a procurement vehicle[15] to facilitate public buyers’ access to digital capabilities. CCS’ description for public buyers stresses that ‘If you are new to AI you will be able to procure services through a discovery phase, to get an understanding of AI and how it can benefit your organisation.’[16] The Framework thus seeks to enable contracting authorities, especially those lacking in-house expertise, to carry out AI procurement with the support of external providers. While this can foster the uptake of AI in the public sector in the short term, it is highly unlikely to result in adequate governance of AI procurement, as this approach focuses at most on the initial stages of AI adoption but can hardly be sustained throughout the lifecycle of AI use in the public sector—and, crucially, would leave the enforcement of contractualised AI governance obligations in a particularly weak position (thus failing to meet the enforcement requirement at para 04). Moreover, it would generate a series of governance shortcomings, the avoidance of which requires an alternative approach.

D. Governance Shortcomings

10. Despite claims to the contrary in the National AI Strategy (above para 05), the approach currently followed by the Government does not empower public buyers to responsibly procure AI. The Guidelines cannot be operationalised by inexperienced public buyers with limited digital capabilities (above paras 06-08). At the same time, the Guidelines are too generic to support sophisticated approaches by more advanced digital buyers. The Guidelines do not reduce the uncertainty and complexity of procuring AI and do not include any guidance on eg how to design public contracts to perform the regulatory functions expected under the ‘AI regulation by contract’ approach.[17] This is despite existing recommendations on eg the development of ‘model contracts and framework agreements for public sector procurement to incorporate a set of minimum standards around ethical use of AI, with particular focus on expected levels of transparency and explainability, and ongoing testing for fairness’.[18] The Guidelines thus fail to address the first requirement for effective regulation by contract in relation to clarifying the relevant obligations (para 04).

11. The CCS Framework would also fail to ensure the development of public sector capacity to establish, monitor, and enforce AI governance obligations (para 04). Perhaps counterintuitively, the CCS AI Framework can generate a further disempowerment of public buyers seeking to rely on external capabilities to support AI adoption. There is evidence that reliance on outside providers and consultants to cover immediate needs further erodes public sector capability in the long term,[19] as well as creating risks of technical and intellectual debt in the deployment of AI solutions, as consultants come and go and institutional knowledge and memory are not captured.[20] This can also exacerbate current trends of pilot AI graveyard spirals, where most projects do not reach full deployment, at least in part due to insufficient digital capabilities beyond the (outsourced) pilot phase. This tends to result in self-reinforcing institutional weaknesses that can limit the public sector’s ability to drive digitalisation, not least because technical debt quickly becomes a significant barrier.[21] It also runs counter to best practices towards building public sector digital maturity,[22] and to the growing consensus that public sector digitalisation first and foremost requires a prioritised investment in building up in-house capabilities.[23] On this point, it is important to note the large size of the CCS AI Framework, which was initially pre-advertised with a £90 mn value,[24] but later revised to £200 mn over 42 months.[25] Procuring AI consultancy services under the Framework can thus facilitate the funnelling of significant amounts of public funds to the private sector, rather than using those funds to build in-house capabilities. It can also result in multiple public buyers entering contracts for the same expertise, thus duplicating costs, as well as in a cumulative lack of institutional learning by the public sector because of atomised and uncoordinated contractual relationships.

12. Beyond the issue of institutional dependency on external capabilities, the cumulative effect of the Guidelines and the Framework would be to outsource the role of ‘AI regulation by contract’ to unaccountable private providers that can then introduce their own biases on the substantive and procedural obligations to be embedded in the relevant contracts—which would ultimately negate the effectiveness of the regulatory approach as a public interest safeguard. The lack of accountability of external providers would not only result from the weakness (or absolute inability) of the public buyer to control their activities and challenge important decisions—eg on data governance, or algorithmic impact assessments, as above (paras 06-07)—but also from the potential absence of effective and timely external checks. Market mechanisms are unlikely to deliver adequate checks due to market concentration and structural conflicts of interest affecting providers that sometimes act as consultants and at other times are involved in the development and deployment of AI solutions,[26] as well as due to insufficiently effective safeguards against conflicts of interest arising from quickly revolving doors. Equally, broader governance controls are unlikely to be facilitated by flanking initiatives, such as the pilot algorithmic transparency standard.

13. To try to foster accountability in the adoption of AI by the public sector, the UK is currently piloting an algorithmic transparency standard.[27] While the initial six examples of algorithmic disclosures published by the Government provide some details on emerging AI use cases and the data and types of algorithms used by publishing organisations, and while this information could in principle foster accountability, there are two primary shortcomings. First, completing the documentation requires resources and, in some respects, advanced digital capabilities. Organisations participating in the pilot are being supported by the Government, which makes it difficult to assess to what extent public buyers would generally be able to adequately prepare the documentation on their own. Moreover, the documentation also refers to some underlying requirements, such as algorithmic impact assessments, that are not yet standardised (para 07). In that, the pilot standard replicates the same shortcomings discussed above in relation to the Guidelines. Algorithmic disclosure will thus only be done by entities with high capabilities, or it will be outsourced to consultants (thus reducing the scope for the revelation of governance-relevant information).

14. Second, compliance with the standard is not mandatory—at least while the pilot is being developed. If compliance with the algorithmic transparency standard remains voluntary, there are clear governance risks. It is easy to see how precisely the most problematic uses may not be the object of adequate disclosures under a voluntary self-reporting mechanism. More generally, even if the standard were made mandatory, it would be necessary to implement an external quality control mechanism to mitigate problems with the quality of self-reported disclosures, which are pervasive in other areas of information-based governance.[28] Whether the Central Digital and Data Office (currently in charge of the pilot) would have the capacity (and powers) to do so remains unclear, and it would in any case lack independence.

15. Finally, it should be stressed that the current approach to transparency disclosure following the adoption of AI (ex post) can be problematic where the implementation of the AI is difficult to undo and/or the effects of malicious or risky AI are high-stakes or impossible to reverse. It is also problematic in that the current approach places the burden of scrutiny and accountability outside the public sector, rather than establishing internal, preventative (ex ante) controls on the deployment of AI technologies that could potentially be very harmful for fundamental and individual socio-economic rights—as evidenced by the inclusion of some fields of application of AI in the public sector as ‘high risk’ in the proposed EU AI Act.[29] Given the particular risks that AI deployment in the public sector poses to fundamental and individual rights, the minimalistic and reactive approach outlined in the AI Regulation Policy Paper is inadequate.

E. Conclusion: An Alternative Approach

16. Ensuring that the adoption of AI in the public sector operates in the public interest and for the benefit of all citizens will require new legislation supported by a new mechanism of external oversight and enforcement. New legislation is required to impose specific minimum requirements of eg data governance and algorithmic impact assessment and related transparency across the public sector. Such legislation would then need to be developed through statutory guidance of a much more detailed and actionable nature than the current Guidelines. These developed requirements can then be embedded into public contracts by reference. Without such clarification of the relevant substantive obligations, the approach to ‘AI regulation by contract’ can hardly be effective other than in exceptional cases.

17. Legislation would also be necessary to create an independent authority—eg an ‘AI in the Public Sector Authority’ (AIPSA)—with powers to enforce those minimum requirements across the public sector. AIPSA is necessary, as oversight of the use of AI in the public sector does not currently fall within the scope of any specific sectoral regulator and the general regulators (such as the Information Commissioner’s Office) lack procurement-specific knowledge. Moreover, units within Cabinet Office (such as the Office for AI or the Central Digital and Data Office) lack the required independence.

18. It would also be necessary to develop a clear and sustainably funded strategy to build in-house capability in the public sector, including clear policies on the minimisation of expenditure directed at the engagement of external consultants and the development of guidance on how to ensure the capture and retention of the knowledge developed within outsourced projects (including, but not only, through detailed technical documentation).

19. Until sufficient in-house capability is built to ensure adequate understanding and ability to manage digital procurement governance requirements independently, the current reactive approach should be abandoned, and AIPSA should have to approve all projects to develop, procure and deploy AI in the public sector to ensure that they meet the required legislative safeguards in terms of data governance, impact assessment, etc. This approach could progressively be relaxed through eg block exemption mechanisms, once there is sufficiently detailed understanding and guidance on specific AI use cases and/or in relation to public sector entities that could demonstrate sufficient in-house capability, eg through a mechanism of independent certification.

20. The new legislation and statutory guidance would need to be self-standing, as the Procurement Bill would not provide the required governance improvements. First, the Procurement Bill pays limited to no attention to artificial intelligence and the digitalisation of procurement.[30] An amendment (46) that would have created minimum requirements on automated decision-making and data ethics was not moved at the Lords Committee stage, and it seems unlikely to be taken up again at later stages of the legislative process. Second, even if the Procurement Bill created minimum substantive requirements, it would lack adequate enforcement mechanisms, not least due to the limited powers and lack of independence of the foreseen Procurement Review Unit (to also sit within Cabinet Office).

_______________________________________
Note: all websites last accessed on 25 October 2022.

[1] Department for Digital, Culture, Media and Sport, Establishing a pro-innovation approach to regulating AI. An overview of the UK’s emerging approach (CP 728, 2022).

[2] Ada Lovelace Institute, AI Now Institute and Open Government Partnership, Algorithmic Accountability for the Public Sector (August 2021) 33.

[3] Committee on Standards in Public Life, Artificial Intelligence and Public Standards (2020) 51.

[4] Department for Digital, Culture, Media and Sport, National AI Strategy (CP 525, 2021) 47.

[5] AI Dynamic Purchasing System < https://www.crowncommercial.gov.uk/agreements/RM6200 >.

[6] Office for Artificial Intelligence, Guidelines for AI Procurement (2020) < https://www.gov.uk/government/publications/guidelines-for-ai-procurement/guidelines-for-ai-procurement >.

[7] Central Digital and Data Office, Data Ethics Framework (Guidance) (2020) < https://www.gov.uk/government/publications/data-ethics-framework >.

[8] Central Digital and Data Office, A guide to using artificial intelligence in the public sector (2019) < https://www.gov.uk/government/collections/a-guide-to-using-artificial-intelligence-in-the-public-sector >.

[9] See eg < https://datahazards.com/index.html >.

[10] Ada Lovelace Institute, Algorithmic impact assessment: a case study in healthcare (2022) < https://www.adalovelaceinstitute.org/report/algorithmic-impact-assessment-case-study-healthcare/ >.

[11] A Sanchez-Graells, ‘Algorithmic Transparency: Some Thoughts On UK's First Four Published Disclosures and the Standards’ Usability’ (2022) < https://www.howtocrackanut.com/blog/2022/7/11/algorithmic-transparency-some-thoughts-on-uk-first-disclosures-and-usability >.

[12] A Sanchez-Graells, ‘“Experimental” WEF/UK Guidelines for AI Procurement: Some Comments’ (2019) < https://www.howtocrackanut.com/blog/2019/9/25/wef-guidelines-for-ai-procurement-and-uk-pilot-some-comments >.

[13] See eg Public Accounts Committee, Challenges in implementing digital change (HC 2021-22, 637).

[14] S Klovig Skelton, ‘Public sector aims to close digital skills gap with private sector’ (Computer Weekly, 4 Oct 2022) < https://www.computerweekly.com/news/252525692/Public-sector-aims-to-close-digital-skills-gap-with-private-sector >.

[15] It is a dynamic purchasing system, ie a list of pre-screened potential vendors that public buyers can use to carry out their own simplified mini-competitions for the award of AI-related contracts.

[16] Above (n 5).

[17] This contrasts with eg the EU project to develop standard contractual clauses for the procurement of AI by public organisations. See < https://living-in.eu/groups/solutions/ai-procurement >.

[18] Centre for Data Ethics and Innovation, Review into bias in algorithmic decision-making (2020) < https://www.gov.uk/government/publications/cdei-publishes-review-into-bias-in-algorithmic-decision-making/main-report-cdei-review-into-bias-in-algorithmic-decision-making >.

[19] V Weghmann and K Sankey, Hollowed out: The growing impact of consultancies in public administrations (2022) < https://www.epsu.org/sites/default/files/article/files/EPSU%20Report%20Outsourcing%20state_EN.pdf >.

[20] A Sanchez-Graells, ‘Identifying Emerging Risks in Digital Procurement Governance’ in idem, Digital Technologies and Public Procurement. Gatekeeping and experimentation in digital public governance (OUP, forthcoming) < https://ssrn.com/abstract=4254931 >.

[21] M E Nielsen and C Østergaard Madsen, ‘Stakeholder influence on technical debt management in the public sector: An embedded case study’ (2022) 39 Government Information Quarterly 101706.

[22] See eg Kevin C Desouza, ‘Artificial Intelligence in the Public Sector: A Maturity Model’ (2021) IBM Center for The Business of Government < https://www.businessofgovernment.org/report/artificial-intelligence-public-sector-maturity-model >.

[23] A Clarke and S Boots, A Guide to Reforming Information Technology Procurement in the Government of Canada (2022) < https://govcanadacontracts.ca/it-procurement-guide/ >.

[24] < https://ted.europa.eu/udl?uri=TED:NOTICE:600328-2019:HTML:EN:HTML&tabId=1&tabLang=en >.

[25] < https://ted.europa.eu/udl?uri=TED:NOTICE:373610-2020:HTML:EN:HTML&tabId=1&tabLang=en >.

[26] See S Boots, ‘“Charbonneau Loops” and government IT contracting’ (2022) < https://sboots.ca/2022/10/12/charbonneau-loops-and-government-it-contracting/ >.

[27] Central Digital and Data Office, Algorithmic Transparency Standard (2022) < https://www.gov.uk/government/collections/algorithmic-transparency-standard >.

[28] Eg in the context of financial markets, there have been notorious ongoing problems with ensuring adequate quality in corporate and investor disclosures.

[29] < https://artificialintelligenceact.eu/ >.

[30] P Telles, ‘The lack of automation ideas in the UK Gov Green Paper on procurement reform’ (2021) < http://www.telles.eu/blog/2021/1/13/the-lack-of-automation-ideas-in-the-uk-gov-green-paper-on-procurement-reform >.

Wishful legal analysis as a trade strategy? A rebuttal to the Minister for International Trade

In the context of the Parliamentary scrutiny of the procurement chapters of the UK’s Free Trade Agreements with Australia and New Zealand, I submitted several pieces of written evidence, which I then gathered together and reformulated in A Sanchez-Graells, ‘The Growing Thicket of Multi-Layered Procurement Liberalisation between WTO GPA Parties, as Evidenced in Post-Brexit UK’ (2022) 49(3) Legal Issues of Economic Integration 247–268. I was also invited to give oral evidence to the Public Bill Committee for the Trade (Australia and New Zealand) Bill.

In my research, I raised some legal issues concerning the way the UK-AUS and UK-NZ procurement chapters would interact with the World Trade Organization Government Procurement Agreement (GPA)—to which the UK, AUS and NZ are all parties—and the Comprehensive and Progressive Agreement for Trans-Pacific Partnership (CPTPP)—to which the UK seeks accession and of which both AUS and NZ are members. I also raised issues with the rules on remedies in particular, both in relation to UK-AUS and the CPTPP.

I have now become aware of a letter from the Minister for International Trade, in which the UK Government simply dismisses my legal analysis in an unconvincing manner. In this post, I try to rebut their position—although their lack of arguments makes this rather difficult—and stress some of the misunderstandings that the letter evidences. The letter seems to me to reflect a worrying strategy of ‘wishful legal analysis’ that does not bode well for post-Brexit UK trade realignment.

Interaction between the GPA, FTAs and the CPTPP

In my analysis and submissions, I stressed how deviations in the UK’s FTAs from the substantive obligations set in the GPA generate legal uncertainty and potential problems in ‘dual regulation’ situations, where one of the contracting parties (eg the UK) would find it impossible to comply simultaneously with the obligations owed under the GPA to tenderers from GPA countries and those arising from the FTAs with AUS or NZ in relation to their tenderers—without either breaching GPA obligations or, more likely, ignoring the deviation in the FTAs to ensure GPA compliance. It would also generate issues where compliance with the more demanding standards in the FTAs would automatically be propagated to the benefit of economic operators from other jurisdictions. I also raised how the deviations can generate legal uncertainty and make it more difficult for UK tenderers to ascertain their legal position in AUS and NZ, and how this situation could become further complicated if the UK accedes to the CPTPP.

My concerns were discussed in Committee and the Minister had the following to say:

The [GPA] and the [CPTPP] are plurilateral agreements between twenty-one and eleven parties respectively, including in each case Australia and New Zealand. As recognised in Committee, the [GPA] in particular establishes a global baseline for international procurement. Nonetheless, neither prevents its members from entering into bilateral free trade agreements to sit alongside the [GPA] and [CPTPP] while at the same time going further in terms of the procurement commitments between members.

These Agreements with Australia and New Zealand do just that, going beyond both the [GPA] and the [CPTPP] baselines. … Although the texts of the Agreements with Australia and New Zealand are sometimes laid out differently to the way they are in the Agreement on Government Procurement, they in no way dilute or reduce the global baseline established by the [GPA]. (emphases added).

There are two points to note here. The first one is that the fact that the GPA and the CPTPP allow for bilateral agreements between their parties does not clarify how the overlapping treaties would operate, which is exactly what I analysed. Of note, under the 1969 Vienna Convention on the Law of Treaties (Art 30), when States conclude successive treaties relating to the same subject matter, the most recent treaty prevails, and the provisions of the earlier treaty/ies only apply to the extent that they are not incompatible with those of the later treaty.

This is crucial here, especially as the Minister indicates that the UK-AUS and UK-NZ FTAs go beyond not only the GPA, but also the CPTPP. This would mean that entering into the CPTPP after UK-AUS and UK-NZ—as the UK is currently in the process of doing—could negate some of the aspects of both those FTAs that go beyond the CPTPP. Moreover, the simple assertion that the FTAs do not dilute the GPA baseline is unconvincing, as detailed analysis shows that there are significant problems with eg the interpretation of national treatment under the different treaties.

Secondly, the explanation provided does not resolve the practical problems arising from ‘dual regulation’ that I have identified and leaves the question open as to how the obligations under the FTAs will be interpreted and complied with in triangular situations involving tenderers not from AUS or NZ. Either the UK will apply the more demanding obligations—which will then benefit all GPA parties, not only AUS and NZ—or will stick to the GPA baseline in breach of the FTAs. There is no recognition of this issue in the letter.

The Minister also indicated that:

There was also suggestion in Committee that it would be difficult for suppliers in the United Kingdom to navigate the Agreements with Australia and New Zealand, as well as the [CPTPP] in the future. I would like to reassure the Committee that when bidding for United Kingdom procurements, the only system that British suppliers need to concern themselves with is United Kingdom’s procurement regulations. (emphasis added).

The Minister has either not understood the situation, or is seeking to obscure the analysis here. The concerns about legal uncertainty do not relate to UK businesses tendering for contracts in the UK, but to UK businesses tendering for contracts in AUS or NZ—which are the ones that would be seeking to benefit from the trade liberalisation pursued by those FTAs. Nothing in the Minister’s letter addresses this issue.

Domestic review rights under the Australian procurement chapter

One of the specific deviations from the GPA baseline that I identified in my research concerns the exclusion of access to remedies on grounds of public interest. While the GPA only allows the exclusion of interim measures on such grounds, the UK-AUS FTA and the CPTPP seem to allow public interest to also bar access to remedies such as compensation—and, even if this does not limit access to remedies as I submit it does, it at least causes legal uncertainty in that respect.

My submission is met with the following response by the Minister [the mentioned annex is reproduced at the end of this post]:

The Committee also considered the evidence raised by Professor Sánchez-Graells regarding domestic review procedures … The Government respectfully disagrees with the analysis presented at that session that a provision in the government procurement chapter of the [UK-AUS FTA] ‘allows for the exclusion of legal remedies completely on the basis of public interest’.

The public interest exclusion only applies to temporary measures put in place to ensure aggrieved suppliers may continue to participate in a procurement.

The Government also respectfully disagrees with the suggestion in the witness evidence that this public interest exclusion is not similarly reflected in the [GPA] or the [UK-NZ FTA]. The Government acknowledges that the specific position of the exclusion differs between these agreements and is closer to the approach adopted in the [CPTPP]. Nonetheless, the Government do not consider this alters the legal effect or gives rise to legal uncertainty. For the benefit of the Committee, the relevant provisions from each of the [FTAs], the [GPA] and the [CPTPP] are set out in an annex to this letter.

The Minister’s explanations are not supported by any arguments. There is no reasoning to explain why the order of the clauses and subclauses in the relevant provisions does not alter their legal interpretation or effects. There is also no justification whatsoever for the opinion that textual differences do not give rise to legal uncertainty. The Government seems to think that it can simply wish the legal issues away.

The table included in the annex to the letter (below) is revealing of the precise issue that raises legal uncertainty and, potentially, a restriction on access to remedies other than interim measures beyond the GPA (and thus, in breach thereof). Why would treaties that seem to replicate the same rules be drafted differently? How can any legal interpreter be of the opinion that the positioning of the exception clause does not have an effect on the interpretation of its scope of application? Is the fact that these agreements post-date the GPA and still deviate from it not of legal relevance?

Of course, there are arguments that could be made to counter my analysis. They could eg focus on the use of different (undefined) terms in different sub-clauses (such as ‘measures’ and ‘corrective action’). They could also focus on any preparatory works to the treaties (especially the CPTPP and UK-AUS FTA, which I have not yet been able to locate). They could even be more creative and attempt functional or customary interpretation arguments. But the letter contains no arguments at all.

Conclusion

It is a sad state of affairs where detailed legal analysis—whether correct or not—is dismissed without offering any arguments to the contrary and simply seeking to leverage the ‘authority’ of a Minister or Department. If this is the generalised approach to assessing the legal implications of the trade agreements negotiated (at speed) by the UK post-Brexit, it does not bode well for the legal certainty required to promote international investments and commercial activities.

The reassurances in the letter are void of any weight, in my view. I can only hope that the Committee is not persuaded by the empty explanations contained in the letter either.



A tribute fit for a king -- some personal reflections after Steen Treumer's Mindeskrift

On 2 December 2022, the Faculty of Law at the University of Copenhagen hosted the conference ‘Into the Northern Light — In memory of Steen Treumer’ to celebrate his life and academic legacy on what would have been his 57th birthday. The conference was co-organised by Carina Risvig Hamer and Marta Andhov, who put together a tribute fit for a king. It was an exceptional event. Not only for the academic content of the presentations and the further papers in the tribute book (which you can buy here), but also because it provided an opportunity to learn more about Steen and his approach to academia. I have since been mulling over lots of things I heard on the day. This is a rather personal reflection on what knowing more about Steen’s life means for my aspirations as a senior academic (if you are interested, here are some earlier thoughts).

It is easy to idolize the academics that have been influential in your academic path to knowledge. And it is sometimes a bad idea to ‘meet your idols’, for great ideas are not always formulated or held by great people. In Steen’s case, however, knowing him was not only transformative, but also deeply inspirational. What most struck me at the conference is not only that all the stories and anecdotes that were shared rang true with my own experience of collaborating with Steen, but also that there was so much more that was exceptional in the person than in the academic, and that his personality and private life were an extension of his academic persona.

Steen embodied exceptional virtues as an academic role model. He was extremely clever, dedicated and curious. This led him to pioneer research and produce a wealth of knowledge that was ahead of the curve and had clear practical relevance and influence. It led him to have high standards and to always seek to engage in detailed discussions of complicated and controversial topics. It was said he was competitive and always keen on winning the argument. However, he was always approachable, accessible, respectful and never punched down. He was compassionate and kind. He was measured and knew how to be forceful without being aggressive. He was patient and listened twice as much as he spoke (for he never forgot that he had two ears and one mouth, as was stressed at the conference). He sought collaborations and nurtured relationships. He always played the long game. He was an enabler of others and took pride in that. He was extremely resilient and down to earth, and could control what others would have experienced as overwhelming emotions without losing hope or letting them derail his projects, even in the face of the greatest adversity. And this is not an exhaustive list of his virtues.

Sitting there, witnessing the love for Steen and the sadness at his unjustifiably early departure, and reflecting on all this, I realized that I am now roughly the same age Steen was when I first met him in 2009. And I hold a roughly comparable academic position. However, I am so far from having developed the skills and the approach he already had back then that I feel rather inadequate in many aspects of my role. I won’t list my shortcomings (too long a laundry list, best dealt with in private), but the one I keep thinking about is my limited humility (or rather, my egotism and pride) and my conflation of forceful or passionate arguing with aggressive attitudes. I am increasingly aware that over the years I have probably offended more than one fellow academic (at conferences, in this blog) and that some of my views could have been presented more kindly without detracting from the academic judgement underlying them. For that, I can only offer an unreserved apology. And commit to trying my best to change my attitude, be more humble and, dare I say, try to be a little more like Steen.

If I have any chance of success, it is because of the role model Steen offered (which aligns with the core values and attitudes of other role models I still benefit from) and the unwavering support I receive from many colleagues, but especially the core of my academic collaborators and friends at the European Procurement Law Group: Roberto Caranta, Kirsi-Maria Halonen, Carina Risvig Hamer, and Pedro Telles. Seeing them again, after 3 or 4 years apart, made me far happier than I could have anticipated. And this reminded me both of the joys of belonging to a community and the duty to foster the right ways of engagement for such a community to thrive. I won’t forget this again.