This guest post by Giuseppe Bitti* explores the broad policy approaches that can be followed to slow down AI adoption by the public sector, with a focus on risk mitigation and management.
It was submitted to the ‘a book for a blog’ competition and has won Giuseppe a copy that will soon be in the post.
Please note Giuseppe’s disclaimer that “The views expressed are those of the author and do not necessarily reflect those of the European Central Bank”.
Public procurement of AI: between opportunities and risks
Introduction
In the past year, the hype around AI and machine learning has grown considerably. The thesis that most white-collar tasks will be automated within a few years (or even months!) seems, for many, to have been exempted from any requirement of empirical demonstration, morphing instead into an article of faith.
IT providers are naturally keen to promote their AI solutions – claiming they can cover most of their customers’ needs – and AI and machine learning are often conflated with standard digitalisation, adding to the confusion. Furthermore, this new AI ‘gold rush’ may lead to uncritical applications of AI tools to unsuitable use cases – situations where they might do more harm than good – driven by customers’ fear of missing out (FOMO).
While the potential benefits are clearly appealing and AI will most likely have a strong impact on how many white-collar tasks are performed – potentially revolutionising several of them – it would be wise to take this hype with a pinch of salt and proceed cautiously, especially in areas which are particularly sensitive, such as the exercise of public tasks and powers, and the management of public money and resources (including data).
As is well known, the EU has taken action along these lines and has (again) led the way worldwide with the first general legal framework for AI, defining, inter alia, categories of tasks for which AI can be used freely; used subject to light requirements (e.g. transparency for image manipulation); used subject to substantial requirements (e.g. for law enforcement and essential public services); or not used at all (e.g. facial recognition via CCTV footage scraping).
However, besides general legislation, the aim of a measured adoption of AI by the public sector can also be pursued via other, more specific means, including public procurement – especially considering that procurement will likely be the main ‘entrance gate’ for AI tools into the public sector, except for those (very) few public administrations large (and deep-pocketed) enough to develop their own in-house solutions.
In this regard, it is noteworthy that the AI Act entrusts the Commission (and specifically the EU’s AI Office) with the task of “evaluating and promoting the convergence of best practices in public procurement procedures in relation to AI systems” [Article 62(3)(d) EU AI Act].
At first glance, it might not seem easy to use public procurement as a tool to delay other public sector activities (namely, the adoption of AI tools). Public procurement is generally meant to be a tool for enhancing – not hindering – the pursuit of public sector activities.
However, it is also true that public procurement can – and probably should – be a tool for safeguarding and promoting interests beyond the traditional ones. A good example is the use of public procurement to promote sustainability objectives, even to the detriment of traditional aims such as ensuring the best value for money in a strictly financial sense.
Hence, we could also frame the task of using public procurement to slow down AI adoption by the public sector as a contribution of public procurement policy to the (desirable) aim of a cautious and thought-through adoption of AI tools in the public sector, even if to the detriment of the alternative aim of a swift – but risky – reaping of the appealing benefits offered by AI tools.
Some options to use public procurement to slow down AI adoption in the public sector
As mentioned above, public procurement should generally foster the action of public administrations. However, it might also be used to steer that action towards complementary objectives. If so, what is to be done?
On a (trite) humorous note: a first idea to slow any task down would be to leave its implementation to a cumbersome public tender procedure.
Dull jokes aside, I think several options can be identified; they are presented below in order of ‘invasiveness’.
Ban or Moratorium
A first, draconian idea would be to (temporarily) prohibit contractors from relying on AI tools for the provision of services to the contracting authority. This could be difficult to implement, especially as an increasing number of suppliers do rely – and will rely even more – on AI solutions for all their clients.
Still, public procurement accounts for approx. 14% of the EU’s GDP, so the leverage for strong suasion is there.
The first, direct result of this measure would be – self-evidently – to slow down the explicit use of AI solutions by the public sector.
However, such a measure would also likely contribute to an indirect slowing down of general AI adoption, by decreasing the providers’ overall economies of scale in adopting AI solutions.
A supplier might need to think twice about whether it makes sense to change its delivery model to include AI tools if this would benefit only its private clients, while it would need to retain its current non-AI-based delivery model for its public sector portfolio.
AI-keen suppliers might not bid for public procurement opportunities, losing an important revenue stream, while other contractors might be forced to maintain two parallel delivery models, making AI adoption more costly due to reduced economies of scale.
Suppliers working predominantly or exclusively for public sector clients might decide to delay the adoption of any AI solution, as it would expose them to more risks (e.g. exclusion from the procurement procedure, contract termination) than potential benefits (AI would confer no edge in procurement procedures, as competitors would equally be prevented from using it).
Neutralising Risks
A second idea would be to follow a similar logic to that sometimes applied in sustainable procurement and focus on neutralising the main risks of AI.
For example, if the main concern is falling for the hype of an appealing solution that promises extraordinary results but also exposes the buyer to considerable financial risk due to possible mistakes made by the tool, such risk could be partly tackled via tailored tender specifications.
These could, for example, provide for penalties for gross (or even minor) errors made by the offered solution. To avoid disputes over quantification later on, such liability could be precisely stipulated and agreed in the form of a service level agreement setting fixed amounts per category of issue/error.
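To illustrate with purely hypothetical figures: a contract for an AI-assisted document-screening service could stipulate, say, EUR 500 per document wrongly flagged (a minor error) and EUR 5,000 per relevant document missed (a gross error), possibly capped at a share of the annual contract value. Fixing the amounts per category of error in advance spares both parties protracted disputes over the quantification of damages once errors materialise.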
This idea could also be complemented by a requirement – at selection stage – for the contractor to provide a financial guarantee (e.g. insurance).
This would, first, help ensure the solvency of the contractor (which might not be a large AI player, but a simple software reseller, or a company relying on AI provided by a subcontractor).
Secondly, it would require the involvement of a third-party guarantor/insurer, thereby increasing the level of scrutiny of the precise functioning (and risks) of the AI solution, and generally increasing the cost and complexity of providing it – especially where a ‘normal’ supplier (e.g. a provider of facility management services) uses AI tools supplied by a subcontractor.
Weighting Risks
A third idea, similar to the previous one, would be to focus on weighting the main risks presented by the AI tool by increasing their relevance in the technical evaluation of the offers.
Of course, assessing the risks of the proposed solutions is already standard practice in all procurement processes, especially for IT tools.
However, the weighting of such a criterion could be raised beyond the usual standards to increase the relevance of the risks presented by AI solutions vis-à-vis other technologies. As mentioned, in many cases recourse to AI is a possible, but not a necessary, solution to a genuine need for digitalisation.
An appropriate weighting of the risk factor could help to separate the two (an actual vs a merely potential need for AI), as the illustrative example below shows. Suppliers might spontaneously decide not to offer an AI tool if doing so exposed them to the unnecessary risk of a negative assessment.
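A purely illustrative example (all figures hypothetical): suppose the technical evaluation awards up to 40 points for functionality, and the weight of the risk criterion is raised from the usual 10 points to 30. An AI-based offer scoring 38 on functionality but poorly on risk (9/30, equivalent to 3/10 under the lower weighting) would narrowly beat a conventional offer scoring 30 on functionality and 27/30 on risk under the usual weighting (41 vs 39 points), but would clearly lose under the enhanced one (47 vs 57 points). Faced with such a weighting, a supplier whose AI component adds little genuine value has a clear incentive to offer a conventional solution instead.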
Complementary measures to risk neutralisation and risk weighting
These options should be complemented by additional safeguards, which might also result in a slowdown of AI adoption.
Whether the focus is on risk neutralisation or on risk weighting, the proposed solution must be thoroughly evaluated as part of the offers in the context of the procurement process. This could ideally be done via targeted functionality tests (e.g. a proof of concept).
However, a known difficulty in evaluating algorithmic tools is the large variety of use cases to which AI can be applied. The contracting authority should therefore be precise in identifying the specific use cases for the tool, so as to allow for a meaningful evaluation.
Precisely defined benchmarks and use cases will also help with the auditing of the tool at a later stage. Recurrent auditing is particularly important, as the performance of the tool will evolve due to its capacity to learn (and sometimes possibly even to detect that it is being tested).
Furthermore, in any case, both the technical and the governance dimensions should be considered: the assessment should not be limited to the technical functionalities of the solution or its capacity to perform in a series of targeted tests, but should also cover the adequacy of the control and quality management mechanisms implemented by the provider.
The contracting authority should also ensure that the contractor remains responsible and accountable for the output of its AI solution: the risk of a public administration relying on a tool which is too complex to understand – both for the customer and for the contractor providing it, especially when it is provided via subcontracting – is too large to be neglected. Besides penalties, contractors should remain liable for damages caused by the solutions they provide.
Finally, specific requirements should limit the geographical scope within which the AI tool provider is allowed to process data, especially personal and confidential data.
Risks of a slowdown in AI adoption in the public sector
The aim of using public procurement to slow down AI adoption in the public sector has several merits, as mentioned above, but it is not risk-free. This is especially true of the option of a general ban/moratorium on the procurement of AI tools (Option 1).
The main risk is that service provision by the public administration may remain ‘stuck in the past’, i.e. relying on outdated technologies and processes. In the long run this would become a structural weakness in the capacity of the public sector to deliver on its mandate and stay competitive.
Furthermore, a considerably more efficient private sector – thanks to adequate AI adoption – would naturally strengthen the case for outsourcing those public tasks which could benefit from such an advantage, with all the known risks related to loss of control by the public sector.
Finally, being a market player – as an AI customer – also allows the public sector to contribute to the development of both the market and the products themselves. Withdrawing completely – e.g. via a moratorium – would result in an AI market whose development is neither shaped nor affected by the needs and concerns of the public sector, inevitably leading to a later need to use – or laboriously adapt – solutions originally developed for other kinds of clients, often with very different needs.
In conclusion, for most tasks performed by the public sector, the best options would likely be those aimed at neutralising (Option 2) or weighting (Option 3) the actual risks of AI, rather than prohibiting it altogether.
However, this is no one-size-fits-all matter: several core tasks performed by the public sector where the AI-related risks exceed the respective benefits would probably be better served by a freeze until additional safeguards can be deployed. This could be the case, for example, for AI tools contributing to key administrative decision-making (e.g. issuing a key licence/permit or awarding a sensitive contract).
Ultimately, it might be preferable to perform some tasks in a way which is less efficient, yet also less risky and possibly more ethically acceptable. Furthermore, for such tasks there would be fewer (or no) concerns about public sector competitiveness, given the lack of competition.
However, in such cases the best-fitting measure would be a moratorium on AI adoption via general legislation along the lines of the AI Act, rather than via public procurement policy.
Giuseppe Bitti
Giuseppe Bitti is a Procurement Expert in the Directorate Finance of the European Central Bank (ECB). Before joining the ECB, Giuseppe was Adjunct Lecturer of European Economic Law at the Faculty of Law of the University of Hamburg. His research interests include public procurement law and governance, AI procurement, and the impact of AI on public procurement processes. You can connect with him on LinkedIn.