Protecting procurement's AI gatekeeping role in domestic law, and trade agreements? -- re Irion (2022)


The increasing recognition of the role of procurement as AI gatekeeper, or even as AI (pseudo)regulator, is quickly gathering momentum and leading to proposals to enshrine it in domestic legislation. For example, in the Parliamentary process of the UK’s 2022 Procurement Bill, an interesting amendment has surfaced. The proposal by Lord Clement-Jones would see the introduction of the following clause:

Procurement principles: automated decision-making and data ethics

In carrying out a procurement, a contracting authority must ensure the safe, sustainable and ethical use of automated or algorithmic decision-making systems and the responsible and ethical use of data.

The purpose of the clause would be to ensure ‘that the ethical use of automated decision-making and data is taken into account when carrying out a procurement.’ This is an interesting proposal that would put the procuring entity, even if not the future user of the AI (?), in the legally-mandated position of custodian or gatekeeper for trustworthy AI—which, of course, depending on future interpretation could be construed narrowly or expansively (e.g. whether to limit it to automated decision-making, or to extend it to algorithmic decision-support systems?).

This would go beyond current regulatory approaches in the UK, where this gatekeeping position arises from soft law, such as the 2020 Guidelines for AI procurement. It would probably require significant additional guidance on how this role is to be operationalised, presumably through algorithmic impact assessments and/or other forms of ex ante intervention, such as the imposition of (standard) requirements in the contracts for AI procurement, or even ex ante algorithmic audits.

These requirements would be in line with influential academic proposals [e.g. M Martini, ‘Regulating Algorithms. How to Demystify the Alchemy of Code?’ in M Ebers & S Navas, Algorithms and Law (CUP 2020) 100, 115, and 120-22], and would largely map onto voluntary compliance with the EU AI Act’s requirements for high-risk AI uses (which is the approach also currently followed in the proposal for standard contractual clauses for the procurement of AI by public organisations being developed under the auspices of the European Commission).

One of the key practical considerations for a contracting authority to be able to discharge this gatekeeping role (amongst many others concerning expertise, time, regulatory capture, etc) is access to source code (also discussed here). Without access to the source code, the contracting authority can barely understand the workings of the (to be procured) algorithms. Therefore, it is necessary to preserve the possibility of demanding access to source code for all purposes related to the procurement (and future re-procurement) of AI (and other software).

From this perspective, it is interesting to take a look at current developments in the protection of source code at the level of international trade regulation. An interesting paper coming out of the ongoing FAccT conference addresses precisely this issue: K Irion, ‘Algorithms Off-limits? If digital trade law restricts access to source code of software then accountability will suffer’ (2022) FAccT proceedings 1561-70.

Irion’s paper provides a good overview of the global efforts to protect source code in the context of trade regulation, maps how free trade agreements are increasingly used to construct an additional layer of protection for software source code (primarily from forced technology transfer), and rightly points to the risks of regulatory lock-out or pre-emption depending on the extent to which source code confidentiality is pierced for a range of public interest cases.

What is most interesting for the purposes of our discussion is that source code protection is not absolute, but explicitly deactivated in the context of public procurement in all emerging treaties (ibid, 1564-65). Generally, the treaties either do not prohibit, or have an explicit exception for, source code transfers in the context of commercially negotiated contracts—which can in principle include contracts with the public sector (although the requirement for negotiation could be a hurdle in some scenarios). More clearly, under what can be labelled as the ‘EU approach’, there is an explicit carve-out for ‘the voluntary transfer of or granting of access to source code for instance in the context of government procurement’ (see Article 8.73 EU-Japan EPA; similarly, Article 207 EU–UK TCA; and Article 9 EU-Mexico Agreement in principle). This means that the EU (and other major jurisdictions) are very clear in their (intentional?) approach to preserving the gatekeeping role of procurement by enabling contracting authorities to require access to software source code.

Conversely, the set of exceptions generally emerging in source code protection via trade regulation can be seen as insufficient to ensure high levels of algorithmic governance resulting from general rules imposing ex ante interventions. Indeed, Irion argues that ‘Legislation that mandates conformity assessments, certification schemes or standardized APIs would be inconsistent with the protection of software source code inside trade law’ (ibid, 1564). This is debatable, as a less limiting interpretation of the relevant exceptions seems possible, in particular as they concern disclosure for regulatory examination (with the devil, of course, being in the detail of what is considered a regulatory body and how ex ante interventions are regulated in a particular jurisdiction).

If this stringent understanding (whereby mandating regulatory compliance is seen as a violation of the general prohibition on source code disclosure as a condition of its ‘tradability’ in a specific jurisdiction) becomes the prevailing interpretation of the relevant FTAs, and regulatory interventions are thus constrained to ex post case-by-case investigations, it is easy to see how the procurement-related exceptions will become an (even more important) conduit for ex ante access to software source code for regulatory purposes, in particular where the AI is to be deployed in the context of public sector activity.

This is thus an interesting area of digital trade regulation to keep an eye on. And, more generally, it will be important to make sure that the AI gatekeeping role assigned to the procurement function is aligned with international obligations resulting from trade liberalisation treaties—which would require a general propagation of the ‘EU approach’ to explicitly carving out procurement-related disclosures.

Public procurement and [AI] source code transparency, a (downstream) competition issue (re C-796/18)

Two years ago, in its Judgment of 28 May 2020 in case C-796/18, Informatikgesellschaft für Software-Entwicklung, EU:C:2020:395 (the ‘ISE case’), the Court of Justice of the European Union (CJEU) answered a request for a preliminary ruling that can have very significant impacts in the artificial intelligence (AI) space, despite it being concerned with ‘old school’ software. More generally, the ISE case set the requirements to ensure that a contracting authority does not artificially distort competition for public contracts concerning (downstream) software services generally, and I argue AI services in particular.

The case risks going unnoticed because it concerned a relatively under-discussed form of self-organisation by the public administration that is exempted from the procurement rules (i.e. public-public cooperation; on that dimension of the case, see W Janssen, ‘Article 12’ in R Caranta and A Sanchez-Graells, European Public Procurement. Commentary on Directive 2014/24/EU (EE 2021) 12.57 and ff). It is thus worth revisiting the case and considering how it squares with regulatory developments concerning the procurement of AI, such as the development of standard clauses under the auspices of the European Commission.

The relevant part of the ISE case

In the ISE case, one of the issues at stake concerned whether a contracting authority would be putting an economic operator (i.e. the software developer) in a position of advantage vis-à-vis its competitors by accepting the transfer of software free of charge from another contracting authority, conditional on undertaking to further develop that software and to share (also free of charge) those developments of the software with the entity from which it had received it.

The argument would be that by simply accepting the software, the receiving contracting authority would be advantaging the software publisher because ‘in practice, the contracts for the adaptation, maintenance and development of the base software are reserved exclusively for the software publisher since its development requires not only the source code for the software but also other knowledge relating to the development of the source code’ (C-796/18, para 73).

This is an important issue because it primarily concerns how to deal with incumbency (and IP) advantages in software-related procurement. The CJEU, in the context of the exemption for public-public cooperation regulated in Article 12 of Directive 2014/24/EU, established that

in order to ensure compliance with the principles of public procurement set out in Article 18 of Directive 2014/24 … first [the collaborating contracting authorities must] have the source code for the … software, second, that, in the event that they organise a public procurement procedure for the maintenance, adaptation or development of that software, those contracting authorities communicate that source code to potential candidates and tenderers and, third, that access to that source code is in itself a sufficient guarantee that economic operators interested in the award of the contract in question are treated in a transparent manner, equally and without discrimination (para 75).

Functionally, there is no reason to limit that three-pronged test to the specific context of public-public cooperation and, in my view, the CJEU position is generalisable as the relevant test to ensure that there is no artificial narrowing of competition in the tendering of software contracts due to incumbency advantage.

Implications of the ISE case

What this means is that, functionally, contracting authorities are under an obligation to ensure that they have access and dissemination rights over the source code, at the very least for the purposes of re-tendering the contract, or tendering ancillary contracts. More generally, they also need to have a sufficient understanding of the software — or technical documentation enabling that knowledge — so that they can share it with potential tenderers and in that manner ensure that competition is not artificially distorted.

All of this is of high relevance and importance in the context of emerging practices of AI procurement. The debates around AI transparency are in large part driven by issues of commercial opacity/protection of business secrets, in particular of the source code, which both makes it difficult to justify the deployment of the AI in the public sector (for, let’s call them, due process and governance reasons demanding explainability) and also to manage its procurement and its propagation within the public sector (e.g. as a result of initiatives such as ‘buy once, use many times’ or collaborative and joint approaches to the procurement of AI, which are seen as strategically significant).

While there is a movement towards requiring source code transparency (e.g., though not necessarily, by using open source solutions), this is not at all mainstreamed in policy-making. For example, the pilot UK algorithmic transparency standard does not mention source code. Short of future rules demanding source code transparency, which seem unlikely (see e.g. the approach in the proposed EU AI Act, Art 70), this issue will remain one for contractual regulation and negotiations. And contracts are likely to follow the approach of the general rules.

For example, in the proposal for standard contractual clauses for the procurement of AI by public organisations being developed under the auspices of the European Commission and on the basis of the experience of the City of Amsterdam, access to source code is presented as an optional contractual requirement on transparency (Art 6):

<optional> Without prejudice to Article 4, the obligations referred to in article 6.2 and article 6.3 [on assistance to explain an AI-generated decision] include the source code of the AI System, the technical specifications used in developing the AI System, the Data Sets, technical information on how the Data Sets used in developing the AI System were obtained and edited, information on the method of development used and the development process undertaken, substantiation of the choice for a particular model and its parameters, and information on the performance of the AI System.

For the reasons above, I would argue that a clause such as that one is not at all voluntary, but a basic requirement in the procurement of AI if the contracting authority is to be able to legally discharge its obligations under EU public procurement law going forward. And given the uncertainty on the future development, integration or replacement of AI solutions at the time of procuring them, this seems an unavoidable issue in all cases of AI procurement.

Let’s see if the CJEU is confronted with a similar issue, or the need to ascertain the value of access to data as ‘pecuniary interest’ (which I think, on the basis of a different part of the ISE case, is clearly to be answered in the affirmative) any time soon.

Procurement recommenders: a response by the author (García Rodríguez)

It has been refreshing to receive a detailed response by the lead author of one of the papers I recently discussed in the blog (see here). Big thanks to Manuel García Rodríguez for following up and for his frank and constructive engagement. His full comments are below. I think they will help round out the discussion on the potential, constraints and data-dependency of procurement recommender systems.

Thank you, Prof. Sánchez Graells, for your comments; it has been rewarding reading. Below I present my point of view, to continue the in-depth look at the topic.

Regarding the percentage of success of the recommender, a major initial consideration is that the recommender is generic. That is, it is not restricted to a type of contract, CPV codes, geographical area, etc. It is a recommender that uses all types of Spanish tenders, from any CPV and over 6 years (see table 3). This greatly influences the percentage of success because it is the most difficult scenario. An easier scenario would, for example, have restricted the search engine to certain geographic areas or CPVs. In addition, 102,000 tenders were used in this study and, presumably, they are not enough for a search engine which learns business behaviour patterns from historical tenders (more tenders could not be used due to poor data quality).

Regarding the comment that ‘the recommender is an effective tool for society because it enables and increases the bidders participation in tenders with less effort and resources’: with this phrase we mean that the Administration can have an assistant to encourage participation (in those tenders which are negotiations with or without prior notice) or, even, one with which civil servants actively search for companies and inform those companies directly. I do not know if the public contracting laws of European countries allow actively searching for and directly informing companies, but it would be the most efficient and reasonable approach. On the other hand, a good recommender (one that has a high percentage of accuracy) can be an analytical tool for contracting authorities to evaluate the level of competition. That is, if the tenders of a contracting authority attract very little competition but the recommender finds many potential participating companies, it means that the contracting authority can make its tenders more attractive to the market.

Regarding the comment that “It is also notable that the information of the Companies Register is itself not (and probably cannot be, period) checked or validated, despite the fact that most of it is simply based on self-declarations.” The information in the Spanish Business Register is the annual accounts of the companies, audited by an external entity. I do not know the auditing laws of the different countries. Therefore, I think that the reliability of the data used in our article is quite high.

Regarding the first problematic aspect that you indicate: “The first one is that the recommender seems by design incapable of comparing the functional capabilities of companies with very different structural characteristics, unless the parameters for the filtering are given such range that the basket of recommendations approaches four digits”. There will always be the difficulty of comparing companies and defining when they are similar. That analysis should be done by economists; engineers can contribute little. There is also the limitation of business data: the information in Business Registers is usually paywalled and limited to certain fields, as is the case with the Spanish Business Register. For these reasons, we recognise in the article that it is a basic approach, and that the filters/rules should be modified in the future: “Creating this profile to search similar companies is a very complex issue, which has been simplified. For this reason, the searching phase (3) has basic filters or rules. Moreover, it is possible to modify or add other filters according to the available company dataset used in the aggregation phase”.

Regarding the second problematic aspect that you indicate: “The second issue is that a recommender such as this one seems quite vulnerable to the risk of perpetuating and exacerbating incumbency advantages, and/or of consolidating geographical market fragmentation (given the importance of eg distance, which cannot generate the expected impact on eg costs in all industries, and can increasingly be entirely irrelevant in the context of digital/remote delivery).” This will not happen in the medium and long term because the recommender will adapt to market conditions. If there are companies that win bids far away, the algorithm will include that new distance range in its search. It will always be based on the historical winning companies (and on the rest of the bidders, if we have that information). You cannot ask a machine learning algorithm (the one used in this article) to make predictions that are not based on the previous winners and historical market patterns.

I totally agree with your final comment: “It would in my view be preferable to start by designing the recommender system in a way that makes theoretical sense and then make sure that the required data architecture exists or is created.” Unfortunately, I did not find any articles that discuss this topic. Lawyers, economists and engineers must work together to propose solid architectures. In this article we want to convince stakeholders that it is possible to create software tools such as a bidder recommender, and of the importance for its development of public procurement data and of the company data held in Business Registers.

Thank you for your critical review. Different approaches are needed to make progress on the important topic of public procurement.

The importance of procurement for public sector AI uptake

In case there was any question on the importance and central role of public procurement for the uptake of artificial intelligence (AI) by the public sector (there wasn’t, though), two recent policy reports confirm that this is the case, at the very least in the European context.

AI Watch’s must-read ‘European landscape on the use of Artificial Intelligence by the Public Sector’ (1 June 2022) makes the point very clearly by reference to the analysis of AI strategies adopted by 24 EU Member States: ‘the procurement of AI technologies or the increased collaboration with innovative private partners is seen as an important way to facilitate the introduction of AI within the public sector. Guidance on how to stimulate and organise AI procurement by civil servants should potentially be strengthened and shared among Member States’ (at 26). Concerning guidance, the report refers to the European Commission’s supported process of developing standard contractual clauses for the procurement of AI (see here), and there is also a twin AI Watch Handbook for the adoption of AI by the public sector (25 May 2022) that includes a recommendation on procurement guidance (‘Promote the development of multilingual guidelines, criteria and tools for public procurement of AI solutions in the public sector throughout Europe’, recommendation 2.5, and details at 34-35).

The European landscape report provides some more interesting detail on national strategies considering AI procurement adaptations.

The need to work together with the private sector in this area is repeatedly stressed. However, strategies mention that historically it has been difficult for innovative companies to work together with government authorities due to cumbersome procurement regulations. In this area, several strategies (12, 50%) [though note the table below indicates 13, rather than 12 strategies] come up with new policy initiatives to improve the procurement processes. The Spanish strategy, for example, mentions that new innovative public procurement mechanisms will be introduced to help the procurement of new solutions from the market, while the Maltese government describes how existing public procurement processes will be changed to facilitate the procurement of emerging technologies such as AI. The Dutch and Czech strategies mention that hackathons for public sector AI will be introduced to assist in the procurement of AI. Civil servants will be given training and awareness in procurement to assist them in this process, something that is highlighted in the Estonian strategy. The French strategy stresses that current procurement regulation already provides a lot of freedom for innovative procurement but that because of risk aversion present within public administrations all possibilities are not taken into consideration (at 25-26, emphasis in the original).

[Figure: own elaboration, based on Table 7 in the AI Watch report.]

There is also an interesting point on the need to create internal public sector AI capabilities: “Some strategies say that the public organisations should work more together with private organisations (where the missing skillsets are present), either through partnerships or by procurement. On the one hand, this is an extremely important and promising shift in the public sector that more and more must move towards a networking perspective. In fact, the complexity and variety of skills required by AI cannot be always completely internalised. On the other hand, such partnerships and procurement still require a baseline in expertise in AI within the public sector staff to avoid common mistakes or dependency on external parties” (at 23, emphasis added).

Given the strategic importance of procurement, as well as the need to upskill the procurement workforce and to build additional AI capacity in the public sector to manage procurement processes, this is an area of research and policy that will only increase in relevance in the near and longer term.

This same direction of travel is reflected in the also recent UK Central Digital and Data Office ‘Transforming for a digital future: 2022 to 2025 roadmap for digital and data’ (9 June 2022). One of its main aspirations is to generate ‘Significant savings by leveraging government’s combined purchasing power and reducing duplicative procurement, to shift to a “buy once, use many times” approach to technology’. This should be achieved by the horizontal promotion of ‘a “buy once, use many times” approach to technology, including by making use of a common code, pattern and architecture repository for government’. Implicitly, this will also require a review of procurement policies and practices.

Importantly—and potentially problematically—it will also raise the stakes of AI procurement, in particular if the roll-out of the ‘bought once’ technology is rushed and its negative impacts or implications can only be identified once it has already been propagated, or in relation to some implementations only. Avoiding this will require very careful AI impact assessments, as well as piloting and scalability approaches that have strong risk-management systems embedded by design.

As always, this will be an area fun to keep an eye on.

Procurement recommender systems: how much better before we trust them? -- re García Rodríguez et al (2020)


How great would it be for a public buyer if an algorithm could identify the likely best bidder/s for a contract it sought to award? Pretty great, agreed.

For example, it would allow targeted advertising of public procurement opportunities, or targeted engagement, to make sure those ‘best suited’ bidders came forward, or to start negotiations where this is allowed. It could also enable oversight bodies, such as competition authorities, to screen for odd (anti)competitive situations where well-placed providers did not bid for the contract, or only did so on worse than expected conditions. If the algorithm was flipped, it would also allow potential bidders to assess for which tenders they are particularly well suited (or not).

It is thus not surprising that there are commercial attempts being developed (eg here) and interesting research going on trying to develop such recommender systems—which, at root, work similarly to recommender systems used in e-commerce (Amazon) or digital content platforms (Netflix, Spotify), in the sense that they try to establish which of the potential providers are most likely to satisfy user needs.

An interesting paper

On this issue, on which there has been some research for at least a decade (see here), I found this paper interesting: García Rodríguez et al, ‘Bidders Recommender for Public Procurement Auctions Using Machine Learning: Data Analysis, Algorithm, and Case Study with Tenders from Spain’ (2020) Complexity Art 8858258.

The paper is interesting in the way it builds the recommender system. It follows three steps. First, an algorithm trained on past tenders is used to predict the winning bidder for a new tender, given some specific attributes of the contract to be awarded. Second, the predicted winning bidder is matched with its data in the Companies Register, so that a number of financial, workforce, technical and location attributes are linked to the prediction. Third and finally, the recommender system is used to identify companies similar to the predicted winner. Such identification is based on similarities with the added attributes of the predicted winner, which are subject to some basic filters or rules. In other words, the comparison is carried out at supplier level, not directly in relation to the object of the contract.
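To make that three-step flow concrete, a minimal sketch in Python might look as follows. This is only an illustration under assumed column names, features and model choices; it does not reproduce the paper's actual feature set or classifier:

```python
# A minimal, hypothetical sketch of the paper's three-step flow.
# Column names, features and models are illustrative assumptions only.
import pandas as pd
from sklearn.ensemble import RandomForestClassifier
from sklearn.neighbors import NearestNeighbors

# Step 1: predict the likely winner of a new tender from contract attributes.
tenders = pd.read_csv("tenders.csv")  # past tenders, incl. a 'winner_id' column
X = pd.get_dummies(tenders[["cpv_division", "region"]]).join(tenders[["budget"]])
clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, tenders["winner_id"])
predicted_winner = clf.predict(X.iloc[[-1]])[0]  # stand-in for an unseen tender

# Step 2: enrich the predicted winner with its Companies Register attributes.
register = pd.read_csv("companies_register.csv").set_index("company_id")
profile_cols = ["operating_income", "ebit", "ebitda", "employees"]
winner_profile = register.loc[[predicted_winner], profile_cols]

# Step 3: recommend the companies most similar to the predicted winner,
# comparing supplier attributes rather than the object of the contract.
nn = NearestNeighbors(n_neighbors=5).fit(register[profile_cols])
_, idx = nn.kneighbors(winner_profile)
recommended = register.index[idx[0]].tolist()
```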

Importantly, the filters used to sieve through the comparison need to be given numerical values, and that is done manually (i.e. they are set at rather arbitrary thresholds, which in relation to some categories, such as technical specialism, make little intuitive sense). This would in principle allow the user of the recommender system to tailor the parameters of the search for recommended bidders.

In the specific case study developed in the paper, the filters are:

  • Economic resources to finance the project (i.e. operating income, EBIT and EBITDA);

  • Human resources to do the work (i.e. number of employees);

  • Specialised work which the company can do (based on code classification: NACE2, IAE, SIC, and NAICS); and

  • Geographical distance between the company’s location and the tender’s location.

Notably, in the case study, distance ‘is a fundamental parameter. Intuitively, the proximity has business benefits such as lower costs’ (at 8).
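To illustrate how such hand-set rules could operate in practice, here is a hedged sketch. All thresholds and field names are hypothetical; the paper sets its own (equally manual) values:

```python
# Hand-set similarity filters in the spirit of the paper's searching phase.
# All thresholds and field names are hypothetical illustrations.
def passes_filters(candidate: dict, winner: dict, max_distance_km: float = 100.0) -> bool:
    """Keep a candidate only if it is 'similar enough' to the predicted winner."""
    def within(a: float, b: float, tol: float) -> bool:
        return abs(a - b) <= tol * abs(b)  # relative closeness test

    return (
        # Economic resources (operating income, EBITDA, ...): within +/-50%.
        within(candidate["operating_income"], winner["operating_income"], 0.5)
        and within(candidate["ebitda"], winner["ebitda"], 0.5)
        # Human resources: comparable headcount.
        and within(candidate["employees"], winner["employees"], 0.5)
        # Specialised work: same activity classification (e.g. NACE2 division).
        and candidate["nace2"] == winner["nace2"]
        # Geographical distance between company and tender locations.
        and candidate["distance_to_tender_km"] <= max_distance_km
    )

winner = {"operating_income": 2.0e6, "ebitda": 3.0e5, "employees": 40, "nace2": "62"}
candidates = [
    {"operating_income": 1.8e6, "ebitda": 2.5e5, "employees": 35,
     "nace2": "62", "distance_to_tender_km": 55.0},   # passes all filters
    {"operating_income": 9.0e6, "ebitda": 1.2e6, "employees": 300,
     "nace2": "62", "distance_to_tender_km": 20.0},   # fails the size filters
]
shortlist = [c for c in candidates if passes_filters(c, winner)]
```

The point of the sketch is that every ‘similarity’ judgment reduces to a hard-coded numerical choice, which is precisely the source of the arbitrariness discussed above.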

The key accuracy metric for the recommender system is whether it is capable of identifying the actual winner of a contract as the likely winning bidder or, failing that, whether it is capable of including the actual winner within a basket of recommended bidders. Based on the available Spanish data, the performance of the recommender system is rather meagre.

The poor results can be seen in the two scenarios developed in the paper. In scenario 1, the training and test data are split 80:20 and the 20% is selected randomly. In scenario 2, the data is also split 80:20, but the 20% test data is the most recent one. As the paper stresses, ‘the second scenario is more appropriate to test a real engine search’ (at 13), in particular because the use of the recommender will always be ‘for the next tender’ after the last one included in the relevant dataset.
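In code, the difference between the two scenarios boils down to how the 20% hold-out is selected; a short sketch (assuming the tender data carries an ‘award_date’ column):

```python
# Sketch of the two evaluation scenarios (column names are assumptions).
import pandas as pd
from sklearn.model_selection import train_test_split

tenders = pd.read_csv("tenders.csv", parse_dates=["award_date"])

# Scenario 1: random 80:20 split -- test tenders drawn from any point in time,
# so the model may 'peek' at market patterns contemporaneous with the test set.
train_1, test_1 = train_test_split(tenders, test_size=0.2, random_state=0)

# Scenario 2: temporal split -- train on the oldest 80%, test on the newest 20%,
# mirroring real use, where the recommender always faces 'the next tender'.
tenders = tenders.sort_values("award_date")
cutoff = int(len(tenders) * 0.8)
train_2, test_2 = tenders.iloc[:cutoff], tenders.iloc[cutoff:]
```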

For that more realistic scenario 2, the recommender has an accuracy of 10.25% in correctly identifying the actual winner, and this only rises to 23.12% if the recommendation includes a basket of five companies. Even for the more detached from reality scenario 1, the accuracy of a single prediction is only 17.07%, and this goes up to 31.58% for 5-company recommendations. The most accurate performance with larger baskets of recommended companies only reaches 38.52% in scenario 1, and 30.52% in scenario 2, although the much larger number of recommended companies (approximating 1,000) also massively dilutes the value of the information.
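These headline figures correspond to what is usually called top-k accuracy: a recommendation counts as a hit if the actual winner appears anywhere in the basket of k recommended companies. A self-contained sketch of that metric, with toy data rather than the paper's:

```python
# Top-k accuracy: the share of test tenders whose actual winner appears
# in the recommended basket. Toy data for illustration only.
def top_k_accuracy(baskets: list[list[str]], winners: list[str]) -> float:
    hits = sum(winner in basket for basket, winner in zip(baskets, winners))
    return hits / len(winners)

baskets = [["a", "b", "c", "d", "e"],   # actual winner 'c' is in the basket: hit
           ["f", "g", "h", "i", "j"]]   # actual winner 'z' is missing: miss
winners = ["c", "z"]
print(top_k_accuracy(baskets, winners))  # 0.5
```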

Comments

So, with the available information, the best performance of the recommender system creates about 1 in 10 chances of correctly identifying the most suitable provider, or 1 in 5 chances of having it included in a basket of 5 recommendations. Put the other way, the best performance of the realistic recommender is that it fails to identify the actual winner for a tender 9 out of 10 times, and it still fails 4 out of 5 times when it is given five chances.

I cannot say how this compares with non-automated searches based on looking at relevant company directories, other sources of industry intelligence or even the anecdotal experience of the public buyer, but these levels of accuracy could hardly justify the adoption of the recommender.

In that regard, the optimistic conclusion of the paper (‘the recommender is an effective tool for society because it enables and increases the bidders participation in tenders with less effort and resources’ at 17) is a little surprising.

The discussion of the limitations of the recommender system sheds some more light:

The main limitation of this research is inherent to the design of the recommender’s algorithm because it necessarily assumes that winning companies will behave as they behaved in the past. Companies and the market are living entities which are continuously changing. On the other hand, only the identity of the winning company is known in the Spanish tender dataset, not the rest of the bidders. Moreover, the fields of the company’s dataset are very limited. Therefore, there is little knowledge about the profile of other companies which applied for the tender. Maybe in other countries the rest of the bidders are known. It would be easy to adapt the bidder recommender to this more favourable situation (at 17).

The issue of the difficulty of capturing dynamic behaviour is well put. However, there are more problems (below) and the issue of disclosure of other participants in the tender is not straightforwardly to the benefit of a more accurate recommender system, unless there was not only disclosure of other bidders but also of the full evaluations of their tenders, which is an unlikely scenario in practice.

There is also the unaddressed issue of whether it makes sense to compare the specific attributes selected in the study, which it mostly does not; the selection is driven by the available data rather than by theoretical fit.

What is ultimately clear from the paper is that the data required for the development of a useful recommender is simply not there, either at all or with sufficient quality.

For example, it is notable that due to data quality issues, the database of past tenders shrinks from 612,090 recorded tenders to 110,987 usable ones, which further shrink to 102,087 because of difficulties in matching the tender information with the Companies Register.
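The paper does not spell out each cleaning rule, but the kind of attrition reported typically arises from steps along these (purely illustrative) lines:

```python
# Hypothetical sketch of a data-quality funnel of the kind behind the
# reported shrinkage (612,090 -> 110,987 -> 102,087); the paper's actual
# rules and column names may differ.
import pandas as pd

tenders = pd.read_csv("tenders_raw.csv")                     # 612,090 records (reported)

# Drop tenders with missing or implausible core fields.
tenders = tenders.dropna(subset=["winner_id", "cpv", "amount"])
tenders = tenders[tenders["amount"] > 0]                     # -> ~110,987 usable (reported)

# Keep only tenders whose winner can be matched in the Companies Register.
register = pd.read_csv("companies_register.csv")
tenders = tenders.merge(register, left_on="winner_id",
                        right_on="company_id", how="inner")  # -> ~102,087 matched (reported)
```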

It is also notable that the information of the Companies Register is itself not (and probably cannot be, period) checked or validated, despite the fact that most of it is simply based on self-declarations. There is also an issue with the lag with which information is included and updated in the Companies Register—e.g. under Spanish law, company accounts for 2021 will only have to be registered over the summer of 2022, which means that a use of the recommender in late 2022 would be relying on information that is already a year old (as the paper itself hints, at 14).

And I also have the inkling that recommender systems such as this one would be problematic in at least two aspects, even if all the necessary data was available.

The first one is that the recommender seems by design incapable of comparing the functional capabilities of companies with very different structural characteristics, unless the parameters for the filtering are given such range that the basket of recommendations approaches four digits. For example, even if two companies were the closest ones in terms of their specialist technical competence (even if captured only by the very coarse and in themselves problematic codes used in the model)—which seems to be the best proxy for identifying suitability to satisfy the functional needs of the public buyer—they could significantly differ in everything else, especially if one of them is a start-up. Whether the recommender would put both in the same basket (of a useful size) is an empirical question, but it seems extremely unlikely.

The second issue is that a recommender such as this one seems quite vulnerable to the risk of perpetuating and exacerbating incumbency advantages, and/or of consolidating geographical market fragmentation (given the importance of eg distance, which cannot generate the expected impact on eg costs in all industries, and can increasingly be entirely irrelevant in the context of digital/remote delivery).

So, all in all, it seems like the development of recommender systems needs to be flipped on its head if data availability is driving design. It would in my view be preferable to start by designing the recommender system in a way that makes theoretical sense and then make sure that the required data architecture exists or is created. Otherwise, the adoption of suboptimal recommender systems would not only likely generate significant issues of technical debt (for a thorough warning, see Sculley et al, ‘Hidden Technical Debt in Machine Learning Systems’ (2015)), but also risk significantly worsening the quality (and effectiveness) of procurement decision-making. And any poor implementation in ‘real life’ would deal a severe blow to the prospects of sustainable adoption of digital technologies to support procurement decision-making.