UN WG on business and human rights' report on AI procurement -- key findings and recommendations

Last week, the UN working group on business and human rights officially presented its thematic report on the procurement and deployment of artificial intelligence systems by States and businesses (A/HRC/59/53, 14 May 2025 — note there is also an executive summary infographic).

The report focuses on actions to be taken to facilitate alignment of AI procurement and deployment with the UN’s Guiding Principles on Business and Human Rights and addresses organisations procuring rather than developing AI. The report approaches procurement in broad terms by encompassing both public and private procurement, and by taking into account the position and responsibilities of States, business and stakeholders. The report contains a series of findings and recommendations.

Findings on the regulatory landscape

One of the report’s key findings is that ‘States are increasingly shifting from voluntary guidelines to binding legislation on AI and human rights, such as through the European Union AI Act and Council of Europe AI Convention. However, there are significant gaps in terms of rights-respecting procurement and deployment of AI systems, including a lack of a human rights-based approach, no consensus on key definitions, insufficient integration of the perspective of the Global South, the provision of broad exceptions and limited involvement of civil society. Further, enforcement gaps and loopholes are weakening human rights protections in existing legislation on AI and human rights.’ This requires a closer look.

The report highlights that ‘Globally, there are over 1,000 AI-related standards and over 50 AI governance initiatives based on ethics, responsibility or safety principles’. Although unsurprising, I find this interesting, as it speaks to the fragmentation and duplication of regulatory efforts that create a complex landscape. Given the repeated recognition that AI challenges transcend borders and the calls for international collaboration (eg here and here), there is clearly a gap still to be addressed.

In that regard, the report stresses that ‘The lack of consensus on key concepts such as “AI” and “ethics” is leading to inconsistencies in the regulation of AI systems and is particularly problematic given the transnational nature of AI’, and highlights UNESCO’s Recommendation on the Ethics of Artificial Intelligence as the sort of document that could be used as a blueprint to promote policy coherence across jurisdictions.

Although the report identifies a recent shift from voluntary guidelines to legally binding rules for AI systems, such as the EU AI Act or the Council of Europe Framework Convention on AI, it also highlights that ‘there is still uncertainty regarding how to address certain loopholes in the EU AI Act’ and that the Framework Convention creates similar challenges, both in relation to the significant exemptions it contains and in the way it gives signatory States discretion to set its scope of application. Although the report does not take an explicit position on this, I think it takes only a small step to conclude that legislative action needs to be far more decisive if the challenge of upholding human rights and fundamental values in AI deployment is to be met.

Another key finding of the report is that ‘States are largely procuring and deploying AI systems without adequate safeguards, such as conducting human rights impact assessments as part of human rights due diligence (HRDD), leading to human rights impacts across the public sector, including in relation to healthcare, social protection, financial services, taxation, and others.’ This results from the still limited and emerging approaches to regulating AI procurement.

Indeed, focusing on the regulation of AI public procurement, the report highlights a series of approaches to developing legally binding general requirements for AI procurement and deployment, such as in Korea, Chile, California, Lithuania or Rwanda, as well as efforts in other jurisdictions to tackle specific aspects of AI deployment. However, the report also stresses that those regimes tend to have exemptions in relation to the most controversial and potentially harmful areas for AI deployment (such as defence and intelligence), and that the practical implementation of those regimes still hinges on the limited development of commonly understood standards and guardrails and, crucially, on public sector digital skills.

On the latter, the report clearly puts it that ‘Currently, there is an imbalance in knowledge and expertise between States and the private sector around what AI is, how it works and what outcomes it produces. There is also little space and time for procurers to engage critically with the claims made by AI vendors or suppliers, including as they relate to potential and actual human rights impacts.’ Again, this is unsurprising, but this renewed call for investment in capacity-building should make it abundantly clear that with insufficient state capacity there can be no effective regulation of AI procurement or deployment across the public sector (because, ultimately, as we have recently argued procurement is the infrastructure on which this regulatory approach rests).

The report then covers in detail business responsibility in relation to AI procurement and deployment and addresses issues of relevance even in contexts of light-touch self-regulation, such as due diligence, contextual impact assessments, or stakeholder involvement. Similarly, the report finds that ‘Businesses are largely procuring and deploying AI systems without conducting HRDD, risking adverse human rights impacts such as biased decision making, exploitative worker surveillance, or manipulation of consumer behavior.’

The final part of the report covers access to remedies and, in another of its key findings, stresses that ‘Courts are increasingly recognizing the human rights-related concerns of AI procurement and deployment, highlighting the urgent need for transparency and public disclosure for public and private sector procurement and deployment of AI systems, and the fact that existing remedy mechanisms lack resources and enforcement power, leaving communities without effective recourse for AI-related human rights abuses. Stronger legal frameworks, public reporting obligations, and independent oversight bodies are needed to ensure transparency, accountability and redress.’

The report thus makes the primary point that much increased transparency on AI deployment is required, so that existing remedies can be effectively used by those affected and concerned. It also highlights how existing remedies may be insufficient and, in particular, new ‘mechanisms will also need to be set up, creating integrated approaches that recognize the intersectional nature of AI-related harms and their disproportionate impact on at-risk groups. Effective redress for AI-related harms requires both strong institutional frameworks and deep understanding of how technology intersects with existing patterns of human rights violation and abuses, both of which are currently missing’ (this largely chimes with my view that we need a dedicated authority to oversee public sector AI use, and that preventative approaches need to be explored given the risks of mass harms arising from AI deployment).

Recommendations

In order to address the unsatisfactory state of affairs documented in the report, the working group formulates a long list of recommendations to States, businesses and other actors. In the executive summary, the following are highlighted as key recommendations to States.

  1. Establish robust legal, regulatory and policy frameworks on AI: Develop and implement AI regulations following a human rights-based approach that are aligned with international human rights law, ensuring transparency and accountability in AI procurement and deployment and legal certainty for all.

  2. Mandate HRDD: Require public disclosure, HRDD, and safeguards for AI systems procured and deployed by private and public sector actors, including AI systems used in high-risk sectors like law enforcement, migration management, and social protection.

  3. Prohibit Harmful AI Systems: Ban AI technologies incompatible with human rights, like mass surveillance, remote real-time facial recognition, social scoring and predictive policing.

  4. Ensure Access to Remedy: Strengthen judicial and non-judicial mechanisms to address AI-related human rights abuses, shifting the burden of proof to businesses and authorities, and ensuring adequate resources.

  5. Promote AI Governance Collaboration: Build global cooperation to establish common AI standards, fostering interoperability and ensuring the representation of Global South perspectives.

However, it is worth bringing up other recommendations included in the much longer list in the report, as some of them are directly relevant to the specific task of AI procurement. In that regard, the report also recommends that, with regard to AI procurement and deployment, States:

  • Provide specific guidance to public sector procurement actors on a human rights-based approach to the procurement of AI systems, including specific limitations, guidance and safeguards for AI systems procured and deployed in high-risk sectors and areas such as justice, law enforcement, migration, border control, social protection and financial services, and in conflict-affected areas;

  • Provide capacity-building for all stakeholders to understand the technical and human rights dimensions of AI, and ensure accessible, explainable and understandable information about the procurement and deployment of AI systems, including by mandating public registration of AI systems deployed by both public and private entities;

  • Ensure independent oversight of AI systems and require the provision of clear documentation on AI system capabilities, limitations and data provenance;

  • Promote meaningful stakeholder consultation and participation in decision-making processes around AI procurement and deployment;

These recommendations will resonate with the main requirements (in principle) applicable under eg the EU AI Act, or proposals for best practice AI procurement.

Final comment

The report helpfully highlights the current state of affairs in the regulation of AI procurement and deployment across the public and private sectors. The issues it raises are well-known and many of them involve complex governance challenges, including the need for levels of public investment commensurate to the socio-technical challenges brought by the digitalisation of the public sector and key private market services.

The report also highlights that, in the absence of adequate regulatory interventions, States (and businesses) are creating a significant stack of AI deployments that are simply not assured for relevant risks and, consequently, are creating an installed base of potentially problematic AI embeddings across the public sector and business. If anything, I think this should be a call for a renewed emphasis on slowing down AI adoption to allow for the development of the required governance instruments.