• Over 120 countries are engaged in formal talks on the Global AI Safety Treaty, with the UN convening the latest round in Geneva in March 2026.
  • Major sticking points: enforceable red lines on weaponized AI, standardized auditability for large models, and verification mechanisms acceptable to both the US and China.
  • Tech companies including OpenAI and DeepMind back baseline safety standards but resist binding export controls; civil-society groups demand independent audits and public registries of high-risk systems.
  • Negotiators are considering a phased treaty: immediate rules for high-risk systems, plus a follow-on protocol for research transparency and cross-border inspections by 2030.

What the talks are and why they matter

Negotiations over a Global AI Safety Treaty have moved from academic speculation to real diplomatic labor. After the 2025 UN Summit on Artificial Intelligence, member states tasked a drafting committee with producing text for a binding international agreement. The committee met in Geneva in March 2026 for what delegates described as a “critical drafting round.”

At stake is much more than technical standards. Governments are trying to reconcile three competing imperatives: national security, economic competitiveness, and public safety. The treaty would set cross-border norms for the development, testing, and deployment of high-capacity AI systems — systems that researchers including Stuart Russell (UC Berkeley) and Dan Hendrycks (Center for AI Safety) have warned could produce systemic risks if left ungoverned.

Divergent negotiating blocs and their bottom lines

The negotiating room is polarized. Delegates from the United States and the European Union are pressing for strong transparency requirements and independent audits. The US lead negotiator, Ambassador Bonnie Jenkins, told reporters that negotiators are “focused on auditability and accountability for systems that could cause societal harm.” The EU has proposed a public registry for high-risk models and mandatory incident reporting.

China and several countries in the Global South are wary of provisions they see as potential vehicles for technology containment. A Chinese diplomat at the Geneva talks said the treaty must not become a tool for industrial protectionism and insisted on provisions preserving sovereign control over certification and export decisions.

Civil-society groups, led by Amnesty International and the Center for AI Safety, are pushing for tighter restrictions on military applications and on opaque decision-making in public services. They want independent, permanent verification mechanisms (effectively international inspectors) to ensure compliance. Tech industry representatives, including delegations from OpenAI and DeepMind, have signaled support for baseline safety standards but oppose intrusive, state-led inspections that could expose proprietary code or drain R&D capacity.

The negotiating text: three sections in contention

Delegates are focused on three draft chapters that will determine whether the treaty is politically viable and technically enforceable.

1) Definitions and scope

Who counts as a regulated actor? The draft’s working definition of “high-risk AI system” is one sticking point. The EU prefers a capability-and-impact test that captures foundation models and general-purpose systems; US negotiators want a narrower approach keyed to specific use cases and critical infrastructure.

2) Verification and compliance

Here the most charged debate revolves around inspections. Some delegations want an International AI Safety Agency empowered to carry out audits and on-site inspections. Tech firms objected during the Geneva round to any mechanism that could require source-code review without strict protections for trade secrets.

3) Military uses and dual-use technology

Proposals to ban certain weaponized AI capabilities, such as autonomous targeting without meaningful human control, face stiff resistance. The United States and its NATO allies are reluctant to accept absolute prohibitions, while China and Russia have called for restrictions on offensive AI framed around their own security concerns. Several smaller states, led by Costa Rica and New Zealand, want a clear ban on autonomous systems that make lethal decisions without human oversight.

Comparing positions: an at-a-glance table

Actor | Primary demand | Key resistance
United States | Mandatory incident reporting; public registries for high-risk systems | Opposes intrusive export controls; resists global inspections without due process
European Union | Independent audits; strong data-protection safeguards | Needs compromise on military exemptions for NATO members
China | Protection of sovereign certification; no instrument for industrial containment | Reluctant on binding transparency that affects state companies
Tech industry (OpenAI, DeepMind, others) | Baseline safety standards and harmonized rules | Rejects source-code inspection and broad export bans
Civil society (Amnesty International, Center for AI Safety) | Independent verification; prohibition of lethal autonomous weapons | Worried the treaty will prioritize trade and security over human rights

Paths to compromise and the road ahead

Diplomats say a workable treaty will likely be phased. One scenario on the table would lock in immediate, legally binding rules for “verified high-risk systems” (strong reporting, third-party audits, and a ban on specific military uses) while creating a research transparency protocol to be negotiated over the next four years. That mirrors the approach of past arms-control regimes, which often separated immediate operational prohibitions from longer-term verification frameworks.

Several negotiators echoed the same practical concern: a treaty that is technically perfect but politically impossible would be worse than a narrower deal that actually gets ratified. “We need a treaty countries will sign and implement,” said Ambassador Amina Jaffar, lead negotiator for a bloc of African states, arguing for realistic timelines and capacity-building assistance so lower-income countries can certify and audit AI systems.

There are also proposals to create an independent technical secretariat staffed with experts from multiple jurisdictions. That idea has attracted endorsements from leading academics. Stuart Russell told this outlet that a mixed governance body — combining independent scientists, ethicists, and engineers — could defuse accusations that any single government or corporation holds the keys to oversight.

Money, compliance, and enforcement — the practical constraints

Even if negotiators agree on text, implementation will hinge on two questions: funding and enforcement. Who pays for cross-border audits? Who adjudicates disputes? The draft treaty currently includes a voluntary fund for technical assistance, modeled on the Green Climate Fund, but there is broad disagreement over mandatory contributions and penalty mechanisms for noncompliance.

Industry executives warn that heavy-handed enforcement could chill innovation. Sam Altman, CEO of OpenAI, has previously argued for flexible regulatory frameworks that adapt to technological progress. In Geneva, he and other company leaders affirmed support for international cooperation but pushed back against measures that would slow legitimate research.

On enforcement, some states want sanctions for noncompliance; others prefer incentives. A diplomat from a Scandinavian country suggested a hybrid approach: financial penalties tied to market access combined with technical assistance for remediation. That approach seeks to balance carrots and sticks — hard where it matters, pragmatic where it doesn’t.

The next formal session is scheduled for June 2026, when the drafting committee will present a revised consolidated text to the UN General Assembly’s AI Working Group. Negotiators say June will be the moment of truth: if the principal parties can narrow their differences then, a final treaty text could be ready for diplomatic signing by 2027.

What negotiators, industry leaders, and experts in Geneva agreed on as they packed for home was this: the choices made in these talks will shape how nations manage a technology that can upend labor markets, distort information ecosystems, and, in worst-case scenarios, alter the balance of military power. The decision before the committee is not whether to regulate AI — it’s how to write rules that governments will keep and companies will implement, without handing competitive advantage to any single actor.

For now, the most consequential metric is the number of states prepared to accept limited, verifiable restrictions in the short term. If more than 70% of the 120-plus negotiating states (at least 85 countries) back a core package by June, diplomats say, momentum will swing toward a binding instrument, and the world will have its first internationally coordinated mechanism to manage high-risk AI systems.