US and China Refuse to Sign AI Warfare Pact — And the World Is Alarmed

By Vikram Singh

Updated on Feb 07, 2026 | 4 min read | 1K+ views

The United States and China declined to sign a global declaration outlining principles for the responsible use of artificial intelligence in military applications at the REAIM summit in Spain. About 35 of the 85 nations present affirmed principles on oversight and human control, revealing a widening international divide over AI regulation in warfare.

At a high-profile summit on artificial intelligence in military settings held in A Coruña, Spain, major powers such as the United States and China opted out of signing a non-binding joint declaration on how AI should be used in warfare.

The Responsible AI in the Military Domain (REAIM) summit gathered representatives from 85 countries to discuss principles meant to guide the ethical governance of AI systems in military contexts. Only around a third of participants endorsed the principles, highlighting deep geopolitical divisions on how AI should be deployed in defence.

This development matters because AI is rapidly shaping future warfare capabilities, and without broad consensus among major powers, the risk of unregulated AI use in conflict scenarios increases.


The REAIM 2026 Declaration: What was on the table?

The 20-point document was designed to prevent "unintended escalation" caused by AI hallucinations or autonomous errors.

Key Principles of the Declaration:

  • Human-in-the-Loop: Signatories affirmed that humans must remain responsible for the use of AI-powered weapons, particularly in lethal force decisions.
  • Traceable Command: Requirement for clear, auditable chains of command for every AI action.
  • Testing & Training: Commitment to "robust testing" and mandatory education for personnel operating military AI.
  • Information Sharing: A pledge to share national oversight mechanisms, provided it does not compromise national security.
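
The first two principles above, human-in-the-loop control and a traceable chain of command, correspond to a familiar software pattern: an approval gate that refuses to act without an explicit human decision, paired with an append-only audit log. The following Python sketch is purely illustrative; the class and field names are hypothetical and do not come from the declaration itself.

```python
from dataclasses import dataclass
from datetime import datetime, timezone


@dataclass
class AuditRecord:
    """One auditable entry: who decided what, when, and the outcome."""
    timestamp: str
    operator: str
    action: str
    approved: bool


class HumanInTheLoopGate:
    """Illustrative gate: no action executes without an explicit human
    decision, and every decision is appended to an audit log."""

    def __init__(self):
        self.audit_log: list[AuditRecord] = []

    def request(self, operator: str, action: str, approve) -> bool:
        # `approve` stands in for a human decision; it is never bypassed.
        approved = bool(approve(action))
        self.audit_log.append(AuditRecord(
            timestamp=datetime.now(timezone.utc).isoformat(),
            operator=operator,
            action=action,
            approved=approved,
        ))
        return approved


gate = HumanInTheLoopGate()
ok = gate.request("op-1", "reposition-sensor", approve=lambda a: True)
denied = gate.request("op-2", "engage-target", approve=lambda a: False)
print(ok, denied, len(gate.audit_log))  # True False 2
```

The point of the pattern is that the approval callback and the log entry are inseparable: every action, whether approved or denied, leaves a record that an auditor can trace back to a named operator.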

The Superpower Snub: Why US and China Refused to Sign

The absence of the "Big Two" effectively renders the pact a symbolic gesture rather than a global standard.

  • Strategic Competition: The United States fears that any commitment, even a non-binding one, could create future legal hurdles for "Project Replicator" and other AI-led initiatives; China views the REAIM summit as a "Western-centric" framework designed to slow its military modernization.
  • Geopolitical Tension: The United States cites strained relations with European allies and uncertainty over future transatlantic ties; China wants to maintain "strategic ambiguity" regarding its use of AI in South China Sea operations.
  • The "Russia Factor": The United States is reluctant to self-limit while Russia (not present) continues to deploy autonomous systems in Ukraine; China is focused on gaining an "asymmetric advantage" through AI-powered influence and electronic warfare.

Global Reaction: A Multipolar World in Flux

  • The Signatories: Only 35 of 85 nations signed. Major backers included Canada, Germany, France, Britain, the Netherlands, South Korea, and Ukraine.
  • The "Wait and See" Group: Many nations in the Global South abstained, watching to see how the US-China trade and tech wars evolve before picking a regulatory side.
  • The Research Perspective: Yasmin Afina (UN Institute for Disarmament Research) noted that even though the text was non-binding, many governments felt it was "too concrete," signaling a move toward more aggressive national AI doctrines.

Conclusion

The 2026 REAIM summit has revealed a sobering truth: when it comes to the "ultimate weapon," ethics are taking a backseat to speed. The failure of the US and China to join the declaration confirms that we have entered an era of unrestrained AI development. In this high-stakes environment, the only real "regulation" will be the capabilities of the systems themselves.

Frequently Asked Questions (FAQs)

1. What was the military AI declaration that the US and China declined to sign?

It was a non-binding set of principles drafted at the REAIM summit to guide responsible use of AI in military systems, focusing on safety, human oversight, and ethical deployment.

2. Why did the US and China refuse to sign it?

They declined due to strategic concerns that even voluntary principles could limit future AI development or put them at a perceived competitive disadvantage.

3. How many countries signed the declaration?

About 35 out of 85 countries at the summit endorsed the document, including major European allies and nations supportive of ethical AI governance.

4. Does the declaration have legal force?

No. It is non-binding, meaning it sets expectations and norms but cannot legally require countries to follow them.

5. Could this affect AI development globally?

Yes. It may lead to divergent national policies on military AI, complicating global governance efforts.

6. What are the core risks of AI in military use?

Risks include unintended escalation, bias, loss of human control, and lack of clear accountability for autonomous decisions.

7. What happens next internationally?

Diplomats and policymakers are expected to continue talks, potentially through UN forums or allied agreements, to build broader consensus.

8. How does this affect global security?

Fragmented AI norms could make coordinated safety standards more difficult and increase strategic unpredictability.

9. Are other forums discussing this issue?

Yes. The UN, NATO, the OECD, and other bodies are actively debating how to govern AI in civilian and military domains.

10. Will industry be involved in future discussions?

Technology companies and researchers are expected to play a role in shaping voluntary standards and ethical frameworks.

11. Does this decision affect civilian AI use?

Not directly, but it shows the broader challenges in building global consensus on powerful new technologies.

12. What skills will be needed as AI governance evolves?

Professionals with expertise in AI ethics, policy, international law, and risk assessment will be crucial in shaping future frameworks.
