Dual-Use AI
What Is Dual-Use AI?
Dual-use AI refers to artificial intelligence capabilities that can be applied for both beneficial and harmful purposes, where the same underlying technology produces uplift for defenders and attackers alike. The concept is borrowed from arms-control discourse, where it has historically applied to nuclear technology, biotechnology, and export-controlled goods, and it has become central to AI policy as frontier AI models begin to demonstrate capabilities that materially affect cybersecurity, biosecurity, and information operations. A dual-use capability is not merely one that can be misused; it is one whose offensive and defensive applications draw on the same model behavior, making narrow technical mitigations insufficient and forcing decisions about access, distribution, and oversight.
Why It Matters Now
The clearest 2026 example is Anthropic's decision to withhold Claude Mythos from public release. Mythos can autonomously discover and exploit zero-day vulnerabilities at industrial scale; the same capability that lets a defender harden the Linux kernel lets an attacker compromise it. Anthropic concluded that broad release would hand attackers an asymmetric advantage and instead deployed Mythos through Project Glasswing, a consortium of vetted defenders. Similar dual-use considerations have shaped decisions around models with strong biology, chemistry, and persuasion capabilities: OpenAI, Google DeepMind, and Anthropic all now run pre-deployment evaluations specifically designed to surface dual-use risks before models ship.
The Defender-Attacker Asymmetry
Dual-use AI rarely produces symmetric uplift. Defenders typically face larger attack surfaces, must succeed everywhere, and operate under regulatory and ethical constraints that attackers ignore. A capability that helps both sides equally therefore tends to favor attackers in practice, a dynamic explored extensively in the cybersecurity and AI in cybersecurity literature. This asymmetry is the core argument for restricted-release patterns like Glasswing and for the time-limited nature of those restrictions: the goal is that, by the time equivalent capabilities reach open-weights models, defenders will have used the runway to harden their systems, while attackers gain only what they could have built independently within the same window.
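The "must succeed everywhere" point can be made quantitative with a minimal sketch. The model below is purely illustrative and not from the source: it assumes a defender keeps each of n exposed hosts independently patched with probability p, while an attacker needs only a single unpatched host.

```python
# Illustrative model (assumption, not from the source): a defender must
# keep every one of n hosts patched; an attacker needs just one gap.
# If each host is independently patched with probability p, the attacker
# succeeds whenever at least one host is left unpatched.

def attacker_success_probability(p_patched: float, n_hosts: int) -> float:
    """Probability that at least one of n hosts is unpatched."""
    return 1.0 - p_patched ** n_hosts

# Even a 99%-reliable patching process leaves a large fleet badly exposed,
# while the same process protects a small fleet fairly well:
print(attacker_success_probability(0.99, 500))  # large fleet: near-certain breach
print(attacker_success_probability(0.99, 10))   # small fleet: under 10%
```

The takeaway is that identical per-host uplift translates into very different aggregate odds for the two sides, which is the asymmetry the paragraph above describes.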
Governance Approaches
Dual-use AI is now addressed by multiple overlapping governance instruments. The EU AI Act imposes obligations on general-purpose AI models with systemic risk, including evaluation, incident reporting, and cybersecurity protections. Anthropic's Responsible Scaling Policy, OpenAI's Preparedness Framework, and Google DeepMind's Frontier Safety Framework all condition release on capability evaluations specifically designed to detect dual-use uplift. Compute thresholds in U.S. executive orders and export controls aim to slow the diffusion of dual-use capabilities to adversarial states. The responsible AI field treats dual-use evaluation as a distinct discipline alongside fairness, transparency, and accountability.
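The gating pattern these frameworks share can be sketched as a simple decision function. Everything in this sketch is a labeled assumption: the category names, score scales, and threshold values are invented for illustration, and the 1e26-operation compute figure is assumed to match the reporting threshold used in U.S. executive-order requirements; none of it reflects any lab's actual policy.

```python
# Hypothetical sketch of a dual-use release gate, loosely modeled on the
# capability-threshold pattern described above. All category names, scores,
# and threshold values are illustrative assumptions, not any lab's policy.

DUAL_USE_THRESHOLDS = {
    "cyber_offense": 0.6,  # e.g. autonomous vulnerability exploitation
    "bio_uplift": 0.4,     # e.g. uplift on dangerous biology tasks
    "persuasion": 0.7,     # e.g. targeted influence operations
}

def release_decision(eval_scores: dict, training_flops: float) -> str:
    """Map pre-deployment eval scores and training compute to a release tier."""
    # Compute-based reporting threshold (assumed 1e26 operations, per the
    # figure used in U.S. executive-order reporting requirements).
    requires_reporting = training_flops >= 1e26

    exceeded = [cat for cat, limit in DUAL_USE_THRESHOLDS.items()
                if eval_scores.get(cat, 0.0) >= limit]
    if exceeded:
        return "restricted release (thresholds exceeded: " + ", ".join(exceeded) + ")"
    if requires_reporting:
        return "public release with government reporting"
    return "public release"

print(release_decision({"cyber_offense": 0.8}, 2e26))
print(release_decision({"cyber_offense": 0.1}, 2e26))
```

The design point is that the gate is conjunctive: a single exceeded dual-use threshold overrides every other release consideration, which mirrors how the frameworks above condition shipping on evaluations rather than weighing risk against commercial benefit.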
Open Problems
Dual-use AI surfaces problems that existing governance was never designed to solve. Capabilities can be latent in a model and only emerge with the right scaffolding or fine-tuning, making evaluation a moving target. Open-weights models, once released, cannot be recalled. Defender-only access models like Glasswing concentrate power in small consortia and depend on the security of those consortia remaining intact. And the line between dual-use AI and ordinary AI is not fixed: a capability that is dual-use today — such as autonomous code generation — may become commoditized within months. The discipline of dual-use AI policy is therefore best understood as a continuous risk-management practice rather than a one-time release decision, intersecting with AI safety, AI regulation, and the broader trajectory of the agentic economy.
Further Reading
- Anthropic's Responsible Scaling Policy — Capability thresholds that gate frontier-model release on dual-use evaluations
- OpenAI Preparedness Framework — OpenAI's framework for catastrophic-risk and dual-use evaluations
- Google DeepMind Frontier Safety Framework — DeepMind's approach to dual-use risk assessment
- 'Too Dangerous to Release' Is Becoming AI's New Normal — TIME — Analysis of dual-use-driven release decisions in 2026
- Six Reasons Claude Mythos Is an Inflection Point — CFR — Foreign-policy framing of frontier dual-use capabilities
- EU AI Act — Regulatory obligations for general-purpose AI models with systemic risk
- RAND Research on AI and National Security — Long-running policy analysis of dual-use AI risks