The First Large-Scale Cyberattack by AI

A state-backed threat group, likely Chinese, crossed a threshold in September that cybersecurity experts have warned about for years. According to a report by Anthropic, attackers manipulated its AI system, Claude Code, to conduct what appears to be the first large-scale espionage operation executed primarily by artificial intelligence. The report states “with high confidence” that China was behind the attack.

AI carried out 80% to 90% of the tactical operations independently, from reconnaissance to data extraction. This espionage campaign targeted roughly 30 entities across the U.S. and allied nations, with Anthropic validating “a handful of successful intrusions” into “major technology corporations and government agencies.”

The operation, which Anthropic designates GTG-1002, signals that Beijing is unleashing AI for intelligence collection. Unless the U.S. responds quickly, this will be the first in a long series of increasingly automated intrusions. For the first time at this scale, AI didn’t merely assist in a cyberattack; it conducted it.

Traditional cyber-espionage requires large teams working through reconnaissance, system mapping, vulnerability identification and lateral movement. A sophisticated intrusion can take days or weeks. China compressed that timeline dramatically through AI automation. The attackers manipulated Claude into functioning as an autonomous cyber agent, with the AI mapping internal systems, identifying high-value assets, pulling data and summarizing intelligence before human operators made decisions.

The attackers bypassed Claude’s safety systems through social engineering, convincing the AI they were legitimate cybersecurity professionals conducting authorized testing. By presenting malicious tasks as routine security work, they manipulated Claude into executing attack components without recognizing the broader hostile context.

An important limitation emerged: Claude frequently overstated findings and fabricated results, claiming to have obtained credentials that failed to validate or presenting publicly available information as critical discoveries. This hallucination problem remains a significant obstacle to fully autonomous cyberattacks, at least for now.

Most striking is what China didn’t need. GTG-1002 didn’t rely on cutting-edge malware or expensive proprietary tools. It used common open-source penetration-testing frameworks orchestrated through Model Context Protocol servers. Beijing hasn’t only upgraded its toolkit; it has replaced the craftsman with the assembly line. Capabilities once reserved for well-resourced intelligence agencies can now be replicated by smaller actors using widely available technology.
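
The mechanics are strikingly mundane. Below is a minimal sketch, assuming the open-source `mcp` Python SDK (FastMCP), of how any command-line utility becomes model-callable through an MCP server. The server name and tool here are illustrative and deliberately benign; the same pattern, pointed at pen-testing frameworks, is what the report describes:

```python
# Minimal sketch of an MCP tool server, assuming the open-source `mcp` Python SDK.
# It shows how trivially a command-line utility becomes callable by an AI agent.
import subprocess
from mcp.server.fastmcp import FastMCP

mcp = FastMCP("recon-tools")  # hypothetical server name

@mcp.tool()
def dns_lookup(domain: str) -> str:
    """Resolve a domain name (a benign stand-in for a scanner wrapper)."""
    # Input validation like this is exactly the guardrail such servers often lack.
    if not all(c.isalnum() or c in ".-" for c in domain):
        return "rejected: invalid domain"
    result = subprocess.run(
        ["nslookup", domain], capture_output=True, text=True, timeout=10
    )
    return result.stdout

if __name__ == "__main__":
    mcp.run()  # serves the tool over stdio to any MCP-capable agent
```

Swap the benign lookup for an exploitation framework and the assembly line is complete.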

It also reveals a deeper strategic dynamic. China is spying with AI and spying on American AI. Beijing is studying how U.S. models behave, where they fail, and how they can be manipulated. Every malicious query becomes training data for China’s systems.

Anthropic deserves credit for disclosing the incident publicly and working with U.S. authorities. That transparency should set an industry standard. But the disclosure underscores a larger problem: Current safeguards aren’t designed for adversarial actors that move at machine speed.

The response must be urgent and clear. AI misuse can’t be treated as a narrow cyber issue; it is now central to the broader technology competition with China. Five responses are necessary:

First, AI-assisted defense must become standard across federal agencies, critical infrastructure and major corporations. AI can detect anomalies in real time and accelerate incident response from hours to minutes. China is using AI to accelerate attacks. The U.S. must use AI to accelerate defense.
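
One concrete signal defenders can automate today: autonomous agents issue commands far faster than human operators. A toy sketch, with thresholds that are illustrative assumptions rather than tested values:

```python
# Toy sketch of machine-speed detection: flag sessions issuing commands faster
# than a human plausibly could. Thresholds are illustrative assumptions.
import time
from collections import deque

class MachineSpeedDetector:
    def __init__(self, window_seconds: float = 10.0, max_events: int = 20):
        self.window = window_seconds    # sliding window length
        self.max_events = max_events    # sustained >2 commands/sec looks automated
        self.sessions: dict[str, deque] = {}

    def record(self, session_id: str, ts: float | None = None) -> bool:
        """Record one command event; return True if the session looks automated."""
        ts = time.time() if ts is None else ts
        q = self.sessions.setdefault(session_id, deque())
        q.append(ts)
        while q and ts - q[0] > self.window:  # drop events outside the window
            q.popleft()
        return len(q) > self.max_events

detector = MachineSpeedDetector()
for i in range(30):  # 30 commands in 3 simulated seconds
    automated = detector.record("session-42", ts=1000.0 + i * 0.1)
print(automated)     # True: no human operator types this fast
```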

Second, companies must disclose incidents of AI misuse within 72 hours. When AI systems are manipulated into performing malicious actions, critical details must be shared: attack vectors, guardrail failures, and forensic signatures that might help others detect similar intrusions. Without mandatory disclosure backed by safe-harbor provisions, businesses will keep quiet to avoid reputational damage. Policymakers can’t craft effective rules if the private sector conceals the incidents that illuminate emerging risk.
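
No standard schema for AI-misuse disclosures exists yet. The sketch below is hypothetical, every field name an assumption, but it shows how little structure is needed to make such reports machine-readable:

```python
# Hypothetical sketch of a machine-readable AI-misuse disclosure. No standard
# schema exists; every field name here is an assumption, not an industry format.
import json
from dataclasses import asdict, dataclass, field
from datetime import datetime, timezone

@dataclass
class AIMisuseDisclosure:
    incident_id: str
    model: str                      # which AI system was manipulated
    attack_vector: str              # how its guardrails were bypassed
    guardrail_failures: list[str]   # which safeguards failed, and how
    forensic_signatures: list[str]  # indicators others can hunt for
    reported_at: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat()
    )

report = AIMisuseDisclosure(
    incident_id="2025-0001",
    model="example-model",
    attack_vector="posed as authorized penetration testers",
    guardrail_failures=["task decomposition concealed hostile intent"],
    forensic_signatures=["sustained machine-speed tool calls from one session"],
)
print(json.dumps(asdict(report), indent=2))
```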

Third, AI companies must embrace secure-by-design principles. As Anthropic’s report warns, the techniques used by GTG-1002 will proliferate. The next generation of AI models must incorporate robust identity verification, real-time monitoring for malicious behavior, and guardrails resilient to social-engineering prompts.
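
In its simplest form, a guardrail resilient to social engineering evaluates what the agent is about to do rather than what the prompt claims. A hypothetical sketch; the tool names and checks are invented for illustration:

```python
# Hypothetical sketch of a pre-execution policy gate for agent tool calls.
# It judges the requested ACTION, not the prompt's stated justification, so a
# convincing "authorized pentest" story alone can't unlock destructive tools.
DESTRUCTIVE_TOOLS = {"exploit_host", "dump_credentials", "exfiltrate_data"}  # invented names

def authorize_tool_call(tool: str, identity_verified: bool, human_approved: bool) -> bool:
    """Benign tools run freely; destructive ones need verified identity AND human sign-off."""
    if tool not in DESTRUCTIVE_TOOLS:
        return True
    return identity_verified and human_approved

# A persuasive cover story changes neither flag:
print(authorize_tool_call("dump_credentials", identity_verified=False, human_approved=False))  # False
```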

Fourth, the U.S. and its allies need international norms governing AI-enabled cyber operations. Existing frameworks predate autonomous systems. If Washington doesn’t lead in shaping these norms, Beijing will.

Fifth, the U.S. must modernize how it shares threat intelligence. AI-accelerated attacks unfold too quickly for traditional bureaucratic information-sharing mechanisms. We need automated real-time systems capable of disseminating alerts across sectors in hours, not weeks.
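
The building blocks for machine-speed sharing already exist. A minimal sketch using the open-source `stix2` library, the reference implementation of the STIX 2.1 standard that automated exchange channels such as TAXII feeds carry; the indicator value is a documentation-range placeholder, not real GTG-1002 intelligence:

```python
# Minimal sketch using the open-source `stix2` library: a machine-readable
# STIX 2.1 indicator is what automated, cross-sector sharing would exchange.
# The IP is a documentation-range placeholder, not a real indicator.
from stix2 import Indicator

indicator = Indicator(
    name="Suspected autonomous-agent intrusion infrastructure",
    description="Illustrative example only; all values are placeholders.",
    pattern="[ipv4-addr:value = '203.0.113.7']",
    pattern_type="stix",
)
print(indicator.serialize(pretty=True))
```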

The first AI-driven cyberattack is the opening act of a new era. GTG-1002 should be remembered the way we recall the first internet worm or the first ransomware wave: as an inflection point.

The cyber cold war just went hot. The weapons fire themselves now.

The question isn’t whether adversaries will continue exploiting AI for offensive operations. They will. The question is whether the U.S. will act quickly enough to defend itself.

China has signaled its ambitions. GTG-1002 shows it is already acting on them. The U.S. must stop debating the future of AI-enabled aggression and begin preparing for the conflict that has already arrived.

Mr. Turkel is a lawyer specializing in global trade compliance, export controls, sanctions and anticorruption compliance. He is author of “No Escape: The True Story of China’s Genocide of the Uyghurs.”
