China Deploys AI-Powered Hacking Campaign; US Must Prepare for Post-Quantum Threats
Chinese state-sponsored hackers have used Anthropic’s Claude artificial intelligence (AI) system to carry out the first known large-scale automated cyberespionage operation against major technology companies, financial institutions, chemical manufacturers, and government agencies.
Investigators recently found that the operators posed as cybersecurity workers and broke their instructions into small, harmless-looking tasks that convinced the model it was performing legitimate testing.
The attack relied on an autonomous framework built around Claude Code, which used Model Context Protocol tools and security utilities to scan systems, validate vulnerabilities, harvest credentials, move laterally, and triage stolen data.
Once activated, Claude mapped networks, identified key databases, wrote exploit code, exfiltrated information, located privileged accounts, and created backdoors. This turned Claude’s coding capabilities into an automated system capable of breaching networks and processing stolen information at a speed and scale unattainable by human teams.
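The orchestration pattern described above, in which a large objective is broken into small, self-contained tool calls whose results spawn follow-up tasks, can be sketched generically. This is an illustrative, benign sketch only, not the attackers' actual framework: the tool names, task queue, and stub data below are all hypothetical, standing in for the kind of tooling real agent frameworks expose through protocols such as MCP.

```python
# Illustrative sketch of an agent task loop: a goal is decomposed into small
# tool-mediated steps, each of which looks harmless in isolation.
from collections import deque

# Hypothetical tool registry; real frameworks expose tools via protocols
# such as MCP. These stubs just return canned data.
TOOLS = {
    "list_hosts": lambda args: ["host-a", "host-b"],
    "describe_host": lambda args: f"{args['host']}: web server (stub data)",
    "summarize": lambda args: f"summary of {len(args['notes'])} notes",
}

def run_agent(initial_tasks):
    """Drain a queue of small tasks, letting each result enqueue follow-up
    tasks. Splitting work this way is why each individual request can appear
    legitimate to the model executing it."""
    queue = deque(initial_tasks)
    notes = []
    while queue:
        name, args = queue.popleft()
        result = TOOLS[name](args)
        notes.append(result)
        if name == "list_hosts":
            # Fan out: one small follow-up task per discovered item.
            for host in result:
                queue.append(("describe_host", {"host": host}))
    # Final triage step over everything collected so far.
    notes.append(TOOLS["summarize"]({"notes": notes}))
    return notes

results = run_agent([("list_hosts", {})])
print(results[-1])
```

The design point is the queue: because every step is context-free and tiny, a human operator only needs to set the initial goal and review the final triage, which matches the 80-to-90-percent automation figure Anthropic reported.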
Similar misuse is believed to be occurring across other advanced AI models, and previous disclosures from Google, Microsoft, OpenAI, and xAI (developer of Grok) show that state-linked groups are already experimenting with AI to enhance their operations.
According to Anthropic’s November statement, the company identified a mid-September breach operation carried out by a Chinese state-sponsored group designated GTG-1002. Claude executed 80 to 90 percent of the operational workload, with human operators stepping in only for strategic decisions. Anthropic’s analysis also found that the model made errors, including generating fake credentials and misidentifying public data, which remains a limiting factor for fully autonomous attacks.
The attackers targeted about 30 major organizations, including tech companies, financial institutions, chemical manufacturers, and government agencies, launching near-simultaneous intrusion attempts across these sectors. Anthropic confirmed several successful compromises before disrupting the activity, banning accounts, notifying affected entities, and coordinating with authorities over a 10-day response period.
GTG-1002 represents the first documented large-scale cyber operation in which an AI system carried out most stages without substantial human involvement, including gaining access to high-value intelligence targets and conducting post-exploitation activities.
The group demonstrates that a capable and well-resourced state-sponsored actor can use commercially available AI systems to accelerate timelines, conduct simultaneous multivector intrusions, and reduce the resources required to sustain a sophisticated espionage campaign.
Autonomous AI agents have sharply lowered the barrier to conducting large-scale intrusions, allowing even limited-resource actors to mount operations that once required experienced human teams. The incident also shows how the same autonomous techniques could be deployed inside major cloud environments.
At the same time, investigators relied on Claude’s analytical power to process the massive volume of evidence generated during the breach, underscoring that the capabilities that make AI valuable to attackers are also essential for strengthening detection, defense, and resilience.
Cybersecurity professionals maintain that AI-powered attacks, such as the GTG-1002 operation, are simply the next evolution of existing hacking methods. Adversaries now use artificial intelligence, machine learning, and large language models to enhance or automate traditional intrusion techniques.
In the hands of the Chinese Communist Party (CCP), AI becomes a force multiplier, accelerating speed and scale while removing many of the skill barriers that once limited threat actors.
AI tools now generate convincing phishing content in multiple languages, craft targeted social-engineering messages, clone voices, create deepfake videos, and automate reconnaissance at machine speed. As a result, operations that once required skilled teams can now be executed rapidly by actors with minimal expertise, turning the human element into the primary point of exploitation and making social engineering, identity spoofing, and large-scale credential theft significantly harder to detect.
The CCP is using AI not only to steal data but also to conduct information warfare and disseminate propaganda. The Chinese regime increasingly leverages generative AI—text, video, voice, and imagery—to produce large volumes of propaganda, misinformation, and influence content targeting foreign audiences.
AI-driven tools enable Beijing to scale operations rapidly on social media and video-sharing platforms, delivering convincing, high-quality material that appears authentic. Deepfake “news anchors” and AI-generated personas help conceal the origin of these campaigns and lend false legitimacy.
CCP-linked networks use AI to automate coordinated posting, simulate grassroots conversations, and spread divisive or pro-China narratives across multiple platforms, manipulating algorithms and public opinion abroad.
AI also improves operational efficiency by automating translation, content scheduling, data collection, persona management, and other tasks that once required large teams. At the same time, China integrates AI into domestic and international censorship and surveillance systems, shaping internal narratives, suppressing dissent, and influencing perceptions overseas.
Because of these developments, observers describe the CCP’s AI-enabled propaganda efforts as part of a broader strategy of narrative warfare that merges psychological operations, algorithmic content distribution, and covert influence campaigns to shape global opinion.
Beijing’s cognitive warfare strategy in Taiwan shows how these AI-driven influence tactics work in practice. TikTok’s algorithm suppresses pro-Taiwan content, amplifies unification narratives, and feeds emotionally charged videos to young users, many of whom rely on the platform for political information. These short-form videos blend entertainment with subtle ideological cues, allowing Beijing to shape youth identity, weaken civic trust, and erode perceptions of sovereignty without overt propaganda.
In the United States, China has used AI to generate fake images of Americans across the political spectrum and distribute them on U.S. social networks. The goal is to inject divisive content, stoke racial, economic, and ideological tensions, and portray the United States as fractured and in decline, as part of a broader AI-driven influence campaign aimed at undermining trust and social cohesion.
The U.S. House Committee on Homeland Security is currently examining how advances in artificial intelligence, quantum computing, and cloud infrastructure are expanding the capabilities available to state-sponsored cyber actors. This recent incursion by CCP-backed operators has raised concerns that AI-enabled intrusions could later be paired with quantum decryption, allowing adversaries to collect encrypted U.S. government and critical infrastructure data now and decrypt it in the future, a tactic often called “harvest now, decrypt later.”
Because of this risk, the committee is seeking expert testimony on integrating quantum-resilient technologies, improving cryptographic agility at scale, and preparing federal and commercial networks for post-quantum threats.
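Cryptographic agility, one of the capabilities the committee is examining, comes down to indirection: application code names an algorithm rather than hard-coding one, so migrating to a post-quantum scheme means registering a new entry and flipping a default, not rewriting every call site. The sketch below is a minimal illustration using a keyed HMAC as a stand-in; the registry keys and the migration comment about ML-DSA are illustrative assumptions, not a prescribed design.

```python
# Minimal sketch of cryptographic agility using HMAC as a stand-in primitive.
# Callers reference an algorithm by name; swapping algorithms is a registry
# change, not an application rewrite.
import hashlib
import hmac

# Registry mapping algorithm identifiers to implementations. A post-quantum
# scheme (e.g., ML-DSA, once a vetted library is adopted) would be registered
# here without touching the call sites below.
MAC_REGISTRY = {
    "hmac-sha256": lambda key, msg: hmac.new(key, msg, hashlib.sha256).digest(),
    "hmac-sha3-256": lambda key, msg: hmac.new(key, msg, hashlib.sha3_256).digest(),
}

DEFAULT_ALG = "hmac-sha256"  # single point of change during a migration

def tag(key: bytes, msg: bytes, alg: str = DEFAULT_ALG) -> bytes:
    return MAC_REGISTRY[alg](key, msg)

def verify(key: bytes, msg: bytes, mac: bytes, alg: str = DEFAULT_ALG) -> bool:
    # Constant-time comparison to avoid timing side channels.
    return hmac.compare_digest(MAC_REGISTRY[alg](key, msg), mac)

t = tag(b"secret-key", b"message")
assert verify(b"secret-key", b"message", t)
```

At scale, the hard part is not the registry itself but inventorying every place an algorithm identifier is pinned, which is why the committee's framing pairs agility with preparation time.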


