Anthropic Warns of AI-Driven Hacking Campaign Linked to China

Washington, D.C. – November 14, 2025 – AI research company Anthropic has issued a warning about the first reported use of artificial intelligence (AI) to automate a hacking campaign at scale, linking the operation to the Chinese government. Researchers described the development as “disturbing,” highlighting the potential for AI-equipped hackers to expand their reach dramatically.

AI-Directed Cyberattacks: A New Threat

According to Anthropic, the operation leveraged AI to automate portions of a hacking campaign, a significant escalation in the use of technology by state-backed actors.

“While we predicted these capabilities would continue to evolve, what has stood out to us is how quickly they have done so at scale,” the report stated.

The campaign was relatively modest, targeting roughly 30 individuals employed at technology companies, financial institutions, chemical firms, and government agencies. Anthropic first detected the operation in September 2025 and immediately took steps to disrupt the attacks and notify the victims.

The company noted that the hackers were successful in only a limited number of cases, but emphasized that the weaponization of AI in cyber operations represents a growing global security risk.

AI Agents and the Future of Cybersecurity

Anthropic, the creator of the generative AI chatbot Claude, highlighted that AI “agents” can now perform complex tasks beyond simple chatbot interactions. These systems can access computer tools, execute tasks, and make decisions autonomously.

“Agents are valuable for everyday work and productivity — but in the wrong hands, they can substantially increase the viability of large-scale cyberattacks. These attacks are likely to only grow in their effectiveness,” Anthropic researchers warned.

The report underscores the dual-use nature of AI technologies: while they can enhance productivity, they pose severe risks in the hands of foreign adversaries or cybercriminal groups.

Broader Context: AI in Global Cyber Operations

Concerns about AI in cyber warfare are not new, but Anthropic’s report marks the first time a campaign has been observed where AI played a central role in directing attacks autonomously.

Earlier this year, Microsoft and other cybersecurity firms warned that state-backed actors are increasingly leveraging AI to make cyber campaigns more efficient and less labor-intensive. U.S. adversaries, criminal gangs, and hacking groups have used AI to:

  • Automate and optimize phishing attacks.
  • Spread disinformation online.
  • Create deepfakes and digital replicas of government officials to manipulate discourse.

The report also mentioned instances of AI being used to generate realistic emails, translate communications, and impersonate senior officials, including public figures such as Secretary of State Marco Rubio, illustrating the growing sophistication of AI-enabled attacks.

Response from China

A spokesperson for the Chinese embassy in Washington, D.C., did not immediately respond to requests for comment on the report.

Implications for Cybersecurity

Experts say this development signals a new era of AI-driven cyber threats, where automated systems can act at scale with minimal human oversight. Companies and governments may need to invest in stronger AI security measures, including monitoring for malicious AI activity and protecting sensitive data from AI-enhanced attacks.

“As AI agents become more sophisticated, the threat landscape evolves. Defensive measures must advance at the same pace to prevent widespread harm,” Anthropic researchers concluded.
