Russian-speaking hackers used gen AI tools to compromise 600 firewalls, Amazon says
Estimated reading time: 7 minutes
Key Takeaways:
- Russian-speaking threat actors utilized commercial generative AI tools to automate the compromise of over 600 Fortinet firewalls across 55 countries.
- AI is being used to democratize offensive capabilities, allowing medium-skilled actors to execute large-scale operations by automating reconnaissance and post-breach reporting.
- Campaigns like “Operation Olalampo” demonstrate a shift toward memory-safe languages like Rust and AI-assisted development to evade traditional security controls.
- Critical vulnerabilities in Roundcube Webmail and PDF platforms (Foxit/Apryse) remain high-priority targets for automated exploitation swarms.
Table of Contents:
- Technical Analysis: Compromising 600 Firewalls
- The Democratization of Cyber Capability Through AI
- Case Study: Operation Olalampo and AI-Augmented Malware
- Vulnerability Landscape: Roundcube and PDF Platforms
- Strategic Defensive Intelligence
- Operational and Technical Mitigations
- PurpleOps Capability Alignment
- Frequently Asked Questions
Technical Analysis: Compromising 600 Firewalls
The integration of generative artificial intelligence (GenAI) into offensive cyber operations has transitioned from theoretical risk to documented activity. In a recent report, Amazon’s threat-intelligence team identified a campaign where Russian-speaking hackers used gen AI tools to compromise 600 firewalls, specifically targeting Fortinet FortiGate devices. This activity, occurring between mid-January and mid-February, spanned more than 55 countries and demonstrated how commercial AI services allow low-to-medium-skilled actors to execute large-scale operations previously reserved for sophisticated state-sponsored groups.

The campaign observed by Amazon targeted FortiGate firewalls, which are common infrastructure components for managing network traffic and remote access. The threat actor, identified as financially motivated rather than state-aligned, utilized multiple commercial AI services to automate the lifecycle of the attack. Documentation recovered by researchers included AI-generated attack plans, operational checklists, and custom code designed to automate network scanning and post-breach reporting.
The technical focus of the campaign was not the exploitation of zero-day vulnerabilities. Instead, the actor targeted weak security configurations, exposed administrative interfaces, and poor authentication practices. By automating the identification of this "low-hanging fruit," the actor achieved a high volume of compromises in a short period. Once access was secured, the hackers extracted full device configurations, which included stored passwords and internal network architecture details. This data was subsequently used to facilitate lateral movement into internal systems, including Active Directory environments and backup systems, actions that typically precede ransomware deployment.
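Defenders triaging a suspected compromise can scan extracted device configurations for stored secrets that must be rotated. The sketch below, assuming a FortiGate-style `set password ENC <blob>` syntax (an assumption about the config format, not a detail from the Amazon report), flags lines that appear to hold credentials:

```python
import re

# Patterns that commonly mark stored secrets in FortiGate-style config
# dumps. The "set password ENC <blob>" form is an assumption about the
# syntax; adjust for your appliance's actual config grammar.
SECRET_PATTERN = re.compile(
    r'^\s*set\s+(password|passwd|private-key|psksecret)\s+(ENC\s+)?\S+',
    re.MULTILINE,
)

def find_stored_secrets(config_text: str) -> list[str]:
    """Return config lines that appear to contain stored credentials."""
    return [m.group(0).strip() for m in SECRET_PATTERN.finditer(config_text)]

# Synthetic config fragment for illustration.
sample = """
config system admin
    edit "admin"
        set password ENC AbCd1234==
    next
end
config vpn ipsec phase1-interface
    edit "branch"
        set psksecret ENC XyZ987==
    next
end
"""
for line in find_stored_secrets(sample):
    print(line)
```

Every match is a credential that should be treated as compromised and rotated, per the campaign's observed use of extracted configurations for lateral movement.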
The Democratization of Cyber Capability Through AI
The use of AI in the Fortinet campaign reflects a broader trend in the cyber threat environment. AI distillation, as reported by Anthropic, has become a method for foreign entities to siphon capabilities from frontier models. Anthropic recently accused three Chinese AI labs (DeepSeek, Moonshot, and MiniMax) of conducting "industrial-scale campaigns" to distill the capabilities of the Claude model. These labs reportedly sent 16 million requests to bypass traditional training costs and safety guardrails.
When unprotected capabilities are fed into military or surveillance systems, the risk of automated offensive cyber operations increases. Distilled models often lack the necessary safety layers, allowing users to generate malicious code or disinformation without the restrictions imposed by original developers. This capability extraction serves as a force multiplier for actors who lack the resources to build proprietary large language models but seek to weaponize AI-driven automation.
Case Study: Operation Olalampo and AI-Augmented Malware
The Iranian-aligned group MuddyWater (also known as Mango Sandstorm) provides a parallel example of how AI and custom tooling are being deployed in the Middle East and North Africa (MENA) region. Their recent campaign, “Operation Olalampo,” utilized new malware families alongside AI-assisted development techniques to target critical sectors.
The operation utilized several key tools:
- GhostFetch: A first-stage downloader that profiles the target system, validates mouse movements to evade sandboxes, and checks for virtual machine artifacts before executing secondary payloads in memory.
- HTTP_VIP: A native downloader that conducts system reconnaissance and communicates with a C2 server to deploy remote desktop software like AnyDesk.
- CHAR: A Rust-based backdoor controlled via a Telegram bot named “Olalampo.” This backdoor can execute commands via cmd.exe or PowerShell and is used to deploy SOCKS5 reverse proxies.
The use of Rust for malware development (seen in both CHAR and the related BlackBeard RAT) indicates a shift toward cross-platform, memory-safe languages that often evade traditional signature-based breach detection systems. MuddyWater's continued adoption of AI technology for refining malware and diversifying command-and-control (C2) infrastructure mirrors the tactical shifts seen in the Russian-speaking actor's firewall campaign.
Vulnerability Landscape: Roundcube and PDF Platforms
While AI-driven automation scales attacks, the targets remain centered on exploitable software flaws. CISA recently added two Roundcube Webmail vulnerabilities (CVE-2025-49113 and CVE-2025-68461) to its Known Exploited Vulnerabilities (KEV) Catalog. CVE-2025-49113 is a critical remote code execution (RCE) flaw, while CVE-2025-68461 involves cross-site scripting (XSS) via SVG documents.
Data from Shodan indicates that over 46,000 Roundcube instances are currently accessible online, many of which remain unpatched. Similarly, researchers at Novee Security identified 16 zero-day vulnerabilities in Foxit and Apryse PDF platforms. Using a "human-agent symbiosis" approach, in which AI swarms are taught to recognize the "scent" of specific vulnerability patterns, the researchers identified critical flaws including:
- CVE-2025-70402: A flaw in Apryse WebViewer involving untrusted remote configuration files.
- CVE-2025-70401: A script injection vulnerability in PDF comments where a script executes as soon as a victim interacts with the document.
- CVE-2025-66500: A weakness in Foxit web plugins allowing for harmful script execution via fake messages.
These findings emphasize that modern PDF tools function more like complex application stacks than static documents, creating a significant “trust boundary” failure where software implicitly trusts data that should be strictly validated.
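Because PDFs behave like application stacks, untrusted documents can be triaged for active content before they are opened. A minimal sketch, scanning raw PDF bytes for name tokens associated with script execution and auto-run actions (presence is a triage signal, not proof of malice):

```python
import re

# PDF name tokens that indicate active content such as embedded
# JavaScript or auto-run actions. Flaws like the script-injection
# bugs above make these worth flagging in untrusted documents.
ACTIVE_TOKENS = [b"/JavaScript", b"/JS", b"/OpenAction", b"/AA", b"/Launch"]

def triage_pdf_bytes(data: bytes) -> list[str]:
    """Return the active-content tokens found in a raw PDF byte stream."""
    found = []
    for token in ACTIVE_TOKENS:
        # Negative lookahead so "/JS" does not also match inside "/JSXYZ".
        if re.search(re.escape(token) + rb"(?![A-Za-z])", data):
            found.append(token.decode())
    return found

# Synthetic fragment of a PDF catalog object with an auto-run script.
sample = b"<< /Type /Catalog /OpenAction << /S /JavaScript /JS (app.alert(1)) >> >>"
print(triage_pdf_bytes(sample))  # ['/JavaScript', '/JS', '/OpenAction']
```

A real pipeline would also decompress object streams before scanning, since active content is frequently hidden inside compressed objects; this sketch only illustrates the trust-boundary check itself.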
Strategic Defensive Intelligence
For organizations managing a diverse attack surface, the intersection of AI-assisted exploitation and unpatched vulnerabilities requires a shift in defensive strategy. A cyber threat intelligence platform is necessary to aggregate data from these disparate campaigns and provide actionable context for security engineers.
The Russian firewall campaign highlights the value of dark web monitoring capabilities. When firewall configurations and passwords are stolen, they are frequently traded on specialized marketplaces. Utilizing underground forum intelligence allows organizations to determine whether their internal network architecture details or administrative credentials have been leaked before they are used for lateral movement.
Furthermore, the involvement of Telegram-controlled backdoors like CHAR suggests that Telegram threat monitoring should be integrated into standard security operations. Monitoring for bot-controlled C2 traffic can identify infected hosts that traditional perimeter defenses might miss.
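One practical form of this monitoring is flagging internal hosts that contact the Telegram Bot API endpoint without a known business reason. The sketch below assumes a hypothetical proxy-log layout of `timestamp source-host destination-domain`; the host names and allowlist are illustrative, and the parsing would be adapted to your actual log format:

```python
# Flag internal hosts contacting the Telegram Bot API. Hosts with a
# legitimate bot integration go on the allowlist; everything else that
# reaches api.telegram.org is a candidate for investigation.
TELEGRAM_C2_DOMAINS = {"api.telegram.org"}
ALLOWLISTED_HOSTS = {"chatops-gw01"}  # hypothetical sanctioned integration

def flag_telegram_c2(log_lines, allowlist=ALLOWLISTED_HOSTS):
    """Return sorted hosts seen talking to Telegram that are not allowlisted."""
    suspects = set()
    for line in log_lines:
        try:
            _ts, host, domain = line.split()
        except ValueError:
            continue  # skip malformed lines
        if domain in TELEGRAM_C2_DOMAINS and host not in allowlist:
            suspects.add(host)
    return sorted(suspects)

logs = [
    "2026-02-10T09:14:02Z chatops-gw01 api.telegram.org",
    "2026-02-10T09:15:31Z hr-laptop-112 api.telegram.org",
    "2026-02-10T09:15:40Z hr-laptop-112 example.com",
]
print(flag_telegram_c2(logs))  # ['hr-laptop-112']
```

Because Telegram C2 traffic rides ordinary HTTPS, domain-level telemetry from proxies or DNS resolvers is usually the most reliable vantage point for this check.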
Operational and Technical Mitigations
Based on the analysis of these concurrent threat campaigns, the following actions are necessary for maintaining infrastructure integrity:
Technical Mitigations for Engineers:
- Harden Administrative Access: Disable public-facing administrative interfaces on firewalls and VPN concentrators. Implement IP whitelisting for all management traffic.
- Credential Rotation and MFA: Following the extraction of device configurations, assume all passwords stored on network appliances are compromised. Rotate all administrative credentials and enforce hardware-based multi-factor authentication (MFA).
- Patch Management: Update Roundcube installations to versions 1.6.12 or 1.5.12 immediately. Apply patches for Foxit and Apryse PDF systems to mitigate the 16 identified zero-days.
- Configuration Audits: Use automated tools to scan for “low-hanging fruit” misconfigurations. The Amazon report confirms that AI-augmented actors prioritize these over complex exploits.
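The patch-management step above can be partially automated. A minimal sketch comparing an installed Roundcube version against the patched releases named earlier (1.6.12 and 1.5.12), using plain dotted-version tuple comparison; it does not handle pre-release suffixes like `-rc1`:

```python
# Patched Roundcube releases for the CVEs discussed above, keyed by
# maintenance branch. Values are version tuples for easy comparison.
PATCHED = {"1.6": (1, 6, 12), "1.5": (1, 5, 12)}

def parse_version(v: str) -> tuple[int, ...]:
    """Parse a dotted version string like '1.6.10' into an int tuple."""
    return tuple(int(part) for part in v.split("."))

def is_patched(installed: str) -> bool:
    """True if the installed version is at or past the fixed release."""
    ver = parse_version(installed)
    branch = f"{ver[0]}.{ver[1]}"
    fixed = PATCHED.get(branch)
    if fixed is None:
        # Unknown branch (e.g. a hypothetical 1.7.x): assume it is newer
        # than the latest fixed release.
        return ver > max(PATCHED.values())
    return ver >= fixed

for v in ("1.6.10", "1.6.12", "1.5.9"):
    print(v, "patched" if is_patched(v) else "VULNERABLE")
```

The same pattern extends to any fleet-wide audit: feed it an inventory of deployed versions and alert on every `VULNERABLE` result.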
Operational Procedures for Business Leaders:
- Supply-Chain Risk Management: Assess the security posture of third-party security vendors. Utilize supply-chain risk monitoring to track the vulnerability status of critical software assets.
- Information Leak Detection: Implement brand leak alerting to monitor for unauthorized disclosure of internal documentation or credentials.
- Ransomware Readiness: Ensure that backups are immutable and stored off-network. Use real-time ransomware intelligence to track the tactics of financially motivated actors.
- API Security: Organizations using automated defensive tools should secure their live ransomware API integrations so that threat actors cannot abuse them to map defensive gaps.
PurpleOps Capability Alignment
The shift toward AI-augmented threats necessitates a sophisticated approach to breach detection and threat hunting. PurpleOps provides the technical expertise and platforms required to counter these high-volume, automated campaigns.
- Red Team Operations: Our teams simulate the tactics of AI-augmented actors to identify weak configurations before they are exploited.
- Penetration Testing: We conduct deep-dive assessments into application stacks to identify zero-day risks in software like PDF rendering engines.
- Supply Chain Information Security: We help organizations manage the risks associated with third-party software and security appliances.
- Dark Web Monitoring: Our services provide early warning of credential theft or C2 activity on underground forums and Telegram.
The Amazon report on Russian-speaking hackers using gen AI tools to compromise 600 firewalls confirms that the technical barrier for large-scale exploitation has been permanently lowered. To explore how PurpleOps can secure your infrastructure, view our full range of services.
Frequently Asked Questions
How did Russian-speaking hackers use GenAI to compromise firewalls?
The actors used commercial AI services to automate the identification of weak configurations, generate attack plans, create operational checklists, and develop custom code for network scanning and post-breach reporting.
Which devices were specifically targeted in this campaign?
The campaign primarily targeted Fortinet FortiGate firewalls, which are widely used for network management and VPN access.
What is AI distillation and why is it a security risk?
AI distillation involves siphoning the capabilities of advanced models (like Claude) into smaller models. This often removes safety guardrails, allowing malicious actors to generate exploit code or disinformation without restrictions.
What are the risks associated with Roundcube and PDF tools?
Recent vulnerabilities allow for remote code execution and script injection. AI swarms are now being used by researchers and threat actors alike to find these patterns in complex application stacks.
How can organizations protect themselves from AI-augmented attacks?
Focus on core security hygiene: harden administrative access, enforce hardware-based MFA, rotate credentials after any suspected breach, and utilize high-fidelity threat intelligence for dark web and Telegram monitoring.