New Research by Check Point Exposes Emerging Malware Strategies
In a significant development for the cybersecurity industry, Persistent Security researchers have identified the first documented case of malware using prompt injection to bypass AI-based detection. This groundbreaking discovery was made in June 2025, and it highlights the growing sophistication of malware authors and the need for continued vigilance in the face of evolving threats.
The malware in question, which was anonymously uploaded to VirusTotal from the Netherlands, affects Visual Studio Code, GitHub Copilot, and related platforms, as a critical vulnerability (CVE-2025-53773) was found. The malware contained several sandbox evasion techniques and included an embedded TOR client, making it difficult to trace and analyze.
The malware was tested against Check Point Research's MCP protocol-based analysis system. Interestingly, the string was embedded in the code, a clear indication that the text was crafted with the intention of influencing automated, AI-driven analysis, not to deceive a human looking at the code.
Despite the malware's attempts to manipulate AI models, the underlying model correctly flagged the file as malicious. However, this technique is likely a sign of things to come, and attacks like this are expected to become more polished. This marks the early stages of a new class of evasion strategies, referred to as AI Evasion.
As generative AI technologies become more deeply integrated into security workflows, anticipating and understanding adversarial inputs, including prompt injection, will be essential. Recognizing this emerging threat early allows for the development of strategies and detection methods tailored to identify malware that attempts to manipulate AI models.
Check Point Research's primary focus is to continuously identify new techniques used by threat actors, including emerging methods to evade AI-based detection. Unsuccessful attempts like this one are important signals of where attacker behavior is headed, and the industry must stay vigilant to stay ahead of these evolving threats.
What stood out was a string embedded in the code that appeared to be written for an AI, not a human. This discovery underscores the importance of understanding the potential for malicious actors to exploit AI technologies and the need for continuous research and development to stay ahead of the curve.