Skip to content

AI Threats Amplified Through "PromptFix" Assaults

AI-powered solution, named PromptFix, unveiled by Guardio as a fresh AI approach to the problem-solving platform ClickFix

AI vulnerabilities, specifically "PromptFix," could potentially intensify risks posed by agentic AI...
AI vulnerabilities, specifically "PromptFix," could potentially intensify risks posed by agentic AI systems.

AI Threats Amplified Through "PromptFix" Assaults

In a concerning development, researchers have engineered a new social engineering technique called PromptFix, a variation of ClickFix attacks. This innovative method exploits the limitations of AI models to carry out malicious actions, causing potential harm to unsuspecting users.

The technique was demonstrated in a test scenario known as Scamlexity, a complex new era of scams where AI convenience collides with a new, invisible scam surface. In this scenario, an AI agent was tricked into downloading a harmless file, but it could have easily downloaded a malicious payload instead.

The scam no longer needs to trick you; it only needs to trick your AI. When that happens, you're still the one who pays the price, as stated by security vendor Guardio. Guardio successfully tricked the AI agent into buying an item from a scam e-commerce site and even managed to get the AI agent to click on a link to a genuine phishing site in an email.

This manipulation of AI agents is possible due to their tendency to act without full context and their trusting nature. They follow instructions without applying human skepticism, making them easy targets for attackers. In an adversarial setting, where an AI agent may be exposed to untrusted input, this is an explosive combination, according to Lionel Litty, chief security architect at Menlo Security.

PromptFix tricks agentic AI into performing malicious actions using prompt injection. The attacker relies on the model's inability to fully distinguish between instructions and regular content within the same prompt. In a test scenario, PromptFix engineered an AI to cause a drive-by download attack by posing as a scammer sending a fake message from a 'doctor.' The injected narrative in PromptFix misleads the AI, making it believe it's solving an 'AI-friendly' CAPTCHA and clicking a button.

It's important to note that the AI-based browser used by the researchers from Guardio in their tests is manufactured by Microsoft, specifically the Microsoft 365 Copilot. PromptFix presents attacker instructions to the AI agent inside an invisible text box.

The security community is urging users to be vigilant and aware of the potential risks associated with AI-based systems. With the web in 2025 very much an adversarial setting, as per Litty's statement, it's crucial to prioritise cybersecurity measures to protect ourselves and our data.

Read also:

Latest