AI Agents Are 'Gullible': Zero-Click Attacks Can Hijack Cursor, Copilot, ChatGPT, Salesforce Without User Interaction
At RSAC 2026, Zenity CTO Michael Bargury demonstrated that enterprise AI agents are vulnerable to zero-click prompt injection attacks that can hijack them to leak secrets, steal data, and manipulate users — all without any interaction.
The Core Problem
"AI is just gullible. We are trying to shift the mindset from prompt injection — because it is a very technical term — and convince people that this is actually just persuasion. I'm just persuading the AI agent that it should do something else."
What's Vulnerable
| AI Agent | Attack Scenario |
|---|---|
| Cursor | Leak developer secrets via poisoned Jira tickets |
| Salesforce Agentforce | Send customer data to attacker server |
| ChatGPT | Steal Google Drive data, manipulate user long-term |
| Gemini | Zero-click hijacking |
| Microsoft Copilot | Data exfiltration |
| Einstein (Salesforce) | Unauthorized actions |
How It Works
- Attacker finds automated AI agent integrations (e.g., Jira ticket creation from emails)
- Sends malicious prompt embedded in normal-looking content
- Agent automatically processes the poisoned input
- Agent performs attacker's desired action
- Zero user interaction required
The Cursor Example
Zenity wanted Cursor to leak secrets and send them to a controlled endpoint. Cursor has guardrails preventing this. So instead of asking it to steal secrets, they told it it was participating in a treasure hunt — and Cursor happily complied.
Long-Term Manipulation
"I can get ChatGPT to manipulate you. ChatGPT is a trusted advisor. It can be manipulated to answer whatever I want — and not just in the specific conversation, but long term."
Source: The Register (RSAC 2026)