Snowflake Cortex AI Sandboxed Escape: Prompt Injection Bypasses Human-in-the-Loop

2026-03-18T17:21:40.000Z·2 min read
Security researchers discovered that Snowflake's Cortex Code CLI could be tricked into executing arbitrary commands outside its sandbox via indirect prompt injection, bypassing human approval steps and potentially exfiltrating data or dropping tables using the victim's active credentials.

Security researchers at PromptArmor have disclosed a vulnerability in Snowflake's Cortex Code CLI that allowed malware execution via indirect prompt injection — bypassing both the sandbox and human-in-the-loop approval mechanisms.

The Vulnerability

Snowflake Cortex Code CLI is a command-line coding agent (similar to Claude Code and OpenAI's Codex) with built-in SQL integration for Snowflake. Two days after release, researchers found a flaw in the command validation system that allowed specially constructed malicious commands to:

  1. Execute arbitrary commands without triggering human-in-the-loop approval
  2. Escape the Cortex CLI sandbox entirely
  3. Leverage the victim's active credentials to perform malicious actions in Snowflake (exfiltrate data, drop tables)

The Attack Chain

  1. User opens Cortex and enables sandbox mode
  2. User asks for help with a third-party open-source codebase found online
  3. A prompt injection hidden in the repository's README manipulates Cortex into running a dangerous command
  4. The command validation fails — Cortex does not validate commands inside prompt-injected content
  5. Malware executes outside the sandbox with the user's credentials

Key Issues Identified

The Fix

Snowflake's security team validated and remediated the vulnerability. The fix shipped in Cortex Code CLI version 1.0.25 on February 28, 2026.

Why It Matters

This is a textbook example of the emerging threat landscape for agentic AI tools:

The disclosure adds to growing evidence that securing agentic AI requires fundamentally new security architectures, not just traditional sandboxing.

Source: PromptArmor | HN Discussion

↗ Original source
← Previous: NVIDIA NemoClaw: OpenClaw Plugin for Sandboxed Autonomous Agent DeploymentNext: 'AI Coding Is Gambling': Why the Infinite Code Machine Leaves Developers Feeling Empty →
Comments0