NVIDIA NemoClaw: OpenClaw Plugin for Sandboxed Autonomous Agent Deployment
NVIDIA has released NemoClaw (alpha), an open-source plugin for OpenClaw that makes it simpler to run always-on AI assistants safely: it combines the NVIDIA OpenShell runtime with Nemotron models in a fully sandboxed environment.
What It Does
NemoClaw orchestrates a complete stack for autonomous agents:
| Component | Role |
|---|---|
| Plugin | TypeScript CLI for launch, connect, status, logs |
| Blueprint | Versioned Python artifact for sandbox creation and policy |
| Sandbox | Isolated OpenShell container with policy-enforced egress and filesystem |
| Inference | NVIDIA cloud model calls routed through OpenShell gateway |
Security Layers
The sandbox enforces four protection layers:
- Network — blocks unauthorized outbound connections, hot-reloadable at runtime
- Filesystem — prevents reads/writes outside /sandbox and /tmp, locked at creation
- Process — blocks privilege escalation and dangerous syscalls, locked at creation
- Inference — reroutes model API calls to controlled backends, hot-reloadable
When the agent tries to reach an unlisted host, OpenShell blocks the request and surfaces it in the TUI for operator approval.
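The hot-reloadable network layer can be pictured as an allowlist check that an operator can update without restarting the sandbox. The sketch below is purely illustrative — the class and method names are assumptions, not NemoClaw's or OpenShell's actual API:

```python
from urllib.parse import urlparse

class EgressPolicy:
    """Illustrative sketch of an allowlist-based egress filter.
    Names (EgressPolicy, is_allowed, reload) are hypothetical."""

    def __init__(self, allowed_hosts):
        self.allowed_hosts = set(allowed_hosts)

    def is_allowed(self, url: str) -> bool:
        # Extract the hostname and check it against the allowlist.
        host = urlparse(url).hostname or ""
        return host in self.allowed_hosts

    def reload(self, allowed_hosts):
        # Hot-reload: swap in a new allowlist at runtime,
        # e.g. after an operator approves a blocked host in the TUI.
        self.allowed_hosts = set(allowed_hosts)

policy = EgressPolicy(["api.nvidia.com"])
print(policy.is_allowed("https://api.nvidia.com/v1/chat"))  # True
print(policy.is_allowed("https://example.com/data"))        # False

policy.reload(["api.nvidia.com", "example.com"])
print(policy.is_allowed("https://example.com/data"))        # True
```

In this model, a request to an unlisted host fails the check, gets surfaced for approval, and a subsequent reload admits the host — all while the sandbox keeps running.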
Requirements
| Resource | Minimum | Recommended |
|---|---|---|
| CPU | 4 vCPU | 4+ vCPU |
| RAM | 8 GB | 16 GB |
| Disk | 20 GB free | 40 GB free |
| OS | Ubuntu 22.04+ | — |
| Node.js | 20+ | — |
| Docker | Installed and running | — |
Quick Start
curl -fsSL https://nvidia.com/nemoclaw.sh | bash
nemoclaw my-assistant connect
The onboarding wizard handles sandbox creation, inference configuration, and security policy setup.
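A policy produced by the wizard might cover the four layers described above. The fragment below is a hypothetical sketch — the field names and schema are assumptions, not NemoClaw's documented format:

```json
{
  "network":    { "mode": "allowlist", "hosts": ["api.nvidia.com"], "hot_reload": true },
  "filesystem": { "writable": ["/sandbox", "/tmp"], "locked_at_creation": true },
  "process":    { "allow_privilege_escalation": false, "locked_at_creation": true },
  "inference":  { "backend": "openshell-gateway", "hot_reload": true }
}
```

Note how the hot-reloadable layers (network, inference) are flagged differently from the layers locked at sandbox creation (filesystem, process).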
Inference Model
NemoClaw uses nvidia/nemotron-3-super-120b-a12b via the NVIDIA Cloud API. Inference requests never leave the sandbox directly — OpenShell intercepts and routes all calls.
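One way to picture this interception is as a URL rewrite: outbound model-API calls are redirected to an in-sandbox gateway endpoint, which then makes the external call under policy. A minimal sketch, assuming a hypothetical gateway host and proxy path (neither is documented by NemoClaw):

```python
from urllib.parse import urlparse, urlunparse

# Hypothetical in-sandbox gateway endpoint; illustrative only.
GATEWAY_HOST = "gateway.openshell.internal"

def route_via_gateway(url: str) -> str:
    """Rewrite an outbound model-API URL so the request is handled by
    the gateway instead of leaving the sandbox directly (sketch)."""
    parts = urlparse(url)
    # Keep the scheme and original path, but point at the gateway
    # under a /proxy prefix so it knows to forward the call.
    return urlunparse(parts._replace(netloc=GATEWAY_HOST,
                                     path="/proxy" + parts.path))

print(route_via_gateway("https://api.nvidia.com/v1/chat/completions"))
# https://gateway.openshell.internal/proxy/v1/chat/completions
```

The agent code sees a normal API call; only the gateway ever talks to the outside, which is what lets the inference layer stay hot-reloadable.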
Alpha Status
NemoClaw is explicitly labeled as early-stage alpha. Interfaces and APIs may change. NVIDIA is gathering feedback for iteration toward production readiness.
Source: GitHub - NVIDIA/NemoClaw | HN Discussion