OnPrem.LLM: Launch Autonomous AI Agents with Sandboxed Execution in 2 Lines of Code

2026-03-18 · 1 min read
OnPrem.LLM's AgentExecutor launches autonomous AI agents equipped with 9 built-in tools (file operations, shell, web search). It works with both cloud and local models and features sandboxed execution and working-directory isolation.

OnPrem.LLM has introduced the AgentExecutor — a pipeline for creating autonomous AI agents that can execute complex tasks using a variety of tools, with support for both cloud and local models.

How It Works

The AgentExecutor is implemented using their coding agent, PatchPal, and provides a simple interface for launching AI agents with sandboxed execution:

from onprem.pipelines import AgentExecutor

# Full access (all tools including shell):
executor = AgentExecutor(model='anthropic/claude-sonnet-4-5')

# Safer mode (no shell access):
executor = AgentExecutor(model='openai/gpt-5-mini', disable_shell=True)

Supported Models

The pipeline works with any tool-calling-capable model supported by LiteLLM.
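LiteLLM addresses models with "provider/model" identifier strings, as seen in the constructor examples above. A tiny illustrative helper (split_model_id is not part of OnPrem.LLM or LiteLLM, and this assumes the explicit provider-prefixed form) shows the convention:

```python
# LiteLLM-style model identifiers take the form "provider/model-name",
# e.g. "anthropic/claude-sonnet-4-5" or "openai/gpt-5-mini".
# split_model_id is an illustrative helper, not a library function.
def split_model_id(model_id: str) -> tuple[str, str]:
    provider, sep, name = model_id.partition("/")
    if not sep or not name:
        raise ValueError(f"expected 'provider/model', got {model_id!r}")
    return provider, name

print(split_model_id("anthropic/claude-sonnet-4-5"))
print(split_model_id("openai/gpt-5-mini"))
```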

9 Built-in Tools

  1. read_file — Read complete file contents
  2. read_lines — Read specific line ranges from files
  3. edit_file — Edit files via find/replace
  4. write_file — Write complete file contents
  5. grep — Search for patterns in files
  6. find — Find files by glob pattern
  7. run_shell — Execute shell commands
  8. web_search — Search the web for information
  9. web_fetch — Fetch and read content from URLs
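To make the file-oriented tools concrete, here is a rough standard-library sketch of what the find and grep tools do conceptually. This is not OnPrem.LLM's actual implementation; the function names and signatures are illustrative only:

```python
import pathlib
import re
import tempfile

def find_files(root: str, pattern: str) -> list[str]:
    """Rough analogue of the `find` tool: match files by glob pattern."""
    return sorted(p.name for p in pathlib.Path(root).rglob(pattern))

def grep_file(path: str, pattern: str) -> list[tuple[int, str]]:
    """Rough analogue of the `grep` tool: regex search with line numbers."""
    rx = re.compile(pattern)
    with open(path) as f:
        return [(n, line.rstrip("\n")) for n, line in enumerate(f, 1) if rx.search(line)]

# Demo in a throwaway directory
root = tempfile.mkdtemp()
pathlib.Path(root, "app.py").write_text("import os\nprint('hi')\n")
pathlib.Path(root, "notes.txt").write_text("todo: ship\n")

print(find_files(root, "*.py"))                  # ['app.py']
print(grep_file(f"{root}/app.py", r"^import"))   # [(1, 'import os')]
```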

Security Features

Agents run with sandboxed execution and working-directory isolation. For extra safety, shell access can be disabled entirely by passing disable_shell=True to the constructor.

Use Cases

The system is designed for tasks like autonomous code generation, file manipulation, web research, and complex multi-step workflows — all from a local environment without sending data to external services (when using local models).

Source: OnPrem.LLM Documentation | GitHub | HN: 48pts
