LLM Sandbox Documentation¶
Welcome to LLM Sandbox¶
LLM Sandbox is a lightweight and portable sandbox environment designed to run Large Language Model (LLM) generated code in a safe and isolated manner. It provides a secure execution environment for AI-generated code while offering flexibility in container backends and comprehensive language support.
- Secure Execution: Run untrusted LLM-generated code safely with customizable security policies and isolated container environments
- Multiple Backends: Choose from Docker, Kubernetes, or Podman backends based on your infrastructure needs
- Multi-Language: Execute code in Python, JavaScript, Java, C++, and Go with automatic dependency management
- LLM Integration: Seamlessly integrate with LangChain, LangGraph, and LlamaIndex for AI-powered applications
Key Features¶
🛡️ Security First¶
- Isolated Execution: Code runs in isolated containers with no access to the host system
- Security Policies: Define custom security policies to control code execution
- Resource Limits: Set CPU, memory, and execution time limits
- Network Isolation: Control network access for sandboxed code
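The security-policy idea can be illustrated with a minimal, self-contained sketch. Note this is not the library's actual API: the `SimplePolicy` class and its pattern list are hypothetical, shown only to convey how pattern-based policies gate code before execution.

```python
import re
from dataclasses import dataclass, field


@dataclass
class SimplePolicy:
    """Hypothetical stand-in for a sandbox security policy."""

    # Regex patterns considered dangerous in submitted code (illustrative only)
    blocked_patterns: list[str] = field(default_factory=lambda: [
        r"\bos\.system\b",       # arbitrary shell commands
        r"\bsubprocess\b",       # spawning processes
        r"\bopen\([^)]*['\"]w",  # opening files for writing
    ])

    def check(self, code: str) -> list[str]:
        """Return the patterns the code violates (empty list = passes)."""
        return [p for p in self.blocked_patterns if re.search(p, code)]


policy = SimplePolicy()
print(policy.check("print(1 + 1)"))                  # benign code passes: []
print(policy.check("import os; os.system('ls')"))    # dangerous call is flagged
```

A real deployment layers such checks on top of container isolation rather than relying on pattern matching alone.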
🚀 Flexible Container Backends¶
- Docker: Most popular and widely supported option
- Kubernetes: Enterprise-grade orchestration for scalable deployments
- Podman: Rootless containers for enhanced security
📊 Advanced Features¶
- Artifact Extraction: Automatically capture plots and visualizations
- Library Management: Install dependencies on-the-fly
- File Operations: Copy files to/from sandbox environments
- Custom Images: Use your own container images
Quick Example¶
# ruff: noqa: T201
import base64
from pathlib import Path
from llm_sandbox import ArtifactSandboxSession
code = """
import matplotlib.pyplot as plt
import numpy as np
plt.style.use('default')
# Generate data
x = np.linspace(0, 10, 100)
y1 = np.sin(x) + np.random.normal(0, 0.1, 100)
y2 = np.cos(x) + np.random.normal(0, 0.1, 100)
# Create plot
fig, axes = plt.subplots(2, 2, figsize=(12, 10))
axes[0, 0].plot(x, y1, 'b-', alpha=0.7)
axes[0, 0].set_title('Sine Wave')
axes[0, 1].scatter(x[::5], y2[::5], c='red', alpha=0.6)
axes[0, 1].set_title('Cosine Scatter')
axes[1, 0].hist(y1, bins=20, alpha=0.7, color='green')
axes[1, 0].set_title('Sine Distribution')
axes[1, 1].bar(range(10), np.random.rand(10), alpha=0.7)
axes[1, 1].set_title('Random Bar Chart')
plt.tight_layout()
plt.show()
print('Plot generated successfully!')
"""
# Create a sandbox session
with ArtifactSandboxSession(lang="python", verbose=True) as session:
    # Run Python code safely
    result = session.run(code)
    print(result.stdout)  # Output: Plot generated successfully!

    # Save each captured plot to its own file
    for i, plot in enumerate(result.plots):
        with Path(f"docs/assets/example_{i}.png").open("wb") as f:
            f.write(base64.b64decode(plot.content_base64))
Installation¶
Basic Installation¶
pip install llm-sandbox
With Specific Backend¶
# For Docker support
pip install 'llm-sandbox[docker]'
# For Kubernetes support
pip install 'llm-sandbox[k8s]'
# For Podman support
pip install 'llm-sandbox[podman]'
Why LLM Sandbox?¶
The Challenge¶
As LLMs become more capable at generating code, there's an increasing need to execute this code safely. Running untrusted code poses significant security risks:
- System compromise through malicious commands
- Data exfiltration via network access
- Resource exhaustion from infinite loops
- File system damage from destructive operations
Our Solution¶
LLM Sandbox provides a secure, isolated environment that:
- Isolates code execution in containers
- Enforces security policies before execution
- Limits resource usage to prevent abuse
- Integrates seamlessly with LLM frameworks
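As a rough illustration of the "limits resource usage" point, a host process can bound the execution time of untrusted code by running it in a subprocess with a timeout. This is a simplified stand-in for the container-level limits the sandbox actually enforces; real deployments should rely on the backend's CPU, memory, and time limits, not on this sketch.

```python
import subprocess
import sys
from typing import Optional


def run_with_timeout(code: str, seconds: float) -> Optional[str]:
    """Run a Python snippet in a subprocess, killing it after `seconds`.

    Returns stdout on success, or None if the time limit was hit.
    """
    try:
        proc = subprocess.run(
            [sys.executable, "-c", code],
            capture_output=True,
            text=True,
            timeout=seconds,
        )
        return proc.stdout
    except subprocess.TimeoutExpired:
        return None  # runaway code (e.g. an infinite loop) was cut off


print(run_with_timeout("print('ok')", 5))         # finishes well within the limit
print(run_with_timeout("while True: pass", 0.5))  # infinite loop -> None
```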
Architecture Overview¶
graph TD
A[LLM Application] -->|Generated Code| B[LLM Sandbox]
B --> C{Security Check}
C -->|Pass| D[Container Backend]
C -->|Fail| E[Reject Execution]
D --> F[Docker]
D --> G[Kubernetes]
D --> H[Podman]
F --> J[Isolated Execution]
G --> J
H --> J
J --> K[Results & Artifacts]
K --> A
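The flow in the diagram can be sketched in a few lines of Python. This is a hypothetical illustration, not the library's code: `is_safe`, `run_in_backend`, and `ExecutionResult` are placeholder names standing in for the security check, the container dispatch, and the returned results.

```python
from dataclasses import dataclass


@dataclass
class ExecutionResult:
    accepted: bool
    stdout: str = ""
    reason: str = ""


def is_safe(code: str) -> bool:
    """Placeholder security check: reject an obviously dangerous call."""
    return "os.system" not in code


def run_in_backend(code: str) -> str:
    """Placeholder for dispatching to Docker/Kubernetes/Podman;
    here we only simulate isolated execution."""
    return f"executed {len(code)} bytes in an isolated container"


def sandbox_execute(code: str) -> ExecutionResult:
    # The security check gates every request, as in the diagram
    if not is_safe(code):
        return ExecutionResult(accepted=False, reason="security policy violation")
    return ExecutionResult(accepted=True, stdout=run_in_backend(code))


print(sandbox_execute("print('hi')").accepted)          # safe code is executed
print(sandbox_execute("os.system('rm -rf /')").reason)  # unsafe code is rejected
```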
Getting Started¶
Ready to start using LLM Sandbox? Check out our Getting Started Guide for detailed setup instructions and your first sandbox session.
Documentation Overview¶
- Getting Started - Installation and basic usage
- Configuration - Detailed configuration options
- Security - Security policies and best practices
- Backends - Container backend details
- Languages - Supported programming languages
- Integrations - LLM framework integrations
- Existing Container Support - Connecting to existing containers/pods
- API Reference - Complete API documentation
- Examples - Real-world usage examples
Community & Support¶
- GitHub: github.com/vndee/llm-sandbox
- Issues: Report bugs or request features
- Discussions: Join the community
- PyPI: pypi.org/project/llm-sandbox
License¶
LLM Sandbox is open source software licensed under the MIT License. See the LICENSE file for details.