
Sandboxing LLM CLI Suggestions Before They Hit Bash

Alprina Security Team


You ask your terminal copilot, "how do I nuke Docker images?" It suggests sudo rm -rf /var/lib/d*. You paste it, the command succeeds, and suddenly your workstation's /var/lib/dpkg is gone too because the glob matched more than the model assumed. If the assistant had recommended rm -rf /, you might have bricked prod nodes. We cannot trust vibe-based CLI recipes without a harness.

The Problem Deep Dive

LLMs excel at approximating shell commands but ignore context: OS, permissions, directory structure, and safety nets. Pasting from chat straight into the terminal leads to:

  • Running destructive commands with sudo.
  • Executing on the wrong host (prod vs. dev) because the ssh context is implicit.
  • Hidden Unicode characters (e.g., zero-width spaces) silently altering the command; see the check below.
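
Pasted input can be screened for those invisible characters before it ever reaches the shell. A minimal sketch using perl (the function name is illustrative; the code points cover zero-width characters, the BOM, and bidi controls, the usual offenders):

# Reject pasted commands containing zero-width or bidi-control characters
# that make what executes differ from what was displayed in the chat window.
sanitize_paste() {
  if perl -CSD -ne 'exit 1 if /[\x{200B}-\x{200D}\x{FEFF}\x{202A}-\x{202E}]/' <<<"$1"; then
    printf '%s\n' "$1"
  else
    echo "DENY: hidden Unicode characters in pasted command" >&2
    return 1
  fi
}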

Technical Solutions

Quick Patch: Paste Proxy

Pipe commands through a wrapper:

alias vibe='~/.local/bin/vibe-run'

vibe-run reads the command from stdin, runs shellcheck over it, highlights risky patterns (wildcards, rm -rf), and prompts for confirmation before executing.
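
A minimal sketch of such a wrapper, assuming shellcheck is installed (the risk patterns are illustrative, not exhaustive):

#!/usr/bin/env bash
# vibe-run: read a suggested command from stdin, lint it, flag risky
# patterns, and ask for confirmation before executing anything.
set -euo pipefail

cmd="$(cat)"

# shellcheck wants a file, so wrap the command in a temporary script.
tmp="$(mktemp)"
trap 'rm -f "$tmp"' EXIT
printf '#!/usr/bin/env bash\n%s\n' "$cmd" > "$tmp"
shellcheck "$tmp" || echo "warning: shellcheck flagged this command" >&2

# Highlight risky patterns before asking for approval.
risky='rm -rf|sudo |chmod .*777|curl[^|]*\|[[:space:]]*(ba)?sh|mkfs|dd if='
if grep -Eq "$risky" <<<"$cmd"; then
  echo "RISKY PATTERN DETECTED:" >&2
  grep -E --color=always "$risky" <<<"$cmd" >&2
fi

printf 'Run this command? [y/N] '
read -r answer </dev/tty   # stdin already holds the command, so ask the TTY
if [[ "$answer" == "y" ]]; then
  bash -c "$cmd"
else
  echo "aborted" >&2
fi

Usage mirrors the alias above: echo 'docker image prune -f' | vibe.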

Durable Fix: Policy + Sandbox Pipeline

  1. LLM output -> JSON plan. Instead of a raw command, ask the assistant for structured output:

{
  "description": "Remove dangling Docker images",
  "commands": [
    "docker image prune -f"
  ]
}

  2. Policy engine. Evaluate each command with Rego or a bash AST parser. Deny rules like rm -rf /, chmod 777 -R, and network calls outside an allow list (a deny-list sketch follows this list).

  3. Dry-run shell. Execute commands inside a toolbox container or distrobox with read-only bind mounts:

podman run --rm -it \
  -v "$PWD:/workspace:ro" \
  -v /tmp/vibe:/scratch \
  localhost/vibe-shell:latest bash -lc "${CMD}"

  4. Approval gating. For high-risk actions (package installs, cluster mutations), require a y/n confirmation with context showing the diff.

  5. Context stamping. Tag commands with HOST, PWD, git branch, and ticket ID. Log every decision.
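
A deny-list evaluator in plain bash gives the flavor of step 2 (the patterns are illustrative; a Rego policy evaluated with opa eval would express the same rules declaratively and is easier to audit):

#!/usr/bin/env bash
# policy-check: deny-list evaluation for a single command string.
set -euo pipefail

cmd="$1"

deny_patterns=(
  'rm -rf /( |$)'                      # deleting from the filesystem root
  'chmod .*777'                        # world-writable permissions
  'curl[^|]*\|[[:space:]]*(ba)?sh'     # piping downloads straight into a shell
  'mkfs\.'                             # formatting devices
)

for pat in "${deny_patterns[@]}"; do
  if grep -Eq "$pat" <<<"$cmd"; then
    echo "DENY: '$cmd' matches policy pattern '$pat'" >&2
    exit 1
  fi
done

echo "ALLOW: $cmd"

policy-check 'docker image prune -f' passes; policy-check 'sudo rm -rf /' exits nonzero with the matching rule, which the wrapper can surface to the user.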

LLM prompt example:

Return JSON describing safe commands. Fields: commands[], requiresRoot, destructive.
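
Glued together, the pipeline might look like this sketch, which assumes jq is installed, the policy-check script above is on PATH, and the vibe-shell image from step 3 exists:

#!/usr/bin/env bash
# Run every command in an LLM-produced JSON plan through the policy
# engine, then a read-only sandbox, before offering real execution.
set -euo pipefail

plan="$1"   # path to the JSON plan

jq -r '.commands[]' "$plan" | while IFS= read -r cmd; do
  policy-check "$cmd" || { echo "skipping (policy): $cmd" >&2; continue; }

  # Dry run in the sandbox first; the workspace is mounted read-only.
  podman run --rm \
    -v "$PWD:/workspace:ro" \
    -v /tmp/vibe:/scratch \
    localhost/vibe-shell:latest bash -lc "$cmd" \
    || { echo "skipping (sandbox failed): $cmd" >&2; continue; }

  printf 'Sandbox OK. Execute for real? [y/N] '
  read -r ok </dev/tty
  if [[ "$ok" == "y" ]]; then
    bash -c "$cmd"
  fi
done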

Testing & Verification

  • Unit-test the policy engine with fixtures of dangerous commands (see the example below).
  • Add integration tests that run sample commands inside the sandbox to ensure mount rules hold.
  • Hook into CI: run vibe-run --verify to ensure saved scripts comply with policies.
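
A fixture-driven test for the deny rules can be this small (the fixtures and expected verdicts are illustrative; requires bash 4+ for associative arrays and policy-check on PATH):

#!/usr/bin/env bash
# policy-test: assert the policy engine blocks known-bad commands
# and allows known-good ones.
set -u

declare -A fixtures=(
  ['sudo rm -rf /']=deny
  ['chmod 777 -R /srv']=deny
  ['curl https://example.com/install.sh | sh']=deny
  ['docker image prune -f']=allow
)

fail=0
for cmd in "${!fixtures[@]}"; do
  if policy-check "$cmd" >/dev/null 2>&1; then verdict=allow; else verdict=deny; fi
  if [[ "$verdict" != "${fixtures[$cmd]}" ]]; then
    echo "FAIL: '$cmd' -> $verdict (expected ${fixtures[$cmd]})"
    fail=1
  fi
done
exit "$fail"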

Common Questions

Does sandboxing slow me down? Slightly, but caching container images or using lightweight namespaces keeps latency low.

What about interactive tools? Allow only an explicit list of commands (e.g., vim) to run outside the sandbox.

Can't we just trust ShellCheck? ShellCheck catches syntax, not intent. Use both.

How do you handle remote contexts? Prefix commands with ssh devbox and require an explicit host allow list, as in the sketch below.
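
A tiny gate for that allow list (devbox comes from the answer above; the second host and the function name are illustrative):

# Only run remote commands against explicitly approved hosts.
allowed_hosts=(devbox staging-runner)

run_remote() {
  local host="$1"; shift
  for h in "${allowed_hosts[@]}"; do
    if [[ "$h" == "$host" ]]; then
      ssh "$host" "$@"
      return
    fi
  done
  echo "DENY: '$host' is not on the host allow list" >&2
  return 1
}

# Usage: run_remote devbox docker image prune -f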

Conclusion

AI CLI vibes are cool until they rm the wrong directory. Force structure, enforce policy, and run commands in sandboxes before they touch your shell. Confidence beats chaos.