
Pairing With AI Without Leaking Secrets

Alprina Security Team

Hook

You fire up your favorite vibe-coding session: lo-fi beats, dark terminal, AI copilot cranking out boilerplate. Minutes later a teammate flags an audit log showing your editor uploaded config/secrets.yaml to the copilot server. Turns out the assistant grabbed the open buffer to build context and happily streamed API keys off your machine. You revoke tokens, but the damage is done.

The Problem Deep Dive

AI pair programmers ingest whatever is in scope: open files, command history, git diff, clipboard. Common leaks:

  • .env or config/*.json open in another tab.
  • Terminal output containing secrets; some assistants capture STDOUT for context.
  • Git histories where sensitive data was previously committed.
  • Debug logs inside ./tmp that get zipped and uploaded.

Vibe coding encourages flow, but flow means fewer guardrails unless you automate them.

Technical Solutions

Quick Patch: Local Redaction Proxy

Route assistant traffic through a local proxy that strips patterns:

copilot-proxy \
  --redact "aws_access_key_id=.*" \
  --redact "secret_key\"?:\"[A-Za-z0-9/+=]+\"" \
  --upstream api.copilot.example.com

Point the editor at http://localhost:8787. This buys time but still relies on regex.
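If a purpose-built proxy is not available, the same idea fits in a short mitmproxy addon. A minimal sketch, assuming the editor's traffic can be routed through mitmproxy on port 8787 and that the editor trusts mitmproxy's CA for TLS interception; the two patterns are illustrative, not exhaustive:

# redact_addon.py -- run with: mitmproxy -s redact_addon.py -p 8787
# Rewrites outbound request bodies so known secret patterns never reach
# the copilot upstream. Extend SECRET_PATTERNS to match your stack.
import re

from mitmproxy import http

SECRET_PATTERNS = [
    re.compile(r"aws_access_key_id\s*=\s*\S+", re.IGNORECASE),
    re.compile(r'"secret_key"\s*:\s*"[A-Za-z0-9/+=]+"'),
]

def request(flow: http.HTTPFlow) -> None:
    body = flow.request.get_text(strict=False)
    if not body:
        return
    redacted = body
    for pattern in SECRET_PATTERNS:
        redacted = pattern.sub("[REDACTED]", redacted)
    if redacted != body:
        # mitmproxy updates Content-Length when the body changes.
        flow.request.set_text(redacted)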

Durable Fix: Project Policies + Isolation

  1. Context allow lists. Configure the assistant to read only specific folders (source, docs). Block config, tmp, .git.
  2. Ephemeral tokens. Issue scoped API keys for AI tools via gh auth refresh --scopes=repo:read. Rotate daily.
  3. Editor tooling. Add a pre-upload hook to LSP/cmp: inspect buffer diffs and fail when they match rules from gitleaks.toml (see the sketch after this list).
  4. Offline embeddings. Host the embedding/indexing service locally so raw code never leaves your machine; only derived tokens go to the LLM.
  5. Session labels. Tag assistant sessions with JIRA ticket + data classification; block uploads when working on restricted repositories.
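For step 3, the buffer check can shell out to gitleaks before anything leaves the editor. A minimal sketch, assuming gitleaks is on your PATH and your rules live in gitleaks.toml; the editor-side wiring (LSP hook, completion source, etc.) depends on your setup:

# pre_upload_check.py -- block an upload if gitleaks flags the buffer.
import pathlib
import subprocess
import tempfile

def buffer_is_clean(text: str, config: str = "gitleaks.toml") -> bool:
    """Scan the buffer with gitleaks; True means no secrets were found.

    gitleaks exits non-zero when it detects leaks, so any non-zero exit
    is treated as dirty and the hook should cancel the upload.
    """
    with tempfile.TemporaryDirectory() as tmpdir:
        (pathlib.Path(tmpdir) / "buffer.txt").write_text(text)
        result = subprocess.run(
            ["gitleaks", "detect", "--no-git", "--source", tmpdir, "--config", config],
            capture_output=True,
            text=True,
        )
    return result.returncode == 0

if __name__ == "__main__":
    # A harmless buffer should pass; a buffer holding a real key should not.
    print(buffer_is_clean("print('hello world')"))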

Example VS Code setting:

"copilot.advanced": {
  "exclude": ["**/*.env", "config/**", "tmp/**"],
  "telemetry": false
}

Run assistants from a dev container with limited file mounts:

"mounts": ["source:src", "source:package.json"],
"secrets": []

Testing & Verification

  • Write automated tests (bash or Python) that scan the repo for forbidden patterns and confirm your .copilotignore covers every sensitive path (see the sketch after this list).
  • Simulate assistant requests via scripts to confirm proxy redaction works.
  • Use strace or fs_usage to see what files the assistant binary reads.
  • Run gitleaks detect --no-git --source . before starting a session; fail if secrets already exist locally.
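A starting point for the first bullet, assuming a .copilotignore of plain glob patterns at the repo root; the file name and the FORBIDDEN patterns below are illustrative, not a vendor contract:

# test_ai_guardrails.py -- run with pytest before starting an assistant session.
import pathlib
import re

REPO = pathlib.Path(__file__).resolve().parent
FORBIDDEN = [
    re.compile(r"AKIA[0-9A-Z]{16}"),                      # AWS access key IDs
    re.compile(r"-----BEGIN (RSA|EC) PRIVATE KEY-----"),  # PEM private keys
]
SENSITIVE_GLOBS = ["**/*.env", "config/**", "tmp/**"]

def _ignore_globs() -> list[str]:
    lines = (REPO / ".copilotignore").read_text().splitlines()
    return [line.strip() for line in lines if line.strip() and not line.startswith("#")]

def test_no_raw_secrets_in_repo():
    for path in REPO.rglob("*"):
        if path.is_dir() or ".git" in path.parts:
            continue
        try:
            text = path.read_text(errors="ignore")
        except OSError:
            continue
        for pattern in FORBIDDEN:
            assert not pattern.search(text), f"possible secret in {path}"

def test_copilotignore_lists_sensitive_globs():
    globs = _ignore_globs()
    for sensitive in SENSITIVE_GLOBS:
        assert sensitive in globs, f"{sensitive} missing from .copilotignore"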

Common Questions

Does local-only mode kill suggestions? Quality dips, but it stays acceptable for restricted modules. Use a hybrid setup: online for public repos, offline for sensitive ones.

Can we trust vendor ignore settings? Treat them as hints. Validate by tailing assistant traffic (Burp, mitmproxy) in staging.

What about the clipboard? Disable clipboard sync or use separate desktop spaces for secure work.

Won't proxies violate TOS? Check vendor terms; many allow proxies for security. Otherwise, request enterprise features.

Conclusion

AI pair programming is fun, but secrets do not vibe. Fence in what the assistant can see, rotate its keys, and test the guardrails like any other dependency. Flow state stays; incident tickets do not.