
Securing AI-Driven Notebooks Before They Whisper Customer Data

Alprina Security Team

Hook - The Copilot That Copied Your Customer Table

One of our data scientists plugged an "Explain Cell" extension into JupyterLab. She selected a PySpark cell and clicked the shiny new button. Behind the scenes, the extension serialized the entire dataframe sitting in memory (names, emails, transaction IDs) and sent it to an external LLM so it could "explain" the transformation. Minutes later the security team spotted those rows in the vendor's logs. We built the feature to save analysts a few minutes of documentation; instead we leaked regulated data to a third party and had to notify legal.

Jupyter, Databricks, SageMaker, and VS Code notebooks feel self-contained, but any copilot that reads cell output or variable scopes is one df.head() away from an incident. Here's the set of guardrails we now require before letting AI inside notebooks.

Problem Deep Dive - How Notebooks Bleed

Notebook copilots typically hook in at three layers:

  1. Kernel inspection. They introspect globals() or the kernel's symbol table to power autocompletion. Sensitivity: everything from model weights to API tokens stored in memory. (See the sketch after this list.)
  2. Cell output capture. They serialize stdout, HTML tables, plots, and sometimes binary attachments. Anything printed is fair game.
  3. File access. They read CSVs, JSON, and checkpoint files to provide context.
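
To make the first layer concrete, here is roughly what a naive extension collects just by walking the kernel's globals. This is a hedged illustration (the token name is made up), not any particular extension's code:

import os

# An API token that happens to be sitting in kernel memory.
WAREHOUSE_TOKEN = os.environ.get("WAREHOUSE_TOKEN", "sk-demo")

def naive_context():
    # Everything reachable from globals() (tokens, dataframes, model objects)
    # becomes candidate prompt context for the copilot.
    return {
        name: repr(value)[:200]
        for name, value in globals().items()
        if not name.startswith("_")
    }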

Add prompt injection (think hidden markdown instructing the LLM to dump secrets) and you have an exfiltration pipeline. The usual countermeasure ("remind users not to paste secrets") doesn't help when the extension grabs data automatically.

Technical Solutions - Containment Without Killing Flow

1. Tag Dataframes With Sensitivity Metadata

Extend your kernel so variables carry labels. In Python:

from enum import Enum

class Sensitivity(Enum):
    PUBLIC = "public"
    INTERNAL = "internal"
    RESTRICTED = "restricted"

class GuardedValue:
    """Carries a sensitivity label alongside the wrapped value."""

    def __init__(self, value, level: Sensitivity):
        self.value = value
        self.level = level

def guard(df, level: Sensitivity) -> GuardedValue:
    """Label a dataframe (or any object) with a sensitivity level."""
    return GuardedValue(df, level)

Wrap sensitive data as soon as it's loaded:

pii_df = guard(spark.sql("SELECT * FROM customer"), Sensitivity.RESTRICTED)

Any extension attempting to serialize pii_df must check level and raise an exception if it isn't PUBLIC.
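
A minimal sketch of that check, assuming the extension funnels everything through one helper (safe_payload is our name for illustration, not part of any extension's API):

def safe_payload(obj):
    # Unwrap only values explicitly labeled PUBLIC; refuse everything else.
    if isinstance(obj, GuardedValue) and obj.level is Sensitivity.PUBLIC:
        return obj.value
    raise PermissionError("Refusing to serialize a non-public value")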

2. Kernel Middleware That Scrubs Prompt Payloads

Add a Comm manager that intercepts messages heading to the assistant:

import re

from ipykernel.comm import CommManager

SENSITIVE_RE = [
    re.compile(r"[0-9]{3}-[0-9]{2}-[0-9]{4}"),        # US Social Security numbers
    re.compile(r"\b(?:4[0-9]{12}(?:[0-9]{3})?)\b"),   # Visa card numbers
]

class GuardedComm(CommManager):
    def comm_msg(self, stream, ident, msg):
        # Scan every comm payload before it reaches a registered handler.
        data = msg["content"].get("data", {})
        for key, value in data.items():
            for pattern in SENSITIVE_RE:
                if pattern.search(str(value)):
                    raise PermissionError(f"Blocked sensitive payload in {key!r}")
        super().comm_msg(stream, ident, msg)

Extensions register with GuardedComm instead of the default manager, so every message is sanitized.

3. Summaries Instead of Rows

When the assistant needs to "understand" a dataframe, feed it schema summaries, not raw rows:

def summarize(df):
    # Schema and aggregate statistics only; never raw rows.
    # (pandas API; for a Spark frame use df.columns, dict(df.dtypes), df.count().)
    return {
        "columns": df.columns.tolist(),
        "dtypes": df.dtypes.astype(str).to_dict(),
        "stats": df.describe(include="all").to_dict(),
        "row_count": len(df),
    }

Tie this to the guard: only PUBLIC frames can provide samples; INTERNAL exposes stats; RESTRICTED exposes nothing.
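
A small dispatcher ties the two together; context_for is an illustrative name, not an existing API, and it assumes pandas frames to match summarize above:

def context_for(g: GuardedValue):
    # Decide how much of the frame the assistant may see, based on the label.
    if g.level is Sensitivity.PUBLIC:
        return {"summary": summarize(g.value),
                "sample": g.value.head(5).to_dict(orient="records")}
    if g.level is Sensitivity.INTERNAL:
        return {"summary": summarize(g.value)}
    return {}  # RESTRICTED: nothing leaves the kernel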

4. Prompt Provenance and Audit

Record every request:

{
  "user": "alice",
  "workspace": "shared-lab",
  "notebook": "churn.ipynb",
  "cell": "5",
  "sensitivity": "internal",
  "action": "explain"
}

Push logs to an append-only store. Compliance can answer "what data left the notebook?" without guesswork.
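
One minimal way to emit those records is a JSON Lines file that your log shipper forwards to the append-only store; the helper name and path below are illustrative:

import getpass
import json
import time

def log_prompt(notebook, cell, sensitivity, action, path="prompt-audit.jsonl"):
    # One JSON object per line; ship the file to an append-only store.
    record = {
        "ts": time.time(),
        "user": getpass.getuser(),
        "notebook": notebook,
        "cell": cell,
        "sensitivity": sensitivity,
        "action": action,
    }
    with open(path, "a") as fh:
        fh.write(json.dumps(record) + "\n")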

5. Runtime Policies

  • Block display of restricted frames. Wrap display/show to refuse printing rows from RESTRICTED guard objects (see the sketch after this list).
  • Scrub HTML outputs. For Databricks, intercept displayHTML calls.
  • Force local-only mode for restricted workspaces. Provide a switch-when on, completions and explanations execute locally (Code Llama, llama.cpp) with no network access.
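
A sketch of the first policy: shadow IPython's display in a kernel startup script. The wrapper is ours; only IPython.display.display is an existing API here:

from IPython.display import display as _display

def display(obj, *args, **kwargs):
    # Refuse to render RESTRICTED guard objects; unwrap everything else.
    if isinstance(obj, GuardedValue):
        if obj.level is Sensitivity.RESTRICTED:
            raise PermissionError("Display of restricted dataframes is blocked")
        obj = obj.value
    return _display(obj, *args, **kwargs)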

Testing & Verification

  • Unit tests for guard wrappers: attempt to serialize GuardedValue at each level (see the example after this list).
  • Integration tests with pytest-notebook: run notebooks that purposely print PII and assert prompts fail.
  • Red-team notebooks containing prompt-injection instructions; ensure middleware rejects them.
  • CI scans using gitleaks --no-git notebooks/ to catch stray datasets before pushes.
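
For instance, a couple of pytest cases against the guard helpers sketched earlier (safe_payload is the hypothetical serializer gate from section 1):

import pytest

def test_restricted_value_is_blocked():
    with pytest.raises(PermissionError):
        safe_payload(guard(object(), Sensitivity.RESTRICTED))

def test_public_value_passes_through():
    value = object()
    assert safe_payload(guard(value, Sensitivity.PUBLIC)) is value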

Common Questions

Will analysts hate labeling dataframes? Automate it. Pull table-level classifications from your catalog (Unity, DataHub) and wrap dataframes at load time.
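
For example, load-time wrapping might look like this, using the spark session from the earlier example and assuming you have already synced table classifications from the catalog into a lookup (the table names and mapping are illustrative, not a Unity or DataHub API):

CATALOG_LABELS = {
    "prod.customers": Sensitivity.RESTRICTED,
    "prod.page_views": Sensitivity.INTERNAL,
}

def load_table(name):
    # Look up the table's classification and wrap the frame on the way in.
    level = CATALOG_LABELS.get(name, Sensitivity.RESTRICTED)  # unknown tables fail closed
    return guard(spark.table(name), level)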

Does local-only mode kill accuracy? The small drop in output quality is worth the zero-exfil guarantee for regulated datasets.

Can we trust vendor "enterprise" controls? Use them, but still scrub payloads locally and log everything yourself. Belt and suspenders.

Conclusion

Notebook copilots are powerful, but they need the same defenses as any data export path. Tag your dataframes with sensitivity metadata, scrub prompt payloads inside the kernel, summarize instead of sharing raw rows, and log every interaction. Combined with local-only modes for sensitive work, those steps let analysts keep vibe-coding notebooks without leaking your customer tables.