Container Isolation: Why It Matters for AI Execution

AI agents that can execute code are powerful. They can install dependencies, run scripts, modify files, and interact with APIs. That same power makes them dangerous if they run without proper isolation. Container isolation is not a nice-to-have for AI execution — it is a hard requirement.

The Risk of Uncontained AI Execution

When an AI agent runs on your local machine or a shared server, it has access to everything you have access to: your file system, your credentials, your network, and your running processes. A single mistake in a prompt — or an unexpected behavior from the model — can result in:

  • Data loss: The agent deletes or overwrites files it should not have touched.
  • Credential exposure: The agent reads SSH keys, API tokens, or environment variables and includes them in output logs.
  • System modification: The agent installs packages, changes system settings, or modifies configuration files.
  • Network exfiltration: The agent sends data to external endpoints, intentionally or as part of a tool call it did not fully understand.

These are not theoretical risks. Anyone who has worked with AI agents in unrestricted environments has seen at least one of these happen.

How Container Isolation Helps

Docker containers provide process-level isolation from the host system. When FlowKoi runs a workflow, it creates a fresh container with:

A clean filesystem. The container starts with only the base image and the workflow’s artifacts. No access to the host filesystem, no leftover state from previous runs.

Controlled network access. Containers can be configured with restricted or no network access. If a workflow does not need the internet, it does not get it.

Resource limits. CPU, memory, and disk usage can be capped per container. A runaway process cannot consume all available resources on the host.

Non-root execution. Workflows run as a non-root user inside the container. Even if the agent attempts a privileged operation, the container’s user permissions cause it to fail.

Ephemeral lifecycle. When the workflow finishes, the container is destroyed. Nothing persists except the explicitly saved outputs. Any unintended changes, installed packages, or temporary files disappear.
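Taken together, these properties map onto standard `docker run` flags. A minimal sketch of such an invocation — the image name, user ID, mount path, and limits below are illustrative assumptions, not FlowKoi's actual command:

```shell
# Illustrative isolated run: ephemeral, offline, resource-capped, non-root.
# "workflow-image" and the paths are assumed names for this sketch.
docker run --rm \
  --network none \
  --memory 2g \
  --cpus 2 \
  --pids-limit 256 \
  --user 1000:1000 \
  -v "$PWD/artifacts:/workspace" \
  workflow-image \
  python /workspace/run.py
```

The `--rm` flag destroys the container on exit, so nothing persists beyond whatever the workflow wrote into the mounted workspace directory.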

Defense in Depth

Container isolation is one layer in a defense-in-depth strategy. FlowKoi combines it with several other measures:

Artifact-based file management. Only files defined as workflow artifacts are injected into the container and synced back. The agent cannot access or modify files outside its workspace.

Credential isolation. Each workflow session gets its own CLAUDE_CONFIG_DIR with scoped credentials. Credentials are not shared across workflows or users.
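The per-session pattern can be sketched in a few lines of shell. The session identifier and base directory here are assumptions for illustration, not FlowKoi's real naming scheme:

```shell
# Hypothetical per-session config dir; SESSION_ID and the base path
# are illustrative, not FlowKoi's actual layout.
SESSION_ID="session-$$"
export CLAUDE_CONFIG_DIR="$HOME/.flowkoi/sessions/$SESSION_ID/claude"
mkdir -p "$CLAUDE_CONFIG_DIR"
chmod 700 "$CLAUDE_CONFIG_DIR"   # readable only by the owning user
```

Because each session points `CLAUDE_CONFIG_DIR` at its own directory, credentials written there never leak into another session's environment.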

Output path separation. Files written to the configured output path (default output/) are treated differently from workspace files. The file watcher explicitly excludes output files from artifact sync, preventing accidental overwrites of workflow configuration.
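The exclusion can be pictured with ordinary shell tools. A hedged sketch — paths are illustrative, and `tar` stands in for FlowKoi's actual file watcher:

```shell
# Simulate a workspace with one artifact and one output file
mkdir -p workspace/output synced-artifacts
echo "artifact" > workspace/notes.md
echo "result"   > workspace/output/report.txt

# Sync everything back except the output/ path
tar -C workspace --exclude='./output' -cf - . | tar -C synced-artifacts -xf -
# synced-artifacts/ now holds notes.md; output/report.txt stays behind
```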

When to Restrict Network Access

Not every workflow needs internet access. A code review workflow that analyzes files already present in the container can run with --network=none. A data processing workflow that reads from a mounted volume and writes results to the output directory has no reason to reach external endpoints.

Restrict network access by default and only enable it when the workflow explicitly requires it. This eliminates an entire category of risks — accidental or malicious data exfiltration — with a single flag.
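The default-deny policy can be sketched as a small wrapper. Variable names are illustrative, and the final command is echoed rather than executed here:

```shell
# Default-deny: workflows get --network none unless they opt in explicitly.
NETWORK_FLAG="--network none"
if [ "${WORKFLOW_NEEDS_NETWORK:-false}" = "true" ]; then
  NETWORK_FLAG="--network bridge"   # opt-in path
fi
echo "docker run --rm $NETWORK_FLAG workflow-image"
# prints: docker run --rm --network none workflow-image
```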

The Cost of Not Isolating

Running AI agents without isolation is like giving a new employee admin access to production on their first day. It might work out fine most of the time. But when it goes wrong, it goes very wrong. Container isolation ensures that the blast radius of any AI agent mistake is limited to the container itself.

For AI execution, containers are not overhead. They are infrastructure.