ironcurtain

IronCurtain is a TypeScript runtime/CLI for autonomous AI agents that enforces a human-readable “constitution” (policy) compiled into deterministic rules. It mediates every agent tool call through MCP servers (e.g., filesystem/git/github/workspace) and a policy engine that allows, denies, or escalates actions for user approval. Agent code is isolated in a V8 sandbox (builtin mode), or an external agent is constrained by Docker plus network/MCP mediation (docker mode).
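Conceptually, the allow/deny/escalate mediation with deny-by-default matching can be sketched as below. This is a minimal sketch: the `Rule` shape, tool names, and `decide` function are assumptions for illustration, not IronCurtain's actual API.

```typescript
// Every tool call is checked against compiled rules before it reaches an
// MCP server; anything no rule matches is denied.

type Verdict = "allow" | "deny" | "escalate";

interface Rule {
  tool: string;            // e.g. "filesystem.read"
  pathPrefix?: string;     // optional resource constraint
  verdict: Verdict;
}

interface ToolCall {
  tool: string;
  path?: string;
}

function decide(rules: Rule[], call: ToolCall): Verdict {
  for (const rule of rules) {
    const toolMatches = rule.tool === call.tool;
    const pathMatches =
      rule.pathPrefix === undefined ||
      (call.path !== undefined && call.path.startsWith(rule.pathPrefix));
    if (toolMatches && pathMatches) return rule.verdict;
  }
  return "deny"; // deny-by-default: unmatched calls never reach the tool
}

const rules: Rule[] = [
  { tool: "filesystem.read", pathPrefix: "/workspace", verdict: "allow" },
  { tool: "git.push", verdict: "escalate" }, // mutations go to the human
];

console.log(decide(rules, { tool: "filesystem.read", path: "/workspace/a.ts" })); // allow
console.log(decide(rules, { tool: "git.push" }));                                 // escalate
console.log(decide(rules, { tool: "filesystem.delete", path: "/etc" }));          // deny
```

First matching rule wins here; the real engine's matching semantics may differ.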

Evaluated Mar 30, 2026
Homepage ↗ · Repo ↗
Category: Security
Tags: ai-ml, agent, security, sandbox, mcp, policy, trusted-process, typescript, cli
⚙ Agent Friendliness: 53/100 (Can an agent use this?)
🔒 Security: 64/100 (Is it safe for agents?)
⚡ Reliability: 26/100 (Does it work consistently?)

Score Breakdown

⚙ Agent Friendliness

MCP Quality: 80
Documentation: 75
Error Messages: 0
Auth Simplicity: 60
Rate Limits: 10

🔒 Security

TLS Enforcement: 95
Auth Strength: 70
Scope Granularity: 35
Dep. Hygiene: 55
Secret Handling: 60

Security posture centers on defense-in-depth: mediation of every tool call through a policy engine, deny-by-default policy matching, and isolation (a V8 isolate for the builtin agent; Docker with no network plus a MITM proxy for docker mode). Authentication is supported for external MCP servers (GitHub PAT, Google OAuth) and LLM API keys. However, the provided content does not detail OAuth scope granularity, formal threat-model coverage of dependencies, explicit secret-redaction guarantees, or runtime handling of policy-bypass attempts.
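The interactive-approval path of that mediation could look roughly like the sketch below. All names (`mediate`, `forward`, `askHuman`) are hypothetical, chosen only to illustrate how an "escalate" verdict might suspend a mutating call until a human approves it.

```typescript
type Verdict = "allow" | "deny" | "escalate";

interface ToolCall {
  tool: string;
  args: Record<string, unknown>;
}

async function mediate(
  verdict: Verdict,
  call: ToolCall,
  forward: (c: ToolCall) => Promise<string>,   // sends the call on to the MCP server
  askHuman: (c: ToolCall) => Promise<boolean>, // interactive approval prompt
): Promise<string> {
  switch (verdict) {
    case "allow":
      return forward(call);
    case "escalate":
      // The call is held until a human explicitly approves it.
      if (await askHuman(call)) return forward(call);
      return "denied by user";
    case "deny":
      return "denied by policy";
  }
}

// Example: a git push is escalated and the (simulated) user approves it.
(async () => {
  const result = await mediate(
    "escalate",
    { tool: "git.push", args: { remote: "origin" } },
    async (c) => `forwarded ${c.tool}`,
    async () => true,
  );
  console.log(result); // forwarded git.push
})();
```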

⚡ Reliability

Uptime/SLA: 0
Version Stability: 35
Breaking Changes: 20
Error Recovery: 50

Best When

You want autonomous agent functionality (including mutations) but require a boundary that routes risky actions through explicit policy and interactive approval, with defense-in-depth against prompt injection/drift.

Avoid When

You need a simple drop-in HTTP API service; IronCurtain is a local runtime/CLI with mediated tool calls and may require setup of multiple external integrations (LLM provider, optional GitHub/Google auth).

Use Cases

  • Autonomous code-related workflows (git operations, repo changes) with user-controlled escalation
  • Agent access control via natural-language policy for development/CI tooling
  • Running AI agents that can interact with external systems (GitHub, Google Workspace) under explicit allow/escalate/deny rules
  • Building personas/workspaces with different access levels and persistent memory

Not For

  • High-assurance regulated environments without independent security review/verification
  • Use where you need a stable, long-term API/contract for programmatic access (it appears to be a research/early-stage project)
  • Environments where you cannot provide/handle required LLM API keys or required third-party tokens (GitHub/Google)

Interface

REST API: No
GraphQL: No
gRPC: No
MCP Server: Yes
SDK: Yes
Webhooks: No

Authentication

Methods:
  • LLM provider API keys (Anthropic/Google/OpenAI) via environment variables or config
  • GitHub Personal Access Token for the GitHub MCP server
  • OAuth setup for the Google Workspace MCP server
OAuth: Yes
Scopes: No

The auth model is primarily configuration-based for the local runtime. The README mentions OAuth setup via `ironcurtain auth` for Google Workspace; GitHub uses a personal access token. No fine-grained OAuth scope list appears in the provided content.

Pricing

Free tier: No
Requires CC: No

Pricing is not described. Costs likely come from underlying LLM usage and optional third-party APIs/tokens.

Agent Metadata

Pagination: none
Idempotent: False
Retry Guidance: Not documented

Known Gotchas

  • The policy denies by default: actions run only when rules explicitly allow or escalate them, so the constitution may need adjustment and recompilation to permit desired actions.
  • Because mutations may require escalation, workflows that expect fully autonomous behavior may need auto-approval/whitelisting configuration.
  • Policy compilation runs through an LLM pipeline to compile and verify rules; if the constitution or dynamic lists are ambiguous, enforcement outcomes may be surprising until the text is revised.
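The ambiguity gotcha is easiest to see with an example. Below, a single natural-language clause is shown next to one plausible compiled form; the rule shape and paths are hypothetical, not IronCurtain's internal format.

```typescript
// An ambiguous constitution clause...
const clause =
  "The agent may edit files in the project, but deleting anything needs my approval.";

// ...and one plausible compilation of it into deterministic rules:
const compiled = [
  { tool: "filesystem.write", pathPrefix: "/project", verdict: "allow" },
  { tool: "filesystem.delete", verdict: "escalate" },
];

// Ambiguities the compiler had to resolve silently: does "the project"
// include files outside /project? Is `git rm` a delete or a git mutation?
// Reviewing the compiled rules before running the agent surfaces such
// surprises early.
console.log(clause);
console.log(compiled.length); // 2
```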

Scores are editorial opinions as of 2026-03-30.
