agentseal

AgentSeal is a local-first security toolkit/CLI and Python/TypeScript library for auditing AI agent configurations and prompts. It scans for dangerous “skill”/agent files, checks MCP server/tool configurations for poisoning, analyzes toxic data flows, provides prompt red-teaming via adversarial probes, and can continuously watch/alert on changes to agent config files.
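The watch/alert capability described above can be approximated with a small polling sketch. This is not AgentSeal's actual implementation — just an illustration of change detection over an agent config directory, with all names and paths hypothetical:

```python
import hashlib
from pathlib import Path


def snapshot(config_dir: str) -> dict:
    """Map each file under config_dir to a SHA-256 digest of its contents."""
    return {
        str(p): hashlib.sha256(p.read_bytes()).hexdigest()
        for p in Path(config_dir).rglob("*")
        if p.is_file()
    }


def diff_snapshots(before: dict, after: dict) -> list:
    """Return paths that were added, removed, or modified between snapshots."""
    changed = [p for p in after if before.get(p) != after[p]]
    removed = [p for p in before if p not in after]
    return sorted(changed + removed)
```

A watcher would take a snapshot at intervals and alert whenever `diff_snapshots` is non-empty, which is the supply-chain-modification signal the tool's watch mode targets.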

Evaluated Mar 30, 2026
Homepage ↗ · Repo ↗
Tags: Security, agent-security, ai-security, prompt-injection, mcp, mcp-security, cli, red-teaming, supply-chain-security, python, typescript, security-scanner
⚙ Agent Friendliness: 59/100 (Can an agent use this?)
🔒 Security: 52/100 (Is it safe for agents?)
⚡ Reliability: 29/100 (Does it work consistently?)

Score Breakdown

⚙ Agent Friendliness

MCP Quality: 45
Documentation: 70
Error Messages: 0
Auth Simplicity: 80
Rate Limits: 20

🔒 Security

TLS Enforcement: 85
Auth Strength: 55
Scope Granularity: 20
Dep. Hygiene: 35
Secret Handling: 60

Security intent is strong (local scanning, MCP poisoning checks, prompt red-teaming, Unicode/Base64/BiDi deobfuscation, baseline tracking). However, the README does not describe transport security controls (TLS, certificate validation) in detail, does not document how secrets are handled in logs, and does not provide details on dependency/CVE hygiene or scope granularity. The tool also performs security testing that may require contacting agent endpoints/models, so operational safety depends on your environment and LLM/provider choices.
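The Unicode/Base64/BiDi deobfuscation mentioned above can be illustrated with a minimal detector. This is a sketch of the general technique, not AgentSeal's code; the character set and thresholds are assumptions:

```python
import base64
import re
import unicodedata

# BiDi control characters commonly abused to visually reorder text in prompts
BIDI_CONTROLS = {
    "\u202a", "\u202b", "\u202c", "\u202d", "\u202e",
    "\u2066", "\u2067", "\u2068", "\u2069",
}


def find_bidi_controls(text: str) -> list:
    """Return the Unicode names of any BiDi control characters present."""
    return [unicodedata.name(c) for c in text if c in BIDI_CONTROLS]


def find_base64_blobs(text: str, min_len: int = 16) -> list:
    """Return substrings that decode cleanly as Base64 (possible hidden payloads)."""
    blobs = []
    for m in re.finditer(r"[A-Za-z0-9+/]{%d,}={0,2}" % min_len, text):
        try:
            base64.b64decode(m.group(), validate=True)
            blobs.append(m.group())
        except Exception:
            pass  # not valid Base64; ignore
    return blobs
```

Note that `find_base64_blobs` can flag long innocuous alphanumeric runs; a real scanner would layer entropy checks or decoded-content classification on top.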

⚡ Reliability

Uptime/SLA: 0
Version Stability: 45
Breaking Changes: 40
Error Recovery: 30

Best When

You maintain local developer tooling/agent setups (VS Code extensions, agent CLIs, MCP tool servers) and want automated checks for configuration poisoning and prompt injection regressions in CI.

Avoid When

You need a formal assurance/compliance attestation rather than heuristic scanning and probe-based testing; or you cannot control network/LLM endpoints and need fully offline verification for prompt testing.

Use Cases

  • Pre-deployment scanning of local AI agent configurations (skills, MCP configs) for malicious or risky patterns
  • Prompt injection/red-teaming of system prompts with a deterministic trust score
  • Auditing live MCP servers for poisoned tool descriptions and hidden instructions
  • Continuous monitoring of agent config directories for supply-chain style modifications
  • Generating machine-readable security reports (e.g., JSON/SARIF) for CI integration
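For the last use case, a machine-readable report can be assembled from findings in a few lines. The field names below follow the SARIF 2.1.0 schema, but the finding structure and tool name are illustrative, not AgentSeal's actual output format:

```python
import json


def to_sarif(findings: list) -> str:
    """Serialize scan findings into a minimal SARIF 2.1.0 log for CI upload."""
    results = [
        {
            "ruleId": f["rule"],
            "level": f.get("level", "warning"),
            "message": {"text": f["message"]},
            "locations": [{
                "physicalLocation": {
                    "artifactLocation": {"uri": f["path"]},
                },
            }],
        }
        for f in findings
    ]
    log = {
        "version": "2.1.0",
        "$schema": "https://json.schemastore.org/sarif-2.1.0.json",
        "runs": [{
            "tool": {"driver": {"name": "agentseal", "rules": []}},
            "results": results,
        }],
    }
    return json.dumps(log, indent=2)
```

SARIF is worth the small overhead over plain JSON because CI platforms can ingest it natively and annotate findings inline on pull requests.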

Not For

  • As a replacement for secure-by-design agent runtime controls and least-privilege tool access
  • Guaranteeing absence of vulnerabilities or bypassing all novel prompt attacks
  • Auditing third-party systems you cannot access locally or via an authorized endpoint/MCP connection

Interface

REST API
No
GraphQL
No
gRPC
No
MCP Server
Yes
SDK
Yes
Webhooks
No

Authentication

Methods:
  • Environment variables for provider API keys (OPENAI_API_KEY, ANTHROPIC_API_KEY, MINIMAX_API_KEY)
  • No-auth/local operation for guard/shield; HTTP endpoint mode varies by target
  • Optional custom agent function (no external auth by AgentSeal)
OAuth: No
Scopes: No

No OAuth described. Cloud model usage requires API keys; local guard/shield operation does not. For HTTP endpoint scanning, auth is unspecified (it depends on the endpoint you provide).
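The environment-variable lookup described above might be sketched as follows. The three variable names come from the README; the provider-to-variable mapping and error behavior are assumptions for illustration:

```python
import os

# Env var names listed in the README; the mapping keys are assumed labels.
PROVIDER_KEYS = {
    "openai": "OPENAI_API_KEY",
    "anthropic": "ANTHROPIC_API_KEY",
    "minimax": "MINIMAX_API_KEY",
}


def resolve_api_key(provider: str) -> str:
    """Fetch the provider's API key from the environment, failing loudly if absent."""
    var = PROVIDER_KEYS.get(provider.lower())
    if var is None:
        raise ValueError("unknown provider: %s" % provider)
    key = os.environ.get(var)
    if not key:
        raise RuntimeError(
            "%s is not set; cloud scans need it (guard/shield run offline)" % var
        )
    return key
```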

Pricing

Free tier: Yes
Requires CC: No

Cost drivers are the LLM calls used by `scan` and optional LLM classification in MCP scanning; guard/shield are offline. Exact Pro pricing and free-tier limits are not stated in the README.

Agent Metadata

Pagination: none
Idempotent: False
Retry Guidance: Not documented

Known Gotchas

  • Probe-based scoring can produce false positives/negatives depending on prompt formatting, model behavior, and tool/runtime differences.
  • When scanning MCP servers, behavior may depend on runtime context (stdio vs SSE) and what tools actually expose; tool description poisoning is only one layer of safety.
  • `scan` relies on LLM responses for prompt tests when using cloud models; results may differ across providers/models and versions.
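The baseline tracking mentioned in the security notes is one way to tame the false-positive churn these gotchas cause. A minimal sketch of the idea, with the finding shape and keying assumed rather than taken from AgentSeal:

```python
def new_findings(current: list, baseline: list) -> list:
    """Return only findings not already accepted in the baseline.

    Findings are keyed by (rule, path), so re-runs that reproduce
    previously reviewed results stay quiet and only regressions surface.
    """
    seen = {(b["rule"], b["path"]) for b in baseline}
    return [f for f in current if (f["rule"], f["path"]) not in seen]
```

In CI this lets a team commit a reviewed baseline once and fail the build only on findings that are genuinely new, rather than re-triaging model-dependent results on every run.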

Full Evaluation Report

Comprehensive deep-dive: security analysis, reliability audit, agent experience review, cost modeling, competitive positioning, and improvement roadmap for agentseal.

AI-powered analysis · PDF + markdown · Delivered within 30 minutes

$99

Package Brief

Quick verdict, integration guide, cost projections, gotchas with workarounds, and alternatives comparison.

Delivered within 10 minutes

$3

Score Monitoring

Get alerted when this package's AF, security, or reliability scores change significantly. Stay ahead of regressions.

Continuous monitoring

$3/mo

Scores are editorial opinions as of 2026-03-30.
