{"id":"agentseal-agentseal","name":"agentseal","af_score":58.8,"security_score":52.0,"reliability_score":28.8,"what_it_does":"AgentSeal is a local-first security toolkit/CLI and Python/TypeScript library for auditing AI agent configurations and prompts. It scans for dangerous “skill”/agent files, checks MCP server/tool configurations for poisoning, analyzes toxic data flows, provides prompt red-teaming via adversarial probes, and can continuously watch/alert on changes to agent config files.","best_when":"You maintain local developer tooling/agent setups (VS Code extensions, agent CLIs, MCP tool servers) and want automated checks for configuration poisoning and prompt injection regressions in CI.","avoid_when":"You need a formal assurance/compliance attestation rather than heuristic scanning and probe-based testing; or you cannot control network/LLM endpoints and need fully offline verification for prompt testing.","last_evaluated":"2026-03-30T13:41:30.129836+00:00","has_mcp":true,"has_api":false,"auth_methods":["Environment variables for provider API keys (OPENAI_API_KEY, ANTHROPIC_API_KEY, MINIMAX_API_KEY)","No-auth/local operation for guard/shield; HTTP endpoint mode varies by target","Optional custom agent function (no external auth by AgentSeal)"],"has_free_tier":true,"known_gotchas":["Probe-based scoring can produce false positives/negatives depending on prompt formatting, model behavior, and tool/runtime differences.","When scanning MCP servers, behavior may depend on runtime context (stdio vs SSE) and what tools actually expose; tool description poisoning is only one layer of safety.","`scan` relies on LLM responses for prompt tests when using cloud models; results may differ across providers/models and versions."],"error_quality":0.0}