Agentic Radar

Agentic Radar is a security scanner by SPLX.ai that performs static analysis on agentic AI system codebases to identify vulnerabilities specific to AI workflows — prompt injection risks, PII leakage through tool outputs, insecure tool integrations, and over-privileged agent permissions. It supports multiple agent frameworks (OpenAI Agents SDK, CrewAI, LangGraph, n8n, AutoGen) and generates visual dependency graphs mapping the agent's tool and service exposure. The tool maps findings to OWASP LLM Top 10 categories and can run runtime adversarial prompt injection tests against live OpenAI Agents-based systems. It is designed to be run in CI/CD pipelines as a gate before deploying agentic systems to production.
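For orientation, a minimal install-and-scan sketch (the subcommand and flag names here are assumptions, not verified against the current CLI; check `agentic-radar --help`):

```shell
# Install from PyPI (package name assumed from the project name)
pip install agentic-radar

# Static analysis of a CrewAI codebase; no API key needed for this mode.
# Subcommand and flag names are assumptions -- verify with --help.
agentic-radar scan crewai -i ./my_agent_project -o report.html
```

Static scanning and graph generation run against the codebase alone; only the prompt-hardening and runtime adversarial features need model API access.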

Evaluated: Mar 06, 2026
Category: Security
Tags: security-scanner · owasp · prompt-injection · static-analysis · agentic-ai · crewai · langgraph · openai-agents · n8n · autogen · ci-cd · splx
⚙ Agent Friendliness: 55/100 (Can an agent use this?)
🔒 Security: 77/100 (Is it safe for agents?)
⚡ Reliability: 68/100 (Does it work consistently?)

Score Breakdown

⚙ Agent Friendliness

MCP Quality: 0
Documentation: 76
Error Messages: 58
Auth Simplicity: 78
Rate Limits: 68

🔒 Security

TLS Enforcement: 90
Auth Strength: 75
Scope Granularity: 68
Dep. Hygiene: 80
Secret Handling: 72

An agentic security scanning tool that analyzes agent configurations for vulnerabilities. Because it is itself a security tool, protect its own credentials; scan results may contain sensitive architectural information.

⚡ Reliability

Uptime/SLA: 70
Version Stability: 70
Breaking Changes: 65
Error Recovery: 68

Best When

You need to audit the security posture of an agentic AI system before production deployment, especially when using popular frameworks like CrewAI, LangGraph, or OpenAI Agents.

Avoid When

You need an MCP server that provides tools to agents rather than scanning them, or you're using unsupported frameworks like custom LangChain or AWS Bedrock agents.

Use Cases

  • Pre-deployment security audit of agentic AI systems: scan the codebase for prompt injection vulnerabilities, PII exposure risks, and insecure tool configurations
  • Generating visual dependency graphs showing which external tools, APIs, and data sources an agent can reach — essential for scope assessment
  • CI/CD security gate: block deployment of agentic systems that fail minimum security thresholds
  • OWASP LLM Top 10 compliance reporting for AI governance and security reviews
  • Runtime adversarial testing of OpenAI Agents systems with automated prompt injection payloads
  • Auditing MCP server integrations within agent codebases for security misconfigurations
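As a sketch of the CI/CD gate use case, a GitHub Actions job might look like the following (the scan subcommand, its flags, and the assumption that findings produce a non-zero exit code are all unverified; adjust to the actual CLI):

```yaml
# .github/workflows/agent-security.yml (hypothetical)
name: agentic-security-gate
on: [pull_request]
jobs:
  scan:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-python@v5
        with:
          python-version: "3.12"   # CrewAI scanning needs Python 3.10-3.13
      - run: pip install agentic-radar
      # Assumed invocation -- verify subcommand/flags with --help.
      # A non-zero exit code fails the job and blocks the deploy.
      - run: agentic-radar scan crewai -i . -o report.html
      - uses: actions/upload-artifact@v4
        with:
          name: agentic-radar-report
          path: report.html
```

Uploading the HTML report as an artifact keeps the evidence reviewable even when the gate passes.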

Not For

  • Providing MCP tools to agents — it scans agents rather than empowering them
  • Scanning traditional web applications without agentic AI components
  • Agent frameworks not yet supported: custom LangChain setups, Vertex AI agents, AWS Bedrock agents
  • Runtime monitoring of production agents in real-time (it's a point-in-time scanner)

Interface

REST API: No
GraphQL: No
gRPC: No
MCP Server: No
SDK: No
Webhooks: No

Authentication

Methods: api_key
OAuth: No
Scopes: No

Requires OPENAI_API_KEY or AZURE_OPENAI_API_KEY for prompt hardening and runtime adversarial testing features. Static analysis and dependency graph generation work without any API keys.
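In practice that split looks like this (the env-var names come from the note above; the command and flags are illustrative assumptions):

```shell
# Static analysis + dependency graph: no key required.
agentic-radar scan langgraph -i . -o report.html   # assumed flags

# Prompt hardening / runtime adversarial tests: set one of these first.
export OPENAI_API_KEY="sk-..."
# or, for Azure-hosted models:
export AZURE_OPENAI_API_KEY="..."
```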

Pricing

Model: open_source
Free tier: Yes
Requires CC: No

Apache 2.0 licensed — free for all use including commercial. Runtime adversarial testing features consume OpenAI API credits at standard rates. SPLX.ai offers commercial products separately.

Agent Metadata

Pagination: none
Idempotent: Full
Retry Guidance: Not documented

Known Gotchas

  • CRITICAL CATEGORY DISTINCTION: This tool scans agentic systems — it is not an MCP server or API for agents to call. It detects MCP servers within scanned code but does not provide MCP tools itself.
  • Runtime adversarial testing (prompt injection) only supports OpenAI Agents framework — not CrewAI, LangGraph, or others
  • CrewAI support requires Python 3.10-3.13 specifically — older or newer Python versions may fail silently
  • Framework detection is static: code that builds agent configurations dynamically at runtime may not be fully analyzed
  • The visual dependency graph is HTML output — not machine-readable JSON, limiting automated processing of the graph data
  • Scanning MCP server configurations requires the YAML config to be present — servers configured programmatically may be missed
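Given the gotcha above that the dependency graph is HTML rather than JSON, any downstream automation has to scrape the report. A minimal stdlib sketch follows; the `node` CSS class and the markup shape are pure assumptions, so inspect a real report before relying on this:

```python
from html.parser import HTMLParser

class NodeLabelExtractor(HTMLParser):
    """Collect text from elements carrying a given CSS class.

    The class name 'node' is an assumption -- check the actual
    Agentic Radar report markup before using this for real.
    """
    def __init__(self, css_class="node"):
        super().__init__()
        self.css_class = css_class
        self._depth = 0      # >0 while inside a matching element
        self.labels = []

    def handle_starttag(self, tag, attrs):
        classes = (dict(attrs).get("class") or "").split()
        if self._depth or self.css_class in classes:
            self._depth += 1

    def handle_endtag(self, tag):
        if self._depth:
            self._depth -= 1

    def handle_data(self, data):
        if self._depth and data.strip():
            self.labels.append(data.strip())

# Example against a stand-in snippet (not real report output):
sample = '<div class="node">web_search</div><div class="node">db_query</div>'
p = NodeLabelExtractor()
p.feed(sample)
print(p.labels)  # ['web_search', 'db_query']
```

This keeps the dependency on the report's structure in one place, so a markup change only breaks the extractor rather than the whole pipeline.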


Scores are editorial opinions as of 2026-03-06.
