AI-Infra-Guard

AI-Infra-Guard (A.I.G) is an AI red-teaming and security assessment platform that runs multiple scanners and evaluations, including OpenClaw security scanning, multi-agent workflow security scanning, MCP server/agent-skills scanning, AI infrastructure/component vulnerability scanning, and LLM jailbreak/prompt security evaluations. It exposes a web UI and a documented set of task-creation APIs (Swagger/docs) for running scans and retrieving results.
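The task-creation APIs mentioned above can be driven by any HTTP client. The sketch below shows a minimal Python wrapper for constructing a task URL and payload; the base URL, endpoint path, and payload fields are assumptions for illustration, so consult the project's Swagger docs for the real contract.

```python
# Hypothetical client sketch for AI-Infra-Guard's task-creation APIs.
# Endpoint path and payload fields are assumptions, not the project's
# documented schema.
import json
from urllib.parse import urljoin

class AIGClient:
    def __init__(self, base_url: str):
        # Normalize the trailing slash so urljoin behaves predictably.
        self.base_url = base_url.rstrip("/") + "/"

    def task_url(self, path: str) -> str:
        # Join the base URL with a relative API path.
        return urljoin(self.base_url, path.lstrip("/"))

    def build_scan_payload(self, scanner: str, target: str) -> str:
        # Serialize a minimal task-creation body.
        return json.dumps({"scanner": scanner, "target": target})

client = AIGClient("http://aig.internal:8088")
print(client.task_url("/api/v1/tasks"))
# → http://aig.internal:8088/api/v1/tasks
```

Keeping URL and payload construction in pure functions like this makes the integration easy to unit-test before pointing it at a live (internal-only) deployment.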

Evaluated Mar 29, 2026
Category: Security · Tags: ai-ml, security, red-team, vulnerability-scanner, mcp, llm-evaluation, agents, openclaw
⚙ Agent Friendliness: 41 / 100 (Can an agent use this?)
🔒 Security: 31 / 100 (Is it safe for agents?)
⚡ Reliability: 31 / 100 (Does it work consistently?)

Score Breakdown

⚙ Agent Friendliness

MCP Quality: 0
Documentation: 75
Error Messages: 0
Auth Simplicity: 10
Rate Limits: 10

🔒 Security

TLS Enforcement: 70
Auth Strength: 5
Scope Granularity: 0
Dep. Hygiene: 45
Secret Handling: 45

Security-relevant notes from the README: the project explicitly states that it currently lacks an authentication mechanism and should not be deployed on public networks. It also mentions a bug fix that masks token fields in a GetTaskDetail API response to prevent credential leakage. The provided excerpt gives no details on TLS enforcement, secret storage practices, rate limiting, or fine-grained access controls.
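In the spirit of the GetTaskDetail fix noted above, the sketch below shows one generic way to mask token-like fields in an API response before it leaves the server. The field names are assumptions, not the project's actual schema.

```python
# Illustrative sketch: recursively mask sensitive string fields in a
# response object. Field names ("token", "api_key", "secret") are
# assumptions for demonstration.
def mask_secrets(obj, sensitive=("token", "api_key", "secret")):
    """Return a copy of obj with sensitive string fields replaced by '***'."""
    if isinstance(obj, dict):
        return {
            k: "***" if k.lower() in sensitive and isinstance(v, str)
            else mask_secrets(v, sensitive)
            for k, v in obj.items()
        }
    if isinstance(obj, list):
        return [mask_secrets(v, sensitive) for v in obj]
    return obj

detail = {"task_id": "t1", "token": "sk-live-abc", "steps": [{"api_key": "xyz"}]}
print(mask_secrets(detail))
# → {'task_id': 't1', 'token': '***', 'steps': [{'api_key': '***'}]}
```

Masking at the serialization boundary, rather than in each handler, keeps one field from slipping through when a new endpoint is added.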

⚡ Reliability

Uptime/SLA: 0
Version Stability: 55
Breaking Changes: 40
Error Recovery: 30

Best When

You can run it in a trusted/internal environment (e.g., behind your network controls), and you want automated multi-component security scanning plus task-based APIs for integration into your internal security workflow.

Avoid When

You cannot place it behind authentication/network controls, or you need robust end-user security controls and strict compliance/data-handling guarantees that are not clearly documented.

Use Cases

  • Scanning OpenClaw deployments for insecure configuration, skill risks, CVEs, and privacy leakage
  • Assessing security of agent workflows (e.g., Dify/Coze-style pipelines) against common attack classes
  • Scanning MCP servers and agent skills from source code or remote URLs for multiple vulnerability/risk categories
  • Inventorying AI infrastructure/framework components and matching known CVEs
  • Evaluating LLM/jailbreak robustness using curated attack datasets and comparing cross-model behavior
  • Running scheduled or on-demand security self-examinations for internal AI systems

Not For

  • Public internet deployment without compensating controls (the project states it lacks an authentication mechanism)
  • Applications requiring strong, standardized enterprise authn/authz out of the box
  • Environments that require guaranteed data residency/compliance guarantees not described in the provided docs

Interface

REST API: Yes
GraphQL: No
gRPC: No
MCP Server: No
SDK: No
Webhooks: No

Authentication

OAuth: No
Scopes: No

README states the project currently lacks an authentication mechanism and should not be deployed on public networks. No auth methods, scopes, or OAuth are described in the provided content.

Pricing

Model: Free & open-source (Apache-2.0), with a referenced Pro version
Free tier: Yes
Requires CC: No

Pro version requires an invitation code; pricing details are not provided in the given README excerpt.

Agent Metadata

Pagination: none
Idempotent: No
Retry Guidance: Not documented

Known Gotchas

  • README indicates no authentication mechanism; place behind internal network/WAF/reverse proxy with access controls
  • Task-based APIs may not be idempotent; repeated task creation could re-run expensive scans
  • Credential leakage concerns are noted (masking token fields in a specific API response), so agents should still treat scan outputs as sensitive
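Since the metadata above indicates the task APIs are not idempotent, a client-side guard keyed on a hash of the task parameters can prevent accidental re-runs of expensive scans. This is a sketch under that assumption; `create_task` stands in for whatever call actually submits the task.

```python
# Client-side idempotency guard for task creation: reuse the existing
# task when the same parameters are submitted twice. `create_task` is a
# placeholder for the real submission call.
import hashlib
import json

_submitted: dict[str, str] = {}  # params-hash -> task id

def submit_once(params: dict, create_task) -> str:
    # sort_keys makes the hash stable across key ordering.
    key = hashlib.sha256(
        json.dumps(params, sort_keys=True).encode()
    ).hexdigest()
    if key not in _submitted:
        _submitted[key] = create_task(params)  # expensive scan starts here
    return _submitted[key]

# Usage with a stub creator: the second call reuses the first task.
counter = {"n": 0}
def fake_create(params):
    counter["n"] += 1
    return f"task-{counter['n']}"

a = submit_once({"scanner": "mcp", "target": "x"}, fake_create)
b = submit_once({"target": "x", "scanner": "mcp"}, fake_create)
print(a == b, counter["n"])  # → True 1
```

In production the `_submitted` map would need to be persisted (or replaced by querying the server for an existing task with the same parameters) so the guard survives restarts.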

Scores are editorial opinions as of 2026-03-29.
