@arizeai/phoenix-mcp

Phoenix is an open-source AI observability platform designed for experimentation, evaluation, and troubleshooting.

Evaluated Mar 18, 2026
Categories: Databases, AI Observability, Monitoring, Evaluation
⚙ Agent Friendliness: 74 / 100 (Can an agent use this?)
🔒 Security: 75 / 100 (Is it safe for agents?)
⚡ Reliability: 70 / 100 (Does it work consistently?)

Score Breakdown

⚙ Agent Friendliness

MCP Quality: 80
Documentation: 80
Error Messages: --
Auth Simplicity: 70
Rate Limits: 80

🔒 Security

TLS Enforcement: 100
Auth Strength: 70
Scope Granularity: 50
Dep. Hygiene: 50
Secret Handling: 100

No known vulnerabilities in dependencies.

⚡ Reliability

Uptime/SLA: 50
Version Stability: 80
Breaking Changes: 100
Error Recovery: 50

Best When

Used in conjunction with popular LLM frameworks and tools.

Avoid When

A simple monitoring solution would suffice.

Use Cases

  • Tracing LLM applications
  • Evaluating application performance
  • Managing datasets for experimentation
  • Optimizing prompts and model comparisons

Not For

  • Users looking for a proprietary solution
  • Those who require extensive vendor support

Interface

REST API
Yes
GraphQL
No
gRPC
No
MCP Server
Yes
SDK
Yes
Webhooks
No

Authentication

Methods: API Key, OAuth2
OAuth: Yes
Scopes: Yes

API key authentication is straightforward.

Pricing

Model: Freemium
Free tier: Yes
Requires credit card: No

Free tier available for initial testing.

Agent Metadata

Pagination: Cursor-based
Idempotent: True
Retry Guidance: Documented
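These three properties combine nicely for agents: because calls are idempotent, each cursor page can be retried safely on transient failure. A sketch of that loop, using a stand-in client (the endpoint shape, `data`/`next_cursor` field names, and `FakeClient` are hypothetical, not the Phoenix API):

```python
import time

class FakeClient:
    """Stand-in for a cursor-paginated REST client; fails once to exercise retry."""
    def __init__(self, items, page_size=2, fail_once=True):
        self._items = items
        self._page_size = page_size
        self._fail_once = fail_once

    def list_items(self, cursor=None):
        if self._fail_once:  # simulate one transient network failure
            self._fail_once = False
            raise ConnectionError("transient network error")
        start = int(cursor or 0)
        nxt = start + self._page_size
        return {
            "data": self._items[start:nxt],
            "next_cursor": nxt if nxt < len(self._items) else None,
        }

def fetch_all(client, max_retries=3, backoff=0.01):
    """Walk cursor pages; retry each request (safe because calls are idempotent)."""
    results, cursor = [], None
    while True:
        for attempt in range(max_retries):
            try:
                page = client.list_items(cursor=cursor)
                break
            except ConnectionError:
                if attempt == max_retries - 1:
                    raise
                time.sleep(backoff * 2 ** attempt)  # exponential backoff
        results.extend(page["data"])
        cursor = page["next_cursor"]
        if cursor is None:
            return results

print(fetch_all(FakeClient(["a", "b", "c", "d", "e"])))  # ['a', 'b', 'c', 'd', 'e']
```

The documented retry guidance matters here: without idempotency guarantees, blind retries could duplicate writes, so an agent should only apply this pattern to endpoints the docs mark as safe.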


Full Evaluation Report

Comprehensive deep-dive: security analysis, reliability audit, agent experience review, cost modeling, competitive positioning, and improvement roadmap for @arizeai/phoenix-mcp.

AI-powered analysis · PDF + markdown · Delivered within 30 minutes

$99

Package Brief

Quick verdict, integration guide, cost projections, gotchas with workarounds, and alternatives comparison.

Delivered within 10 minutes

$3

Score Monitoring

Get alerted when this package's AF, security, or reliability scores change significantly. Stay ahead of regressions.

Continuous monitoring

$3/mo

Scores are editorial opinions as of 2026-03-18.
