Vibe Check MCP Server

Vibe Check is an MCP server that acts as a meta-mentor for AI agents. It uses Chain-Pattern Interrupts (CPI) to challenge assumptions, prevent tunnel vision and over-engineering, and enforce session-specific rules; the project's research claims CPI roughly doubles agent task success rates in evaluation runs.

Evaluated Mar 07, 2026 · Version: latest
Homepage ↗ · Repo ↗
Tags: other, mcp, agent-safety, meta-cognition, anti-tunnel-vision, llm, gemini, openai, anthropic, open-source, node
⚙ Agent Friendliness: 81/100 (Can an agent use this?)
🔒 Security: 75/100 (Is it safe for agents?)
⚡ Reliability: 71/100 (Does it work consistently?)

Score Breakdown

⚙ Agent Friendliness

  • MCP Quality: 82
  • Documentation: 85
  • Error Messages: 75
  • Auth Simplicity: 82
  • Rate Limits: 75

🔒 Security

  • TLS Enforcement: 90
  • Auth Strength: 72
  • Scope Granularity: 65
  • Dep. Hygiene: 78
  • Secret Handling: 72

Community MCP server for AI code review/vibe checking. Minimal auth. Review before production use. Community-maintained — security depends on author's practices.

⚡ Reliability

  • Uptime/SLA: 72
  • Version Stability: 72
  • Breaking Changes: 70
  • Error Recovery: 70

Best When

You have an AI agent working on complex, open-ended coding or research tasks where you've observed it going off-track, over-engineering, or getting stuck in a flawed strategy — and you want a lightweight oversight layer backed by research.

Avoid When

Your agent tasks are short, well-defined, and deterministic, where the overhead of a meta-mentor LLM call at each step would outweigh the benefit.

Use Cases

  • Adding a reflective pause mechanism to AI coding agents that catches runaway complexity and off-track strategies mid-task
  • Logging agent mistakes and successful patterns per-session to build a learning feedback loop via vibe_learn
  • Enforcing per-session behavioral rules (e.g., 'always prefer minimal solutions') that the agent must check via update_constitution
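The tools above are invoked as ordinary MCP tool calls over JSON-RPC. A hedged sketch of a `vibe_check` request is below; the `tools/call` framing follows the MCP specification, but the argument names (`goal`, `plan`, `sessionId`) are illustrative, so check the server's published tool schema before relying on them:

```json
{
  "jsonrpc": "2.0",
  "id": 1,
  "method": "tools/call",
  "params": {
    "name": "vibe_check",
    "arguments": {
      "goal": "Add retry logic to the HTTP client",
      "plan": "Rewrite the client as a generic middleware framework",
      "sessionId": "session-42"
    }
  }
}
```

A typical response is a text content block containing the meta-mentor's critique of the stated plan against the stated goal.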

Not For

  • Latency-sensitive pipelines where adding an LLM oversight call at each step is prohibitive
  • Simple, well-bounded tasks where agent tunnel vision is unlikely
  • Teams without API keys for at least one supported LLM provider (Gemini, OpenAI, Anthropic, or OpenRouter)

Interface

REST API: No
GraphQL: No
gRPC: No
MCP Server: Yes
SDK: No
Webhooks: No

Authentication

Methods: api_key
OAuth: No · Scopes: No

Requires at least one LLM provider API key (GEMINI_API_KEY recommended as default). Supports OPENAI_API_KEY, ANTHROPIC_API_KEY, OPENROUTER_API_KEY.
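For stdio-based MCP clients, these keys are typically supplied via the `env` block of the server entry in the client's configuration file. A minimal sketch, assuming the server is launched via npx (the package name shown is illustrative; use the install command from the project's README):

```json
{
  "mcpServers": {
    "vibe-check": {
      "command": "npx",
      "args": ["-y", "vibe-check-mcp"],
      "env": {
        "GEMINI_API_KEY": "<your-key>"
      }
    }
  }
}
```

Only one provider key is required; add OPENAI_API_KEY, ANTHROPIC_API_KEY, or OPENROUTER_API_KEY to the same `env` block to switch providers.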

Pricing

Model: open_source
Free tier: Yes
Requires credit card: No

MIT licensed open source. Costs are LLM provider API calls per vibe_check invocation.

Agent Metadata

Pagination: none
Idempotent: Partial
Retry Guidance: Not documented

Known Gotchas

  • Each vibe_check call incurs an LLM API cost — high-frequency agents need cost monitoring
  • Research claims (153-run evaluation) should be validated independently before relying on stated success rates
  • sessionId must be managed by the calling agent for history continuity
  • Optimal interrupt dosage of 10-20% of steps requires agent integration work to implement correctly
  • Multiple provider API keys increase secret management surface
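The interrupt-dosage gotcha above is the main integration burden: the calling agent, not the server, decides when to fire an interrupt. A minimal sketch of deterministic gating at roughly 15% of steps, where `callVibeCheck` is a hypothetical stand-in for routing a `vibe_check` tool call through your MCP client:

```typescript
type Step = { index: number; summary: string };

// Fire an interrupt roughly every 1/dosage steps (deterministic spacing
// keeps LLM costs predictable, unlike random sampling).
function shouldInterrupt(stepIndex: number, dosage = 0.15): boolean {
  const interval = Math.round(1 / dosage); // ~7 for a 15% dosage
  return stepIndex > 0 && stepIndex % interval === 0;
}

// Hypothetical wrapper: in practice, issue a tools/call request for
// vibe_check via your MCP client here.
async function callVibeCheck(args: { sessionId: string; progress: string }): Promise<void> {
  console.log(`[vibe_check] session=${args.sessionId}: ${args.progress}`);
}

// Agent loop with a reflective pause at the gated steps. The caller owns
// sessionId, which the server needs for history continuity.
async function runAgentLoop(steps: Step[], sessionId: string): Promise<void> {
  for (const step of steps) {
    // ... execute the agent's step here ...
    if (shouldInterrupt(step.index)) {
      await callVibeCheck({ sessionId, progress: step.summary });
    }
  }
}
```

Deterministic spacing is one choice among several; per-step random sampling or triggering on heuristics (e.g., plan length growth) would also land in the 10-20% band the research suggests.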

Full Evaluation Report

Comprehensive deep-dive: security analysis, reliability audit, agent experience review, cost modeling, competitive positioning, and improvement roadmap for Vibe Check MCP Server.

AI-powered analysis · PDF + markdown · Delivered within 30 minutes

$99

Package Brief

Quick verdict, integration guide, cost projections, gotchas with workarounds, and alternatives comparison.

Delivered within 10 minutes

$3

Score Monitoring

Get alerted when this package's AF, security, or reliability scores change significantly. Stay ahead of regressions.

Continuous monitoring

$3/mo

Scores are editorial opinions as of 2026-03-07.
