cognition-wheel

Provides an MCP (Model Context Protocol) server that answers questions by consulting multiple LLM providers (Anthropic, Google, OpenAI) in parallel and synthesizing the results into a final response via a single tool (`cognition_wheel`).
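The fan-out-and-synthesize pattern described above can be sketched roughly as follows. This is a hypothetical illustration, not the project's actual code: the provider calls are stubbed, and the synthesis step is simplified to concatenation.

```typescript
// Sketch of the fan-out-and-synthesize pattern (hypothetical code, not the
// project's implementation; real provider API calls are stubbed out).
type ProviderResult = { provider: string; answer: string };

async function askProvider(provider: string, question: string): Promise<ProviderResult> {
  // Stub standing in for an Anthropic/Google/OpenAI API call.
  return { provider, answer: `response to "${question}"` };
}

async function cognitionWheel(question: string): Promise<string> {
  const providers = ["anthropic", "google", "openai"];
  // Fan out in parallel; a failed provider yields null instead of
  // rejecting the whole batch ("graceful degradation").
  const results = await Promise.all(
    providers.map((p) => askProvider(p, question).catch(() => null))
  );
  const answers = results.filter((r): r is ProviderResult => r !== null);
  if (answers.length === 0) {
    throw new Error("All upstream providers failed");
  }
  // Synthesis step: the real tool merges answers with another model call;
  // here they are simply concatenated.
  return answers.map((a) => `[${a.provider}] ${a.answer}`).join("\n");
}
```

The `.catch(() => null)` per provider is what makes the batch degrade gracefully rather than fail on the first upstream error.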

Evaluated Mar 30, 2026
Repo ↗
Tags: ai, ml, mcp, model-context-protocol, agents, llm-orchestration, typescript, parallel-inference, web-search-optional, tooling
⚙ Agent Friendliness: 67/100 (Can an agent use this?)
🔒 Security: 57/100 (Is it safe for agents?)
⚡ Reliability: 40/100 (Does it work consistently?)

Score Breakdown

⚙ Agent Friendliness

  • MCP Quality: 85
  • Documentation: 78
  • Error Messages: 0
  • Auth Simplicity: 85
  • Rate Limits: 35

🔒 Security

  • TLS Enforcement: 80
  • Auth Strength: 55
  • Scope Granularity: 20
  • Dep. Hygiene: 60
  • Secret Handling: 70

TLS is implied for typical API use but is not explicitly stated for the MCP transport. Auth relies on raw upstream API keys supplied to the process; there is no evidence of scoped or least-privilege tokens at the MCP layer. The README recommends environment variables for keys (better than hardcoding) but describes neither secret redaction in logs nor a threat model for logging/debug output. The dependency list appears standard; no CVE posture is documented.

⚡ Reliability

  • Uptime/SLA: 0
  • Version Stability: 55
  • Breaking Changes: 40
  • Error Recovery: 65

Best When

You want an MCP tool that aggregates several frontier model responses to produce a consolidated answer inside an agent-driven workflow.

Avoid When

You need strict cost/latency predictability, guaranteed privacy/data residency constraints, or you cannot manage multiple external API credentials.

Use Cases

  • AI-assisted answering where multiple model perspectives may improve quality
  • Agent workflows in MCP-compatible clients (e.g., Cursor/Claude Desktop)
  • Reasoning tasks benefiting from parallel exploration and synthesis
  • Optional web-enabled augmentation (via provider capabilities/configuration)

Not For

  • Use as a turnkey production service without confirming operational requirements (observability, scaling, SLAs)
  • Use where strict determinism or minimal cost/latency is required (multiple models called per request)
  • Use cases requiring a local-only/offline guarantee (depends on external LLM APIs)

Interface

  • REST API: No
  • GraphQL: No
  • gRPC: No
  • MCP Server: Yes
  • SDK: No
  • Webhooks: No

Authentication

Methods: environment variables for provider API keys (ANTHROPIC_API_KEY, GOOGLE_GENERATIVE_AI_API_KEY, OPENAI_API_KEY)
OAuth: No
Scopes: No

No user-facing OAuth is described; authentication consists of supplying upstream provider API keys to the MCP server process.
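A fail-fast startup check over those keys might look like the sketch below. Only the environment variable names come from the README; the helper itself is hypothetical and not part of the project.

```typescript
// Fail-fast check for the upstream provider keys the README requires.
// (Illustrative sketch; only the env var names come from the README.)
const REQUIRED_KEYS = [
  "ANTHROPIC_API_KEY",
  "GOOGLE_GENERATIVE_AI_API_KEY",
  "OPENAI_API_KEY",
];

function missingProviderKeys(env: Record<string, string | undefined>): string[] {
  // Return every required key that is unset or blank, so startup can
  // report all problems at once instead of failing on the first call.
  return REQUIRED_KEYS.filter((k) => (env[k] ?? "").trim() === "");
}

// Example: one key set, two missing.
console.log(missingProviderKeys({ ANTHROPIC_API_KEY: "sk-..." }));
// → ["GOOGLE_GENERATIVE_AI_API_KEY", "OPENAI_API_KEY"]
```

Reporting all missing keys at once is friendlier to agents than surfacing a late, provider-specific auth error mid-request.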

Pricing

Free tier: No
Requires CC: No

The README describes no pricing for the MCP server itself; costs come from the pay-per-token provider APIs it calls.

Agent Metadata

  • Pagination: none
  • Idempotent: False
  • Retry Guidance: Not documented

Known Gotchas

  • Multiple upstream model calls per tool invocation can increase latency and cost.
  • Tool behavior may depend on environment configuration (API keys, optional internet search).
  • If one upstream model fails, behavior is described only generally as “graceful degradation”; specific error formats/retry behavior are not documented.
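Since retry behavior is undocumented, agent clients may want to wrap tool invocations in their own backoff. A minimal sketch (the helper is hypothetical, not part of the server):

```typescript
// Caller-side retry with exponential backoff. The server documents no retry
// behavior, so a client may supply its own (hypothetical helper).
async function withRetry<T>(
  fn: () => Promise<T>,
  attempts = 3,
  baseDelayMs = 500
): Promise<T> {
  let lastError: unknown;
  for (let i = 0; i < attempts; i++) {
    try {
      return await fn();
    } catch (err) {
      lastError = err;
      if (i < attempts - 1) {
        // Backoff doubles each attempt: 500 ms, 1000 ms, ...
        await new Promise((r) => setTimeout(r, baseDelayMs * 2 ** i));
      }
    }
  }
  throw lastError;
}
```

Because each `cognition_wheel` call fans out to several paid model APIs, keep `attempts` low; retries multiply cost as well as latency.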

Scores are editorial opinions as of 2026-03-30.
