Corbell

Corbell is a local CLI tool (Python) that builds multi-repo architecture graphs from code (SQLite by default), indexes code embeddings, extracts design patterns from existing docs, and uses LLM providers to generate and review PRD-driven architecture/spec documents. It can also expose graph/code context via an MCP server and export tasks to Linear (and optionally Notion).

Evaluated Mar 30, 2026
Tags: ai-ml, code-analysis, knowledge-graph, mcp-server, architecture, static-analysis-plugin, embeddings, devtools
⚙ Agent Friendliness: 60 / 100 — Can an agent use this?
🔒 Security: 45 / 100 — Is it safe for agents?
⚡ Reliability: 21 / 100 — Does it work consistently?

Score Breakdown

⚙ Agent Friendliness

MCP Quality: 65
Documentation: 80
Error Messages: 0
Auth Simplicity: 85
Rate Limits: 15

🔒 Security

TLS Enforcement: 30
Auth Strength: 55
Scope Granularity: 20
Dep. Hygiene: 60
Secret Handling: 60

Running as a local CLI limits exposure of repository data to third parties, but the tool can call external LLM providers using API keys read from environment variables. No authentication or scope model is described for the MCP server or UI. Rate limiting and error-handling guidance are not documented in the provided content. Dependencies include common AI/ML libraries; without a lockfile or CVE data, hygiene cannot be fully verified from the manifest alone.

⚡ Reliability

Uptime/SLA: 0
Version Stability: 35
Breaking Changes: 20
Error Recovery: 30

Best When

You have multiple backend repositories with established architecture patterns and you want repeatable, context-aware spec generation/review entirely from local code.

Avoid When

You require a strict, audited compliance workflow for AI outputs, or you cannot provide/handle repository source code locally.

Use Cases

  • Generate PRD-driven architecture/design docs that reflect an existing multi-repo codebase
  • Architecture review by cross-checking proposed specs against the built graph and extracted constraints
  • Assist with onboarding by surfacing relevant services, call paths, and method signatures from your own repos
  • Create Linear issues with richer code/service context for implementation planning
  • Provide AI agents an MCP-accessible view of architecture context (services, call paths, semantic code search)

Not For

  • Producing authoritative system designs without human review (LLM output may still be wrong)
  • Enforcing runtime architecture decisions or guaranteeing correctness of deployed code
  • Teams that need a fully hosted SaaS with centralized management
  • Organizations that cannot run local processes that scan repositories and build indexes

Interface

REST API: No
GraphQL: No
gRPC: No
MCP Server: Yes
SDK: No
Webhooks: No

Authentication

Methods: environment variables for provider API keys (e.g., ANTHROPIC_API_KEY, OPENAI_API_KEY, BEDROCK_API_KEY, AZURE_OPENAI_API_KEY, GOOGLE_APPLICATION_CREDENTIALS); local models via Ollama require no API key.
OAuth: No
Scopes: No

Auth is primarily for upstream LLM providers; no authentication is described for the local CLI, UI, or MCP server itself.
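Since credentials are supplied purely through environment variables, an agent can check what is available before invoking the tool. The sketch below is an assumption about a sensible resolution order, not Corbell's actual logic; only the variable names come from the documentation, and the Ollama fallback reflects the "no API key required" note above.

```python
import os

# Provider -> credential env var, per the documented variable names.
PROVIDER_KEYS = {
    "anthropic": "ANTHROPIC_API_KEY",
    "openai": "OPENAI_API_KEY",
    "bedrock": "BEDROCK_API_KEY",
    "azure": "AZURE_OPENAI_API_KEY",
    "google": "GOOGLE_APPLICATION_CREDENTIALS",
}

def resolve_provider(preferred=None):
    """Return the first provider with credentials set, else fall back to Ollama.

    The ordering and fallback are assumptions for illustration; Corbell's
    own provider-selection logic is not documented here.
    """
    order = [preferred] if preferred else list(PROVIDER_KEYS)
    for name in order:
        if name in PROVIDER_KEYS and os.environ.get(PROVIDER_KEYS[name]):
            return name
    return "ollama"  # local models need no API key
```

A wrapper like this lets an agent fail fast with a clear message instead of discovering missing credentials mid-run.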

Pricing

Free tier: No
Requires CC: No

The README emphasizes local operation and shows token usage/estimated cost after LLM calls, but no fixed pricing tiers are provided.

Agent Metadata

Pagination: none
Idempotent: No
Retry Guidance: Not documented

Known Gotchas

  • Local-first tool: agents must run in a filesystem context with the repositories accessible.
  • MCP tool behavior depends on the workspace state (graph built/indexed) and environment variables for the LLM provider when LLM-dependent tools are invoked.
  • Generated specs may require human approval; graph consistency/review steps are part of the intended workflow.



Scores are editorial opinions as of 2026-03-30.
