codebase-context

codebase-context is a local-first MCP server and CLI that helps AI agents understand a codebase. It performs hybrid search over indexed code, detects team coding patterns and conventions (including golden files and trend signals), persists “team memory” extracted from git commits and recorded decisions, and runs edit preflight checks that produce a decision card flagging readiness and coverage gaps before code changes.

Evaluated Mar 30, 2026
Homepage ↗ · Repo ↗
Tags: DevTools, mcp, ai-coding, code-search, pattern-detection, team-memory, local-first, hybrid-search, preflight, evidence-scoring, typescript
⚙ Agent Friendliness: 65/100 (Can an agent use this?)
🔒 Security: 37/100 (Is it safe for agents?)
⚡ Reliability: 42/100 (Does it work consistently?)

Score Breakdown

⚙ Agent Friendliness

  • MCP Quality: 78
  • Documentation: 75
  • Error Messages: 0
  • Auth Simplicity: 95
  • Rate Limits: 10

🔒 Security

  • TLS Enforcement: 40
  • Auth Strength: 20
  • Scope Granularity: 10
  • Dep. Hygiene: 55
  • Secret Handling: 70

Local-first design and the recommended .gitignore guidance (keeping generated index artifacts out of version control while persisting memory.json) suggest reduced data exposure. However, HTTP mode is described without any auth guidance, and TLS requirements and auth controls are not documented. OPENAI_API_KEY is supplied via the environment (better than hardcoding), but the README does not describe log redaction or strict secret-handling guarantees. No security advisories or CVE hygiene information is provided, so dependency hygiene can only be estimated.
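The ignore guidance described above might look like the following sketch. The directory and file names are assumptions for illustration; the README's exact entries are not reproduced here:

```gitignore
# Sketch only: actual paths come from the project's README.
# Exclude generated index artifacts (regenerable, potentially large):
.codebase-context/index/
.codebase-context/embeddings/
# memory.json is deliberately NOT listed here, so persisted team
# memory stays in version control and is shared across sessions.
```

Note that in git, a file inside an ignored directory cannot be re-included with a `!` pattern, so keeping memory.json outside the ignored artifact directories is the simpler layout.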

⚡ Reliability

  • Uptime/SLA: 0
  • Version Stability: 65
  • Breaking Changes: 55
  • Error Recovery: 50

Best When

Used locally by trusted developers/agents to index and retrieve from a single team codebase (or a small set of repos) while preserving privacy and consistency with team conventions.

Avoid When

Avoid exposing the HTTP mode to untrusted networks/clients without additional network controls; also avoid when you need guaranteed governance/compliance evidence beyond what the decision card provides.

Use Cases

  • Have an agent follow a team’s existing coding conventions and architecture
  • Provide evidence-ranked code search results (with patterns, relationships, and memory hints)
  • Guide edits with preflight/impact assessment and recommended follow-up searches
  • Persist and surface team decisions/gotchas across agent sessions
  • Multi-language code intelligence (Tree-sitter symbol extraction and indexing)

Not For

  • Serving as a hosted SaaS product for remote access to proprietary source code
  • Replacing a full code review/CI gate (preflight is evidence-based guidance, not formal verification)
  • Use cases requiring strong authentication/authorization for untrusted multi-tenant access

Interface

REST API: Yes
GraphQL: No
gRPC: No
MCP Server: Yes
SDK: No
Webhooks: No
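Since the package exposes an MCP server, an MCP client would typically register it with a config entry like the sketch below. The `mcpServers` key follows the common MCP client convention; the `command` and `args` values are assumptions about this package's CLI, not documented facts:

```json
{
  "mcpServers": {
    "codebase-context": {
      "command": "npx",
      "args": ["codebase-context"]
    }
  }
}
```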

Authentication

OAuth: No
Scopes: No

README describes HTTP mode for local server access but does not mention auth, API keys, or scope-based authorization for the MCP/HTTP endpoints.

Pricing

Free tier: No
Requires CC: No

No hosted pricing model described; this is an open-source/local package.

Agent Metadata

Pagination: none
Idempotent: true
Retry guidance: not documented

Known Gotchas

  • HTTP mode on localhost may be reachable by any local client; with no documented auth, restrict network exposure
  • Index freshness depends on refresh_index/incremental updates; a stale index can produce confusing results
  • When multiple projects are configured and the active project is ambiguous, the response lists availableProjects and the agent must retry with an explicit project selection
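The ambiguous-project retry in the last gotcha can be sketched as follows. Only the `availableProjects` field name comes from the evaluation text; the rest of the response shape, the function name, and the single-retry policy are assumptions for illustration:

```typescript
// Hypothetical response shape: a successful call returns results; an
// ambiguous one returns availableProjects instead.
interface SearchResponse {
  results?: string[];
  availableProjects?: string[];
}

// Wrap a search tool call so that an ambiguous-project response is
// resolved by selecting a project and retrying once.
async function searchWithProjectRetry(
  call: (args: { query: string; project?: string }) => Promise<SearchResponse>,
  query: string,
  preferredProject?: string,
): Promise<string[]> {
  const first = await call({ query, project: preferredProject });
  if (first.results) return first.results;
  // Ambiguous: pick a project from the advertised list and retry once.
  const project = first.availableProjects?.[0];
  if (!project) {
    throw new Error("no results and no availableProjects to retry with");
  }
  const second = await call({ query, project });
  return second.results ?? [];
}
```

A real agent would likely choose among `availableProjects` using context rather than taking the first entry, but the retry shape is the same.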

Full Evaluation Report

Comprehensive deep-dive: security analysis, reliability audit, agent experience review, cost modeling, competitive positioning, and improvement roadmap for codebase-context.

AI-powered analysis · PDF + markdown · Delivered within 30 minutes

$99

Package Brief

Quick verdict, integration guide, cost projections, gotchas with workarounds, and alternatives comparison.

Delivered within 10 minutes

$3

Score Monitoring

Get alerted when this package's AF, security, or reliability scores change significantly. Stay ahead of regressions.

Continuous monitoring

$3/mo

Scores are editorial opinions as of 2026-03-30.
