lumen
Lumen is a local semantic code search engine for AI coding agents. It parses a codebase into semantic chunks using AST/tree-sitter, embeds them with a locally hosted backend (e.g., Ollama or LM Studio), stores the vectors in SQLite via sqlite-vec, and exposes an MCP tool (semantic_search) so agents can retrieve relevant code without reading entire files.
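The pipeline described above — chunk, embed, store, rank by vector similarity — can be sketched in miniature. Everything here is illustrative: lumen chunks with tree-sitter and stores vectors with the sqlite-vec extension, while this toy uses deterministic trigram hashing as a stand-in embedder and a Python-side cosine scan over a plain SQLite table.

```python
import math
import sqlite3
import zlib

# Toy embedding: character-trigram hashing. A real deployment would ask an
# Ollama/LM Studio embedding model; this stand-in only shows the pipeline shape.
def embed(text: str, dim: int = 64) -> list[float]:
    vec = [0.0] * dim
    for i in range(len(text) - 2):
        vec[zlib.crc32(text[i:i + 3].encode()) % dim] += 1.0
    norm = math.sqrt(sum(v * v for v in vec)) or 1.0
    return [v / norm for v in vec]

# Chunk storage in plain SQLite; lumen keeps vectors in sqlite-vec instead,
# so the cosine ranking below happens in Python rather than in SQL.
db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE chunks (id INTEGER PRIMARY KEY, body TEXT)")
vectors: dict[int, list[float]] = {}

for body in (
    "def parse_config(path): ...",
    "class VectorIndex: ...",
    "def cosine_similarity(a, b): ...",
):
    cur = db.execute("INSERT INTO chunks (body) VALUES (?)", (body,))
    vectors[cur.lastrowid] = embed(body)

def semantic_search(query: str, k: int = 2) -> list[str]:
    """Return the k chunks whose vectors are closest to the query vector."""
    q = embed(query)
    ranked = sorted(
        vectors,
        key=lambda cid: -sum(a * b for a, b in zip(q, vectors[cid])),
    )
    return [
        db.execute("SELECT body FROM chunks WHERE id = ?", (cid,)).fetchone()[0]
        for cid in ranked[:k]
    ]

print(semantic_search("cosine similarity between vectors"))
```

An agent calling the real semantic_search MCP tool sees the same contract: a natural-language query in, a short ranked list of code chunks out, instead of whole files.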
Score Breakdown
⚙ Agent Friendliness
🔒 Security
Local-first design reduces data-exfiltration risk; the index is stored outside the repo, under the user’s home directory. No authentication model is described for the MCP interface, and connections to embedding backends use plain localhost HTTP URLs (TLS is not mentioned). The dependency and security posture beyond these claims cannot be verified from the provided content.
⚡ Reliability
Best When
You have a local embedding backend (Ollama/LM Studio), want offline semantic code retrieval for an agent (e.g., Claude Code via MCP), and care about incremental indexing and local data retention.
Avoid When
You need a hosted service with managed infrastructure, or you cannot run local embeddings/SQLite indexing due to policy or performance constraints.
Use Cases
- Semantic code search for AI coding agents (retrieve relevant functions/types/modules by meaning)
- Reducing token usage/cost in code-editing workflows by limiting context to relevant chunks
- Offline/local-only indexing for compliance-sensitive environments
- Fast incremental re-indexing using Merkle-tree change detection
- Multi-language support via AST/tree-sitter chunking
- Worktree-aware indexing that reuses existing indices across git worktrees
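The Merkle-tree change detection mentioned above can be sketched as follows. This is an illustrative approach, not lumen's actual implementation: hash every file, fold child hashes into each parent directory's hash, then diff two snapshots — an unchanged subtree is dismissed with a single comparison, and only changed paths are re-indexed.

```python
import hashlib
import tempfile
from pathlib import Path

def merkle_hashes(root: Path) -> dict[str, str]:
    """Map each relative path to its hash; directories hash their children."""
    hashes: dict[str, str] = {}

    def walk(node: Path) -> str:
        if node.is_file():
            digest = hashlib.sha256(node.read_bytes()).hexdigest()
        else:
            children = sorted(node.iterdir(), key=lambda p: p.name)
            digest = hashlib.sha256(
                "".join(walk(c) for c in children).encode()
            ).hexdigest()
        hashes[str(node.relative_to(root))] = digest
        return digest

    walk(root)
    return hashes

def changed_paths(old: dict[str, str], new: dict[str, str]) -> set[str]:
    # Anything added, removed, or re-hashed needs re-indexing.
    return {p for p in old.keys() | new.keys() if old.get(p) != new.get(p)}

# Demo against a throwaway project: edit one file, re-snapshot, diff.
tmp = Path(tempfile.mkdtemp())
(tmp / "a.py").write_text("print('a')\n")
(tmp / "b.py").write_text("print('b')\n")
before = merkle_hashes(tmp)
(tmp / "a.py").write_text("print('changed')\n")
after = merkle_hashes(tmp)
print(changed_paths(before, after))  # a.py plus the re-hashed root
```

Only a.py and its ancestor directory change hash, so b.py's chunks are never re-embedded — that is what makes re-indexing after small edits cheap.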
Not For
- Cloud-hosted, multi-tenant SaaS use where remote access and hosted operation are required
- Environments that cannot run a local embedding model/backend
- Use cases needing a public REST/GraphQL API for external clients (it’s primarily a local MCP tool + CLI)
- Requirements for strict commercial SLAs (no SLA described)
Interface
Authentication
No authentication/authorization mechanism is described for the MCP tool; the setup appears local-first (embedding backend is accessed via localhost URLs).
Pricing
Open-source, local usage: costs come from embedding compute and the local LLM backend rather than SaaS pricing.
Agent Metadata
Known Gotchas
- ⚠ Requires local embedding backend connectivity (e.g., Ollama server running and model pulled); otherwise tool use may fail
- ⚠ First indexing can be slow for large projects; subsequent runs are faster due to incremental updates
- ⚠ Switching embedding models creates a separate index (old index is not auto-deleted), which can increase disk usage
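To see how the model-switch gotcha plays out on disk, here is a sketch of spotting stale per-model index directories. The one-directory-per-model layout is an assumption for illustration — inspect what your installation actually creates before deleting anything.

```python
import tempfile
from pathlib import Path

def stale_indices(index_root: Path, active_model: str) -> list[Path]:
    """Index directories for embedding models other than the one in use.

    Assumes a hypothetical <index_root>/<model>/ layout; the real path
    scheme under the user's home directory may differ.
    """
    return sorted(
        p for p in index_root.iterdir()
        if p.is_dir() and p.name != active_model
    )

# Example against a throwaway directory structure with two model indices.
root = Path(tempfile.mkdtemp())
for model in ("nomic-embed-text", "mxbai-embed-large"):
    (root / model).mkdir()

print(stale_indices(root, "nomic-embed-text"))  # the mxbai-embed-large dir
```

Reviewing (rather than auto-deleting) candidates like this matches the tool's own behavior: the old index is kept so that switching back to a previous model does not force a full re-index.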
Alternatives
Scores are editorial opinions as of 2026-03-30.