{"id":"ory-lumen","name":"lumen","homepage":"https://www.ory.com/blog/ory-lumen-semantic-search-claude-code","repo_url":"https://github.com/ory/lumen","category":"devtools","subcategories":[],"tags":["devtools","search","ai-ml","semantic-search","mcp","local-first","code-indexing","sqlite-vec"],"what_it_does":"Lumen is a local semantic code search engine for AI coding agents. It indexes a local codebase into semantic chunks (via AST/tree-sitter), embeds them using locally hosted embedding backends (e.g., Ollama/LM Studio), stores vectors in SQLite with sqlite-vec, and exposes an MCP tool (semantic_search) so agents can retrieve relevant code without reading entire files.","use_cases":["Semantic code search for AI coding agents (retrieve relevant functions/types/modules by meaning)","Reducing token usage/cost in code-editing workflows by limiting context to relevant chunks","Working offline/local-only indexing for compliance-sensitive environments","Fast incremental re-indexing using Merkle-tree change detection","Supporting multiple languages via AST/tree-sitter chunking","Worktree-aware indexing to reuse existing indices across git worktrees"],"not_for":["Cloud-hosted, multi-tenant SaaS use where remote access and hosted operation are required","Environments that cannot run a local embedding model/backend","Use cases needing a public REST/GraphQL API for external clients (it’s primarily a local MCP tool + CLI)","Requirements for strict commercial SLAs (no SLA described)"],"best_when":"You have a local embedding backend (Ollama/LM Studio), want offline semantic code retrieval for an agent (e.g., Claude Code via MCP), and care about incremental indexing and local data retention.","avoid_when":"You need a hosted service with managed infrastructure, or you cannot run local embeddings/SQLite indexing due to policy or performance constraints.","alternatives":["Keyword/code search tools (ripgrep, Sourcegraph, etc.) for simpler retrieval","Open-source semantic search stacks (e.g., embedding + vector DB) where you build your own indexing pipeline","Existing agent code-lookup plugins that read project files directly rather than indexing (token-heavy)"],
"af_score":61.0,"security_score":40.5,"reliability_score":35.0,"package_type":"mcp_server","discovery_source":["github"],"priority":"high","status":"evaluated","version_evaluated":null,"last_evaluated":"2026-03-30T13:53:05.351310+00:00","interface":{"has_rest_api":false,"has_graphql":false,"has_grpc":false,"has_mcp_server":true,"mcp_server_url":null,"has_sdk":false,"sdk_languages":[],"openapi_spec_url":null,"webhooks":false},"auth":{"methods":["No API-key auth described (local-only tool usage via Claude Code MCP plugin)"],"oauth":false,"scopes":false,"notes":"No authentication/authorization mechanism is described for the MCP tool; the setup appears local-first (embedding backend is accessed via localhost URLs)."},"pricing":{"model":null,"free_tier_exists":false,"free_tier_limits":null,"paid_tiers":[],"requires_credit_card":false,"estimated_workload_costs":null,"notes":"Open-source/local usage; costs depend on embedding compute and local LLM backend usage rather than SaaS pricing."},"requirements":{"requires_signup":false,"requires_credit_card":false,"domain_verification":false,"data_residency":["Local only (index stored on the machine under ~/.local/share/lumen/...)"],"compliance":[],"min_contract":null},"agent_readiness":{"af_score":61.0,"security_score":40.5,"reliability_score":35.0,"mcp_server_quality":70.0,"documentation_accuracy":75.0,"error_message_quality":0.0,"error_message_notes":null,"auth_complexity":95.0,"rate_limit_clarity":5.0,"tls_enforcement":60.0,"auth_strength":20.0,"scope_granularity":0.0,"dependency_hygiene":50.0,"secret_handling":80.0,"security_notes":"Local-first design reduces data exfiltration risk; indexing is stored outside the repo under the user’s local home directory. No auth model is described for the MCP interface. Connectivity to embedding backends is configured via localhost HTTP URLs (TLS not mentioned). Dependency/security posture beyond claims is not verifiable from the provided content.",
"uptime_documented":0.0,"version_stability":45.0,"breaking_changes_history":40.0,"error_recovery":55.0,"idempotency_support":"false","idempotency_notes":"Indexing is incremental via Merkle diffs and separate indexes per model/version; however, explicit idempotency guarantees for repeated tool calls are not stated.","pagination_style":"none","retry_guidance_documented":false,"known_agent_gotchas":["Requires local embedding backend connectivity (e.g., Ollama server running and model pulled); otherwise tool use may fail","First indexing can be slow for large projects; subsequent runs are faster due to incremental updates","Switching embedding models creates a separate index (old index is not auto-deleted), which can increase disk usage"]}}