ogham-mcp
Ogham MCP is a Python-based Model Context Protocol (MCP) server that provides persistent, cross-client “shared memory” for AI coding agents. It stores memories in a Postgres-compatible database (Supabase/Postgres), generates embeddings via configurable providers (OpenAI/Ollama/Mistral/Voyage/Gemini or local ONNX), and exposes MCP tools for memory storage/retrieval plus additional capabilities like search, graph/profiles, and import/export. It can run per-client in stdio mode or as a shared persistent server via SSE transport, and includes CLI utilities (init/health/search/store/list/export/import/serve/openapi).
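To make the two deployment modes concrete, here is a sketch of an MCP client configuration (in the `mcpServers` format used by clients such as Claude Code and Cursor). The server name, the `ogham-mcp serve` invocation, and the port/path in the SSE URL are illustrative assumptions; only the `serve` subcommand and the `http://127.0.0.1` host appear in the README, so check the project's docs for the exact values.

```json
{
  "mcpServers": {
    "ogham-stdio": {
      "command": "ogham-mcp",
      "args": ["serve"]
    },
    "ogham-shared": {
      "url": "http://127.0.0.1:8000/sse"
    }
  }
}
```

In the first entry the client launches its own per-client server process over stdio; in the second, every client connects to one long-lived shared server, which is what enables cross-client shared memory.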
Score Breakdown
⚙ Agent Friendliness
🔒 Security
Strengths: the README claims secret masking when capturing tool activity via hooks (API keys, tokens, passwords, and JWTs are masked before storing). Risks/unknowns: MCP-level authentication/authorization is not described; SSE-mode examples use http://127.0.0.1 but state no TLS or auth requirements. Database credentials (e.g., a Supabase service role key) are powerful; prefer least-privileged alternatives where possible and tightly restrict network access. Dependency hygiene cannot be fully verified from the provided content; optional extras pull in third-party embedding and reranking libraries.
⚡ Reliability
Best When
You want agent-accessible persistent memory with semantic + keyword retrieval, using Postgres/Supabase as the source of truth, and you can manage database + embedding provider credentials.
Avoid When
You need strong multi-tenant security boundaries enforced by the MCP server (beyond database/security posture), or you cannot run/connect to a database. Also avoid exposing the SSE endpoint broadly without network controls.
Use Cases
- Persistent coding-agent memory across sessions and across different MCP-capable clients (Claude Code, Cursor, OpenCode, etc.)
- Team/project shared context: store decisions, gotchas, and architectural patterns; retrieve them later
- Multi-agent setups where multiple agents share one memory backend via a single long-lived server (SSE mode)
- Self-hosted or privacy-preserving memory with local embeddings (ONNX/BGE-M3)
- Migration/backup workflows using export/import and cleanup
Not For
- Environments with no database available (a database backend is required before use)
- Use as a general-purpose document store with no need for embeddings or search
- Environments requiring strict fine-grained authorization or tenant isolation at the application layer (no explicit tenant auth model is described)
Interface
Authentication
The README documents how to configure provider/database credentials, but does not describe MCP-level auth (no API keys, user auth, or scopes for MCP tools). If exposed over SSE, rely on network controls and database permissions.
Pricing
No SaaS pricing is described; appears self-hosted with external embedding/database provider costs.
Agent Metadata
Known Gotchas
- ⚠ If running in SSE mode, clients point to the shared server URL; ensure you run it persistently and control access to the SSE endpoint.
- ⚠ Embedding provider configuration (dimensionality/EMBEDDING_DIM and schema vector(N)) must match; mismatches can lead to failures when storing/searching.
- ⚠ Temporal query parsing is mostly local (parsedatetime) but can fall back to an LLM if configured; this can change cost/latency.
- ⚠ Secret masking is described for hook-driven inscription, but agents should still avoid sending raw secrets in memory-related tool calls if possible.
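The dimensionality gotcha above is worth guarding against explicitly: the length of every embedding must equal the `N` in the schema's `vector(N)` column, or inserts and searches will fail. A minimal sketch of such a guard is below; the `EMBEDDING_DIM` variable name comes from the README, while the helper function and the 1024-dim default (BGE-M3's dense output size) are illustrative assumptions, not part of the project's API.

```python
import os

def check_embedding_dim(embedding: list[float], expected_dim: int) -> list[float]:
    """Refuse vectors whose length differs from the schema's vector(N) column."""
    if len(embedding) != expected_dim:
        raise ValueError(
            f"embedding has {len(embedding)} dimensions, but the schema "
            f"column is vector({expected_dim}); check EMBEDDING_DIM and "
            "your embedding provider/model choice"
        )
    return embedding

# BGE-M3 produces 1024-dim dense vectors; other providers differ, so read
# the expected size from the same env var the server is configured with.
EXPECTED_DIM = int(os.environ.get("EMBEDDING_DIM", "1024"))
```

Running this check before every store call turns a confusing database error into an immediate, actionable one, which matters most when switching embedding providers mid-project.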
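On the last point, hook-side masking typically works by pattern-matching known secret shapes before text is persisted. The project's actual rules are not published in the README, so the patterns below are a generic sketch of the technique, not ogham-mcp's implementation:

```python
import re

# Illustrative patterns only; real masking rules would cover more formats.
SECRET_PATTERNS = [
    re.compile(r"sk-[A-Za-z0-9]{20,}"),  # OpenAI-style API keys
    re.compile(r"eyJ[A-Za-z0-9_-]+\.[A-Za-z0-9_-]+\.[A-Za-z0-9_-]+"),  # JWTs
    re.compile(r"(?i)(?:password|token|secret)\s*[=:]\s*\S+"),  # key=value creds
]

def mask_secrets(text: str) -> str:
    """Replace anything matching a known secret pattern before storage."""
    for pattern in SECRET_PATTERNS:
        text = pattern.sub("[MASKED]", text)
    return text
```

Pattern-based masking is best-effort by nature (novel token formats slip through), which is exactly why the gotcha advises agents not to rely on it and to keep raw secrets out of memory-related tool calls in the first place.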
Alternatives
Scores are editorial opinions as of 2026-03-30.