mem-agent-mcp

mem-agent-mcp is a Python MCP (Model Context Protocol) server that connects an Obsidian-like local Markdown memory store (user.md + entities/*.md) to LLM clients (e.g., Claude Desktop, LM Studio, Claude Code) for memory retrieval and memory-related tool operations. It also documents memory import/connector workflows (ChatGPT, Notion, Nuclino, GitHub, Google Docs) and ways to run the model backend locally (MLX on macOS, or vLLM/an OpenAI-compatible endpoint via a LiteLLM proxy).
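
The memory layout the server reads can be sketched as follows. This is a minimal illustration assuming only what the description states (a top-level user.md plus one Markdown file per entity under entities/); the file contents are invented.

```python
# Sketch of the assumed on-disk memory layout: memory/user.md plus
# memory/entities/*.md. Entity names and contents here are examples only.
from pathlib import Path
import tempfile

root = Path(tempfile.mkdtemp()) / "memory"
(root / "entities").mkdir(parents=True)
(root / "user.md").write_text("# User\n- name: Alice\n")
(root / "entities" / "acme-corp.md").write_text("# Acme Corp\nA client.\n")

entity_files = sorted(p.name for p in (root / "entities").glob("*.md"))
print(entity_files)  # ['acme-corp.md']
```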

Evaluated Mar 30, 2026
Repo ↗ · Tags: ai, ml, mcp, memory, obsidian-style, rag, local-first, python, tool-integration, file-system, connectors, vllm, mlx, llm-clients
⚙ Agent Friendliness
47
/ 100
Can an agent use this?
🔒 Security
40
/ 100
Is it safe for agents?
⚡ Reliability
24
/ 100
Does it work consistently?

Score Breakdown

⚙ Agent Friendliness

MCP Quality
70
Documentation
55
Error Messages
0
Auth Simplicity
55
Rate Limits
10

🔒 Security

TLS Enforcement
40
Auth Strength
25
Scope Granularity
45
Dep. Hygiene
55
Secret Handling
45

Strengths: local-first design with user-managed tokens; Google Drive scope guidance is explicit (drive.readonly), and GitHub token scopes are described (public_repo vs repo).

Risks/unknowns: MCP server authentication/authorization is not documented, so MCP over HTTP could be exposed without access control. Token handling and logging practices are not described; ensure tokens are not logged and that file permissions on the memory directory are restricted. TLS is mentioned only implicitly (an HTTP endpoint example); there is no explicit "HTTPS required" guidance.
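
Since permission handling is undocumented, a reasonable precaution is to restrict the memory directory to the owning user. A minimal POSIX sketch (the directory path is an example):

```python
# Hedged sketch: lock the memory directory down to owner-only (0o700),
# since the README does not document file-permission handling.
import os
import stat
import tempfile
from pathlib import Path

memory_dir = Path(tempfile.mkdtemp()) / "memory"
memory_dir.mkdir()
os.chmod(memory_dir, 0o700)  # owner read/write/execute only

mode = stat.S_IMODE(os.stat(memory_dir).st_mode)
print(oct(mode))  # 0o700 on POSIX systems
```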

⚡ Reliability

Uptime/SLA
0
Version Stability
35
Breaking Changes
30
Error Recovery
30

Best When

You want local-first private memory with an MCP client, and you can run/point to a local model server (MLX/vLLM or an OpenAI-compatible proxy).

Avoid When

You need strict security boundaries for remote access or you cannot control tokens, local storage permissions, and logs; also avoid if you need standardized HTTP APIs beyond MCP.

Use Cases

  • Connect an Obsidian-style local Markdown memory directory to an MCP-capable client
  • RAG-like memory Q&A over imported conversations and entities
  • Maintaining/updating personal or team memory from exports (ChatGPT/Notion/Nuclino)
  • Live memory augmentation from GitHub repositories and Google Docs via Drive APIs
  • Applying retrieval-time filters provided through the prompt (<filter> tags)
  • Using MCP over stdio or HTTP for different client environments
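
The retrieval-time filter mechanism above can be illustrated with a small helper. Note the exact filter grammar is not documented; the tag wrapping shown here is an assumption based only on the mention of <filter> tags, and the helper name is hypothetical.

```python
# Illustrative only: prepend <filter> tags to a prompt, as the README's
# mention of retrieval-time filters suggests. Filter syntax is assumed.
def with_filters(question: str, filters: list[str]) -> str:
    tags = "\n".join(f"<filter>{f}</filter>" for f in filters)
    return f"{tags}\n{question}"

prompt = with_filters("What do I know about Acme Corp?",
                      ["exclude: personal finances"])
print(prompt)
```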

Not For

  • Public multi-tenant production deployments without proper network isolation/authn/authz
  • Handling highly sensitive data without a threat model for local file access and logs
  • Environments that cannot run a local model backend or an OpenAI-compatible proxy
  • Use cases requiring a formal, documented REST/OpenAPI developer platform (this is primarily MCP/file-based)

Interface

REST API
No
GraphQL
No
gRPC
No
MCP Server
Yes
SDK
No
Webhooks
No

Authentication

Methods:
  • Environment-variable configuration for model proxy/backend connectivity (VLLM_HOST/VLLM_PORT)
  • Connector tokens for GitHub (TOKEN)
  • Google Drive OAuth access tokens (ACCESS TOKEN)
  • Google Docs connector via a Drive API access token
  • Interactive authentication setup via the memory wizard (tokens/scopes described)
OAuth: No Scopes: Yes

No explicit MCP authentication mechanism is described in the README. Connector access relies on user-provided tokens (a GitHub classic token; Google Drive OAuth scopes such as drive.readonly). If MCP-over-HTTP is exposed (e.g., via ngrok), MCP itself appears to be unauthenticated unless the deployment adds external auth.
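
A minimal sketch of reading this configuration from the environment. VLLM_HOST/VLLM_PORT appear in the README; the defaults and the GITHUB_TOKEN variable name here are assumptions for illustration (the README refers to it only as TOKEN).

```python
# Sketch of backend/connector configuration via environment variables.
# Defaults and the GITHUB_TOKEN name are assumptions, not documented values.
import os

host = os.environ.get("VLLM_HOST", "127.0.0.1")
port = int(os.environ.get("VLLM_PORT", "8000"))
github_token = os.environ.get("GITHUB_TOKEN")  # may be None if unset

backend_url = f"http://{host}:{port}/v1"  # OpenAI-compatible base URL shape
print(backend_url)
```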

Pricing

Free tier: No
Requires CC: No

Self-hosted/local model usage; costs depend on model/backend hardware or any OpenAI-compatible proxy you run (e.g., LiteLLM/OpenRouter). No hosted pricing described.

Agent Metadata

Pagination
none
Idempotent
False
Retry Guidance
Not documented

Known Gotchas

  • MCP server functionality appears tied to local filesystem layout (memory/user.md and memory/entities/*.md); incorrect directory structure may break retrieval/tool behavior.
  • On ARM64 Linux, vLLM may not be installed by default; recommended path is OpenAI-compatible via LiteLLM proxy, which can surprise agents expecting a local vLLM binary.
  • Some workflows (filters, connectors) modify local .filters and memory content; lack of described idempotency means repeated runs could duplicate/overwrite depending on connector implementation.
  • If MCP-over-HTTP is publicly exposed (e.g., via ngrok), the README does not describe authentication/authorization safeguards for MCP endpoints.
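
Given the last gotcha, a conservative deployment keeps the HTTP endpoint bound to loopback and reaches it through an authenticated tunnel (e.g., SSH port forwarding) rather than a public ngrok URL. A sketch of loopback-only binding, with the port chosen dynamically since no default port is documented:

```python
# Hedged sketch: bind a server socket to loopback only, so it is not
# reachable from other machines. Port 0 asks the OS for a free port.
import socket

srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
srv.bind(("127.0.0.1", 0))   # loopback only, not 0.0.0.0
srv.listen(1)
host, port = srv.getsockname()
print(host)  # 127.0.0.1
srv.close()
```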


Scores are editorial opinions as of 2026-03-30.
