just-prompt
just-prompt is an MCP (Model Context Protocol) server that exposes a unified interface to multiple LLM providers (OpenAI, Anthropic, Google Gemini, Groq, DeepSeek, and Ollama). It provides MCP tools to send prompts to one or more models, run prompts loaded from files (optionally writing outputs to disk), list providers/models, and run a multi-model “board” workflow in which a “CEO” model decides based on the board members’ responses.
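To make the routing model concrete, here is a minimal sketch of fanning one prompt out to several models using the `provider:model` naming convention the server uses. The function names and the `send_to_provider` stub are hypothetical, not just-prompt's actual internals; a real implementation would dispatch to each provider's SDK.

```python
# Hypothetical sketch of multi-model fan-out using "provider:model" ids.

def split_model_id(model_id: str) -> tuple[str, str]:
    """Split 'openai:o3' into ('openai', 'o3'). The model part may itself
    contain colons, e.g. 'openai:o3:high' -> ('openai', 'o3:high')."""
    provider, _, model = model_id.partition(":")
    if not provider or not model:
        raise ValueError(f"expected 'provider:model', got {model_id!r}")
    return provider, model

def send_to_provider(provider: str, model: str, prompt: str) -> str:
    # Stub standing in for a real provider SDK call.
    return f"[{provider}/{model}] echo: {prompt}"

def prompt_many(prompt: str, model_ids: list[str]) -> dict[str, str]:
    """Send one prompt to every listed model; key results by model id."""
    return {
        mid: send_to_provider(*split_model_id(mid), prompt)
        for mid in model_ids
    }

results = prompt_many("Say hi", ["openai:o3:high", "anthropic:claude-3-haiku"])
```

Keying the result map by the full `provider:model` id keeps responses attributable when several backends answer the same prompt.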
Score Breakdown
⚙ Agent Friendliness
🔒 Security
TLS is not discussed for the MCP/stdio transport or for calls out to providers. Authentication relies on provider API keys supplied via environment variables, with no documented MCP-level authorization or scopes; this limits isolation in multi-user environments. Secret handling is only implied by the use of environment variables, and there is no explicit guidance on logging or redaction. Dependency hygiene cannot be verified from the provided content: dependencies are listed, but no vulnerability, SBOM, or CVE information is given.
⚡ Reliability
Best When
You want a local MCP server that an agent can call to route prompts across multiple LLM backends with minimal integration effort.
Avoid When
You need strong organizational security controls (authZ, audit logs), guaranteed idempotency for file-writing operations, or well-specified operational/SLA guarantees.
Use Cases
- Unified prompt execution across multiple LLM providers
- Benchmarking or fallback across multiple models/providers
- Generating outputs from prompt templates stored in files
- Running a multi-model consensus/selection workflow (board/CEO pattern)
- Local and remote model usage (including Ollama via a host URL)
Not For
- Producing an internet-accessible hosted API for external clients (it’s an MCP server intended to be run locally/in-process)
- Use cases requiring strict data residency or compliance guarantees (not documented here)
- Use cases needing fine-grained API-level RBAC/tenant security (not documented here)
Interface
Authentication
Authentication/authorization is described as local API-key configuration via environment variables. The MCP interface itself does not document additional auth, scopes, or multi-tenant access controls.
Pricing
Pricing is not provided by just-prompt; costs depend on the underlying provider usage (and local Ollama is typically self-hosted).
Agent Metadata
Known Gotchas
- ⚠ File path parameters require absolute paths (abs_file_path/abs_output_dir); agents should ensure their view of the filesystem matches the server's environment.
- ⚠ Model names require provider prefixes (e.g., openai:o3:high). The server mentions automatic model name correction based on default models, but exact behavior is not fully specified.
- ⚠ Provider availability depends on which API keys are present; missing keys mean some providers become unavailable while the server still starts. Agents should handle partial provider sets.
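Given the gotchas above, an agent can run cheap pre-flight checks before invoking the file-based tools: paths must be absolute and model names must carry a provider prefix. The helper names here are illustrative, not part of just-prompt's API.

```python
# Pre-flight validation reflecting the gotchas: absolute paths and
# provider-prefixed model names. Helper names are hypothetical.
import os

def check_abs_path(path: str) -> str:
    """Reject relative paths before they reach abs_file_path/abs_output_dir."""
    if not os.path.isabs(path):
        raise ValueError(f"absolute path required, got {path!r}")
    return path

def check_model_id(model_id: str, known_providers: set[str]) -> str:
    """Require a recognized provider prefix, e.g. 'openai:o3:high'."""
    provider = model_id.split(":", 1)[0]
    if provider not in known_providers:
        raise ValueError(f"model {model_id!r} lacks a known provider prefix")
    return model_id

check_abs_path("/tmp/prompts/brief.md")
check_model_id("openai:o3:high", {"openai", "anthropic", "ollama"})
```

Failing fast on the client side gives clearer errors than relying on whatever the server reports for malformed paths or un-prefixed model names.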
Alternatives
Scores are editorial opinions as of 2026-03-30.