just-prompt

just-prompt is an MCP (Model Context Protocol) server that exposes a unified interface to multiple LLM providers (OpenAI, Anthropic, Google Gemini, Groq, DeepSeek, and Ollama). It provides MCP tools to send prompts to one or more models, run prompts loaded from files (optionally writing outputs to disk), list providers and models, and run a multi-model “board” workflow in which a “CEO” model makes the final decision based on board member responses.

Evaluated Mar 30, 2026
Repo ↗ · Tags: AI, ML, MCP, LLM, routing, multi-provider, tooling, prompting
⚙ Agent Friendliness: 62/100 (Can an agent use this?)
🔒 Security: 48/100 (Is it safe for agents?)
⚡ Reliability: 21/100 (Does it work consistently?)

Score Breakdown

⚙ Agent Friendliness

  • MCP Quality: 88
  • Documentation: 75
  • Error Messages: 0
  • Auth Simplicity: 80
  • Rate Limits: 20

🔒 Security

  • TLS Enforcement: 60
  • Auth Strength: 45
  • Scope Granularity: 10
  • Dep. Hygiene: 55
  • Secret Handling: 70

TLS is not discussed for MCP/stdio transport or outbound calls to providers. Auth relies on environment-provided provider API keys with no documented MCP-level authorization or scopes, which limits isolation in multi-user environments. Secret handling is only implied by the use of environment variables; there is no explicit guidance on logging or redaction. Dependency hygiene cannot be verified from the provided content: dependencies are listed, but no vulnerability, SBOM, or CVE information is given.

⚡ Reliability

  • Uptime/SLA: 0
  • Version Stability: 35
  • Breaking Changes: 20
  • Error Recovery: 30

Best When

You want a local MCP server that an agent can call to route prompts across multiple LLM backends with minimal integration effort.

Avoid When

You need strong organizational security controls (authZ, audit logs), guaranteed idempotency for file-writing operations, or well-specified operational/SLA guarantees.

Use Cases

  • Unified prompt execution across multiple LLM providers
  • Benchmarking or fallback across multiple models/providers
  • Generating outputs from prompt templates stored in files
  • Running a multi-model consensus/selection workflow (board/CEO pattern)
  • Local and remote model usage (including Ollama via a host URL)
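The board/CEO workflow listed above can be sketched in plain Python. This is an illustrative stand-in, not just-prompt's actual implementation: the model callables are stubs, and the real server would dispatch to provider APIs with its own prompt framing.

```python
from typing import Callable, Dict


def board_ceo(prompt: str,
              board: Dict[str, Callable[[str], str]],
              ceo: Callable[[str], str]) -> str:
    """Fan a prompt out to 'board member' models, then let a 'CEO'
    model decide based on the collected responses."""
    responses = {name: model(prompt) for name, model in board.items()}
    briefing = "\n\n".join(
        f"## {name}\n{answer}" for name, answer in responses.items()
    )
    # The CEO sees the original question plus every board response.
    return ceo(f"Question: {prompt}\n\nBoard responses:\n{briefing}")


# Stub "models" for illustration; real calls would hit provider APIs.
board = {
    "openai:gpt-4o": lambda p: "Option A",
    "anthropic:claude": lambda p: "Option B",
}
ceo = lambda b: "Decision: Option A" if "Option A" in b else "Decision: unclear"

print(board_ceo("Which option?", board, ceo))  # → Decision: Option A
```

The key design point is that the CEO model receives all board responses in one context, so its decision can weigh agreement and disagreement across providers.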

Not For

  • Producing an internet-accessible hosted API for external clients (it’s an MCP server intended to be run locally/in-process)
  • Use cases requiring strict data residency or compliance guarantees (not documented here)
  • Use cases needing fine-grained API-level RBAC/tenant security (not documented here)

Interface

REST API: No
GraphQL: No
gRPC: No
MCP Server: Yes
SDK: No
Webhooks: No

Authentication

Methods: Environment-variable API keys for the supported providers (OPENAI_API_KEY, ANTHROPIC_API_KEY, GEMINI_API_KEY, GROQ_API_KEY, DEEPSEEK_API_KEY); Ollama connects via OLLAMA_HOST (no explicit auth described).
OAuth: No
Scopes: No

Authentication/authorization is described as local API-key configuration via environment variables. The MCP interface itself does not document additional auth, scopes, or multi-tenant access controls.
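Because provider availability tracks which keys are set (see Known Gotchas below), an agent can pre-check its environment before assuming every provider is routable. A minimal sketch, using only the env var names documented above; the `available_providers` helper is hypothetical, not part of just-prompt:

```python
import os

# Env var per provider, as documented by just-prompt.
PROVIDER_KEYS = {
    "openai": "OPENAI_API_KEY",
    "anthropic": "ANTHROPIC_API_KEY",
    "gemini": "GEMINI_API_KEY",
    "groq": "GROQ_API_KEY",
    "deepseek": "DEEPSEEK_API_KEY",
}


def available_providers(environ=os.environ) -> list:
    """Return providers whose API key is set and non-empty.
    Ollama is host-based (OLLAMA_HOST), so it is checked separately."""
    providers = [name for name, var in PROVIDER_KEYS.items()
                 if environ.get(var)]
    if environ.get("OLLAMA_HOST"):
        providers.append("ollama")
    return providers


print(available_providers({"OPENAI_API_KEY": "sk-...",
                           "OLLAMA_HOST": "http://localhost:11434"}))
# → ['openai', 'ollama']
```

Passing a dict instead of `os.environ` makes the check easy to test; in production the default argument inspects the real environment.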

Pricing

Free tier: No
Requires CC: No

Pricing is not provided by just-prompt; costs depend on the underlying provider usage (and local Ollama is typically self-hosted).

Agent Metadata

Pagination: none
Idempotent: False
Retry Guidance: Not documented

Known Gotchas

  • File path parameters require absolute paths (abs_file_path/abs_output_dir); agents should ensure host filesystem access matches the server environment.
  • Model names require provider prefixes (e.g., openai:o3:high). The server mentions automatic model name correction based on default models, but exact behavior is not fully specified.
  • Provider availability depends on which API keys are present; missing keys mean some providers become unavailable while the server still starts. Agents should handle partial provider sets.
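Given the prefix requirement, an agent may want to validate model strings client-side before calling the server. A sketch under stated assumptions: only the first colon separates provider from model (so a suffix like `:high`, presumably a reasoning-effort hint, stays with the model spec), and just-prompt's own auto-correction behavior remains unspecified.

```python
def split_model_name(name: str) -> tuple:
    """Split 'provider:model[:suffix]' into (provider, model_spec).
    Only the first colon is the provider separator, so
    'openai:o3:high' keeps 'o3:high' intact as the model spec."""
    provider, sep, model = name.partition(":")
    if not sep or not provider or not model:
        raise ValueError(f"expected 'provider:model', got {name!r}")
    return provider, model


print(split_model_name("openai:o3:high"))  # → ('openai', 'o3:high')
```

Failing fast on an unprefixed name ("o3" instead of "openai:o3") avoids relying on the server's under-specified correction logic.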


Scores are editorial opinions as of 2026-03-30.
