fast-agent

fast-agent is a Python CLI-first framework for interacting with LLMs and building coding agents, workflows, and evaluation pipelines. It supports MCP servers/clients (including stdio and streamable HTTP/SSE transports), shell-mode, interactive TUI prompts, and Python agent definitions (decorator-based) that can call MCP tools and chain workflows. It also includes MCP OAuth (PKCE) integration and optional MCP ping support.
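To illustrate the decorator-based agent-definition style mentioned above, here is a minimal, self-contained sketch of the pattern. The names (`AgentRegistry`, `@registry.agent`) are hypothetical and do not reflect fast-agent's actual API; a real framework would route the instruction and prompt to an LLM and dispatch tool calls to MCP servers.

```python
from typing import Callable, Dict


class AgentRegistry:
    """Collects agent functions plus their system instructions for dispatch."""

    def __init__(self) -> None:
        self._agents: Dict[str, dict] = {}

    def agent(self, name: str, instruction: str) -> Callable:
        """Decorator: register a function as a named agent with an instruction."""
        def wrap(fn: Callable) -> Callable:
            self._agents[name] = {"instruction": instruction, "fn": fn}
            return fn
        return wrap

    def run(self, name: str, prompt: str) -> str:
        entry = self._agents[name]
        # A real framework would send entry["instruction"] + prompt to an LLM
        # and handle MCP tool calls; here we just invoke the function directly.
        return entry["fn"](prompt)


registry = AgentRegistry()


@registry.agent(name="echo", instruction="Repeat the user's message.")
def echo(prompt: str) -> str:
    return f"echo: {prompt}"
```

The decorator pattern lets agent definitions live as plain Python functions while the registry handles wiring, which is the general shape the README describes.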

Evaluated Mar 29, 2026
Homepage ↗ · Repo ↗ · Tags: ai-ml, devtools, automation, mcp, agents, cli, python
⚙ Agent Friendliness
60
/ 100
Can an agent use this?
🔒 Security
75
/ 100
Is it safe for agents?
⚡ Reliability
39
/ 100
Does it work consistently?

Score Breakdown

⚙ Agent Friendliness

MCP Quality
85
Documentation
70
Error Messages
0
Auth Simplicity
78
Rate Limits
15

🔒 Security

TLS Enforcement
80
Auth Strength
85
Scope Granularity
45
Dep. Hygiene
70
Secret Handling
90

Security-relevant details present in the README: OAuth for MCP uses PKCE and supports in-memory token storage (no tokens written to disk) plus optional OS keychain persistence via keyring. This reduces token leakage risk. However, TLS requirements and network security headers for any server-mode exposure are not explicitly documented in the provided text; dependency hygiene (CVEs) and actual token access controls are not verifiable from the provided content alone.
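The keychain-or-memory behavior described above can be sketched as follows. This is an illustrative token store, not fast-agent's actual implementation; the class and method names are assumptions, and only the general policy (prefer OS keychain via `keyring`, otherwise keep tokens in process memory and never on disk) comes from the README.

```python
from typing import Dict, Optional


class TokenStore:
    """Prefer the OS keychain; fall back to process memory (never disk)."""

    def __init__(self, service: str = "fast-agent-oauth") -> None:
        self._service = service
        self._memory: Dict[str, str] = {}  # in-memory fallback store
        try:
            import keyring  # optional dependency; may be unavailable
            self._keyring = keyring
        except ImportError:
            self._keyring = None

    def save(self, server: str, token: str) -> None:
        if self._keyring is not None:
            # Persist across sessions via the OS keychain backend.
            self._keyring.set_password(self._service, server, token)
        else:
            # Held only for the life of the process; nothing touches disk.
            self._memory[server] = token

    def load(self, server: str) -> Optional[str]:
        if self._keyring is not None:
            return self._keyring.get_password(self._service, server)
        return self._memory.get(server)
```

The security property noted in the README follows from this shape: with no keychain available, tokens vanish when the process exits, trading session repeatability for reduced leakage risk.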

⚡ Reliability

Uptime/SLA
0
Version Stability
55
Breaking Changes
50
Error Recovery
50

Best When

You want a CLI and Python framework to orchestrate LLMs with MCP tool ecosystems, including interactive workflows, local execution with configurable transports, and OAuth-secured MCP connections.

Avoid When

You need a stable, standardized REST/GraphQL/SDK surface for programmatic usage by other systems; you mainly want a turnkey managed cloud service with predictable SLAs.

Use Cases

  • Build coding/development agents that call MCP tools (LSPs, tool hooks, utilities).
  • Create and run multi-step agent workflows (chains, agent-as-tools orchestration, maker/voting error reduction).
  • Evaluate and test MCP-enabled agents and workflows.
  • Expose an agent as an MCP server using transport configuration.
  • Use interactive terminal experiences for agent prompting/completions/menus.

Not For

  • Simple single-purpose API wrappers where a REST/SDK interface is required.
  • Environments where opening OAuth flows and token storage via OS keychain is disallowed or cannot be configured.
  • Applications requiring strict server-side uptime/SLA guarantees; the project is a self-hosted framework with no published SLA, so you would need to monitor its operational status yourself.

Interface

REST API
No
GraphQL
No
gRPC
No
MCP Server
Yes
SDK
No
Webhooks
No

Authentication

Methods: CLI/config-based OAuth integration for MCP (PKCE with local callback/redirect flow); OS keychain-backed token persistence via keyring (with in-memory fallback).
OAuth: Yes
Scopes: No

OAuth is described as enabled by default for SSE/HTTP MCP servers, configurable per server. The README discusses PKCE and token persistence behavior, but does not specify fine-grained scopes from the application perspective (it mentions optional server default scopes).
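The PKCE mechanism underlying this flow can be shown with a short, standard sketch. This is the generic S256 `code_verifier`/`code_challenge` derivation from RFC 7636 using only the standard library; it is not fast-agent-specific code, and the function name is illustrative.

```python
import base64
import hashlib
import secrets


def make_pkce_pair() -> "tuple[str, str]":
    """Return (code_verifier, code_challenge) per RFC 7636, S256 method."""
    # The verifier must be a 43-128 char URL-safe string;
    # token_urlsafe(64) yields ~86 characters.
    verifier = secrets.token_urlsafe(64)
    digest = hashlib.sha256(verifier.encode("ascii")).digest()
    # Base64url-encode the SHA-256 digest without '=' padding, per the spec.
    challenge = base64.urlsafe_b64encode(digest).rstrip(b"=").decode("ascii")
    return verifier, challenge
```

In a PKCE flow, the client sends `code_challenge` with the authorization request and later presents the `code_verifier` at the token endpoint; the server re-derives the challenge to confirm the same client is completing the exchange, which is what makes the local-callback flow safe without a client secret.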

Pricing

Free tier: No
Requires CC: No

No pricing model for the library/framework is described in the provided content. Costs depend on the selected LLM provider(s) and any MCP server backends.

Agent Metadata

Pagination
none
Idempotent
No
Retry Guidance
Not documented

Known Gotchas

  • CLI-first workflow: programmatic integration may require adopting their Python API/decorators or invoking the CLI rather than using a standardized REST interface.
  • MCP transport and OAuth behavior can vary by environment (e.g., when no OS keychain is available, token storage falls back to in-memory only), which may affect repeatability across sessions.
  • LLM/model/provider differences (and provider-specific model query overrides) may lead to inconsistent outputs if not pinned/configured carefully.
  • Chained/parallel tool calling can amplify failures if upstream MCP servers are unreliable; behavior under partial tool failures is not described in the provided material.
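Given the last gotcha above, callers may want to wrap individual MCP tool invocations in their own retry logic. Below is a generic exponential-backoff sketch; it is not part of fast-agent, and `call_with_retry` is a hypothetical helper name.

```python
import time
from typing import Callable, TypeVar

T = TypeVar("T")


def call_with_retry(fn: Callable[[], T], attempts: int = 3,
                    base_delay: float = 0.5) -> T:
    """Retry fn() on exception, doubling the sleep between attempts."""
    for attempt in range(attempts):
        try:
            return fn()
        except Exception:
            if attempt == attempts - 1:
                raise  # out of retries: surface the failure to the caller
            time.sleep(base_delay * (2 ** attempt))
    raise RuntimeError("unreachable")
```

Wrapping each tool call individually, rather than retrying a whole chain, keeps a single flaky MCP server from forcing re-execution of every upstream step.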



Scores are editorial opinions as of 2026-03-29.
