About Assay

What is Assay?

Assay is the quality layer for agentic software. It rates software packages on how well they work with AI agents — not just whether they have an API, but whether that API is actually usable by autonomous software.

Every package gets an AF Score (Agent-Friendliness Score) that captures how easy it is for an agent to authenticate, call, interpret errors from, and reliably use that package.

Why?

AI agents are choosing and calling software packages autonomously. But most API documentation, SDKs, and tooling were designed for human developers. Agents need structured, predictable, well-documented interfaces. Assay helps agents (and the humans building them) find the best tools for the job.

Multi-Dimensional Scorecard

Every package receives three independent scores (0–100), each measuring a different quality dimension:

⚙ Agent Friendliness

Can an agent actually use this tool effectively?

  • MCP Quality — MCP server existence, maturity, documentation
  • Documentation — API docs quality, examples, completeness
  • Error Messages — Structured errors with codes and recovery guidance
  • Auth Simplicity — How easy to authenticate programmatically
  • Rate Limit Clarity — Clear documentation + response headers

🔒 Security

Is it safe to let an agent use this tool?

  • TLS Enforcement — HTTPS required for all communication
  • Auth Strength — Mechanism strength (API keys, OAuth2, etc.)
  • Scope Granularity — Fine-grained permission controls
  • Dependency Hygiene — Clean dependencies, no known CVEs
  • Secret Handling — Credentials via env vars/vault, never in logs

⚡ Reliability

Does it work consistently over time?

  • Uptime/SLA — Published SLA, status page, uptime history
  • Version Stability — Stable releases, semver adherence
  • Breaking Changes — History of breaking changes, migration guides
  • Error Recovery — Retry guidance, idempotent operations

Score Bands

  • 80+ — Excellent
  • 60–79 — Good
  • <60 — Needs Work
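The score bands can be expressed as a small helper. This is a sketch for interpreting published scores; `score_band` is an illustrative name, not part of the Assay API:

```python
def score_band(score: float) -> str:
    """Map a 0-100 Assay score to its band per the legend above."""
    if score >= 80:
        return "Excellent"
    if score >= 60:
        return "Good"
    return "Needs Work"
```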

Methodology

Each package is evaluated through a combination of automated analysis and structured LLM evaluation. The evaluation examines real API behavior, documentation, error handling, and MCP server implementations.

Evaluations are versioned and auditable. Each run records the model used, tokens consumed, and raw output for full transparency.
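The audit trail described above might be modeled along these lines. The field names are hypothetical, not the actual Assay schema; they simply mirror what each run is said to record:

```python
from dataclasses import dataclass


@dataclass
class EvaluationRun:
    """One versioned, auditable evaluation run (illustrative schema)."""
    package_id: str
    methodology_version: str
    model: str             # LLM used for the structured evaluation
    tokens_consumed: int
    raw_output: str        # verbatim model output, kept for transparency
```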

Current Coverage

  • 4643 packages evaluated (of 14956 cataloged)
  • 16 categories
  • 1442 MCP servers
  • 65.1 average AF Score

Score Disputes & Corrections

If you maintain a package and believe its Assay score is inaccurate or based on outdated information, we want to hear from you. We evaluate thousands of packages and mistakes happen.

To request a correction or re-evaluation, reach out at hello@assay.tools with the package name and the details you believe are wrong.

We aim to review disputes within 7 days. If a factual error is confirmed, the package will be re-evaluated and its scores updated promptly.

Disclaimer

Assay scores are editorial opinions, not statements of fact. They reflect our evaluation methodology applied to publicly available information about each package as of the evaluation date shown on each package page. Scores are provided as-is for informational purposes and do not constitute a warranty, guarantee, or certification of any kind. A high score does not guarantee that a package is secure, reliable, or fit for any particular purpose. A low score does not imply that a package is defective or unsuitable. Users should perform their own due diligence before adopting any software package. Evaluation methodology, criteria, and scores are subject to change. Packages may have changed since their last evaluation.

Who We Are

Assay is built by AJ van Beest, a security engineer and AI practitioner who got tired of agents picking the wrong tools. The project is open-source, community-driven, and designed to be as useful to autonomous agents as it is to the humans building them.

Our scoring methodology is published openly. If you have questions, feedback, or want to contribute, reach out at hello@assay.tools.

For Agents

Assay data is available through multiple channels:

  • REST API — Full package data at /v1/packages
  • Agent Guide — Condensed format at /v1/packages/{id}/agent-guide
  • MCP Server — Native Model Context Protocol integration
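A minimal sketch of calling the REST endpoints above, using only the standard library. The base URL `https://api.assay.tools` is an assumption for illustration; only the `/v1/packages` paths come from the list above:

```python
import json
import urllib.request

BASE_URL = "https://api.assay.tools"  # hypothetical host; check the Assay docs


def package_url(package_id: str, agent_guide: bool = False) -> str:
    """Build the URL for a package's full data or its condensed agent guide."""
    path = f"/v1/packages/{package_id}"
    if agent_guide:
        path += "/agent-guide"
    return BASE_URL + path


def fetch_package(package_id: str, agent_guide: bool = False) -> dict:
    """Fetch package data (performs a live network call)."""
    with urllib.request.urlopen(package_url(package_id, agent_guide)) as resp:
        return json.load(resp)
```

Agents with constrained context windows would typically prefer the agent-guide endpoint, since it returns the condensed format.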