Instructor

A Python library that makes it easy to get structured, validated Pydantic objects from LLMs. It patches the OpenAI client to add a response_model parameter: specify a Pydantic class and Instructor handles function calling/JSON mode, validation, and automatic retries on validation failure. Supports OpenAI, Anthropic, Gemini, Mistral, and local models via LiteLLM.
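A minimal usage sketch of the pattern described above. The model name and prompt are illustrative, and the `extract_user` helper is ours, not part of Instructor; the `instructor.from_openai` wrapper and `response_model`/`max_retries` parameters are the library's v1 API.

```python
from pydantic import BaseModel

class UserInfo(BaseModel):
    name: str
    age: int

def extract_user(text: str) -> UserInfo:
    # Imported lazily so the sketch stands alone even without the SDKs installed.
    import instructor
    from openai import OpenAI

    # from_openai wraps the client so create() accepts response_model.
    client = instructor.from_openai(OpenAI())
    return client.chat.completions.create(
        model="gpt-4o-mini",      # illustrative model name
        response_model=UserInfo,  # Instructor validates the reply against this schema
        max_retries=2,            # re-ask the LLM on validation failure
        messages=[{"role": "user", "content": text}],
    )
```

The return value is a validated `UserInfo` instance, not a raw completion, so downstream code gets type safety for free.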

Evaluated Mar 06, 2026 · v1.x
Homepage · Repo · AI & Machine Learning
python llm structured-output pydantic openai anthropic validation retry function-calling
⚙ Agent Friendliness: 64/100 (Can an agent use this?)
🔒 Security: 90/100 (Is it safe for agents?)
⚡ Reliability: 82/100 (Does it work consistently?)

Score Breakdown

⚙ Agent Friendliness

MCP Quality: --
Documentation: 88
Error Messages: 82
Auth Simplicity: 88
Rate Limits: 80

🔒 Security

TLS Enforcement: 100
Auth Strength: 90
Scope Granularity: 85
Dep. Hygiene: 88
Secret Handling: 88

Relies on the underlying LLM provider's security. Validation error messages are sent back to the LLM, so avoid including sensitive data in error context. LLM responses may contain injected content; validate and sanitize outputs.

⚡ Reliability

Uptime/SLA: 88
Version Stability: 80
Breaking Changes: 75
Error Recovery: 85

Best When

You want the simplest possible way to get typed, validated Pydantic objects from LLMs in Python without manually handling function calling schemas and parsing.

Avoid When

You need a language other than Python, or you are working with models that don't support function calling well; in the latter case, raw JSON mode with manual Pydantic parsing gives more control.
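The fallback mentioned above can be sketched without Instructor: request JSON output from the provider, then validate the raw string yourself. The `Invoice` model and `parse_invoice` helper are illustrative names, not part of any library.

```python
import json
from typing import Optional

from pydantic import BaseModel, ValidationError

class Invoice(BaseModel):
    vendor: str
    total: float

def parse_invoice(raw: str) -> Optional[Invoice]:
    # Validate a JSON-mode LLM response by hand; return None on any failure
    # so the caller decides whether to re-prompt, log, or fall back.
    try:
        return Invoice.model_validate(json.loads(raw))
    except (json.JSONDecodeError, ValidationError):
        return None
```

Unlike Instructor, this approach does not re-ask the model on failure; the retry policy is entirely up to the caller.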

Use Cases

  • Extract structured data from LLM outputs into Pydantic models with automatic validation and retry in agent pipelines
  • Parse unstructured text (emails, documents, chat logs) into typed Python objects for agent data extraction tasks
  • Build agent tool call interfaces where LLMs return typed function arguments validated against Pydantic schemas
  • Implement multi-step agent data extraction with Instructor's automatic retry on Pydantic validation failure
  • Generate structured agent configurations, plans, and outputs from LLM completions with type safety

Not For

  • Non-Python codebases — Instructor is Python-only; use structured output APIs directly for other languages
  • Models without function calling — Instructor works best with models supporting function calling/JSON mode; JSON parsing from non-JSON models is fragile
  • High-throughput scenarios where retry overhead matters — validation failures cause additional LLM API calls; design schemas that minimize validation failures

Interface

REST API: No
GraphQL: No
gRPC: No
MCP Server: No
SDK: Yes
Webhooks: No

Authentication

Methods: api_key
OAuth: No
Scopes: No

Uses underlying LLM provider auth (OpenAI API key, Anthropic API key, etc.). Instructor itself has no auth layer.

Pricing

Model: open_source
Free tier: Yes
Requires CC: No

MIT license. Created by Jason Liu. LLM API usage billed by the underlying providers is the primary cost.

Agent Metadata

Pagination: none
Idempotent: Partial
Retry Guidance: Documented

Known Gotchas

  • Validation failure triggers retry with error context sent to LLM — complex Pydantic validators that are hard for LLMs to satisfy cause multiple expensive API calls; design schemas LLMs can satisfy
  • max_retries parameter controls retry count for validation failures only — network/API errors use separate retry logic from the underlying client
  • Instructor patches the OpenAI client in-place (client.chat.completions.create becomes extended) — the patch is client-instance specific; patch the correct client instance in your agent
  • response_model must be a Pydantic BaseModel subclass — plain dataclasses, TypedDict, and Python primitives don't work directly; wrap in a Pydantic model
  • Streaming mode with response_model requires special handling (Instructor provides partial-object streaming via Partial response models); standard streaming doesn't auto-validate structured output
  • Field descriptions in Pydantic model docstrings and Field(description=...) are included in the JSON schema sent to the LLM — write clear descriptions to guide LLM output quality
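The retry behavior in the first gotcha above can be illustrated with a self-contained simulation of the loop Instructor runs internally. Everything here is a hypothetical stand-in: `fake_llm` replaces a real (billable) completion call, and `create_with_retries` is our sketch, not Instructor's implementation.

```python
from pydantic import BaseModel, ValidationError, field_validator

class Rating(BaseModel):
    score: int

    @field_validator("score")
    @classmethod
    def in_range(cls, v: int) -> int:
        if not 1 <= v <= 5:
            raise ValueError("score must be between 1 and 5")
        return v

CALLS = []  # records every simulated LLM call so the cost is visible

def fake_llm(messages: list[dict]) -> str:
    # Stand-in for an API call: answers badly at first, and correctly only
    # once the validation error has been fed back into the conversation.
    if any("must be between" in m["content"] for m in messages):
        return '{"score": 3}'
    return '{"score": 9}'

def create_with_retries(messages: list[dict], max_retries: int = 2) -> Rating:
    # Each validation failure appends the error text to the messages and
    # re-asks the model: every retry is another full LLM call.
    for _ in range(max_retries + 1):
        raw = fake_llm(messages)
        CALLS.append(raw)
        try:
            return Rating.model_validate_json(raw)
        except ValidationError as e:
            messages = messages + [
                {"role": "user", "content": f"Fix these validation errors: {e}"}
            ]
    raise RuntimeError("validation failed after retries")
```

Running `create_with_retries([{"role": "user", "content": "Rate this"}])` takes two simulated calls before the validator is satisfied, which is why schemas with validators the LLM can actually satisfy matter for cost.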

Alternatives

Full Evaluation Report

Detailed scoring breakdown, competitive positioning, security analysis, and improvement recommendations for Instructor.

$99

Scores are editorial opinions as of 2026-03-06.
