{"id":"simonw-llm","name":"llm","af_score":52.8,"security_score":56.5,"reliability_score":40.0,"what_it_does":"llm is a Python library and command-line tool for interacting with large language models (both via remote APIs like OpenAI/Anthropic/Gemini and via locally installed/self-hosted models through plugins). It supports running prompts from the CLI, managing API keys, chat, embeddings, structured extraction (schemas), tool execution, and logging to SQLite.","best_when":"You want a single CLI/Python interface to orchestrate multiple LLM providers and local models, including structured outputs and optional local execution via plugins.","avoid_when":"You require a turnkey hosted service with centralized auth, SLAs, or guaranteed data residency/compliance controls; llm is a local CLI/library that sends prompts to whichever upstream providers you configure, so it provides none of these itself.","last_evaluated":"2026-03-29T14:54:27.391009+00:00","has_mcp":false,"has_api":false,"auth_methods":["Stored API keys managed via llm keys set (e.g., OpenAI/Gemini/Anthropic)","Passing keys via CLI options","Providing keys via environment variables"],"has_free_tier":false,"known_gotchas":["This is a CLI-first tool; as an agent you may need to call the Python API or shell out to the CLI (or rely on provider/plugin availability).","Behavior depends on installed plugins (e.g., llm-ollama, llm-gemini, llm-anthropic), so supported models, tools, and headers can vary.","If you enable SQLite logging or schema/tool features, ensure your agent handles sensitive content appropriately to avoid unintended storage or leakage."],"error_quality":0.0}