{"id":"exo-explore-exo","name":"exo","af_score":18.8,"security_score":21.2,"reliability_score":25.0,"what_it_does":"exo is a local-first system for running LLM inference across multiple devices: it automatically discovers peers and distributes model execution (tensor/pipeline parallelism) over the network, with an optional built-in dashboard and an API compatible with common chat/response endpoints. On macOS, the README also documents RDMA-over-Thunderbolt support for reduced inter-device latency.","best_when":"You have multiple compatible local devices and want to distribute model inference while using the provided localhost API/dashboard; especially effective for macOS clusters with RDMA capability.","avoid_when":"You need robust security controls for a network-exposed API (auth, rate limits, TLS guarantees) but cannot isolate the service to localhost or a trusted network; also avoid RDMA clusters when device OS versions and hardware connections cannot be kept consistent.","last_evaluated":"2026-03-29T12:59:04.571137+00:00","has_mcp":false,"has_api":true,"auth_methods":["No authentication mechanisms are described in the provided README (it mentions only a localhost API and environment-based configuration)."],"has_free_tier":false,"known_gotchas":["No MCP server is indicated, so agent integrations would rely on the described HTTP API endpoints.","The provided README does not document authentication, authorization, rate limits, pagination, or retry/idempotency semantics; agents should treat these as unknown until verified in the code/docs.","RDMA operation depends on matching macOS versions across devices (including beta versions) and correct Thunderbolt 5 cabling/port usage; misconfiguration can lead to discovery/connectivity issues."],"error_quality":null}