{"id":"dnhkng-glados","name":"GLaDOS","af_score":40.2,"security_score":37.5,"reliability_score":20.0,"what_it_does":"GLaDOS is an on-device (Python) voice assistant/agent framework that combines speech recognition, voice activity detection, text-to-speech, vision processing, an LLM core, and an MCP-based tool system to enable proactive/autonomous behavior (e.g., responding to camera, audio, or time-based events), with long-term memory and configurable LLM backends (e.g., Ollama or OpenAI-compatible APIs).","best_when":"You want a self-hosted, hackable multimodal agent you can run locally (often with Ollama) and extend via MCP tools, and you accept that tool execution and model backends require careful configuration.","avoid_when":"You need a hardened, turnkey assistant service with strong governance, minimal risk from tool calls, and well-specified API contracts; or you cannot control what tools can do once the agent is allowed to execute them.","last_evaluated":"2026-03-29T14:58:15.629710+00:00","has_mcp":true,"has_api":false,"auth_methods":["Optional API key for the OpenAI-compatible completion backend (glados_config.yaml 'api_key')","MCP server authentication depends on the specific MCP server implementation (not specified in the README excerpt)"],"has_free_tier":false,"known_gotchas":["Tool execution is powerful (home automation and system actions); agent behavior and tool permissions should be constrained or sandboxed.","The autonomy loop may issue tool calls concurrently while user speech is handled via a priority lane; race conditions or unintended simultaneous actions are possible.","LLM backend choice (Ollama vs. OpenAI-compatible) and streaming/latency behavior may affect timing-sensitive audio/vision flows."],"error_quality":0.0}