GLaDOS

GLaDOS is an on-device Python voice assistant/agent framework. It combines speech recognition, voice activity detection, text-to-speech, vision processing, an LLM core, and an MCP-based tool system to enable proactive/autonomous behavior (e.g., reacting to camera, audio, or time events), with long-term memory and configurable LLM backends (e.g., Ollama or OpenAI-compatible APIs).

Evaluated Mar 29, 2026
Tags: ai-ml, voice-assistant, multimodal, mcp, self-hosted, python, tool-use, vision, memory, autonomy
⚙ Agent Friendliness: 40 / 100 (Can an agent use this?)
🔒 Security: 38 / 100 (Is it safe for agents?)
⚡ Reliability: 20 / 100 (Does it work consistently?)

Score Breakdown

⚙ Agent Friendliness

MCP Quality: 55
Documentation: 60
Error Messages: 0
Auth Simplicity: 70
Rate Limits: 0

🔒 Security

TLS Enforcement: 40
Auth Strength: 35
Scope Granularity: 20
Dep. Hygiene: 45
Secret Handling: 50

Security posture cannot be fully determined from the provided README/manifest. TLS enforcement for any HTTP-based MCP or model calls is not specified (depends on configured URLs). Tool execution capability introduces substantial risk if MCP tools can access sensitive systems; the README does not describe permissioning, allowlists, or sandboxing. Dependency list is extensive; no vulnerability/SBOM/CVE posture is provided. An api_key field exists for the LLM backend, but secret handling practices (no logging, redaction) are not described.
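Since the README describes no permissioning or sandboxing for MCP tools, integrators may want to gate tool calls themselves before they reach a sensitive system. A minimal sketch of an allowlist check; the tool names and the ToolCallDenied error are hypothetical, not part of GLaDOS:

```python
# Minimal allowlist gate for MCP tool calls.
# Tool names are illustrative examples, not GLaDOS's actual tools.
ALLOWED_TOOLS = {"get_weather", "read_clock"}

class ToolCallDenied(Exception):
    """Raised when the agent requests a tool outside the allowlist."""

def gate_tool_call(tool_name: str, arguments: dict) -> tuple[str, dict]:
    """Pass the call through only if the tool is explicitly allowed."""
    if tool_name not in ALLOWED_TOOLS:
        raise ToolCallDenied(f"tool {tool_name!r} is not on the allowlist")
    return tool_name, arguments
```

A deny-by-default gate like this is cheap insurance: anything the LLM hallucinates or an MCP server newly advertises is rejected until a human adds it to the set.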

⚡ Reliability

Uptime/SLA: 0
Version Stability: 30
Breaking Changes: 30
Error Recovery: 20

Best When

You want a self-hosted, hackable multimodal agent you can run locally (often with Ollama) and extend via MCP tools, and you can accept that tool execution and model backends require careful configuration.

Avoid When

You need a hardened, turnkey assistant service with strong governance, minimal risk from tool calls, and well-specified API contracts; or you cannot control what tools can do once the agent is allowed to execute them.

Use Cases

  • Building a voice assistant with proactive (non-wake-word) behavior
  • Home automation or system control via MCP tools
  • Adding local vision (VLM) and reacting to scene changes
  • Creating an LLM-driven multi-module assistant with memory and tool use
  • Running an interactive text UI or voice UI for experimentation
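Whether the backend is local Ollama or a hosted service, OpenAI-compatible chat endpoints accept the same request shape, which is what makes the backends swappable. A hedged sketch of building such a request; the model name and system prompt are placeholders, not values from the GLaDOS config:

```python
import json

def build_chat_request(model: str, user_text: str,
                       system_prompt: str = "You are GLaDOS.") -> dict:
    """Build an OpenAI-compatible /v1/chat/completions payload."""
    return {
        "model": model,
        "messages": [
            {"role": "system", "content": system_prompt},
            {"role": "user", "content": user_text},
        ],
        "stream": True,  # streaming matters for timing-sensitive audio flows
    }

# Serialized body for an HTTP POST to an OpenAI-compatible endpoint,
# e.g. Ollama's default http://localhost:11434/v1/chat/completions.
body = json.dumps(build_chat_request("llama3.2", "What time is it?"))
```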

Not For

  • Production deployments requiring a managed hosted service with strong operational guarantees
  • Environments where collecting/storing user audio, vision frames, or conversation history is unacceptable
  • Organizations needing standardized compliance/SOC2-like assurances from a vendor-hosted API
  • Security-sensitive integrations without careful threat modeling and sandboxing of tool execution

Interface

REST API: No
GraphQL: No
gRPC: No
MCP Server: Yes
SDK: No
Webhooks: No

Authentication

Methods: Optional API key for the OpenAI-compatible completion backend (glados_config.yaml 'api_key'). MCP server authentication, if any, depends on each MCP server implementation (not specified in the README excerpt).
OAuth: No
Scopes: No

Authentication is primarily delegated to the configured LLM backend (e.g., OpenAI-compatible API key if needed) and to any MCP servers used for tools; the project configuration mentions an api_key field but does not describe OAuth/scopes or auth flows for MCP within the provided content.

Pricing

Free tier: No
Requires CC: No

No hosted pricing described; costs would depend on local compute and/or the selected LLM backend (Ollama local vs external OpenAI-compatible API).
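For the external-API case, a back-of-envelope token cost projection is easy to run before committing to a backend. The per-million-token price and usage figures below are placeholders, not quotes for any real provider:

```python
def monthly_cost(requests_per_day: int, tokens_per_request: int,
                 usd_per_million_tokens: float) -> float:
    """Rough monthly spend for a metered OpenAI-compatible backend."""
    tokens = requests_per_day * tokens_per_request * 30  # ~30 days/month
    return tokens / 1_000_000 * usd_per_million_tokens

# e.g. 200 requests/day at ~1,500 tokens each, $0.50 per million tokens:
# monthly_cost(200, 1500, 0.50) -> 4.5 (USD/month)
```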

Agent Metadata

Pagination: none
Idempotent: False
Retry Guidance: Not documented
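With no documented retry guidance, callers hitting a flaky local or remote backend may want generic client-side exponential backoff. A sketch of such a wrapper; this is not a GLaDOS API, and the delay schedule is arbitrary:

```python
import time

def with_retries(fn, attempts: int = 3, base_delay: float = 0.5):
    """Call fn(), retrying on any exception with exponential backoff."""
    for attempt in range(attempts):
        try:
            return fn()
        except Exception:
            if attempt == attempts - 1:
                raise  # out of attempts; surface the last error
            time.sleep(base_delay * (2 ** attempt))
```

In practice the bare `except Exception` should be narrowed to the transient errors the chosen backend actually raises (timeouts, connection resets), so genuine bugs fail fast instead of being retried.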

Known Gotchas

  • Tool execution is powerful (home automation/system actions); agent behavior and tool permissions should be constrained/sandboxed.
  • Concurrency/autonomy loop may issue tool calls while user speech is handled via a priority lane; race conditions or unintended simultaneous actions are possible.
  • LLM backend configuration (Ollama vs OpenAI-compatible) and streaming/latency behavior may affect timing-sensitive audio/vision flows.
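The priority-lane behavior described above can be approximated with a priority queue in which user speech always outranks autonomous events; the numeric priorities and event payloads here are illustrative, not GLaDOS internals:

```python
import queue

USER_SPEECH, AUTONOMOUS = 0, 1  # lower number = higher priority

def next_event(q: "queue.PriorityQueue[tuple[int, int, str]]") -> str:
    """Pop the highest-priority pending event; user speech preempts autonomy."""
    _, _, payload = q.get_nowait()
    return payload

q = queue.PriorityQueue()
# The middle element is a sequence number: it breaks ties so equal-priority
# events stay FIFO and payload strings are never compared.
q.put((AUTONOMOUS, 0, "camera: scene changed"))
q.put((USER_SPEECH, 1, "user: what's the weather?"))
```

Even with this ordering, an autonomous tool call already in flight is not cancelled when speech arrives, which is exactly the race-condition class the gotcha above warns about.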



Scores are editorial opinions as of 2026-03-29.
