{"id":"log-bell-avakill","name":"avakill","af_score":73.2,"security_score":49.8,"reliability_score":38.8,"what_it_does":"AvaKill is an open-source safety “firewall” for AI agents: it intercepts tool calls, evaluates them against a YAML policy (deny-by-default, with rule-based checks including shell/path/content scanning, rate limits, and approval gates), and blocks or kills dangerous operations before they execute. It offers multiple enforcement paths: native agent hooks, an MCP proxy/wrapper, and OS-level sandboxing, plus an optional daemon for shared policy evaluation and audit logging.","best_when":"You need local, deterministic, policy-based guardrails around agent tool use across multiple agent runtimes (hooks/MCP/OS sandbox), with an auditable trail and configurable deny-by-default rules.","avoid_when":"You cannot practically integrate or enable any of the enforcement paths (hooks, MCP wrapping, OS sandbox), or you require a hosted, managed service with centralized policy distribution and guaranteed uptime/SLA.","last_evaluated":"2026-04-04T19:32:42.222691+00:00","has_mcp":true,"has_api":false,"auth_methods":["CLI usage (local)","Python SDK Guard / protect decorator","Framework wrappers (GuardedOpenAIClient, GuardedAnthropicClient, LangChain callback handler)"],"has_free_tier":false,"known_gotchas":["Tool name normalization requires a correct per-agent mapping: policies use canonical tool names, but different agents may emit different tool identifiers unless wrapped via the hook/MCP integrations.","Shell/file/path safety checks depend on well-structured tool arguments and available metadata; ambiguous arguments can cause false positives (spurious denials).","Approval workflows require human interaction; unattended runs may stall if policies use require_approval.","If OS sandboxing/hardening profiles are misconfigured or unsupported on a platform, that enforcement path is unavailable."],"error_quality":0.0}