ai-counsel
AI Counsel is a self-hostable MCP server that runs multi-participant, multi-round deliberations among AI models: the models debate, cast structured votes with confidence and rationale, and converge on a decision. Deliberations can optionally be grounded in evidence via tools (read/search/list files and run safe commands) against a provided working directory. The server produces transcript/audit outputs and can reuse context through semantic retrieval of past debates.
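As orientation, a self-hosted MCP server of this kind is typically registered in an MCP client's configuration file. The sketch below is illustrative only: the server name, command, and entry-point path are assumptions, not taken from the project's README.

```json
{
  "mcpServers": {
    "ai-counsel": {
      "command": "python",
      "args": ["/path/to/ai-counsel/server.py"],
      "env": {}
    }
  }
}
```

The actual entry point, transport, and install steps depend on the project's documentation; treat the values above as placeholders to be replaced.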
Score Breakdown
⚙ Agent Friendliness
🔒 Security
Security posture is governed primarily by tool controls rather than strong authentication. The README describes tool security controls (exclude_patterns defaulting to sensitive paths, file size limits for read_file, and a command whitelist for run_command) and working_directory isolation, with adapter-specific caveats: Codex lacks true isolation, and Gemini requires include-directories configuration. No explicit guidance is given on TLS, authentication/authorization, secret management, or threat modeling for tool access. Dependency hygiene and CVE status cannot be verified from the provided content.
⚡ Reliability
Best When
You want a programmable, repeatable consensus process with audit transcripts and optional evidence grounding on local code/data, using self-hosted model runners or configurable cloud adapters, and you can carefully configure tool security and the working directory.
Avoid When
You cannot configure or enforce tool-access boundaries (working_directory, exclude_patterns, command whitelist) yet need to prevent any file or command exposure, or you need formal compliance guarantees without further evaluation.
Use Cases
- Architecture and API design decisions (REST vs GraphQL, data store choices, hybrid approaches)
- Policy/strategy deliberation with explicit voting and confidence
- Code-review and implementation planning using repository evidence (read/search/list/run)
- Consensus building across multiple model providers (local + cloud mixtures)
- Searching and analyzing past decisions for contradictions and evolution
Not For
- Untrusted inputs that could trigger unsafe tool actions without proper configuration
- Environments requiring strong enterprise identity/authorization controls out-of-the-box
- Guaranteed correctness or legal/medical decision-making without human review
- Workloads needing a public SaaS with predictable SLAs and hosted infrastructure
Interface
Authentication
No SaaS-style authentication is described in the README. As a self-hosted MCP server, access control is typically handled at the deployment layer (MCP client/transport configuration), but the provided content does not specify authentication mechanisms or scope enforcement.
Pricing
The project supports local models to avoid API costs; if cloud adapters are used, spend is determined by those providers.
Agent Metadata
Known Gotchas
- ⚠ Evidence/tooling requires correct working_directory; misconfiguration leads to file-not-found errors.
- ⚠ Some adapters have weaker isolation (README states Codex may access any file and Ollama/LMStudio have no file system access restrictions since they are HTTP adapters).
- ⚠ Tool security must be configured (exclude_patterns, max_file_size_bytes, command_whitelist) to reduce risk.
- ⚠ Local adapter reliability depends on the external runner (Ollama/LM Studio/others) and model availability.
- ⚠ Output formatting and structured votes may degrade with small models (the README recommends against models under 3B parameters).
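The gotchas above reference several tool-security settings by name (working_directory, exclude_patterns, max_file_size_bytes, command_whitelist). A minimal sketch of what such a configuration might look like follows; the key names come from the README, but the file format, nesting, and example values here are assumptions:

```yaml
# Illustrative tool-security configuration (schema assumed, not from the README)
tools:
  working_directory: /srv/projects/my-repo   # evidence tools resolve paths here
  exclude_patterns:                          # sensitive paths excluded from reads
    - ".env"
    - "**/secrets/**"
    - "**/*.pem"
  read_file:
    max_file_size_bytes: 1048576             # cap individual file reads at 1 MiB
  run_command:
    command_whitelist:                       # only whitelisted commands may run
      - "git status"
      - "ls"
      - "grep"
```

Consult the project's actual configuration reference for the real schema; the point of the sketch is that all three controls should be set deliberately rather than left at permissive values.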
Scores are editorial opinions as of 2026-03-30.