ai-counsel

AI Counsel is a self-hostable MCP server that runs multi-participant, multi-round “deliberation” among AI models to debate, cast structured votes with confidence/rationale, and converge on a decision. It can optionally ground decisions using evidence-based tools (read/search/list/run safe commands) against a provided working directory, includes transcript/audit outputs, and can reuse context via semantic retrieval of past debates.
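The structured vote described above (an option plus self-reported confidence and a rationale) can be sketched as a plain data model. The field names and the confidence-weighted tally below are illustrative assumptions, not ai-counsel's actual schema:

```python
from dataclasses import dataclass

@dataclass
class Vote:
    # Hypothetical vote shape; field names are assumptions,
    # not ai-counsel's real schema.
    participant: str   # model that cast the vote
    option: str        # decision option being endorsed
    confidence: float  # self-reported confidence, 0.0-1.0
    rationale: str     # free-text justification

def tally(votes: list[Vote]) -> dict[str, float]:
    """Sum confidence-weighted support per option."""
    totals: dict[str, float] = {}
    for v in votes:
        totals[v.option] = totals.get(v.option, 0.0) + v.confidence
    return totals

votes = [
    Vote("claude", "REST", 0.8, "simpler caching story"),
    Vote("gpt", "GraphQL", 0.6, "flexible client queries"),
    Vote("llama", "REST", 0.5, "team familiarity"),
]
print(tally(votes))  # confidence-weighted totals per option
```

A real deliberation would also carry round numbers and convergence state; this only shows why confidence and rationale travel with each vote rather than being averaged away.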

Evaluated Mar 30, 2026
Tags: AI/ML, mcp, ai-agents, multi-agent-consensus, structured-voting, evidence-grounded, self-hosted, tooling, python
⚙ Agent Friendliness: 57 / 100 (Can an agent use this?)
🔒 Security: 29 / 100 (Is it safe for agents?)
⚡ Reliability: 28 / 100 (Does it work consistently?)

Score Breakdown

⚙ Agent Friendliness

  • MCP Quality: 65
  • Documentation: 65
  • Error Messages: 0
  • Auth Simplicity: 80
  • Rate Limits: 10

🔒 Security

  • TLS Enforcement: 30
  • Auth Strength: 20
  • Scope Granularity: 15
  • Dep. Hygiene: 45
  • Secret Handling: 40

Security posture is governed primarily by tool controls rather than strong authentication. The README describes tool security controls (exclude_patterns defaulting to sensitive paths, file size limits for read_file, and a command whitelist for run_command) and working_directory isolation, with adapter-specific caveats: it warns that Codex lacks true isolation and that Gemini requires include-directories configuration. It provides no explicit guidance on TLS, authentication/authorization, secret management, or a threat model for tool access. Dependency hygiene and CVE status cannot be verified from the provided content.
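The control layers named above (path exclusion, file-size limits, command whitelisting, working_directory containment) compose into a simple guard. This is a generic sketch; the function names, defaults, and pattern values are assumptions for illustration, not ai-counsel's implementation:

```python
import fnmatch
import os

# Illustrative defaults; ai-counsel's actual values may differ.
EXCLUDE_PATTERNS = ["*.env", "*.pem", "secrets/*"]
MAX_FILE_SIZE_BYTES = 1_000_000
COMMAND_WHITELIST = {"git", "ls", "grep"}

def check_read(path: str, working_directory: str) -> None:
    """Reject reads that escape the working directory, match an
    exclude pattern, or exceed the size limit."""
    root = os.path.realpath(working_directory)
    full = os.path.realpath(os.path.join(root, path))
    if not full.startswith(root + os.sep):
        raise PermissionError(f"{path} escapes working_directory")
    if any(fnmatch.fnmatch(path, pat) for pat in EXCLUDE_PATTERNS):
        raise PermissionError(f"{path} matches an exclude pattern")
    if os.path.getsize(full) > MAX_FILE_SIZE_BYTES:
        raise PermissionError(f"{path} exceeds size limit")

def check_command(command: str) -> None:
    """Allow only whitelisted executables to run."""
    if command.split()[0] not in COMMAND_WHITELIST:
        raise PermissionError(f"command not whitelisted: {command}")
```

Note that checks like these run inside the server process; they do not substitute for TLS or caller authentication, which is why those sub-scores remain low.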

⚡ Reliability

  • Uptime/SLA: 0
  • Version Stability: 30
  • Breaking Changes: 20
  • Error Recovery: 60

Best When

You want a programmable, repeatable consensus process (with audit transcripts) and optional evidence grounding on local code/data using self-hosted model runners (or configurable cloud adapters), and you can configure tool security/working directory carefully.

Avoid When

You cannot configure or enforce tool access boundaries (working_directory, exclude_patterns, command whitelist) but must prevent any file or command exposure, or you need formal compliance guarantees without further evaluation.

Use Cases

  • Architecture and API design decisions (REST vs GraphQL, data store choices, hybrid approaches)
  • Policy/strategy deliberation with explicit voting and confidence
  • Code-review and implementation planning using repository evidence (read/search/list/run)
  • Consensus building across multiple model providers (local + cloud mixtures)
  • Searching and analyzing past decisions for contradictions and evolution

Not For

  • Untrusted inputs that could trigger unsafe tool actions without proper configuration
  • Environments requiring strong enterprise identity/authorization controls out-of-the-box
  • Guaranteed correctness or legal/medical decision-making without human review
  • Workloads needing a public SaaS with predictable SLAs and hosted infrastructure

Interface

  • REST API: No
  • GraphQL: No
  • gRPC: No
  • MCP Server: Yes
  • SDK: No
  • Webhooks: No

Authentication

OAuth: No
Scopes: No

No SaaS-style authentication is described in the README. As a self-hosted MCP server, access control would typically be handled by the MCP client/server deployment setup, but the provided content does not specify auth mechanisms or scope enforcement.

Pricing

Free tier: No
Requires CC: No

The project supports local models to avoid API costs; if cloud adapters are used, spend is determined by those providers.

Agent Metadata

  • Pagination: none
  • Idempotent: No
  • Retry Guidance: Not documented

Known Gotchas

  • Evidence/tooling requires correct working_directory; misconfiguration leads to file-not-found errors.
  • Some adapters have weaker isolation: the README states Codex may access any file, and Ollama/LM Studio have no file-system access restrictions since they are HTTP adapters.
  • Tool security must be configured (exclude_patterns, max_file_size_bytes, command_whitelist) to reduce risk.
  • Local adapter reliability depends on the external runner (Ollama/LM Studio/others) and model availability.
  • Model output formatting/structured votes may degrade for small models (<3B mentioned as not recommended).
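The first gotcha (a misconfigured working_directory surfacing as file-not-found errors mid-deliberation) can be caught up front with a cheap pre-flight check. This helper is an illustrative assumption, not part of ai-counsel:

```python
from pathlib import Path

def preflight_working_directory(working_directory: str) -> Path:
    """Fail fast with a clear message before starting a deliberation,
    instead of letting tool calls hit file-not-found errors later."""
    wd = Path(working_directory).expanduser().resolve()
    if not wd.exists():
        raise FileNotFoundError(f"working_directory does not exist: {wd}")
    if not wd.is_dir():
        raise NotADirectoryError(f"working_directory is not a directory: {wd}")
    return wd
```

Running a check like this before each deliberation turns the most common misconfiguration into an immediate, actionable error.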

Scores are editorial opinions as of 2026-03-30.
