kimi-code-mcp

Provides an MCP server that lets Claude Code delegate bulk codebase analysis to Kimi Code (Kimi K2.5, up to 256K context) by spawning the Kimi CLI and returning parsed JSON results via MCP tools such as kimi_analyze and kimi_resume.

Evaluated Mar 30, 2026
Tags: DevTools · mcp · claude-code · kimi-code · code-analysis · developer-tools · typescript · ai-agent · code-review
⚙ Agent Friendliness: 56 / 100 — Can an agent use this?
🔒 Security: 64 / 100 — Is it safe for agents?
⚡ Reliability: 29 / 100 — Does it work consistently?

Score Breakdown

⚙ Agent Friendliness

  • MCP Quality: 78
  • Documentation: 70
  • Error Messages: 0
  • Auth Simplicity: 60
  • Rate Limits: 15

🔒 Security

  • TLS Enforcement: 100
  • Auth Strength: 70
  • Scope Granularity: 30
  • Dep. Hygiene: 45
  • Secret Handling: 70

TLS is likely used for provider API calls (api.kimi.com over HTTPS). Secrets are recommended via environment variables and config references rather than hardcoded values; however, the README does not describe how the MCP server handles or logs subprocess output, whether it redacts secrets, or whether it filters requests and responses. No scope-granularity details are provided at the MCP server layer.

⚡ Reliability

  • Uptime/SLA: 0
  • Version Stability: 45
  • Breaking Changes: 40
  • Error Recovery: 30

Best When

You have large, multi-file repositories and want Claude to reason/edit using compressed, cross-file analysis output rather than reading everything itself.

Avoid When

When you need every line verbatim from the analysis (detailed output approaches raw reading costs) or when external execution/subprocesses are disallowed.

Use Cases

  • Delegate large codebase architecture and dependency analysis to Kimi, then have Claude act on the report
  • Security/vulnerability review over an entire repository using structured findings from Kimi
  • PR/pre-merge review workflows where a bulk scan precedes targeted edits
  • Session-based analysis with resume and cached context to reduce repeated reading
  • Lightweight Q&A against a repository without full bulk context (kimi_query)

Not For

  • Small repos or single-file changes where direct reading by Claude is faster
  • Use cases requiring a first-party REST/GraphQL API surface for direct client integration
  • Environments that cannot run subprocesses (the MCP server shells out to the Kimi CLI)

Interface

  • REST API: No
  • GraphQL: No
  • gRPC: No
  • MCP Server: Yes
  • SDK: No
  • Webhooks: No

Authentication

Methods:
  • Kimi Code CLI OAuth login (/login or /setup), with config stored in ~/.kimi/config.toml
  • Manual API key configuration for Kimi Code (api.kimi.com/coding/v1) via ~/.kimi/config.toml or the KIMICODE_API_KEY environment variable

OAuth: Yes
Scopes: No

Authentication to this MCP server is indirect: it relies on local Kimi CLI authentication/config (OAuth or API key). Scopes are not described at the MCP server layer.
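Because the server piggybacks on local Kimi CLI credentials, a client-side sketch of the key lookup can clarify what "indirect" means here. The `resolveKimiApiKey` helper below is hypothetical, and the precedence shown (environment variable first, then a key read from ~/.kimi/config.toml) is an assumption, not documented behavior:

```typescript
// Sketch: resolve the Kimi Code API key from the two documented sources.
// The env-var-wins precedence is an assumption; only the names
// KIMICODE_API_KEY and ~/.kimi/config.toml come from the docs above.
export function resolveKimiApiKey(
  env: Record<string, string | undefined>,
  configFileKey?: string, // key already parsed out of ~/.kimi/config.toml
): string | undefined {
  return env["KIMICODE_API_KEY"] ?? configFileKey;
}
```

A caller would typically pass `process.env` plus whatever it parsed from the TOML config, falling back to an error if both are absent.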

Pricing

Model: Kimi K2.5 (256K context) via Kimi Code / Moonshot
Free tier: Yes
Requires CC: No

Cost savings are described as reduced Claude token consumption; actual spend depends on the Kimi plan and usage frequency.

Agent Metadata

Pagination: none
Idempotent: False
Retry Guidance: Not documented
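Since calls are non-idempotent and no retry guidance is documented, a conservative client-side pattern is to bound retries and back off between attempts. The `retryTool` wrapper below is a hypothetical sketch (its name, attempt count, and delays are assumptions), not behavior the server provides:

```typescript
// Sketch: bounded retry with exponential backoff for a tool call whose
// retry semantics are undocumented. Keep maxAttempts small because the
// underlying call is not idempotent.
export async function retryTool<T>(
  call: () => Promise<T>,
  maxAttempts = 3,
  baseDelayMs = 500,
): Promise<T> {
  let lastError: unknown;
  for (let attempt = 0; attempt < maxAttempts; attempt++) {
    try {
      return await call();
    } catch (err) {
      lastError = err;
      // Backoff: baseDelayMs, 2x, 4x, ... between attempts.
      await new Promise((r) => setTimeout(r, baseDelayMs * 2 ** attempt));
    }
  }
  throw lastError;
}
```

For analysis-style reads this is low risk; for anything that mutates state, retrying blindly is not safe without confirmation that the first attempt never ran.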

Known Gotchas

  • Requires valid Kimi Code membership and a completed `kimi login` (or equivalent API key configuration) because the MCP server invokes the Kimi CLI.
  • The MCP tool name is `kimi_analyze`, but the underlying CLI uses flags like `--work-dir`, `--print`, `-p/--prompt` (no `kimi analyze` subcommand).
  • Non-interactive execution depends on subprocess behavior; failures may be surfaced via MCP without clear documented mapping.
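The flag mismatch above can be made concrete with a small sketch of how an MCP tool handler might assemble the CLI invocation. Only the flag names (`--work-dir`, `--print`, `-p`) come from the gotcha; the `buildKimiArgs` helper and its options shape are hypothetical:

```typescript
// Sketch: build the argv a kimi_analyze-style handler might pass to the
// Kimi CLI. There is no `kimi analyze` subcommand, so everything rides
// on flags for a single non-interactive run.
export interface KimiInvocation {
  workDir: string; // repository root to analyze
  prompt: string;  // analysis instruction forwarded to the model
}

export function buildKimiArgs({ workDir, prompt }: KimiInvocation): string[] {
  // --print requests non-interactive output; -p supplies the prompt.
  return ["--work-dir", workDir, "--print", "-p", prompt];
}

// The server would then spawn the CLI and parse stdout as JSON, e.g.:
// const { stdout } = await promisify(execFile)("kimi", buildKimiArgs(opts));
```

If the subprocess exits non-zero or emits non-JSON output, the failure surfaces through MCP without a documented error mapping, so callers should treat the parse step as fallible.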


Scores are editorial opinions as of 2026-03-30.
