deepwiki-mcp

An MCP server that accepts a Deepwiki URL, crawls pages under deepwiki.com, sanitizes the HTML and rewrites it into Markdown, and returns either a single aggregated Markdown document or per-page Markdown/structured output. It also offers an HTTP transport mode served on localhost.

Evaluated Mar 30, 2026
Tags: DevTools, mcp, web-scraping, documentation, markdown, crawler, typescript, cursor-mcp
⚙ Agent Friendliness: 65/100 (Can an agent use this?)
🔒 Security: 41/100 (Is it safe for agents?)
⚡ Reliability: 38/100 (Does it work consistently?)

Score Breakdown

⚙ Agent Friendliness

MCP Quality: 70
Documentation: 55
Error Messages: --
Auth Simplicity: 100
Rate Limits: 20

🔒 Security

TLS Enforcement: 60
Auth Strength: 20
Scope Granularity: 10
Dep. Hygiene: 55
Secret Handling: 70

The README indicates domain safety (requests are restricted to deepwiki.com) and HTML sanitization (headers, footers, navigation, scripts, and ads are stripped), and robots.txt parsing appears in the dependency list. However, no authentication is documented for either the MCP or HTTP interface, and no rate limiting or abuse protections are described. TLS is not explicitly guaranteed: the HTTP transport example uses http://localhost:3000, which is likely fine for local development but is not documented for external deployment.
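The domain restriction described above amounts to an allowlist check on the target URL. The sketch below is illustrative only (the helper name and exact validation logic are assumptions, not code from the repository); it shows the kind of check a deployment should expect, including rejecting lookalike hosts and plain-HTTP targets.

```typescript
// Hypothetical allowlist check (not from the repo): accept only HTTPS URLs
// whose hostname is deepwiki.com or a subdomain of it.
function isAllowedUrl(raw: string): boolean {
  let url: URL;
  try {
    url = new URL(raw);
  } catch {
    return false; // not a parseable URL at all
  }
  if (url.protocol !== "https:") return false; // enforce TLS outbound
  const host = url.hostname.toLowerCase();
  return host === "deepwiki.com" || host.endsWith(".deepwiki.com");
}

console.log(isAllowedUrl("https://deepwiki.com/vercel/next.js")); // true
console.log(isAllowedUrl("https://evil.example.com/deepwiki.com")); // false: path tricks don't pass
console.log(isAllowedUrl("http://deepwiki.com/foo")); // false: plain HTTP rejected
```

Checking `url.hostname` rather than doing a substring match on the raw string is the important design choice: it defeats URLs that merely embed "deepwiki.com" in the path or in another domain's name.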

⚡ Reliability

Uptime/SLA: 0
Version Stability: 45
Breaking Changes: 45
Error Recovery: 60

Best When

You want to give an MCP-capable coding assistant a reliable way to transform Deepwiki documentation into Markdown for downstream reasoning.

Avoid When

You need formal API specs/SDKs, strong authentication, or documented rate-limit behavior; or you require operation over sites/domains other than deepwiki.com.

Use Cases

  • Ingest Deepwiki documentation into an LLM-friendly Markdown form
  • Let an editor/agent (Cursor, MCP clients) fetch and crawl a Deepwiki repo’s pages
  • Generate a single contextual document for Q&A over multiple Deepwiki pages
  • Search/understand a library by crawling relevant pages

Not For

  • Fetching content from domains other than deepwiki.com
  • High-assurance scraping/compliance workflows where you need strict contractual guarantees
  • Use as a general web crawler or arbitrary URL fetcher

Interface

REST API: Yes
GraphQL: No
gRPC: No
MCP Server: Yes
SDK: No
Webhooks: No

Authentication

OAuth: No
Scopes: No

No authentication mechanism is described for either MCP or the provided HTTP transport example.
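Given that no authentication is documented, anyone exposing the HTTP transport beyond localhost would need to add their own gate. The sketch below is a generic mitigation, not a feature of this package; the function name and header convention are assumptions.

```typescript
// Hypothetical shared-token gate (not part of deepwiki-mcp): a check one
// could place in a reverse proxy or middleware in front of the HTTP
// transport, validating a standard "Authorization: Bearer <token>" header.
function isAuthorized(authHeader: string | undefined, expectedToken: string): boolean {
  if (!authHeader || !authHeader.startsWith("Bearer ")) return false;
  // NOTE: production code should use a constant-time comparison
  // (e.g. crypto.timingSafeEqual) rather than ===.
  return authHeader.slice("Bearer ".length) === expectedToken;
}

console.log(isAuthorized("Bearer s3cret", "s3cret")); // true
console.log(isAuthorized(undefined, "s3cret")); // false: header missing
```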

Pricing

Free tier: No
Requires CC: No

The repository appears to be MIT-licensed; no pricing is described for a hosted service (this is presented as an MCP server you install and run yourself).

Agent Metadata

Pagination: none
Idempotent: False
Retry Guidance: Not documented

Known Gotchas

  • README warns the server may not work because DeepWiki blocks scraping; agent workflows may fail intermittently.
  • The tool is domain-restricted to deepwiki.com; requests to other domains should be rejected.
  • Large repositories can hit timeouts; concurrency and timeout are configurable via env vars.
  • Progress events are emitted during crawling; agents should handle streaming/progress without assuming final output arrives immediately.
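Since the concurrency and timeout knobs come from environment variables, a deployment benefits from defensive parsing so a typo does not crash or misconfigure the crawler. The variable names below are hypothetical (the real names are not documented in this evaluation); only the parsing pattern is the point.

```typescript
// Sketch of defensive env-var parsing: fall back to a default whenever the
// value is missing, non-numeric, or non-positive.
function readPositiveInt(
  name: string,
  fallback: number,
  env: Record<string, string | undefined> = process.env,
): number {
  const raw = env[name];
  const n = raw === undefined ? NaN : Number.parseInt(raw, 10);
  return Number.isInteger(n) && n > 0 ? n : fallback;
}

// Hypothetical variable names, for illustration only:
const concurrency = readPositiveInt("DEEPWIKI_CONCURRENCY", 5);
const timeoutMs = readPositiveInt("DEEPWIKI_TIMEOUT_MS", 30_000);
console.log({ concurrency, timeoutMs });
```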


Scores are editorial opinions as of 2026-03-30.
