Bottleneck

Lightweight Node.js task scheduler and rate limiter. Bottleneck controls both concurrency and request rate (RPS/RPM) through a configurable reservoir with refresh intervals. Unlike p-limit (concurrency only), Bottleneck can enforce a rule like "at most 10 requests per second with at most 5 in flight" — crucial for agent systems consuming rate-limited APIs. Supports Redis-backed distributed limiting.
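The combined limit described above maps onto a small constructor configuration (a sketch using Bottleneck v2 option names; tune the numbers to your API's actual limits):

```javascript
const Bottleneck = require("bottleneck");

// "Max 10 RPS with max 5 concurrent": the reservoir is a token
// bucket of 10 job starts, refilled to 10 every second, while
// maxConcurrent caps how many jobs run at once.
const limiter = new Bottleneck({
  maxConcurrent: 5,
  reservoir: 10,
  reservoirRefreshAmount: 10,
  reservoirRefreshInterval: 1000, // per the docs, a multiple of 250 ms
});
```

Jobs are then submitted through `limiter.schedule(fn)`, where `fn` is a function that returns a promise.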

Evaluated Mar 06, 2026 · v2.19+
Homepage ↗ · Repo ↗
Category: Developer Tools · Tags: javascript, typescript, rate-limiting, concurrency, throttling, queue, redis
⚙ Agent Friendliness
67
/ 100
Can an agent use this?
🔒 Security
89
/ 100
Is it safe for agents?
⚡ Reliability
88
/ 100
Does it work consistently?

Score Breakdown

⚙ Agent Friendliness

MCP Quality
--
Documentation
85
Error Messages
82
Auth Simplicity
95
Rate Limits
100

🔒 Security

TLS Enforcement
95
Auth Strength
88
Scope Granularity
85
Dep. Hygiene
88
Secret Handling
90

Local library — no external calls. Redis credentials for distributed mode should be supplied via environment variables. Bottleneck itself helps prevent agent systems from overwhelming rate-limited APIs.

⚡ Reliability

Uptime/SLA
92
Version Stability
88
Breaking Changes
85
Error Recovery
85

Best When

Your agent needs to respect both concurrency AND RPS/RPM limits from external APIs — Bottleneck handles both dimensions in one library.

Avoid When

You only need concurrency control (use p-limit) or durable job queuing (use BullMQ).

Use Cases

  • Enforce API rate limits in agent systems that must respect RPS and RPM constraints from LLM providers
  • Control both concurrency and throughput in agent batch processors calling rate-limited external APIs
  • Implement distributed rate limiting for agent API calls across multiple worker processes using Redis
  • Queue agent tasks with priority levels and rate constraints — high-priority agent requests execute first
  • Throttle agent webhook dispatchers to respect downstream system rate limits

Not For

  • Simple concurrency-only limiting — p-limit is simpler for just controlling parallelism
  • Persistent job queues with retry — use BullMQ for durable queuing with retries
  • Applications that cannot tolerate the eventual consistency of distributed rate limits across processes

Interface

REST API
No
GraphQL
No
gRPC
No
MCP Server
No
SDK
Yes
Webhooks
No

Authentication

Methods: none
OAuth: No · Scopes: No

Local utility library — no authentication. Redis connection for distributed mode uses Redis auth.
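For distributed mode, the Redis client options can be fed from environment variables rather than hard-coded credentials (a configuration sketch; `id`, `datastore`, and `clientOptions` are Bottleneck v2 options, the environment variable names are illustrative):

```javascript
const Bottleneck = require("bottleneck");

const limiter = new Bottleneck({
  id: "shared-api-limiter",  // workers sharing this id share one limit
  datastore: "ioredis",      // or "redis" for the node-redis client
  clientOptions: {
    host: process.env.REDIS_HOST,
    port: Number(process.env.REDIS_PORT || 6379),
    password: process.env.REDIS_PASSWORD, // never hard-code credentials
  },
});
```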

Pricing

Model: open_source
Free tier: Yes
Requires CC: No

Completely free and open source.

Agent Metadata

Pagination
none
Idempotent
Partial
Retry Guidance
Documented

Known Gotchas

  • reservoir vs maxConcurrent: reservoir is a token bucket (refills over time), maxConcurrent is concurrent slots — both can be set simultaneously for combined control
  • Jobs added beyond the highWater mark are dropped according to the configured strategy — with rejectOnDrop (the default) their promise rejects, so agents must handle rejection by implementing backpressure or retrying submission
  • Redis distributed mode uses Lua scripts for atomic operations — the Redis server must support Lua scripting (Redis 2.6+)
  • Priority ordering requires integer priority values — lower numbers = higher priority; missing priority defaults to 5
  • limiter.schedule() wraps a function (not a promise) — pass a function that returns a promise, not the promise itself
  • stop() cancels queued jobs — also call disconnect() to cleanly close the Redis connection; stopping without disconnecting leaks Redis connections

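The reservoir-vs-maxConcurrent distinction above can be illustrated with a plain-JavaScript token bucket (a conceptual model of reservoir semantics, not Bottleneck's internals; all names here are illustrative):

```javascript
// Token bucket: `tokens` is the reservoir; each job start consumes one.
// A refresh restores the bucket to its full amount, like
// reservoirRefreshAmount / reservoirRefreshInterval in Bottleneck.
class TokenBucket {
  constructor(capacity) {
    this.capacity = capacity;
    this.tokens = capacity;
  }
  tryAcquire() {
    if (this.tokens > 0) {
      this.tokens -= 1;
      return true; // job may start now
    }
    return false;  // job must wait for the next refresh
  }
  refresh() {
    this.tokens = this.capacity; // refill on the refresh interval
  }
}

const bucket = new TokenBucket(2);
console.log(bucket.tryAcquire()); // true
console.log(bucket.tryAcquire()); // true
console.log(bucket.tryAcquire()); // false — reservoir empty
bucket.refresh();
console.log(bucket.tryAcquire()); // true again after refill
```

Unlike a concurrency slot, a token is consumed at job start and is not returned when the job finishes; only the refresh restores it. That is why the reservoir limits rate while maxConcurrent limits parallelism.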
Full Evaluation Report

Detailed scoring breakdown, competitive positioning, security analysis, and improvement recommendations for Bottleneck.

$99

Scores are editorial opinions as of 2026-03-06.
