arq

Lightweight async job queue for Python, built on Redis and designed for asyncio. Simpler than Celery: define async functions as jobs, enqueue them from any async code, and run workers that execute them. Supports cron scheduling, job deduplication, priorities, and result storage in Redis, making it a modern asyncio-native alternative to Celery.

Evaluated Mar 06, 2026 · v0.26+
Homepage ↗ · Repo ↗ · Category: Developer Tools · Tags: python, redis, async, task-queue, background-jobs, asyncio, lightweight, cron
⚙ Agent Friendliness: 62 / 100 (Can an agent use this?)
🔒 Security: 83 / 100 (Is it safe for agents?)
⚡ Reliability: 80 / 100 (Does it work consistently?)

Score Breakdown

⚙ Agent Friendliness

MCP Quality: --
Documentation: 80
Error Messages: 78
Auth Simplicity: 90
Rate Limits: 88

🔒 Security

TLS Enforcement: 88
Auth Strength: 82
Scope Granularity: 78
Dep. Hygiene: 88
Secret Handling: 82

The Redis URL contains credentials, so load it from environment variables rather than hard-coding it. TLS Redis connections are supported. Job payloads are stored in Redis, so avoid putting sensitive data in job arguments.

⚡ Reliability

Uptime/SLA: 80
Version Stability: 82
Breaking Changes: 80
Error Recovery: 80

Best When

You're building an async Python (FastAPI/asyncio) application and need background job processing with Redis — arq is simpler and more asyncio-native than Celery.

Avoid When

You need complex task workflows (chains, chords), multiple broker support, or are not using asyncio — use Celery for more complex distributed task needs.

Use Cases

  • Queue async LLM API calls as background jobs from FastAPI agent endpoints using arq's Redis-backed queue
  • Run async Python agent tasks (API calls, DB writes, notifications) without blocking HTTP request handlers
  • Schedule recurring agent maintenance jobs (data sync, cache refresh) with arq's cron scheduling
  • Implement job deduplication in agent pipelines to prevent duplicate processing of webhook events
  • Build Python agent workers that process async tasks with full asyncio compatibility for concurrent I/O

Not For

  • Complex workflow orchestration with task chains, chords, and DAGs — Celery has richer workflow primitives
  • Environments without Redis — arq requires Redis as its only broker option
  • Synchronous Python codebases — arq is designed for asyncio; sync tasks require wrapping in executor calls

Interface

REST API: No
GraphQL: No
gRPC: No
MCP Server: No
SDK: Yes
Webhooks: No

Authentication

Methods: none
OAuth: No · Scopes: No

Redis URL includes auth. No application-level auth in arq itself.

Pricing

Model: open_source
Free tier: Yes
Requires CC: No

MIT license. Maintained by Samuel Colvin (creator of Pydantic).

Agent Metadata

Pagination: none
Idempotent: Partial
Retry Guidance: Documented

Known Gotchas

  • arq workers require a WorkerSettings class with a functions list — all job functions must be registered there; jobs are enqueued by name, so enqueue_job() succeeds even for an unregistered function, and the failure only surfaces when a worker picks the job up
  • Job context (ctx dict) is passed as first argument to all job functions — function signatures must accept ctx; this differs from Celery's self parameter pattern
  • Job results are stored in Redis with a TTL (keep_result, one hour by default) — long-lived pipelines must fetch results before they expire or raise keep_result
  • Jobs run concurrently on a single event loop inside the worker (up to max_jobs at a time) — CPU-bound jobs must use loop.run_in_executor() to avoid blocking the event loop
  • Cron jobs are created with the cron() helper (producing CronJob instances) and must be added to WorkerSettings.cron_jobs — mixing cron and regular jobs requires careful WorkerSettings configuration
  • Redis connection pool is shared in the worker — Redis connection errors affect all concurrent jobs; implement health checks and reconnection logic for production reliability

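The event-loop gotcha above can be handled with a thin executor wrapper. This sketch is pure asyncio (`hash_job` and `hash_payload` are illustrative names) and works the same when the coroutine is registered as an arq job:

```python
import asyncio
import hashlib


def hash_payload(data: bytes) -> str:
    # CPU-bound work that would block the worker's event loop if called directly
    return hashlib.sha256(data).hexdigest()


async def hash_job(ctx, data: bytes) -> str:
    # Offload to the default thread pool so other concurrent jobs keep running
    loop = asyncio.get_running_loop()
    return await loop.run_in_executor(None, hash_payload, data)
```

For heavy CPU work, a `concurrent.futures.ProcessPoolExecutor` can be passed instead of `None` to sidestep the GIL.
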


Scores are editorial opinions as of 2026-03-06.
