arq
Lightweight async job queue for Python, built on Redis and designed for asyncio. Simpler than Celery: define async functions as jobs, enqueue them from any async code, and run workers that execute them. Supports cron scheduling, job deduplication, priorities, and result storage in Redis — a modern, asyncio-native alternative to Celery.
Score Breakdown
⚙ Agent Friendliness
🔒 Security
The Redis URL contains credentials — supply it via environment variables. TLS Redis connections (rediss://) are supported. Job payloads are stored in Redis, so avoid putting sensitive data in job arguments.
⚡ Reliability
Best When
You're building an async Python (FastAPI/asyncio) application and need background job processing with Redis — arq is simpler and more asyncio-native than Celery.
Avoid When
You need complex task workflows (chains, chords), multiple broker support, or are not using asyncio — use Celery for more complex distributed task needs.
Use Cases
- Queue async LLM API calls as background jobs from FastAPI agent endpoints using arq's Redis-backed queue
- Run async Python agent tasks (API calls, DB writes, notifications) without blocking HTTP request handlers
- Schedule recurring agent maintenance jobs (data sync, cache refresh) with arq's cron scheduling
- Implement job deduplication in agent pipelines to prevent duplicate processing of webhook events
- Build Python agent workers that process async tasks with full asyncio compatibility for concurrent I/O
Not For
- Complex workflow orchestration with task chains, chords, and DAGs — Celery has richer workflow primitives
- Environments without Redis — arq requires Redis as its only broker option
- Synchronous Python codebases — arq is designed for asyncio; sync tasks require wrapping in executor calls
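The executor-wrapping caveat looks like this in practice — a stdlib-only sketch (no arq APIs involved) of keeping a blocking call off the worker's event loop:

```python
# Off-load a sync/CPU-bound call from an async job so other concurrent
# jobs on the same event loop keep running.
import asyncio
import hashlib


def hash_data_sync(data: bytes) -> str:
    # Stand-in for blocking library code or CPU-bound work.
    return hashlib.sha256(data).hexdigest()


async def hash_job(ctx, data: bytes) -> str:
    loop = asyncio.get_running_loop()
    # run_in_executor(None, ...) uses the default thread pool; a
    # ProcessPoolExecutor is the better fit for heavy CPU work.
    return await loop.run_in_executor(None, hash_data_sync, data)
```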
Interface
Authentication
Redis URL includes auth. No application-level auth in arq itself.
Pricing
Free and open source under the MIT license. Maintained by Samuel Colvin (creator of Pydantic).
Agent Metadata
Known Gotchas
- ⚠ The arq worker requires a WorkerSettings class with a functions list — every job function must be registered in WorkerSettings.functions; enqueueing an unregistered function name succeeds silently, and the error only surfaces when the worker tries to run the job
- ⚠ The job context (a ctx dict) is passed as the first argument to every job function — signatures must accept ctx; this differs from Celery's bound-task self parameter pattern
- ⚠ Job results are stored in Redis by default and expire after the keep_result TTL — long-lived pipelines must poll results before expiry or raise the TTL
- ⚠ Jobs run concurrently on a single event loop within each worker — CPU-bound jobs must be off-loaded with loop.run_in_executor() to avoid blocking every other job
- ⚠ Cron jobs are created with arq's cron() helper (producing CronJob instances) and must be added to WorkerSettings.cron_jobs — mixing cron and regular jobs requires careful WorkerSettings configuration
- ⚠ Redis connection pool is shared in the worker — Redis connection errors affect all concurrent jobs; implement health checks and reconnection logic for production reliability
Alternatives
Full Evaluation Report
Detailed scoring breakdown, competitive positioning, security analysis, and improvement recommendations for arq.
Scores are editorial opinions as of 2026-03-06.