cachetools
Extensible memoizing collections and decorators for Python. Provides TTLCache, LRUCache, LFUCache, RRCache, and MRUCache implementations with a dict-like interface. The @cached and @cachedmethod decorators add memoization to functions and methods. Thread-safe when an explicit lock parameter is supplied. Pure Python with no dependencies, it is the standard for in-memory caching with configurable eviction policies.
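A minimal sketch of the dict-like interface and @cached decorator described above (the `lookup` function and `calls` list are illustrative, not part of cachetools):

```python
from cachetools import LRUCache, TTLCache, cached

# Caches behave like bounded dicts: the least-recently-used entry
# is evicted once maxsize is exceeded.
lru = LRUCache(maxsize=2)
lru["a"] = 1
lru["b"] = 2
lru["c"] = 3               # evicts "a", the least recently used entry
assert "a" not in lru and lru["c"] == 3

# @cached memoizes a function; with TTLCache, entries also expire
# ttl seconds after insertion.
calls = []

@cached(cache=TTLCache(maxsize=128, ttl=300))
def lookup(key):
    calls.append(key)      # illustrative side effect to show memoization
    return key.upper()

assert lookup("x") == "X"
assert lookup("x") == "X"  # served from cache; the body ran only once
assert calls == ["x"]
```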
Score Breakdown
⚙ Agent Friendliness
🔒 Security
In-memory library with no network exposure. Cached sensitive data lives in process memory and is not encrypted. The zero-dependency design reduces the attack surface.
⚡ Reliability
Best When
You need simple, configurable in-memory caching with multiple eviction policies (LRU, TTL, LFU) in synchronous Python agent code.
Avoid When
You need persistence, distributed caching, or async support — use diskcache, Redis, or aiocache respectively.
Use Cases
- Add TTL-based caching to agent API calls with @cached(cache=TTLCache(maxsize=1000, ttl=300)) to reduce redundant external API calls
- Cache agent LLM responses for identical prompts using LRUCache to reduce API costs and latency for repeated queries
- Implement bounded in-memory caching for agent result objects with automatic LRU eviction when memory limits are reached
- Cache database query results in agent services with TTLCache to reduce DB load for frequently repeated lookups
- Use @cachedmethod for instance-level agent caching, where the cache is stored on the instance for per-object isolation
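The last use case above, per-instance caching with @cachedmethod, can be sketched as follows (the `AgentService` class and its `calls` counter are illustrative, not part of cachetools):

```python
import operator

from cachetools import LRUCache, cachedmethod

class AgentService:
    """Each instance owns its cache, so results are isolated per object."""

    def __init__(self):
        self._cache = LRUCache(maxsize=64)
        self.calls = 0  # illustrative counter showing how often the body runs

    # The accessor passed to @cachedmethod is called on every invocation
    # to locate the instance's cache.
    @cachedmethod(operator.attrgetter("_cache"))
    def resolve(self, name):
        self.calls += 1
        return name.lower()

a, b = AgentService(), AgentService()
assert a.resolve("X") == "x"
assert a.resolve("X") == "x"  # cached on instance a; the body ran once
assert a.calls == 1
assert b.resolve("X") == "x"  # separate instance, separate cache
assert b.calls == 1
```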
Not For
- Persistent caching that survives process restarts — cachetools is in-memory only; use diskcache or Redis for persistence
- Distributed caching across multiple agent processes — cachetools is per-process; use Redis or Memcached for a shared cache
- Async agent code — cachetools is synchronous; use aiocache or async-lru for async function memoization
Interface
Authentication
In-memory library — no authentication.
Pricing
MIT license. Community-maintained.
Agent Metadata
Known Gotchas
- ⚠ cachetools @cached requires a hashable cache key — if your function arguments include lists, dicts, or other unhashable types, supply a custom key function via the decorator's key= parameter that converts them to a hashable form
- ⚠ Thread safety requires explicitly passing a lock: @cached(cache=LRUCache(128), lock=RLock()) — without a lock, concurrent cache updates can corrupt the cache in multi-threaded agent code
- ⚠ TTLCache uses wall-clock time for expiry — TTL is measured from insertion time, not last access time; LRU eviction and TTL expiry both apply, whichever triggers first
- ⚠ Passing a plain dict as the cache (@cached(cache={})) creates an unbounded cache — it grows without limit and can exhaust memory in long-running agent services
- ⚠ @cachedmethod requires a function that returns the cache instance: @cachedmethod(lambda self: self._cache) — the lambda is called on every invocation, not just once
- ⚠ cachetools does not support async functions with @cached — wrapping async functions causes the cache to store coroutine objects, not results; use async-lru or aiocache for async
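The key-function and lock gotchas above can be addressed together; a sketch, where `list_key` and `total` are illustrative helpers, not part of cachetools:

```python
from threading import RLock

from cachetools import LRUCache, cached
from cachetools.keys import hashkey

# Unhashable arguments (lists, dicts) break the default key function,
# so convert them to hashable forms in a custom key= callable.
def list_key(items):
    return hashkey(tuple(items))

# The explicit lock makes concurrent cache updates safe in
# multi-threaded agent code.
@cached(cache=LRUCache(maxsize=128), key=list_key, lock=RLock())
def total(items):
    return sum(items)

assert total([1, 2, 3]) == 6
assert total([1, 2, 3]) == 6  # second call is served from the cache
```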
Alternatives
Scores are editorial opinions as of 2026-03-07.