cachetools

Extensible memoizing collections and decorators for Python, providing LRU, LFU, TTL, and RR cache implementations with a @cached decorator. cachetools features: LRUCache (least-recently-used eviction), LFUCache (least-frequently-used), TTLCache (time-to-live expiration), RRCache (random replacement), MRUCache (most-recently-used), the Cache base class, the @cached(cache) decorator for memoization, @cachedmethod(lambda self: self.cache) for method caching, a key parameter for custom cache key functions, a lock parameter for thread safety, and getsizeof for weighted caching. Pure Python, no dependencies, no subprocess or network use.

Evaluated Mar 06, 2026 (0d ago) v5.x
Homepage ↗ Repo ↗ Developer Tools python cachetools LRU cache TTL memoize in-memory
⚙ Agent Friendliness
69
/ 100
Can an agent use this?
🔒 Security
92
/ 100
Is it safe for agents?
⚡ Reliability
92
/ 100
Does it work consistently?

Score Breakdown

⚙ Agent Friendliness

MCP Quality
--
Documentation
90
Error Messages
85
Auth Simplicity
99
Rate Limits
99

🔒 Security

TLS Enforcement
92
Auth Strength
92
Scope Granularity
92
Dep. Hygiene
98
Secret Handling
88

Pure in-memory cache: no network calls, no external dependencies. Cached values live in process memory, so avoid caching sensitive data without a TTL and set an appropriate maxsize to prevent memory exhaustion. Cache poisoning: untrusted input used as cache keys can cause unexpected eviction patterns; validate inputs before using them as keys.

⚡ Reliability

Uptime/SLA
92
Version Stability
92
Breaking Changes
92
Error Recovery
90

Best When

In-process memoization with LRU/TTL eviction policies — cachetools is lightweight, dependency-free, pure Python, and ideal for function-level caching within a single process.

Avoid When

Persistent cache (use diskcache), distributed cache (use Redis), async functions (use async-lru or aiocache), or when functools.lru_cache is sufficient.
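The last point deserves a concrete contrast: for a fixed-size LRU on a plain function with hashable arguments, the standard library already suffices. A minimal sketch (square is an illustrative placeholder, not part of either library):

```python
from functools import lru_cache

# Plain fixed-size LRU memoization: no cachetools needed.
@lru_cache(maxsize=128)
def square(x: int) -> int:
    return x * x

square(3)
square(3)  # second call is a cache hit
print(square.cache_info())  # exposes hits, misses, maxsize, currsize
```

Reach for cachetools when you need TTL expiry, a shared cache object, weighted eviction, or a custom key function; lru_cache offers none of these.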

Use Cases

  • Agent LRU memoize — from cachetools import LRUCache, cached; cache = LRUCache(maxsize=128); @cached(cache); def fetch_user(user_id: int): return db.query(user_id) — LRU memoize; agent caches database results; 128 most recent user_ids cached; oldest evicted when full; @cached reuses cache across calls
  • Agent TTL cache — from cachetools import TTLCache, cached; cache = TTLCache(maxsize=100, ttl=300); @cached(cache); def get_config(key: str): return api.get(key) — 5-min TTL; agent caches API responses that expire; TTL in seconds; entries evict after ttl seconds regardless of access
  • Agent thread-safe cache — from cachetools import LRUCache, cached; from threading import Lock; cache = LRUCache(maxsize=50); lock = Lock(); @cached(cache, lock=lock); def expensive(x): return compute(x) — thread-safe; agent uses cache across threads; lock= parameter makes get/set atomic
  • Agent method cache — from cachetools import LRUCache, cachedmethod; class DataService: def __init__(self): self.cache = LRUCache(maxsize=32); @cachedmethod(lambda self: self.cache); def fetch(self, key: str): return db.get(key) — instance method; agent caches method results per instance; lambda self: self.cache provides instance-specific cache
  • Agent weighted cache — from cachetools import LRUCache, cached; cache = LRUCache(maxsize=1024*1024, getsizeof=lambda v: len(v)); @cached(cache); def load_content(url): return fetch_content(url) — size-limited; agent limits cache by bytes not count; getsizeof returns weight of each value; maxsize is total weight limit
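The inline snippets above compress decorator syntax onto one line; expanded into runnable form, the LRU and TTL patterns look roughly like this (a sketch: slow_lookup stands in for a real db.query/api.get backend, and the manual timer is only there to make expiry observable in-process):

```python
from cachetools import LRUCache, TTLCache, cached

calls = {"n": 0}

def slow_lookup(key: int) -> int:
    # Stand-in for an expensive backend call (db.query / api.get).
    calls["n"] += 1
    return key * 2

lru = LRUCache(maxsize=128)

@cached(lru)
def fetch(key: int) -> int:
    return slow_lookup(key)

fetch(21)
fetch(21)            # served from cache; backend called only once
print(calls["n"])    # 1

# TTL pattern: TTLCache accepts a timer callable, so a controllable
# clock makes expiry visible without sleeping.
clock = [0.0]
ttl = TTLCache(maxsize=100, ttl=300, timer=lambda: clock[0])
ttl["config"] = "v1"
clock[0] = 301.0          # advance past the 300 s TTL
print("config" in ttl)    # False: entry has expired
```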

Not For

  • Persistent caching — cachetools is in-memory only; cache lost on process restart; use diskcache for persistence
  • Distributed caching — cachetools is single-process; for shared cache use Redis via cachelib
  • Async functions — @cached does not work with async def; use async-lru or aiocache for async memoization

Interface

REST API
No
GraphQL
No
gRPC
No
MCP Server
No
SDK
Yes
Webhooks
No

Authentication

Methods: none
OAuth: No Scopes: No

No auth — in-memory cache library.

Pricing

Model: open_source
Free tier: Yes
Requires CC: No

cachetools is MIT licensed. Free for all use.

Agent Metadata

Pagination
none
Idempotent
Full
Retry Guidance
Not documented

Known Gotchas

  • @cached does not work with async functions — decorating an async def caches the coroutine object, not its result, and awaiting a cached coroutine a second time raises RuntimeError; agent code for async memoization: use async-lru: from async_lru import alru_cache; @alru_cache(maxsize=128); or implement manually with asyncio.Lock; cachetools has no async support
  • Cache key must be hashable — @cached uses the function arguments as the cache key; unhashable args (list, dict, set) raise TypeError; agent code with list args: freeze to a tuple in a custom key function, e.g. @cached(cache, key=lambda items: cachetools.keys.hashkey(tuple(items)))
  • Thread safety requires explicit lock — LRUCache is not thread-safe by default; unsynchronized concurrent reads/writes can corrupt internal state; agent code in threaded context: pass lock=Lock() to @cached; note the lock guards only cache lookup and store, not the wrapped function call, so concurrent misses on the same key may compute the value more than once
  • TTLCache does not eagerly expire — TTLCache(maxsize=100, ttl=300) does not run a background thread; entries remain in memory until accessed or evicted by maxsize; agent code checking cache.currsize: expired entries still counted until accessed; cache.expire() forces expiry of stale entries
  • @cachedmethod needs unique cache per instance — if all instances share one class-level cache: @cachedmethod(lambda self: MyClass.cache), keys collide between instances; agent code: use instance-level cache: self.cache = LRUCache(maxsize=32) in __init__; or include self in key function
  • Cache.clear() does not reset statistics — the Cache classes do not track hits/misses themselves; pass info=True to @cached (cachetools 5.3+) and read fn.cache_info() for functools-style statistics; cache.clear() on the cache object empties entries but leaves the wrapper's counters intact; use fn.cache_clear() to clear entries and reset statistics together, or create a new cache instance
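Two of these gotchas (unhashable keys, opt-in statistics) can be sketched together. This assumes cachetools 5.3+ for info=True; total is an illustrative placeholder:

```python
from cachetools import LRUCache, cached
from cachetools.keys import hashkey

cache = LRUCache(maxsize=32)

# Lists are unhashable, so freeze the argument inside a custom key function.
# info=True (cachetools 5.3+) adds functools-style statistics to the wrapper.
@cached(cache, key=lambda items: hashkey(tuple(items)), info=True)
def total(items):
    return sum(items)

total([1, 2, 3])
total([1, 2, 3])            # same frozen key -> cache hit
print(total.cache_info())   # e.g. hits=1, misses=1
```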



Scores are editorial opinions as of 2026-03-06.
