cachetools

Extensible memoizing collections and decorators for Python. Provides TTLCache, LRUCache, LFUCache, RRCache, and MRUCache implementations with a dict-like interface. The @cached and @cachedmethod decorators add memoization to functions and methods. Thread safety is opt-in via an explicit lock parameter. Pure Python with no dependencies, and the de facto standard for in-memory caching with configurable eviction policies.

Evaluated Mar 07, 2026 · v5.3+
Homepage · Repo · Category: Developer Tools · Tags: cache, python, lru, ttl, in-memory, memoization, decorator, eviction
⚙ Agent Friendliness
66
/ 100
Can an agent use this?
🔒 Security
82
/ 100
Is it safe for agents?
⚡ Reliability
87
/ 100
Does it work consistently?

Score Breakdown

⚙ Agent Friendliness

MCP Quality
--
Documentation
85
Error Messages
82
Auth Simplicity
95
Rate Limits
95

🔒 Security

TLS Enforcement
85
Auth Strength
80
Scope Granularity
78
Dep. Hygiene
88
Secret Handling
80

In-memory library with no network exposure. Cached sensitive data lives unencrypted in process memory. Zero dependencies reduce the attack surface.

⚡ Reliability

Uptime/SLA
88
Version Stability
88
Breaking Changes
85
Error Recovery
86

Best When

You need simple, configurable in-memory caching with multiple eviction policies (LRU, TTL, LFU) in synchronous Python agent code.

Avoid When

You need persistence, distributed caching, or async support — use diskcache, Redis, or aiocache respectively.

Use Cases

  • Add TTL-based caching to agent API calls with @cached(cache=TTLCache(maxsize=1000, ttl=300)) to reduce redundant external API calls
  • Cache agent LLM responses for identical prompts using LRUCache to reduce API costs and latency for repeated queries
  • Implement bounded in-memory caching for agent result objects with automatic LRU eviction when memory limits are reached
  • Cache database query results in agent services with TTLCache to reduce DB load for frequently-repeated lookups
  • Use @cachedmethod for instance-level agent caching where the cache is stored on the instance for per-object cache isolation
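
The first pattern above can be sketched as follows; fetch_price, its return value, and the calls counter are hypothetical stand-ins for a real external API call:

```python
from cachetools import TTLCache, cached

calls = {"n": 0}

# Hypothetical stand-in for an expensive external API call; entries
# expire 300 seconds after insertion, and at most 1000 are kept.
@cached(cache=TTLCache(maxsize=1000, ttl=300))
def fetch_price(symbol: str) -> float:
    calls["n"] += 1           # counts only real (uncached) invocations
    return len(symbol) * 1.5  # placeholder for the real API response

fetch_price("ACME")
fetch_price("ACME")  # identical arguments: served from the cache
```

Because the cache key is built from the call arguments, only calls with identical arguments share an entry; after the TTL elapses, the next call recomputes and reinserts.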

Not For

  • Persistent caching that survives process restarts — cachetools is in-memory only; use diskcache or Redis for persistence
  • Distributed caching across multiple agent processes — cachetools is per-process; use Redis or Memcached for shared cache
  • Async agent code — cachetools is synchronous; use aiocache or async-lru for async function memoization

Interface

REST API
No
GraphQL
No
gRPC
No
MCP Server
No
SDK
Yes
Webhooks
No

Authentication

Methods: none
OAuth: No Scopes: No

In-memory library — no authentication.

Pricing

Model: open_source
Free tier: Yes
Requires CC: No

MIT license. Community-maintained.

Agent Metadata

Pagination
none
Idempotent
Full
Retry Guidance
Not documented

Known Gotchas

  • cachetools @cached requires a hashable cache key — if your function arguments include lists, dicts, or other unhashable types, pass a custom key function (the default is cachetools.keys.hashkey) that converts them to hashable values
  • Thread safety requires explicitly passing a lock: @cached(cache=LRUCache(128), lock=RLock()) — without a lock, concurrent cache updates can corrupt the cache in multi-threaded agent code
  • TTLCache measures expiry with a monotonic clock by default (configurable via the timer argument) — TTL is measured from insertion time, not last access time; LRU eviction AND TTL both apply, whichever triggers first
  • An effectively unbounded maxsize (e.g. maxsize=float('inf')) creates a cache that grows without limit and can exhaust memory in long-running agent services — always set a realistic bound
  • @cachedmethod requires a function that returns the cache instance: @cachedmethod(lambda self: self._cache) — the lambda is called on every invocation, not just once
  • cachetools does not support async functions with @cached — wrapping async functions causes the cache to store coroutine objects, not results; use async_lru or aiocache for async
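
A minimal sketch combining two of the workarounds above: an explicit lock for thread safety, and a custom key function for an unhashable list argument (list_key, total, and the misses counter are illustrative names, not cachetools APIs):

```python
import threading

from cachetools import LRUCache, cached
from cachetools.keys import hashkey

misses = {"n": 0}

def list_key(items):
    # Lists are unhashable, so build the cache key from a tuple copy.
    return hashkey(tuple(items))

# The explicit lock makes concurrent access from multiple threads safe.
@cached(cache=LRUCache(maxsize=128), key=list_key, lock=threading.RLock())
def total(items):
    misses["n"] += 1  # counts cache misses only
    return sum(items)

total([1, 2, 3])
total([1, 2, 3])  # equal contents produce the same key: cache hit
```

The lock only guards the cache's internal state; the decorated function body itself may still run concurrently for different keys.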


Scores are editorial opinions as of 2026-03-07.
