diskcache
Disk and file backed cache library for Python — SQLite + filesystem storage providing a persistent cache that survives process restarts. diskcache features: Cache (primary get/set/delete with SQLite+filesystem), Deque (persistent double-ended queue), Index (persistent ordered dict), FanoutCache (sharded for concurrent performance), DjangoCache integration, @memoize decorator, Cache.transact() context manager for atomic operations, size-limited caching (disk quota), eviction policies (least-recently-stored/LRU/LFU), statistics, and expire/clear/cull. Often significantly faster than Redis for single-machine use cases.
Score Breakdown
⚙ Agent Friendliness
🔒 Security
Local filesystem cache. Pickle deserialization risk — only access cache from trusted code. Cache directory ACLs control multi-process access. Do not cache secrets without encryption. Value files in cache directory are raw bytes — protect directory permissions. SQLite WAL files persist until checkpoint — account for disk space.
⚡ Reliability
Best When
Single-machine persistent caching that survives process restarts — diskcache outperforms Redis for single-machine workloads while providing persistence that in-memory caches lack.
Avoid When
Multi-machine distributed caching (use Redis), sub-millisecond access (use cachetools), or when cache must be network-accessible.
Use Cases
- • Agent persistent memoization — from diskcache import Cache; cache = Cache('/tmp/agent_cache'); @cache.memoize(expire=3600); def expensive_computation(key): return compute(key) — persistent memoize; agent computation results persist across restarts; @memoize with expire= sets TTL; cache survives process crash
- • Agent result caching — from diskcache import Cache; with Cache('/tmp/cache') as cache: cache.set('result', data, expire=86400, tag='daily'); result = cache.get('result') — persistent cache; agent stores large computation results on disk; expire= in seconds; tag= for grouped invalidation; cache.evict('daily') removes all tagged entries
- • Agent concurrent cache — from diskcache import FanoutCache; cache = FanoutCache('/tmp/fanout', shards=8); cache.set('key', value) — sharded cache; agent with multiple threads uses FanoutCache for better concurrent performance; 8 SQLite shards reduce lock contention; same API as Cache
- • Agent size-limited cache — cache = Cache('/tmp/cache', size_limit=2**30) — 1GB limit; cache stores up to 1GB of data then evicts oldest entries; agent caches large files/models with automatic disk space management; disk_min_file_size controls inline vs filesystem storage
- • Agent persistent task deque — from diskcache import Deque; tasks = Deque(directory='/tmp/tasks'); tasks.appendleft({'job': 'process', 'data': url}); task = tasks.pop() — persistent deque; agent uses persistent FIFO queue that survives restarts; Deque is atomic and thread-safe; supports appendleft/append/pop/popleft
Not For
- • Distributed caching — diskcache is single-machine; for multi-node use Redis via cachelib
- • Sub-millisecond latency — disk access is slower than memory; for in-process speed use cachetools LRUCache
- • Network-accessible cache — diskcache is local filesystem only; for shared network cache use Redis/Memcached
Interface
Authentication
No auth — local filesystem cache. Filesystem permissions control access.
Pricing
diskcache is Apache 2.0 licensed. Free for all use.
Agent Metadata
Known Gotchas
- ⚠ Cache directory is created and must be consistent — Cache('/path') creates SQLite + files in /path; changing the path creates a new empty cache; agent code upgrading must migrate data or use same path; cache directory contains multiple files (cache.db, cache.db-shm, cache.db-wal, value files) — do not manually delete individual files
- ⚠ cache.get() returns None for miss, cache[key] raises KeyError — two access patterns for cache-miss handling; agent code: use cache.get(key) for optional lookup returning None; use cache[key] in try/except KeyError when the key is expected to exist; the @memoize decorator handles misses automatically
- ⚠ Pickle is the default serializer — diskcache pickles values by default; agent code storing user-controlled objects risks arbitrary code execution when the cache is read by another process; use Cache(directory, disk=diskcache.JSONDisk) to store values as compressed JSON instead, or validate cached objects on retrieval
- ⚠ Size limit eviction is lazy — cache = Cache('/path', size_limit=int(1e9)) enforces ~1GB; culling runs during set() when over the limit, removing at most cull_limit entries per operation, so the cache can temporarily exceed the limit; agent code needing precise size control: call cache.cull() explicitly after writes, or check the current size with cache.volume()
- ⚠ Tag-based eviction requires tag at set time — cache.set(key, value, tag='group1'); cache.evict('group1') removes all tagged entries atomically; tag must be specified at set time not evict time; agent code invalidating groups of related cache entries must use consistent tags; cache.evict() with unknown tag is no-op
- ⚠ Context manager closes cache on exit — with Cache('/path') as cache: ... closes SQLite connection on exit; agent code using cache across function boundaries should not use context manager — create Cache() at module level and close() explicitly; or use single-function cache via @cache.memoize() decorator which handles lifecycle
Alternatives
Full Evaluation Report
Detailed scoring breakdown, competitive positioning, security analysis, and improvement recommendations for diskcache.
Scores are editorial opinions as of 2026-03-06.