lz4
Python bindings for the LZ4 compression algorithm, providing extremely fast compression and decompression at moderate ratios. lz4 features: lz4.frame.compress()/decompress() for the standard LZ4 frame format, lz4.block.compress()/decompress() for raw block compression, lz4.frame.open() for streaming file access, LZ4FrameCompressor/LZ4FrameDecompressor for incremental processing, a compression_level parameter (0 = fast mode, 3-16 = high-compression HC mode), block_size options, content_checksum for integrity, block_linked for better ratios, and store_size for size-prepended blocks. LZ4 is typically several times faster than zlib/gzip, trading away some compression ratio.
Score Breakdown
⚙ Agent Friendliness
🔒 Security
Compression library with no network calls. LZ4 does not encrypt data; compress first, then encrypt separately for secure storage. content_checksum=True provides integrity checking but not authentication; use an HMAC for authenticated integrity. Treat untrusted input with care: a small malicious LZ4 payload can expand into a decompression bomb (very large output), so cap or validate the uncompressed size before or while decompressing.
⚡ Reliability
Best When
Fast compression for agent caching, IPC, and binary data transfer — LZ4 provides 2-5x speed advantage over gzip with acceptable compression ratios, ideal for latency-sensitive agent data pipelines.
Avoid When
Maximum compression (use zstd), web responses (use gzip/brotli), or standard archive formats (use zipfile).
Use Cases
- • Agent cache compression — import lz4.frame; data = json.dumps(large_response).encode(); compressed = lz4.frame.compress(data); cache.set(key, compressed); restored = lz4.frame.decompress(cache.get(key)).decode() — fast cache compression; agent Redis cache stores compressed data; LZ4 decompresses faster than gzip; reduces Redis memory with minimal CPU overhead
- • Agent binary data transfer — payload = lz4.frame.compress(serialized_data, compression_level=0); send_to_worker(payload); on the receiving side: data = lz4.frame.decompress(payload) — fast compression for IPC; agent sends large data between processes with minimal compression overhead; level=0 maximizes speed over ratio
- • Agent log compression — with lz4.frame.open('agent.log.lz4', 'wb') as f: f.write(log_data.encode()) — streaming compressed log; agent writes compressed log files; lz4.frame.open() provides file-like interface; logs readable with: lz4.frame.open('agent.log.lz4', 'rb').read().decode()
- • Agent high-ratio compression — compressed = lz4.frame.compress(data, compression_level=16) — HC (High Compression) mode; agent archiving large datasets with better ratio; HC mode is slower to compress but faster to decompress than gzip; decompression speed always same regardless of compression level
- • Agent streaming decompression — decompressor = lz4.frame.LZ4FrameDecompressor(); for chunk in receive_stream(): output = decompressor.decompress(chunk); process(output) — incremental decompression; agent processes compressed streaming data without buffering entire payload; LZ4FrameDecompressor maintains internal state between chunks
Not For
- • Maximum compression ratio — LZ4 prioritizes speed over ratio; for best compression use zstd or brotli
- • Text compression for web — LZ4 frame format not supported in browsers; for web use gzip or brotli
- • Archive compatibility — LZ4 is a binary format; for tar/zip archives use tarfile or zipfile modules
Interface
Authentication
No auth — local compression library.
Pricing
python-lz4 is BSD licensed. Free for all use.
Agent Metadata
Known Gotchas
- ⚠ lz4.block vs lz4.frame — block mode is lower-level: unless the size was stored at compression time, decompression must be told the original size via block.decompress(data, uncompressed_size=N); frame.compress() always embeds size metadata, giving a simpler API; agent code using block mode with store_size=False must persist the original size separately; frame mode is recommended for most use cases
- ⚠ store_size=True default in block mode — lz4.block.compress(data, store_size=True) (default) prepends 4-byte size to compressed output; lz4.block.decompress() reads this automatically; lz4.block.compress(data, store_size=False) without size: must pass uncompressed_size=N to decompress; mixing store_size=True and False causes RuntimeError
- ⚠ LZ4 frame not same as LZ4 block — lz4.frame and lz4.block produce different binary formats; data compressed with frame.compress() must be decompressed with frame.decompress(); mixing frame and block functions raises LZ4FrameError; agent code must consistently use same mode
- ⚠ content_checksum=False by default — LZ4 frame has optional content checksum for integrity; disabled by default for speed; agent storing compressed data on unreliable storage should enable: lz4.frame.compress(data, content_checksum=True); decompress verifies checksum and raises LZ4FrameError on corruption
- ⚠ return_bytearray for reduced allocations — lz4.frame.compress(data, return_bytearray=True) returns bytearray instead of bytes; agent processing many compressed chunks allocates less memory with bytearray; can be passed directly to most I/O functions; minor optimization for high-throughput scenarios
- ⚠ Compression level 0 vs HC mode — lz4.frame.compress(data) uses the default level 0 (fast mode); levels below lz4.frame.COMPRESSIONLEVEL_MINHC (3) all select fast mode, while levels 3-16 (up to COMPRESSIONLEVEL_MAX) select HC mode with increasing effort; LZ4 has only two algorithms, fast and HC, not a smooth gradient of levels
Alternatives
Scores are editorial opinions as of 2026-03-06.