archiver

Streaming archive creation library for Node.js. Creates ZIP, TAR, and TAR.GZ archives from files, directories, buffers, and streams. Uses Node.js streams for memory-efficient archive creation — files are streamed into the archive and piped to output without loading everything into memory. Commonly used for creating downloadable exports, bundling deployment artifacts, and compressing generated reports in Node.js applications.

Evaluated Mar 06, 2026 · v7.x
Homepage ↗ Repo ↗
Category: Developer Tools
Tags: zip, tar, archive, compression, node, streams, file-processing, typescript
⚙ Agent Friendliness: 67/100 (Can an agent use this?)
🔒 Security: 96/100 (Is it safe for agents?)
⚡ Reliability: 84/100 (Does it work consistently?)

Score Breakdown

⚙ Agent Friendliness

MCP Quality: --
Documentation: 85
Error Messages: 80
Auth Simplicity: 100
Rate Limits: 100

🔒 Security

TLS Enforcement: 100
Auth Strength: 100
Scope Granularity: 100
Dep. Hygiene: 85
Secret Handling: 90

Local file library with no network surface. The main risk is zip slip: when archive entry names come from user input, sanitize the paths before adding them to the archive.

⚡ Reliability

Uptime/SLA: 88
Version Stability: 85
Breaking Changes: 85
Error Recovery: 80

Best When

You need to create ZIP or TAR archives programmatically in Node.js, especially when streaming to HTTP responses or writing large archives without buffering them in memory.

Avoid When

You need to extract/decompress archives (archiver is write-only), or need cross-runtime support (browser, Deno).

Use Cases

  • Create ZIP archives of user-generated files or report exports for download in web applications
  • Bundle deployment artifacts (built files, configs) into TAR.GZ archives in CI/CD pipeline scripts
  • Compress agent output files (generated code, reports, data exports) into archives for storage or transfer
  • Stream archive creation to HTTP response for on-the-fly download without creating temp files on disk
  • Create TAR backups of application data directories programmatically from Node.js agents

Not For

  • Decompression/extraction — archiver creates archives only; use node-tar or unzipper for extraction
  • Very large archives requiring maximum compression — archiver uses standard zlib; use specialized tools for extreme compression
  • Non-Node.js environments — archiver is built on the Node.js streams API and does not run in browsers or Deno

Interface

REST API: No
GraphQL: No
gRPC: No
MCP Server: No
SDK: Yes
Webhooks: No

Authentication

Methods: none
OAuth: No
Scopes: No

No authentication — local file processing library.

Pricing

Model: open_source
Free tier: Yes
Requires CC: No

Fully free, MIT licensed.

Agent Metadata

Pagination: none
Idempotent: Full
Retry Guidance: Not documented

Known Gotchas

  • Must listen for the 'error' event on the archive stream — an unhandled 'error' event crashes the process with an uncaught exception; always attach archive.on('error', handler)
  • archive.finalize() must be called after adding all files — forgetting to call finalize() leaves the archive stream open and the output file incomplete
  • archive.pipe(output) must be set up BEFORE adding files — piping after adding files may miss data that was already streamed
  • directory() method archives entire directories but paths in the archive depend on the second argument — misunderstanding the destPath parameter causes incorrect archive structure
  • File streams must be managed carefully — archiver reads files lazily as streams; if a file is deleted or moved between archive.append() and finalize(), the archive emits an 'error' event
  • ZIP64 required for archives over 4 GB — archiver's ZIP output supports ZIP64 (forceZip64 option); the classic ZIP format caps individual entries and the archive itself at 4 GB

Scores are editorial opinions as of 2026-03-06.
