memory-profiler
Memory profiling for Python — tracks memory usage line by line and over time. Key features: the @profile decorator for line-by-line memory usage, mprof run/plot for time-based memory tracking, memory_usage() for programmatic profiling, --include-children for subprocess memory, matplotlib integration for memory graphs, an interval parameter for sampling frequency, backend selection (psutil, tracemalloc, posix), timestamps for tracking memory over time, and streaming output for long-running processes. Helps identify memory leaks, peak usage, and the code paths causing growth.
Score Breakdown
⚙ Agent Friendliness
🔒 Security
Profiling tool that reads process memory — no network calls. @profile decorator accesses all variables in profiled scope including secrets — never profile functions handling credentials in production. mprof .dat files contain memory usage data, not application data — safe to store.
⚡ Reliability
Best When
Diagnosing memory leaks and peak memory usage in Python code — memory_profiler's line-by-line decorator and mprof time series are the most accessible tools for memory optimization.
Avoid When
CPU profiling (use pyinstrument/cProfile), production monitoring (use psutil), or object-level analysis (use objgraph).
Use Cases
- • Agent line-by-line memory — from memory_profiler import profile; @profile; def process_batch(items): data = [load_item(i) for i in items]; results = transform(data); return aggregate(results) — run: python -m memory_profiler script.py; agent identifies which line allocates most memory; line annotations show increment and total memory
- • Agent memory over time — mprof run python agent.py; mprof plot — command line time series; agent long-running process shows memory growth curve; mprof plot generates matplotlib graph; identify memory leaks by upward trend without plateau
- • Agent programmatic measurement — from memory_profiler import memory_usage; mem = memory_usage((process_function, (args,), {}), interval=0.1); print(f'Peak: {max(mem):.1f} MB, Min: {min(mem):.1f} MB') — measure function memory programmatically; agent test suite verifies function stays under memory budget; memory_usage returns list of memory samples
- • Agent before/after comparison — import tracemalloc; tracemalloc.start(); do_work(); snapshot = tracemalloc.take_snapshot(); stats = snapshot.statistics('lineno')[:10] — stdlib tracemalloc for allocation tracking; agent identifies specific objects causing memory growth; top 10 allocation sites by size
- • Agent memory budget test — from memory_profiler import memory_usage; peak = max(memory_usage((my_function, (input_data,), {}), interval=0.05)); assert peak < 500, f'Memory exceeded 500MB: {peak:.1f}MB' — automated memory budget assertion; agent integration test enforces memory SLO; fails build if function uses too much memory
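The before/after comparison above can be sketched with stdlib tracemalloc alone (do_work and its workload are placeholder names); Snapshot.compare_to attributes growth to the source line that allocated it:

```python
import tracemalloc

def do_work():
    # Placeholder workload: allocate ~1 MB so growth is measurable.
    return [bytes(1_000) for _ in range(1_000)]

tracemalloc.start()
baseline = tracemalloc.take_snapshot()   # snapshot before the work
data = do_work()
after = tracemalloc.take_snapshot()      # snapshot after the work

# Top 5 source lines by allocation growth since the baseline
for stat in after.compare_to(baseline, "lineno")[:5]:
    print(stat)
```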
Not For
- • CPU profiling — memory_profiler tracks only memory, not CPU time; for CPU profiling use cProfile or pyinstrument
- • Production runtime monitoring — memory_profiler adds significant overhead (10-100x slower); use psutil for lightweight production monitoring
- • Object-level memory analysis — memory_profiler tracks RSS, not per-object sizes; for object-level analysis use objgraph or pympler
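For lightweight runtime checks without memory_profiler's overhead, a sketch using only the stdlib resource module (Unix only; psutil offers a cross-platform equivalent via Process().memory_info().rss; the workload here is a placeholder):

```python
import resource
import sys

def peak_rss_mb():
    # ru_maxrss is peak resident set size: kilobytes on Linux, bytes on macOS
    peak = resource.getrusage(resource.RUSAGE_SELF).ru_maxrss
    return peak / 1024 if sys.platform != "darwin" else peak / (1024 * 1024)

data = [bytes(1_000) for _ in range(10_000)]  # placeholder workload (~10 MB)
print(f"peak RSS: {peak_rss_mb():.1f} MB")
```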
Interface
Authentication
No auth — local profiling tool.
Pricing
memory_profiler is BSD licensed. Free for all use.
Agent Metadata
Known Gotchas
- ⚠ @profile only works with -m memory_profiler — running decorated script normally raises NameError: name 'profile' is not defined; correct invocation: python -m memory_profiler script.py; agent dev workflow must remember the -m flag; or add: try: profile; except NameError: profile = lambda f: f at top of file for optional profiling
- ⚠ Measures RSS not Python object size — memory_profiler measures resident set size from OS; RSS includes Python internals, shared libraries, and OS page rounding; agent code seeing 50MB RSS doesn't mean 50MB of Python objects; use tracemalloc for Python object allocation tracking
- ⚠ Multiprocessing memory not tracked — @profile on function that spawns subprocesses shows only parent process RSS; agent code using multiprocessing Pool or subprocess doesn't show child memory; use --include-children with mprof run for subprocess memory tracking
- ⚠ Line-level granularity misattributes lazy evaluation — generators and lazy iterators don't allocate until iterated; @profile shows memory at assignment line not at iteration; agent code using generators shows 0MB at gen = (x for x in big_list) and all memory at list(gen); interpret line numbers accordingly
- ⚠ mprof dat files accumulate — mprof run creates .dat file in current directory; multiple mprof runs create multiple files; mprof plot uses latest file; agent CI pipeline must clean .dat files between runs or use --output flag; .dat files shouldn't be committed to git
- ⚠ psutil backend dependency — memory_profiler uses the psutil backend by default, but some installs don't pull psutil in automatically; if psutil is missing, the profiler falls back to slower backends or raises ImportError; agent requirements.txt should pin psutil explicitly alongside memory-profiler
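The lazy-evaluation gotcha above can be demonstrated with stdlib tracemalloc (generator and sizes are illustrative): almost nothing is allocated where the generator is created; the memory appears where it is consumed:

```python
import tracemalloc

tracemalloc.start()
gen = (x * 2 for x in range(100_000))     # a line profiler shows ~0 MB here
size_after_gen, _ = tracemalloc.get_traced_memory()

materialized = list(gen)                  # ...and all the memory here
size_after_list, _ = tracemalloc.get_traced_memory()

print(f"after generator: {size_after_gen} B, after list(): {size_after_list} B")
```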
Alternatives
Scores are editorial opinions as of 2026-03-06.