pymoo

Multi-objective optimization framework for Python — provides state-of-the-art evolutionary algorithms (NSGA-II, NSGA-III, MOEA/D, R-NSGA-II) together with visualization and decision-making tools. pymoo features: a Problem class for problem definition, minimize() as the entry point, NSGA2/NSGA3 algorithm classes, a Result object exposing res.X (solutions), res.F (objective values), and res.G (constraint violations), Pareto-front visualization, callback functions for tracking convergence, mixed-variable optimization (real, integer, binary, choice), gradient-based support, elementwise vs. vectorized evaluation, and hypervolume/IGD performance indicators.

Evaluated Mar 06, 2026 (0d ago) v0.6.x
Category: Developer Tools · Tags: python, pymoo, multi-objective optimization, nsga2, nsga3, pareto, evolutionary
⚙ Agent Friendliness
64
/ 100
Can an agent use this?
🔒 Security
89
/ 100
Is it safe for agents?
⚡ Reliability
77
/ 100
Does it work consistently?

Score Breakdown

⚙ Agent Friendliness

MCP Quality
--
Documentation
82
Error Messages
78
Auth Simplicity
98
Rate Limits
98

🔒 Security

TLS Enforcement
90
Auth Strength
90
Scope Granularity
88
Dep. Hygiene
85
Secret Handling
90

Local optimization library — no network calls, no secrets handling. Optimization results (Pareto front) are plain numpy arrays. No security concerns beyond standard Python dependency management.

⚡ Reliability

Uptime/SLA
82
Version Stability
75
Breaking Changes
72
Error Recovery
78

Best When

Optimizing two or more genuinely competing objectives where no single solution is best — pymoo's NSGA-II provides the Pareto front showing all trade-off solutions for agent or human decision-making.

Avoid When

You have a single objective (use scipy/optuna), need gradient-based optimization, or require distributed/parallel HPO at production scale.

Use Cases

  • Agent multi-objective optimization — from pymoo.algorithms.moo.nsga2 import NSGA2; from pymoo.core.problem import Problem; from pymoo.optimize import minimize; class AgentProblem(Problem): def __init__(self): super().__init__(n_var=5, n_obj=2, xl=0, xu=1); def _evaluate(self, x, out, *args, **kwargs): out['F'] = np.column_stack([f1(x), f2(x)]); res = minimize(AgentProblem(), NSGA2(pop_size=100), ('n_gen', 200)) — optimize two competing objectives simultaneously; agent ML hyperparameter tuning balances accuracy vs training time with Pareto front
  • Agent Pareto front extraction — res = minimize(problem, algorithm, termination); pareto_X = res.X; pareto_F = res.F; print(f'{len(pareto_X)} non-dominated solutions found') — Pareto optimal solutions all represent different trade-off points; agent decision system presents Pareto front to user who selects based on preferences; no single 'best' solution in multi-objective problems
  • Agent constrained optimization — class ConstrainedProblem(Problem): def _evaluate(self, x, out, *args, **kwargs): out['F'] = objective(x); out['G'] = constraint_violations(x) — G > 0 means constraint violated; agent resource allocation ensures power, budget, and time constraints are satisfied while optimizing multiple KPIs; pymoo handles constraints natively in NSGA-II
  • Agent convergence tracking — from pymoo.core.callback import Callback; class ConvergenceCallback(Callback): def notify(self, algorithm): self.data['n_evals'].append(algorithm.evaluator.n_eval); self.data['opt'].append(algorithm.opt.get('F').min()) — track optimization progress; agent stops early when convergence criteria met; visualize fitness improvement over generations
  • Agent mixed-variable optimization — from pymoo.core.variable import Real, Integer, Choice; class MixedProblem(ElementwiseProblem): vars = {'x': Real(bounds=(0,1)), 'n': Integer(bounds=(1,10)), 'algo': Choice(options=['A','B','C'])}; res = minimize(MixedProblem(), MixedVariableGA(pop_size=20)) — optimize real + integer + categorical variables; agent pipeline configuration selects algorithm, batch size, and continuous parameters simultaneously

Not For

  • Single-objective optimization — pymoo is multi-objective focused; for single-objective use scipy.optimize, optuna, or nevergrad which are simpler and faster
  • Gradient-based optimization — pymoo evolutionary algorithms are gradient-free; for differentiable objectives use scipy.optimize.minimize with gradient methods
  • Production HPO at scale — pymoo is research-oriented; for production hyperparameter optimization use Optuna or Ray Tune with distributed trials

Interface

REST API
No
GraphQL
No
gRPC
No
MCP Server
No
SDK
Yes
Webhooks
No

Authentication

Methods: none
OAuth: No Scopes: No

No auth — local optimization library.

Pricing

Model: open_source
Free tier: Yes
Requires CC: No

pymoo is Apache 2.0 licensed. Free for all use including commercial.

Agent Metadata

Pagination
none
Idempotent
Partial
Retry Guidance
Not documented

Known Gotchas

  • res.X is None when no feasible solution — if all solutions violate constraints, res.X is None; agent code must check: if res.X is None: raise NoFeasibleSolution(); increase pop_size or relax constraints; constraint formulation as G <= 0 (not G >= 0) is pymoo convention — constraint violated when G > 0
  • Vectorized vs elementwise evaluation mismatch — Problem() by default expects _evaluate(X) where X is an (n_pop, n_var) matrix; if the agent objective function is not vectorized, subclass ElementwiseProblem instead of Problem; calling an element-wise function on batched X raises index/shape errors; note that the 0.5.x elementwise_evaluation=True parameter was renamed to elementwise=True in 0.6.x, with subclassing ElementwiseProblem the recommended route
  • Objectives must all be minimization — pymoo minimizes all objectives by default; to maximize objective k, negate it: out['F'][:, k] = -maximize_this(X); agent code forgetting negation maximizes wrong objective; no maximize parameter exists in pymoo Problem
  • pop_size must be divisible by 4 for NSGA-II — NSGA2(pop_size=100) works but pop_size=50 may cause issues in crossover; agent code should use multiples of 4 (100, 200, 400); odd pop_size raises ValueError in some versions or produces incorrect behavior
  • Termination criterion affects result quality — minimize(problem, algo, ('n_gen', 50)) stops at 50 generations regardless of convergence; agent should use: from pymoo.termination import get_termination; t = get_termination('n_eval', 10000); or use DefaultMultiObjectiveTermination for adaptive stopping; a fixed generation count may stop before convergence
  • Callback data accumulation grows memory — ConvergenceCallback storing res.F each generation accumulates pop_size * n_obj * n_gen values; 1000-generation run with pop=100 stores 100K+ objective vectors; agent long-running optimizations should sample: store every 10th generation or only min/max statistics

Full Evaluation Report

Comprehensive deep-dive: security analysis, reliability audit, agent experience review, cost modeling, competitive positioning, and improvement roadmap for pymoo.

AI-powered analysis · PDF + markdown · Delivered within 30 minutes

$99

Package Brief

Quick verdict, integration guide, cost projections, gotchas with workarounds, and alternatives comparison.

Delivered within 10 minutes

$3

Score Monitoring

Get alerted when this package's AF, security, or reliability scores change significantly. Stay ahead of regressions.

Continuous monitoring

$3/mo

Scores are editorial opinions as of 2026-03-06.
