Hyperopt

Bayesian hyperparameter optimization library for Python — optimizes any function over complex search spaces using TPE (Tree of Parzen Estimators). Hyperopt features: hp.choice(), hp.uniform(), hp.loguniform(), hp.quniform() for search space definition, fmin() for the optimization loop, Trials() for experiment tracking, SparkTrials for distributed optimization, the STATUS_OK/STATUS_FAIL return protocol, hyperopt.pyll.stochastic.sample() for drawing samples from a space, mongoexp for MongoDB-backed distributed optimization, and the pyll graph for lazy evaluation. Used with XGBoost, sklearn, and neural networks for automated hyperparameter tuning. Predecessor of Optuna, which uses a similar TPE algorithm.

Evaluated Mar 06, 2026 (v0.2.x)
Homepage ↗ Repo ↗ AI & Machine Learning python hyperopt hyperparameter bayesian optimization tpe random-search mlops
⚙ Agent Friendliness
60
/ 100
Can an agent use this?
🔒 Security
86
/ 100
Is it safe for agents?
⚡ Reliability
74
/ 100
Does it work consistently?

Score Breakdown

⚙ Agent Friendliness

MCP Quality
--
Documentation
75
Error Messages
70
Auth Simplicity
95
Rate Limits
95

🔒 Security

TLS Enforcement
88
Auth Strength
88
Scope Granularity
85
Dep. Hygiene
80
Secret Handling
88

Local optimization library — no network calls except MongoDB distributed mode. MongoDB credentials for mongoexp should use environment variables. No data exfiltration risk for local use.

⚡ Reliability

Uptime/SLA
75
Version Stability
72
Breaking Changes
75
Error Recovery
72

Best When

Hyperparameter tuning for sklearn, XGBoost, or custom ML models with a well-defined search space — Hyperopt's TPE algorithm typically converges faster than random search. For new projects, consider Optuna instead: it offers a cleaner API and trial pruning.

Avoid When

Starting new projects (use Optuna instead), needing multi-objective optimization, or needing trial pruning for neural network training.

Use Cases

  • Agent XGBoost tuning — import numpy as np; from hyperopt import fmin, tpe, hp, Trials; space = {'max_depth': hp.quniform('max_depth', 3, 15, 1), 'learning_rate': hp.loguniform('lr', np.log(0.01), np.log(0.3))}; best = fmin(objective, space, algo=tpe.suggest, max_evals=100, trials=Trials()) — Bayesian hyperparameter tuning for XGBoost; agent MLOps pipeline automatically finds best hyperparameters
  • Agent sklearn pipeline tuning — space = {'clf__C': hp.loguniform('C', -3, 3), 'clf__kernel': hp.choice('kernel', ['rbf', 'linear'])}; trials = Trials(); best = fmin(score_pipeline, space, algo=tpe.suggest, max_evals=50, trials=trials); print(best) — tune sklearn pipeline hyperparameters; agent selects best kernel and regularization automatically
  • Agent experiment tracking — trials = Trials(); fmin(objective, space, algo=tpe.suggest, max_evals=100, trials=trials); df = pd.DataFrame(trials.results) — Trials() records all evaluations with parameters and losses; agent experiment tracker reviews trial history to understand hyperparameter importance
  • Agent distributed tuning with Spark — from hyperopt import SparkTrials; spark_trials = SparkTrials(parallelism=8); fmin(objective, space, algo=tpe.suggest, max_evals=200, trials=spark_trials) — parallel evaluation across Spark workers; agent hyperparameter search scales to 8 parallel evaluations on Databricks/Spark cluster
  • Agent conditional hyperparameter spaces — space = hp.choice('model', [{'type': 'svm', 'C': hp.loguniform('svm_C', -3, 3)}, {'type': 'rf', 'n_estimators': hp.quniform('rf_n', 10, 500, 10)}]) — nested hp.choice for conditional search spaces; agent selects model type and its specific hyperparameters jointly

Not For

  • Modern hyperparameter optimization — Optuna has superseded Hyperopt with better API, pruning, and visualization; prefer Optuna for new projects
  • Multi-objective optimization — Hyperopt minimizes single scalar; for Pareto-front multi-objective use Optuna with multi-objective or DEAP
  • Neural architecture search — Hyperopt search space too limited for NAS; use specialized NAS frameworks like NNI or AutoKeras

Interface

REST API
No
GraphQL
No
gRPC
No
MCP Server
No
SDK
Yes
Webhooks
No

Authentication

Methods: none
OAuth: No Scopes: No

No auth — local optimization library. MongoDB-based distributed mode requires MongoDB connection.

Pricing

Model: open_source
Free tier: Yes
Requires CC: No

Hyperopt is BSD licensed. Free for all use.

Agent Metadata

Pagination
none
Idempotent
Partial
Retry Guidance
Not documented

Known Gotchas

  • Dict-returning objectives must include 'loss' and 'status' keys — def objective(params): return {'loss': score, 'status': STATUS_OK} is the dict protocol; a bare float loss is also accepted, but a dict with STATUS_OK and no 'loss' key raises an exception; agent objective functions returning dicts must always include 'loss' (minimized) and 'status'
  • hp.quniform returns float not int — hp.quniform('depth', 3, 15, 1) returns float like 7.0 not integer 7; agent code passing hyperparameters to sklearn/XGBoost must cast: int(params['depth']); apply int() to all q-uniform integer hyperparameters before passing to model
  • hp.choice returns index not value — best from fmin contains index for hp.choice: best['model'] = 0 means first choice; use hyperopt.space_eval(space, best) to convert best dict to actual hyperparameter values; agent code doing params['kernel'] = best['kernel'] gets integer not 'rbf'
  • No trial pruning — Hyperopt evaluates each trial to completion; long-running objective functions (neural network training) waste time on clearly bad hyperparameters; Optuna supports pruning via callbacks; agent neural network HPO should use Optuna not Hyperopt for pruning support
  • Trials object not thread-safe — Trials() for sequential optimization; parallel execution with multiple workers requires SparkTrials or mongoexp; agent code using Python multiprocessing with shared Trials() causes corruption; use SparkTrials(parallelism=n) for agent parallel tuning
  • max_evals budget includes all trials — fmin(max_evals=100) runs exactly 100 trials total including failed ones; agent objective that fails 20% of the time effectively gets 80 useful evaluations from 100 budget; add try/except in objective returning {'status': STATUS_FAIL} for failed evaluations ('loss' is not required on failure; TPE ignores failed trials)

Full Evaluation Report

Detailed scoring breakdown, competitive positioning, security analysis, and improvement recommendations for Hyperopt.

$99

Scores are editorial opinions as of 2026-03-06.
