Maze Product Research Platform API

Maze is a product research platform with a REST API for product teams and UX researchers to run rapid prototype testing, usability studies, surveys, and tree testing with quantitative metrics and AI-powered insight synthesis. For AI agents, it enables:

  • Study creation and configuration for research automation
  • Prototype import from Figma and InVision for design validation
  • Task success rate and misclick metrics for usability analytics
  • Heatmap and click path data for interaction analysis
  • Participant recruitment from the Maze panel or custom lists
  • Survey and card sorting setup for information architecture work
  • AI insight generation from qualitative responses for synthesis
  • Study result export for research reporting
  • Project and folder organization for research portfolio management
  • Integration with Figma, Notion, Jira, and design tools for end-to-end product research workflows

Evaluated Mar 07, 2026
Category: Developer Tools · Tags: maze, product-research, usability-testing, prototype-testing, UX-research, design-validation
⚙ Agent Friendliness
53
/ 100
Can an agent use this?
🔒 Security
74
/ 100
Is it safe for agents?
⚡ Reliability
66
/ 100
Does it work consistently?

Score Breakdown

⚙ Agent Friendliness

MCP Quality
11
Documentation
68
Error Messages
65
Auth Simplicity
73
Rate Limits
63

🔒 Security

TLS Enforcement
93
Auth Strength
70
Scope Granularity
63
Dep. Hygiene
70
Secret Handling
72

Product research platform. GDPR and SOC 2 compliant. API key auth. EU/US data regions. Handles research and participant data.

⚡ Reliability

Uptime/SLA
70
Version Stability
68
Breaking Changes
63
Error Recovery
63

Best When

A product or UX team wanting AI agents to automate unmoderated prototype testing, task success measurement, heatmap analysis, and insight synthesis through Maze's rapid research platform integrated with Figma.

Avoid When

  • PROTOTYPE SYNC REQUIRES FIGMA API INTEGRATION: Maze prototype import from Figma requires an active Figma connection via OAuth; an automated prototype testing pipeline must coordinate Figma design export with Maze prototype import, and updating a prototype without a Figma re-sync leaves an outdated prototype in an active study.
  • PARTICIPANT PANEL CREDIT SYSTEM FOR AUTOMATED RECRUITMENT: Maze panel recruitment uses a credit system (credits per response); automated research at scale must check the credit balance before launching studies, or data collection is cut short when credits run out mid-study.
  • STUDY RESULT PROCESSING TIME FOR AUTOMATED REPORTING: Maze qualitative response analysis (AI themes, sentiment) runs asynchronously after study completion; an automated reporting pipeline must poll for analysis completion, since generating a report immediately after study close can return incomplete insight data for in-process qualitative analysis.
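The credit-exhaustion risk above can be guarded with a simple pre-launch check. This is a minimal sketch; the credit-accounting model (flat credits per response) is taken from the warning above, and any function names here are illustrative, not Maze's documented API.

```python
def credits_needed(target_responses: int, credits_per_response: int) -> int:
    """Total panel credits a study consumes if it fills completely."""
    return target_responses * credits_per_response

def can_launch(balance: int, target_responses: int, credits_per_response: int) -> bool:
    """Launch only when the full study can be funded, so data collection
    is never cut short by credits running out mid-study."""
    return balance >= credits_needed(target_responses, credits_per_response)
```

An agent would fetch the current balance, then gate the launch call on `can_launch(...)` rather than launching optimistically.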

Use Cases

  • Testing prototypes from design validation automation agents
  • Measuring task success rates from UX metrics agents
  • Running rapid research studies from product research agents
  • Generating AI insights from qualitative research synthesis agents

Not For

  • Long-form moderated user interviews (use UserTesting or Lookback)
  • Full session replay and behavior analytics (use FullStory or Hotjar)
  • Quantitative survey research at scale (use Qualtrics or SurveyMonkey)

Interface

REST API
Yes
GraphQL
No
gRPC
No
MCP Server
No
SDK
No
Webhooks
Yes
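Since webhooks are the only push channel listed, an agent consuming them should verify payload authenticity before acting. The HMAC-SHA256 scheme below is a common webhook-signing pattern, not Maze's documented one; check Maze's webhook docs for the actual header and algorithm.

```python
import hashlib
import hmac

def verify_signature(secret: str, payload: bytes, signature: str) -> bool:
    """Constant-time check of an HMAC-SHA256 hex digest over the raw body.
    Assumed signing scheme; confirm against the provider's webhook docs."""
    expected = hmac.new(secret.encode(), payload, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, signature)
```

`hmac.compare_digest` avoids timing side channels that a plain `==` comparison would leak.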

Authentication

Methods: API key
OAuth: No
Scopes: No

Maze uses API key for authentication. REST API with JSON. Paris, France HQ (US operations). Founded 2018 by Jonathan Widawski and Thomas Mary. Backed by Felicis, Accel, Index Ventures ($40M+ raised). Products: Prototype testing, task analysis, heatmaps, card sorting, tree testing, AI insights, participant panel. Integrations: Figma, InVision, Notion, Jira, Confluence, Slack. GDPR. SOC2. Serves 100,000+ product teams. Competes with UserTesting, Useberry, and Lyssna for unmoderated research.
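A minimal request wrapper for API-key auth might look like the sketch below. The base URL and the `Authorization: Bearer` header are assumptions; Maze's API reference may use a different host or a custom header such as `X-API-Key`.

```python
import json
import urllib.request

def auth_headers(api_key: str) -> dict:
    """API-key request headers. Bearer scheme is an assumption."""
    return {"Authorization": f"Bearer {api_key}", "Accept": "application/json"}

def maze_get(path: str, api_key: str, base: str = "https://api.maze.co"):
    """GET a JSON resource. Base URL and paths are hypothetical."""
    req = urllib.request.Request(base + path, headers=auth_headers(api_key))
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)
```

Keep the key in a secret store or environment variable rather than hard-coding it in agent configuration.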

Pricing

Model: freemium
Free tier: Yes
Requires CC: No

Paris, FR. Accel/Felicis backed. Limited free tier. Per-seat/per-study subscription with annual discount.

Agent Metadata

Pagination
page
Idempotent
Partial
Retry Guidance
Not documented
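Given page-based pagination, only partial idempotency, and no documented retry guidance, a cautious client should retry reads conservatively with backoff and avoid blind retries of writes. A sketch, with the page-fetching function injected so the loop is API-agnostic:

```python
import time

def fetch_all(fetch_page, max_retries: int = 3, backoff: float = 1.0) -> list:
    """Drain page-based pagination. fetch_page(page) -> (items, has_more).
    Retries are conservative (exponential backoff) because retry guidance
    is undocumented; apply this only to idempotent reads."""
    items, page = [], 1
    while True:
        for attempt in range(max_retries):
            try:
                batch, has_more = fetch_page(page)
                break
            except Exception:
                if attempt == max_retries - 1:
                    raise
                time.sleep(backoff * 2 ** attempt)
        items.extend(batch)
        if not has_more:
            return items
        page += 1
```

`fetch_page` would wrap the actual HTTP call; injecting it also makes the loop trivially testable.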

Known Gotchas

  • STUDY STATUS LIFECYCLE FOR AUTOMATED PROCESSING: Maze studies have status lifecycle (draft → active → closed → analyzing → analyzed); automated processing must track study status before accessing results; automated result access on non-analyzed studies returns incomplete or empty data
  • HEATMAP DATA REQUIRES MINIMUM RESPONSE THRESHOLD: Maze heatmaps require minimum response count for statistical validity (typically 5+); automated heatmap analysis on studies with few responses creates unreliable visual data; automated analysis should verify response count before extracting heatmap insights
  • TASK PATH vs OPTIMAL PATH COMPARISON: Maze captures user click paths through prototype tasks; automated success rate calculation must compare user path to defined optimal path; automated task success measurement without optimal path definition cannot calculate success rate or misclick rate
  • FIGMA PROTOTYPE VERSION PINNING: Maze imports specific Figma prototype versions; automated prototype update workflow must explicitly re-import updated Figma prototype to Maze; automated assumption that Maze auto-syncs Figma changes creates stale prototype testing after design updates
  • AI INSIGHT THEME CLUSTERING REQUIRES SUFFICIENT RESPONSES: Maze AI theme generation for open-ended responses requires sufficient qualitative data (typically 10+ responses); automated insight extraction on studies with few open-ended responses produces generic or absent AI themes; automated research must target minimum response count before triggering AI analysis
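The lifecycle gotcha above (draft → active → closed → analyzing → analyzed) implies agents must gate result access on status. A polling sketch, with the status lookup injected as a stand-in for a hypothetical study-detail endpoint:

```python
import time

def wait_until_analyzed(get_status, timeout: float = 600.0,
                        interval: float = 5.0) -> str:
    """Poll until the study lifecycle reaches 'analyzed'; fetching results
    earlier returns incomplete or empty data. get_status() -> status string."""
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        status = get_status()
        if status == "analyzed":
            return status
        time.sleep(interval)
    raise TimeoutError("study analysis did not complete in time")
```

The same gate also covers the AI-theme gotcha: combine it with a response-count check before trusting heatmaps or generated themes.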

Alternatives

Full Evaluation Report

Comprehensive deep-dive: security analysis, reliability audit, agent experience review, cost modeling, competitive positioning, and improvement roadmap for Maze Product Research Platform API.

AI-powered analysis · PDF + markdown · Delivered within 30 minutes

$99

Package Brief

Quick verdict, integration guide, cost projections, gotchas with workarounds, and alternatives comparison.

Delivered within 10 minutes

$3

Score Monitoring

Get alerted when this package's AF, security, or reliability scores change significantly. Stay ahead of regressions.

Continuous monitoring

$3/mo

Scores are editorial opinions as of 2026-03-07.
