cursor-feedback-extension

Cursor Feedback is a Cursor IDE extension plus an MCP server that provides an interactive human-in-the-loop feedback step. The Cursor AI agent calls an MCP tool (interactive_feedback), the extension shows a sidebar UI to collect user feedback (text/images/files), and the MCP server returns the feedback to the agent so it can continue within the same conversation.
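The flow above can be sketched as a JSON-RPC tools/call request from the agent to the MCP server. This is a hypothetical sketch: only the interactive_feedback tool name and the project_directory parameter appear on this page, so the summary argument name and value are assumptions.

```json
{
  "jsonrpc": "2.0",
  "id": 1,
  "method": "tools/call",
  "params": {
    "name": "interactive_feedback",
    "arguments": {
      "project_directory": "/path/to/project",
      "summary": "Refactored the auth module; please review before I continue."
    }
  }
}
```

The server blocks on this call until the sidebar UI collects the user's feedback, then returns it as the tool result so the agent continues in the same conversation.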

Evaluated Mar 30, 2026
Repo ↗ · Tags: ai-ml, mcp, cursor-extension, human-in-the-loop, interactive-feedback, sidebar-ui, webview, javascript, open-vsx, open-source, ai-assistant
⚙ Agent Friendliness
57
/ 100
Can an agent use this?
🔒 Security
36
/ 100
Is it safe for agents?
⚡ Reliability
30
/ 100
Does it work consistently?

Score Breakdown

⚙ Agent Friendliness

MCP Quality
65
Documentation
60
Error Messages
0
Auth Simplicity
95
Rate Limits
10

🔒 Security

TLS Enforcement
60
Auth Strength
15
Scope Granularity
10
Dep. Hygiene
45
Secret Handling
60

No authentication or authorization model is described for the MCP tool or extension interactions. The workflow handles local project context (paths, images, files), which creates a risk of inadvertently exposing that data to the AI workflow. The README does not document data-handling details, transport security for any HTTP API the extension uses, or whether secrets/tokens are ever logged.

⚡ Reliability

Uptime/SLA
0
Version Stability
35
Breaking Changes
30
Error Recovery
55

Best When

You want a lightweight, IDE-native approval/review loop for Cursor-generated outputs, including rich feedback (images/files) from the local workspace.

Avoid When

You need formal enterprise-grade security/compliance evidence, or you cannot tolerate that local project context (paths and potentially selected files/images) may be sent through the agent workflow.

Use Cases

  • Gather user approval or edits after the AI produces a work summary
  • Create a repeatable review loop inside Cursor for iterative tasks
  • Collect feedback that includes screenshots/images and referenced project file paths
  • Avoid consuming additional Cursor monthly request quota by keeping the interaction in one conversation

Not For

  • Production systems requiring strict guarantees about data handling or compliance
  • Use cases that need a public REST/GraphQL API for server-to-server automation
  • Scenarios where the user cannot safely provide local file paths/images to an AI workflow
  • Environments that require high assurance of dependency/vulnerability management

Interface

REST API
No
GraphQL
No
gRPC
No
MCP Server
Yes
SDK
No
Webhooks
No

Authentication

Methods: No explicit auth described for MCP tool calls; configured via local Cursor MCP config
OAuth: No
Scopes: No

The README describes local MCP server configuration (run via npx as cursor-feedback@latest) and a tool interface (interactive_feedback) without any mention of authentication, API keys, or scope controls.
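The local configuration mentioned above would typically live in Cursor's MCP config file. A minimal sketch, assuming the standard mcpServers layout and the npx invocation named in the README; the server key "cursor-feedback" is an arbitrary label:

```json
{
  "mcpServers": {
    "cursor-feedback": {
      "command": "npx",
      "args": ["cursor-feedback@latest"]
    }
  }
}
```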

Pricing

Free tier: No
Requires CC: No

No pricing information for the extension/MCP server is provided. The README references Cursor request quotas, but that is not a pricing model of this package.

Agent Metadata

Pagination
none
Idempotent
No
Retry Guidance
Documented

Known Gotchas

  • Tool is interactive; the agent must wait for user feedback via the sidebar before continuing.
  • On timeout (default 300s), the extension/agent workflow supports auto-retry; ensure agent logic avoids duplicating work if feedback context changes.
  • Multi-window isolation is claimed; ensure the correct project_directory is provided when running across multiple Cursor windows.
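The first two gotchas suggest a thin agent-side wrapper around the tool call. A minimal Python sketch, where call_tool, the FeedbackTimeout exception, the summary argument, and the retry cap are all assumptions; only the interactive_feedback tool name, project_directory, and the 300 s default timeout are stated on this page:

```python
# Hypothetical agent-side wrapper illustrating the retry-on-timeout
# gotcha above. `call_tool` stands in for whatever MCP client call the
# agent runtime provides; it is not an API of this package.

DEFAULT_TIMEOUT_S = 300  # the extension's documented default timeout
MAX_RETRIES = 2          # illustrative cap, not from the README

class FeedbackTimeout(Exception):
    """Raised when no user feedback arrives within the timeout."""

def collect_feedback(call_tool, project_directory, summary):
    """Retry interactive_feedback on timeout without duplicating work:
    every attempt re-sends the same summary, so the user always reviews
    one consistent proposal rather than a regenerated one."""
    for attempt in range(MAX_RETRIES + 1):
        try:
            return call_tool(
                "interactive_feedback",
                {"project_directory": project_directory, "summary": summary},
                timeout=DEFAULT_TIMEOUT_S,
            )
        except FeedbackTimeout:
            if attempt == MAX_RETRIES:
                raise
```

Keeping the summary fixed across retries is the simplest way to honor the "avoid duplicating work" caveat; if the feedback context changes between attempts, the wrapper should abort rather than retry.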


Full Evaluation Report

Comprehensive deep-dive: security analysis, reliability audit, agent experience review, cost modeling, competitive positioning, and improvement roadmap for cursor-feedback-extension.

AI-powered analysis · PDF + markdown · Delivered within 30 minutes

$99

Package Brief

Quick verdict, integration guide, cost projections, gotchas with workarounds, and alternatives comparison.

Delivered within 10 minutes

$3

Score Monitoring

Get alerted when this package's AF, security, or reliability scores change significantly. Stay ahead of regressions.

Continuous monitoring

$3/mo

Scores are editorial opinions as of 2026-03-30.
