{
  "id": "jfarcand-mirroir-mcp",
  "name": "mirroir-mcp",
  "af_score": 61.2,
  "security_score": 36.0,
  "reliability_score": 30.0,
  "what_it_does": "mirroir-mcp is an MCP server that lets an AI agent observe and control a real iPhone (via macOS iPhone Mirroring). It provides tools such as describing the current screen (with OCR/icon/AI-vision backends) and executing actions like tap, swipe, and type, enabling closed-loop “observe, reason, act” workflows and skill generation/testing for mobile UI automation.",
  "best_when": "You have a macOS 15+ machine with iPhone Mirroring enabled and want an MCP-based agent to interact with a real iPhone UI using screen understanding plus action tools.",
  "avoid_when": "You cannot grant Screen Recording and Accessibility permissions, or you need strict rate limiting/role-based access control for tool execution.",
  "last_evaluated": "2026-03-30T15:21:57.189047+00:00",
  "has_mcp": true,
  "has_api": false,
  "auth_methods": [
    "Local stdio MCP transport (per-client configuration via command like npx -y mirroir-mcp)"
  ],
  "has_free_tier": false,
  "known_gotchas": [
    "Requires macOS Screen Recording and Accessibility permissions; first-run prompts can block tool calls until granted.",
    "Vision/semantic modes depend on availability of local models (YOLO .mlmodelc) or embedded embacle FFI linkage; behavior can change based on configuration and installed components.",
    "Real-device interactions are sensitive to timing and transient dialogs; generated skills may require recalibration/adjustment.",
    "Exploration is bounded (max_depth/max_screens/max_time), so complete traversal is not guaranteed."
  ],
  "error_quality": 0.0
}