{
  "id": "jfarcand-mirroir-mcp",
  "name": "mirroir-mcp",
  "homepage": "https://mirroir.dev",
  "repo_url": "https://github.com/jfarcand/mirroir-mcp",
  "category": "devtools",
  "subcategories": [],
  "tags": ["mcp", "iphone", "mobile-testing", "screen-automation", "vision-ocr", "macos", "automation"],
  "what_it_does": "mirroir-mcp is an MCP server that lets an AI agent observe and control a real iPhone (via macOS iPhone Mirroring). It provides tools such as describing the current screen (with OCR/icon/AI-vision backends) and executing actions like tap, swipe, and type, enabling closed-loop “observe, reason, act” workflows and skill generation/testing for mobile UI automation.",
  "use_cases": [
    "Mobile UI exploration and workflow generation for apps on a real device",
    "AI-assisted end-to-end testing of iPhone screens (including deterministic skill replay)",
    "Agent-driven accessibility-like interactions: tap/type based on on-screen labels and structure",
    "CI/mobile testing with compiled skills (coordinate/timing capture to reduce OCR overhead)",
    "Interactive debugging/diagnosis when test steps fail (optional AI diagnosis agents)"
  ],
  "not_for": [
    "No-network, server-side automation at scale without a macOS host and iPhone Mirroring",
    "Situations requiring strong audit/compliance guarantees for device interaction without additional controls",
    "Headless environments where macOS Screen Recording/Accessibility permissions cannot be granted",
    "Use cases that need guaranteed safety boundaries (the agent can drive taps/types on a real device)"
  ],
  "best_when": "You have a macOS 15+ machine with iPhone Mirroring enabled and want an MCP-based agent to interact with a real iPhone UI using screen understanding plus action tools.",
  "avoid_when": "You cannot grant Screen Recording and Accessibility permissions, or you need strict rate limiting/role-based access control for tool execution.",
  "alternatives": [
    "Other UI automation frameworks (XCUITest for iOS, Appium) for device testing",
    "Vision/OCR + RPA-style local automation without MCP",
    "Generic MCP tool wrappers around existing mobile automation stacks"
  ],
  "af_score": 61.2,
  "security_score": 36.0,
  "reliability_score": 30.0,
  "package_type": "mcp_server",
  "discovery_source": ["github"],
  "priority": "high",
  "status": "evaluated",
  "version_evaluated": null,
  "last_evaluated": "2026-03-30T15:21:57.189047+00:00",
  "interface": {
    "has_rest_api": false,
    "has_graphql": false,
    "has_grpc": false,
    "has_mcp_server": true,
    "mcp_server_url": null,
    "has_sdk": false,
    "sdk_languages": [],
    "openapi_spec_url": null,
    "webhooks": false
  },
  "auth": {
    "methods": ["Local stdio MCP transport (per-client configuration via a command like npx -y mirroir-mcp)"],
    "oauth": false,
    "scopes": false,
    "notes": "The documentation describes local operation via stdio for MCP clients. For AI vision mode, it routes vision requests through already-authenticated CLI tools; it does not describe separate OAuth scopes for the MCP server itself."
  },
  "pricing": {
    "model": null,
    "free_tier_exists": false,
    "free_tier_limits": null,
    "paid_tiers": [],
    "requires_credit_card": false,
    "estimated_workload_costs": null,
    "notes": "Pricing is not described in the provided README; costs may depend on optional AI vision/diagnosis backends and any model subscriptions/API keys used by embedded/selected agents."
  },
  "requirements": {
    "requires_signup": false,
    "requires_credit_card": false,
    "domain_verification": false,
    "data_residency": [],
    "compliance": [],
    "min_contract": null
  },
  "agent_readiness": {
    "af_score": 61.2,
    "security_score": 36.0,
    "reliability_score": 30.0,
    "mcp_server_quality": 78.0,
    "documentation_accuracy": 70.0,
    "error_message_quality": 0.0,
    "error_message_notes": null,
    "auth_complexity": 85.0,
    "rate_limit_clarity": 10.0,
    "tls_enforcement": 20.0,
    "auth_strength": 35.0,
    "scope_granularity": 10.0,
    "dependency_hygiene": 55.0,
    "secret_handling": 65.0,
    "security_notes": "Operates locally (stdio MCP) and relies on macOS user-granted permissions (Screen Recording/Accessibility). The README mentions API keys for optional AI diagnosis/vision and environment-variable usage for keys, but does not describe how secrets are stored/cleared or how tool execution is constrained. No information is provided about transport security, RBAC/scope enforcement, or rate limiting.",
    "uptime_documented": 0.0,
    "version_stability": 40.0,
    "breaking_changes_history": 30.0,
    "error_recovery": 50.0,
    "idempotency_support": false,
    "idempotency_notes": "Tools like tap/type/swipe are inherently stateful on a real device; the README describes test retries/timeouts but does not claim idempotent tool semantics.",
    "pagination_style": "none",
    "retry_guidance_documented": false,
    "known_agent_gotchas": [
      "Requires macOS Screen Recording and Accessibility permissions; first-run prompts can block tool calls until granted.",
      "Vision/semantic modes depend on availability of local models (YOLO .mlmodelc) or embedded embacle FFI linkage; behavior can change based on configuration and installed components.",
      "Real-device interactions are sensitive to timing and transient dialogs; generated skills may require recalibration/adjustment.",
      "Exploration is bounded (max_depth/max_screens/max_time), so complete traversal is not guaranteed."
    ]
  }
}