{"id":"wuji-labs-nopua","name":"nopua","homepage":null,"repo_url":"https://github.com/wuji-labs/nopua","category":"ai-ml","subcategories":[],"tags":["ai-ml","ai-agent","prompt-engineering","python","agent-skill","coding-assistance","debugging","verification","no-pua"],"what_it_does":"nopua appears to be a Python “agent skill”/prompting component intended to change how AI agents behave (shifting from fear/threat-oriented prompting toward trust/self-respect framing) in order to improve debugging persistence and verification behavior. The provided README content describes philosophy, methodology, and benchmark claims but includes no concrete API/SDK interface details.","use_cases":["Load/use the NoPUA skill/prompting strategy to encourage AI agents to verify, self-correct, and search for hidden issues during software debugging and incident response","Agent workflow prompting to reduce fear/threat framing and encourage honest uncertainty and escalation when needed","Internal evaluation/experimentation with agent prompting strategies for code quality and verification"],"not_for":["As a standalone secure network service (no evidence of an externally callable API in the provided content)","Production systems requiring a guaranteed contractual SLA based on the provided materials","Use as a reliable source of truth for benchmark results without re-running/validating the claims"],"best_when":"You are already building/operating an AI coding/debugging agent and can integrate the skill into the model’s prompt/system instructions or agent policy layer.","avoid_when":"You need a documented, versioned programmatic API (REST/SDK/OpenAPI) with explicit auth, rate limits, and structured errors, based solely on the provided README excerpt.","alternatives":["General prompt-engineering techniques for verification (tool use, test running, evidence requirements)","Other agent “skills” or frameworks that implement verification/checklist behavior","Building a custom agent policy that enforces running tests, searching for the root cause, documenting evidence, and escalating with context"],"af_score":30.2,"security_score":14.0,"reliability_score":20.0,"package_type":"skill","discovery_source":["openclaw"],"priority":"high","status":"evaluated","version_evaluated":null,"last_evaluated":"2026-03-30T13:25:28.525604+00:00","interface":{"has_rest_api":false,"has_graphql":false,"has_grpc":false,"has_mcp_server":false,"mcp_server_url":null,"has_sdk":false,"sdk_languages":["Python"],"openapi_spec_url":null,"webhooks":false},"auth":{"methods":[],"oauth":false,"scopes":false,"notes":"No authentication mechanism is described in the provided README excerpt; this appears to be a local library/skill integration rather than a network API."},"pricing":{"model":null,"free_tier_exists":false,"free_tier_limits":null,"paid_tiers":[],"requires_credit_card":false,"estimated_workload_costs":null,"notes":"No pricing information is provided in the excerpt."},"requirements":{"requires_signup":false,"requires_credit_card":false,"domain_verification":false,"data_residency":[],"compliance":[],"min_contract":null},"agent_readiness":{"af_score":30.2,"security_score":14.0,"reliability_score":20.0,"mcp_server_quality":0.0,"documentation_accuracy":40.0,"error_message_quality":0.0,"error_message_notes":null,"auth_complexity":100.0,"rate_limit_clarity":0.0,"tls_enforcement":0.0,"auth_strength":0.0,"scope_granularity":0.0,"dependency_hygiene":40.0,"secret_handling":40.0,"security_notes":"Security cannot be meaningfully assessed from the provided excerpt: there is no network surface described, no auth scheme, and no mention of how secrets are handled. As it is a Python skill, risk would mainly come from how you integrate it into an agent that can access code/tools (prompt injection/data exfiltration concerns are external to this skill unless explicitly addressed in code). Dependency hygiene is unknown from the provided information; scored conservatively.","uptime_documented":0.0,"version_stability":30.0,"breaking_changes_history":30.0,"error_recovery":20.0,"idempotency_support":false,"idempotency_notes":null,"pagination_style":"none","retry_guidance_documented":false,"known_agent_gotchas":["Because this appears to be a prompting/skill integration (not an API), “integration” means wiring it into your agent/prompt pipeline; the provided excerpt gives no explicit docs/examples, so the exact integration steps may be unclear.","Benchmark/philosophy content is not, by itself, a guarantee of behavior in your environment; model/provider differences and tool availability can affect outcomes.","If the skill encourages “verification,” you still need to ensure your agent has the necessary tooling (test runners, linters, repository access) to actually verify claims."]}}