{"id":"jingyaogong-minimind","name":"minimind","af_score":20.0,"security_score":13.5,"reliability_score":33.8,"what_it_does":"MiniMind is an open-source, end-to-end small-LLM training and inference project (PyTorch-focused) covering model architecture (Dense + MoE), tokenizer training, data pipelines, pretraining, SFT, LoRA, and preference/RLHF-style training (e.g., DPO and related variants), plus a minimal OpenAI-compatible API server and a Streamlit web UI for chat and tool-calling-style interactions.","best_when":"You want a transparent, PyTorch-native codebase to learn from and to run small-scale LLM training/inference locally, optionally integrating with common chat frontends via an OpenAI-like API.","avoid_when":"You need a well-specified, standards-compliant hosted API with clear SLAs, robust auth/key management, and documented operational guarantees.","last_evaluated":"2026-03-29T12:58:54.290009+00:00","has_mcp":false,"has_api":true,"auth_methods":["Self-hosted API server (authentication method not specified in the README)"],"has_free_tier":false,"known_gotchas":["No MCP server is mentioned; agent integrations likely need to use the OpenAI-compatible endpoint or local inference scripts.","The Streamlit web demo expects model weights in a specific directory layout; missing weights can cause startup failure, per the README.","Model compatibility and weight loading may change across releases; the README notes breaking compatibility with older models."],"error_quality":null}