{"id":"facebookresearch-lingua","name":"lingua","af_score":37.8,"security_score":45.5,"reliability_score":33.8,"what_it_does":"Meta Lingua (lingua) is a minimal, research-focused LLM training and inference codebase built on PyTorch. It provides reusable components (models, data loading, distributed training, checkpointing, profiling) plus example \"apps\" and configuration templates for end-to-end training/evaluation on SLURM or locally (e.g., via torchrun).","best_when":"You have GPU/cluster access and want a modifiable research codebase to implement new training ideas with control over distributed strategy, data pipelines, and checkpoint formats.","avoid_when":"You need a simple public HTTP API/SDK for calling the model, or you require strongly documented operational semantics (SLA, error codes, stable backward-compatible APIs) rather than a research framework.","last_evaluated":"2026-03-29T14:59:18.344774+00:00","has_mcp":false,"has_api":false,"auth_methods":["Hugging Face access token for downloading tokenizer/data (via --api_key <HUGGINGFACE_TOKEN> in setup/download_tokenizer.py)"],"has_free_tier":false,"known_gotchas":["This is not an API-based product; interactions go through CLI/Python entrypoints and SLURM workflows, which may require environment setup and GPU/distributed configuration.","Configuration templates require user adaptation (paths, dump_dir, tokenizer path, etc.), so automated agents must edit configs rather than rely on fully turnkey defaults.","Distributed training failures are likely; while relaunching via SLURM is mentioned, no structured, machine-readable error protocol is described."],"error_quality":0.0}