DevRadar
🤗 HuggingFace · Significant

Xiaomi MiMo-V2.5 Open-Source LLM: Native Omni-Modal and Agent-Optimized Models Released

Xiaomi releases MiMo-V2.5 and MiMo-V2.5-Pro as open-source models under MIT license. MiMo-V2.5-Pro achieves #1 ranking among open-source models on GDPVal-AA and ClawEval benchmarks, optimized for complex agent and coding tasks. MiMo-V2.5 is a native omni-modal model with agent capabilities. Both variants support 1M-token context windows and permit commercial deployment, continued training, and fine-tuning without additional authorization. Weights available on HuggingFace.

Xiaomi MiMo · Monday, April 27, 2026 · Original source

Summary

Xiaomi releases MiMo-V2.5 (omni-modal) and MiMo-V2.5-Pro (complex agent/coding) as open-source models under MIT license, both supporting 1M-token context windows with no restrictions on commercial deployment or fine-tuning. MiMo-V2.5-Pro claims #1 open-source ranking on GDPVal-AA and ClawEval benchmarks.

Integration Strategy

When to Use This?

MiMo-V2.5-Pro is Well-Suited For:

  • Autonomous coding agents requiring long-horizon task completion
  • Complex agent pipelines with extended memory requirements
  • Code generation, debugging, and refactoring workflows
  • Evaluation frameworks (given its benchmark focus)

MiMo-V2.5 is Better For:

  • Multi-modal applications requiring unified text/image processing
  • Agents that must reason across modalities
  • Applications where omni-modal capabilities outweigh specialized coding performance

How to Integrate?

  1. Access Weights: Download from the official HuggingFace collection at huggingface.co/collections/XiaomiMiMo/mimo-v25
  2. Deployment Options: Not specified by Xiaomi; the standard HuggingFace Transformers/PEFT integration paths are the likely route
  3. Fine-tuning: Permitted under the MIT license; standard LoRA/QLoRA adapters are the expected path for resource-efficient adaptation
  4. Documentation: Blog posts available at mimo.xiaomi.com/index#blog for detailed integration guidance
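For step 1, a minimal loading sketch via Transformers is below. The repo id `XiaomiMiMo/MiMo-V2.5` is an assumption inferred from the HuggingFace collection URL, not a confirmed model name; check the model card for the exact id and whether `trust_remote_code` is actually required.

```python
# Sketch: loading a MiMo-V2.5 checkpoint with HuggingFace Transformers.
# The model id is hypothetical -- verify against the official collection.

def load_mimo(model_id: str = "XiaomiMiMo/MiMo-V2.5", device_map: str = "auto"):
    """Return (tokenizer, model) for the given checkpoint."""
    # Imports deferred so the sketch can be read/imported without transformers installed.
    from transformers import AutoModelForCausalLM, AutoTokenizer

    tokenizer = AutoTokenizer.from_pretrained(model_id, trust_remote_code=True)
    model = AutoModelForCausalLM.from_pretrained(
        model_id,
        torch_dtype="auto",      # keep the checkpoint's native precision
        device_map=device_map,   # shard across available GPUs
        trust_remote_code=True,  # omni-modal models often ship custom modeling code
    )
    return tokenizer, model

def generate(tokenizer, model, prompt: str, max_new_tokens: int = 256) -> str:
    """Single-prompt greedy generation helper."""
    inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
    output = model.generate(**inputs, max_new_tokens=max_new_tokens)
    return tokenizer.decode(output[0], skip_special_tokens=True)
```

Usage would be `tok, model = load_mimo()` followed by `generate(tok, model, "...")`; adjust `device_map` for multi-GPU sharding.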
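For step 3, a hedged LoRA setup sketch using PEFT follows. The `target_modules` names are assumptions based on common decoder architectures; Xiaomi has not published MiMo-V2.5's module layout, so inspect `model.named_modules()` for the real projection names before training.

```python
# Sketch: attaching LoRA adapters with PEFT for fine-tuning MiMo-V2.5.
# Hyperparameters and target_modules are illustrative defaults, not Xiaomi's.

LORA_DEFAULTS = {
    "r": 16,                 # adapter rank
    "lora_alpha": 32,        # scaling factor
    "lora_dropout": 0.05,
    "target_modules": ["q_proj", "k_proj", "v_proj", "o_proj"],  # assumed names
}

def attach_lora(model, **overrides):
    """Wrap a loaded model with LoRA adapters; overrides replace LORA_DEFAULTS."""
    # Deferred import: only needed at training time.
    from peft import LoraConfig, get_peft_model

    cfg = LoraConfig(task_type="CAUSAL_LM", **{**LORA_DEFAULTS, **overrides})
    return get_peft_model(model, cfg)
```

Lower `r` (e.g. 8) trades quality for memory; for QLoRA, load the base model in 4-bit first and attach the same adapters.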

Compatibility

  • Framework Support: Standard HuggingFace ecosystem compatibility expected (Transformers, vLLM, TGI)
  • Quantization: Not specified; assume standard INT8/INT4 quantization paths apply
  • Hardware Requirements: Not publicly disclosed; a 1M-token context window implies substantial VRAM for full-precision inference
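To make the VRAM point concrete, the back-of-envelope calculator below estimates KV-cache size at full context. The architecture numbers (64 layers, 8 KV heads, head dim 128, fp16) are hypothetical stand-ins, since Xiaomi has not disclosed MiMo-V2.5's configuration; only the formula itself is standard.

```python
# Sketch: KV-cache memory estimate for long-context inference.
# 2x accounts for separate K and V tensors per layer.

def kv_cache_bytes(seq_len: int, n_layers: int, n_kv_heads: int,
                   head_dim: int, dtype_bytes: int = 2) -> int:
    """Bytes of K+V cache across all layers for one sequence."""
    return 2 * n_layers * n_kv_heads * head_dim * dtype_bytes * seq_len

# Hypothetical architecture (NOT published by Xiaomi): 64 layers,
# 8 KV heads (GQA), head_dim 128, fp16 weights -> 2 bytes/value.
full_ctx = kv_cache_bytes(seq_len=1_000_000, n_layers=64,
                          n_kv_heads=8, head_dim=128)
print(f"{full_ctx / 2**30:.0f} GiB")  # -> 244 GiB for one 1M-token sequence
```

Even with aggressive grouped-query attention, a single full-context sequence under these assumptions needs hundreds of GiB of cache, which is why INT8/INT4 KV quantization or multi-GPU serving (e.g. vLLM) matters at this scale.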

Source: @XiaomiMiMo · Reference: MiMo Blog · Published: 2025 · DevRadar Analysis Date: 2026-04-27