MiniMax M2.7 is a reasoning-focused large language model developed by MiniMax (Shanghai), released on March 18, 2026. Built on a Sparse Mixture-of-Experts (MoE) architecture with 230 billion total parameters (10 billion active per token), it delivers strong coding and reasoning performance at low per-token cost. M2.7 introduces self-evolving capabilities, reduced hallucination rates, and native multi-agent collaboration support.
- Cost-Efficient Reasoning: Achieves intelligence scores comparable to GLM-5 and Kimi K2.5 while costing roughly one-third as much to run, with 20% fewer output tokens needed for equivalent results.
- Low Hallucination Rate: 34% hallucination rate on the AA-Omniscience Index, lower than Claude Sonnet 4.6 (46%) and Gemini 3.1 Pro Preview (50%).
- Multi-Agent Collaboration: Native support for multi-agent orchestration and complex skill coordination, including dynamic tool discovery and invocation at runtime.
- Self-Evolution: M2.7 can perform 30-50% of reinforcement learning research workflows autonomously, representing early steps toward model self-improvement.
- Autonomous Coding & Debugging: Strong SWE-Pro (56.2%) and Terminal-Bench 2 (57.0%) scores make it well-suited for live debugging, root cause analysis, and multi-file code generation.
- Cost-Sensitive Agent Workflows: At $0.30/$1.20 per million input/output tokens, it is ideal for high-volume agentic tasks where per-token cost matters.
- Document & Report Generation: Handles full document generation across Word, Excel, and PowerPoint formats, including financial modeling workflows.
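The dynamic tool discovery and invocation mentioned above follows the common function-calling pattern used by OpenAI-compatible chat APIs. A minimal sketch of the agent-side dispatch loop — the tool name, schema, and helper below are hypothetical illustrations, not a confirmed MiniMax API:

```python
import json

# Hypothetical tool schema in the OpenAI-compatible function-calling format.
# "search_codebase" is an invented example tool, not part of any MiniMax API.
TOOLS = [{
    "type": "function",
    "function": {
        "name": "search_codebase",
        "description": "Search the repository for a symbol or string.",
        "parameters": {
            "type": "object",
            "properties": {"query": {"type": "string"}},
            "required": ["query"],
        },
    },
}]

def dispatch_tool_call(tool_call: dict) -> str:
    """Route a model-emitted tool call to a local implementation."""
    name = tool_call["function"]["name"]
    args = json.loads(tool_call["function"]["arguments"])
    if name == "search_codebase":
        # Stubbed result; a real agent would run a grep or index lookup here.
        return f"3 matches for {args['query']!r}"
    raise ValueError(f"unknown tool: {name}")

# Simulated assistant turn containing a tool call, in the shape an
# OpenAI-compatible API returns it.
call = {"function": {"name": "search_codebase",
                     "arguments": json.dumps({"query": "MoE router"})}}
print(dispatch_tool_call(call))  # → 3 matches for 'MoE router'
```

In a multi-agent setup, the same dispatch layer is what hands results between agents: each tool result is appended back to the conversation as a `tool`-role message before the next model turn.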
| Capability | Description |
|---|---|

| Reasoning | AA Intelligence Index: 50 (ranked #1 of 136 models, as of March 2026). Strong system-level reasoning and trace analysis |
| Coding | SWE-Pro 56.2%, SWE-bench Verified 78%, Terminal-Bench 2 57.0%, PinchBench 86.2% |
| Multimodal | Text only. No image, audio, or video input |
| Response Speed | ~52.7 tokens/sec (slightly below the 54.9 t/s median for comparable models). Time to first token (TTFT): 2.05 s |
| Context Window | 204.8K tokens |
| Max Output | 131.1K tokens |
| Tool Use | Dynamic tool search, multi-agent handoffs, dependency tracking across parallel workstreams |
| Multilingual | SWE Multilingual 76.5% |
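The context and output limits in the table translate into a simple prompt-budget check: a request fits only if the requested output is within the output cap and prompt plus output fit the context window. A minimal sketch using the figures above:

```python
# Limits from the capability table (tokens).
CONTEXT_WINDOW = 204_800
MAX_OUTPUT = 131_100

def fits(prompt_tokens: int, requested_output: int) -> bool:
    """True if the prompt plus the reserved output budget fit the model's limits."""
    if requested_output > MAX_OUTPUT:
        return False
    return prompt_tokens + requested_output <= CONTEXT_WINDOW

print(fits(100_000, 50_000))   # → True: 150K total fits in the 204.8K window
print(fits(100_000, 131_100))  # → False: 231.1K total exceeds the window
```

Note the output cap counts against the shared context window, so a near-maximum output leaves correspondingly little room for the prompt.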
- Text-only input; no image, audio, or video support.
- Some independent benchmarks (e.g., BridgeBench for vibe coding) show regression from M2.5.
- Open weights released under a non-commercial license; commercial use requires a separate agreement.
| Model | Input ($/1M Tokens) | Cache Write ($/1M Tokens) | Cache Read ($/1M Tokens) | Output ($/1M Tokens) |
|---|---|---|---|---|
| MiniMax M2.7 | 0.30 | 0.375 | 0.06 | 1.20 |
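A request's cost under this schedule is a weighted sum over the four token classes. A minimal sketch, with the rates taken from the table and interpreted as USD per million tokens (matching the per-million pricing quoted earlier):

```python
# Rates from the pricing table, interpreted as USD per 1M tokens.
RATES = {"input": 0.30, "cache_write": 0.375, "cache_read": 0.06, "output": 1.20}

def request_cost(tokens: dict) -> float:
    """Cost in USD for one request, given token counts per billing class."""
    return sum(RATES[k] * tokens.get(k, 0) / 1_000_000 for k in RATES)

# Example: 80K prompt tokens served from cache, 20K fresh input, 5K output.
cost = request_cost({"cache_read": 80_000, "input": 20_000, "output": 5_000})
print(f"${cost:.4f}")  # → $0.0168
```

Because cache reads are billed at one-fifth the input rate, agent loops that repeatedly resend a long system prompt benefit disproportionately from prompt caching.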