Ant Group open-sources Ling-2.5-1T and Ring-2.5-1T AI models
Ant Group has released two new trillion-parameter AI models—Ling-2.5-1T and Ring-2.5-1T—as the latest upgrades to its open-source model family. The company says both models are available now under open licenses on Hugging Face and ModelScope, expanding the Ling 2.0 series it first unveiled in October 2025.
What Ant released
Ling-2.5-1T is positioned as Ant’s flagship “non-thinking” model focused on higher reasoning efficiency, “fine-grained preference alignment,” and native support for agent-style interaction. Ant also claims it supports context lengths up to 1 million tokens, a spec aimed at long-document and long-session use cases.
Ring-2.5-1T is framed differently: Ant calls it the world’s first hybrid linear-architecture “thinking” model, built for advanced reasoning tasks.
The benchmark claims Ant is highlighting
Ant is leaning heavily on efficiency and academic-style benchmarks to sell the upgrade.
For Ling-2.5-1T, Ant says the model matches the performance of "frontier thinking models" on AIME 2026 while using about 5,890 tokens per problem, versus the 15,000–23,000 tokens those models typically consume.
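As a rough sanity check on that efficiency claim, the figures Ant cites imply the following token savings (a back-of-the-envelope sketch using only the numbers above; actual per-problem usage will vary):

```python
# Back-of-the-envelope check of Ant's AIME token-efficiency claim,
# using only the figures cited in the announcement.
ling_tokens = 5_890                 # reported average for Ling-2.5-1T
frontier_range = (15_000, 23_000)   # typical range Ant cites for "thinking" models

for frontier in frontier_range:
    savings = 1 - ling_tokens / frontier
    print(f"vs {frontier:,} tokens: ~{savings:.0%} fewer tokens")
```

Against the low and high ends of the cited range, that works out to roughly 61% and 74% fewer tokens, which is the scale of saving Ant is advertising.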
For Ring-2.5-1T, Ant cites "gold-tier results" on competition benchmarks: 35/42 on IMO 2025 (gold-medal standard) and 105/126 on CMO 2025 (above the cutoff for China's national team).
The bigger model family: Ling, Ring, and Ming
Ant describes these releases as part of a broader refresh of its open-source model lineup, organized into three series:
- Ling (non-thinking models)
- Ring (reasoning-optimized thinking models)
- Ming (multimodal series)
It also notes that on Feb. 11, it released Ming-Flash-Omni-2.0, which it calls the industry’s first model to unify speech, audio, and music in a single architecture.
Where developers can access the models
Ant says the models are available under open licenses on:
- Hugging Face
- ModelScope
Why it matters for crypto
- Open models are becoming “infrastructure,” not just research. Large, openly available models can be integrated into wallets, exchanges, compliance tooling, and onchain analytics—especially where teams want control over deployment and data handling.
- Ant is explicitly tying AI to fintech-scale systems. Ant describes itself as a digital technology provider using AI and blockchain to support partners—this kind of model release signals continued investment in that stack.
- Long-context and agent interaction are relevant to real workflows. If the 1M-token context and "native agent interaction" claims hold up, the models could handle tasks like policy-heavy document processing, ops automation, and complex decision support, all areas crypto firms increasingly care about.
What to watch next
- Independent validation of the benchmark and efficiency claims, especially the AIME token-usage comparison and the “hybrid linear-architecture” benefits in production settings.
- Adoption signals in open-source ecosystems (fine-tunes, forks, and toolchains built around Ling/Ring/Ming).
- Whether Ant expands model access and documentation beyond listings, with clearer guidance on enterprise deployment and safety controls for agent use cases.
Source: Ant Group Press Release