The Ling-2.6 series is designed for real-world agents that require fast responses, strong execution, and high token efficiency, and is offered in SKUs of several sizes.
Papers
LLaDA2.0-Uni: Unifying Multimodal Understanding and Generation with Diffusion Large Language Model
DR-Venus: Towards Frontier Edge-Scale Deep Research Agents with Only 10K Open Data
The newest flagship non-reasoning model series.
Ming is the multi-modal series of any-to-any models developed by the Ant Ling team.
- inclusionAI/Ming-flash-omni-2.0 • Any-to-Any • Updated • 5.44k • 265
- inclusionAI/Ming-omni-tts-16.8B-A3B • Text-to-Speech • 18B • Updated • 192 • 34
- inclusionAI/Ming-omni-tts-0.5B • Text-to-Speech • 2B • Updated • 4.68k • 36
- inclusionAI/Ming-omni-tts-tokenizer-12Hz • Audio-to-Audio • 0.8B • Updated • 26 • 9
- Zooming without Zooming: Region-to-Image Distillation for Fine-Grained Multimodal Perception • Paper • 2602.11858 • Published • 63
- inclusionAI/ZwZ-4B • Image-Text-to-Text • 5B • Updated • 264 • 32
- inclusionAI/ZwZ-8B • Image-Text-to-Text • 9B • Updated • 390 • 45
- inclusionAI/ZwZ-RL-VQA • Viewer • Updated • 111k • 1.98k • 13
- LLaDA2.0-Uni: Unifying Multimodal Understanding and Generation with Diffusion Large Language Model • Paper • 2604.20796 • Published • 239
- inclusionAI/LLaDA2.0-Uni • Any-to-Any • 16B • Updated • 1.81k • 243
- inclusionAI/LLaDA2.0-Uni-FP8 • Any-to-Any • 16B • Updated • 37 • 3
- LLaDA2.0: Scaling Up Diffusion Language Models to 100B • Paper • 2512.15745 • Published • 88
Ring is a reasoning MoE LLM open-sourced by InclusionAI and derived from Ling.
The Agent Runtime for Self-Improvement
- Ming-Omni: A Unified Multimodal Model for Perception and Generation • Paper • 2506.09344 • Published • 32
- inclusionAI/Ming-Lite-Omni • Any-to-Any • 19B • Updated • 51 • 199
- inclusionAI/Ming-Lite-Omni-1.5 • Any-to-Any • Updated • 155 • 86
- inclusionAI/Ming-UniAudio-16B-A3B • Any-to-Any • 18B • Updated • 81 • 79
A collection of TwinFlow-accelerated diffusion models.
GroveMoE is an open-source family of large language models developed by the AGI Center, Ant Research Institute.
AReaL-boba-2