Pangu Pro MoE: Mixture of Grouped Experts for Efficient Sparsity
Abstract
Mixture of Grouped Experts (MoGE) improves expert load balancing and execution efficiency for large language models, enhancing throughput and the cost-to-performance ratio on Ascend NPUs.
The rise of Mixture of Experts (MoE) in Large Language Models promises a much larger parameter count and learning capacity at a small execution cost, because only a small fraction of parameters are activated for each input token. However, it is commonly observed that some experts are activated far more often than others, leading to system inefficiency when the experts are run in parallel on different devices. We therefore introduce Mixture of Grouped Experts (MoGE), which groups the experts during selection and, by design, balances the expert workload better than conventional MoE: it constrains each token to activate an equal number of experts within each predefined expert group. When model execution is distributed across multiple devices, this architectural design ensures a balanced computational load across devices, significantly enhancing throughput, particularly during inference. Building on MoGE, we construct Pangu Pro MoE on Ascend NPUs, a sparse model with 72 billion total parameters, 16 billion of which are activated for each token. The configuration of Pangu Pro MoE is optimized for Ascend 300I Duo and 800I A2 through extensive system simulation studies. Our experiments show that MoGE indeed leads to better expert load balancing and more efficient execution for both training and inference on Ascend NPUs. Pangu Pro MoE achieves an inference throughput of 1148 tokens/s per card, which can be further improved to 1528 tokens/s per card with speculative acceleration, outperforming comparable 32B and 72B dense models. Furthermore, we achieve an excellent cost-to-performance ratio for model inference on Ascend 300I Duo. Our studies show that Ascend NPUs are capable of training Pangu Pro MoE with massive parallelization, making it a leading model within the sub-100B total parameter class and outperforming prominent open-source models such as GLM-Z1-32B and Qwen3-32B.
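To make the grouped-routing idea concrete, below is a minimal PyTorch sketch of group-wise top-k expert selection, assuming 8 groups of 8 experts with one expert activated per group. The function name, tensor layout, and softmax/normalization details are illustrative assumptions for this sketch, not the paper's exact implementation.

```python
# Minimal sketch of MoGE-style grouped routing (illustrative assumptions:
# experts are laid out as contiguous, equally sized groups, and routing
# weights are softmax scores renormalized over the selected experts).
import torch
import torch.nn.functional as F

def grouped_topk_routing(router_logits: torch.Tensor,
                         num_groups: int,
                         topk_per_group: int):
    """Select an equal number of experts from each expert group.

    router_logits: [num_tokens, num_experts], experts arranged as
                   `num_groups` contiguous groups of equal size.
    Returns global expert ids [num_tokens, num_groups * topk_per_group]
    and their normalized routing weights.
    """
    num_tokens, num_experts = router_logits.shape
    experts_per_group = num_experts // num_groups

    # Make the group dimension explicit: [tokens, groups, experts_per_group].
    logits = router_logits.view(num_tokens, num_groups, experts_per_group)
    scores = F.softmax(logits, dim=-1)

    # Top-k *within each group*: every token activates the same number of
    # experts per group, so each group (and each device hosting a group)
    # receives a balanced share of the computational load.
    topk_scores, topk_idx = scores.topk(topk_per_group, dim=-1)

    # Convert group-local indices back to global expert ids.
    group_offset = torch.arange(num_groups, device=router_logits.device) * experts_per_group
    expert_ids = (topk_idx + group_offset.view(1, num_groups, 1)).flatten(1)

    # Renormalize the selected weights so they sum to 1 per token.
    weights = topk_scores.flatten(1)
    weights = weights / weights.sum(dim=-1, keepdim=True)
    return expert_ids, weights

# Example: 64 routed experts in 8 groups, one expert activated per group.
logits = torch.randn(4, 64)
ids, w = grouped_topk_routing(logits, num_groups=8, topk_per_group=1)
print(ids.shape, w.shape)  # torch.Size([4, 8]) torch.Size([4, 8])
```

Because the top-k is taken per group rather than over all experts, the number of activated experts per group is fixed by construction, which is what keeps the per-device load balanced when each group is placed on a separate device.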
Community
The paper on Huawei's first open model - Pangu Pro MoE 🔥
This is an automated message from the Librarian Bot. I found the following papers similar to this paper.
The following papers were recommended by the Semantic Scholar API
- Pangu Ultra MoE: How to Train Your Big MoE on Ascend NPUs (2025)
- dots.llm1 Technical Report (2025)
- EAQuant: Enhancing Post-Training Quantization for MoE Models via Expert-Aware Optimization (2025)
- MegaScale-MoE: Large-Scale Communication-Efficient Training of Mixture-of-Experts Models in Production (2025)
- Faster MoE LLM Inference for Extremely Large Models (2025)
- Unlocking Personalized Knowledge in Federated Large Language Model: The Power of Mixture of Experts (2025)
- Mixture of Weight-shared Heterogeneous Group Attention Experts for Dynamic Token-wise KV Optimization (2025)