IndicMMLU-Pro: Benchmarking Indic Large Language Models on Multi-Task Language Understanding
Abstract
Spoken by more than 1.5 billion people in the Indian subcontinent, Indic languages present unique challenges and opportunities for natural language processing (NLP) research due to their rich cultural heritage, linguistic diversity, and complex structures. IndicMMLU-Pro is a comprehensive benchmark designed to evaluate Large Language Models (LLMs) across Indic languages, building upon the MMLU-Pro (Massive Multitask Language Understanding Pro) framework. Covering major languages such as Hindi, Bengali, Gujarati, Marathi, Kannada, Punjabi, Tamil, Telugu, and Urdu, our benchmark addresses the unique challenges and opportunities presented by the linguistic diversity of the Indian subcontinent. The benchmark encompasses a wide range of tasks in language comprehension, reasoning, and generation, meticulously crafted to capture the intricacies of Indian languages. IndicMMLU-Pro provides a standardized evaluation framework to push the research boundaries in Indic language AI, facilitating the development of more accurate, efficient, and culturally sensitive models. This paper outlines the benchmark's design principles, task taxonomy, and data collection methodology, and presents baseline results from state-of-the-art multilingual models.
Highlights
IndicMMLU-Pro is a benchmark designed to evaluate Large Language Models (LLMs) across nine major Indic languages, adapting the MMLU-Pro framework to assess linguistic comprehension, reasoning, and generative capabilities.
Comprehensive Indic NLP Benchmark: Introduces IndicMMLU-Pro, a multilingual benchmark for nine Indic languages (Hindi, Bengali, Telugu, Marathi, Tamil, Gujarati, Urdu, Kannada, and Punjabi), adapted from MMLU-Pro for robust AI evaluation.
High-Quality Translation & Evaluation Pipeline: Utilizes IndicTrans2 for dataset creation, back-translation for quality assurance, and multiple validation metrics (chrF++, BLEU, METEOR, TER, SacreBLEU) to ensure linguistic fidelity.
Baseline Model Performance Analysis: Establishes performance benchmarks across state-of-the-art multilingual models (including GPT-4o, IndicBERT, MuRIL, and XLM-RoBERTa), revealing substantial performance gaps and highlighting areas for improvement in Indic NLP.
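To illustrate the back-translation quality check described above, the sketch below implements a simplified, sentence-level version of the chrF metric (character n-gram F-score) in pure Python. This is an illustrative approximation, not the official chrF++ or SacreBLEU implementation used in the paper; in practice one would call the `sacrebleu` library, and the example texts are hypothetical.

```python
from collections import Counter

def char_ngrams(text: str, n: int) -> Counter:
    """Count character n-grams, with spaces removed as in chrF."""
    s = text.replace(" ", "")
    return Counter(s[i:i + n] for i in range(len(s) - n + 1))

def simple_chrf(hypothesis: str, reference: str,
                max_n: int = 6, beta: float = 2.0) -> float:
    """Simplified sentence-level chrF: mean F-beta over n-gram orders 1..max_n.

    In a back-translation pipeline, `hypothesis` would be the text translated
    back into the source language and `reference` the original source text;
    higher scores suggest the forward translation preserved meaning.
    """
    scores = []
    for n in range(1, max_n + 1):
        hyp, ref = char_ngrams(hypothesis, n), char_ngrams(reference, n)
        if not hyp or not ref:
            continue  # text too short for this n-gram order
        overlap = sum((hyp & ref).values())  # clipped n-gram matches
        prec = overlap / sum(hyp.values())
        rec = overlap / sum(ref.values())
        if prec + rec == 0:
            scores.append(0.0)
            continue
        # F-beta with beta=2 weights recall twice as heavily as precision
        scores.append((1 + beta**2) * prec * rec / (beta**2 * prec + rec))
    return 100 * sum(scores) / len(scores) if scores else 0.0

# Identical strings score 100; disjoint strings score 0.
print(simple_chrf("the model answers correctly",
                  "the model answers correctly"))  # → 100.0
```

The real chrF++ additionally averages in word 1- and 2-gram F-scores (the "++"), which this sketch omits for brevity.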