BhashaBench-Ayur (BBA): Pioneering India’s Ayurvedic AI Benchmark

GitHub | ArXiv | CC BY 4.0

Overview

BhashaBench-Ayur (BBA) is India's first comprehensive benchmark designed to evaluate AI models on traditional Ayurvedic knowledge and practice. As the world's oldest holistic healing system, Ayurveda requires deep understanding of ancient texts, therapeutic principles, herbal medicine, and clinical applications. BBA rigorously tests AI models' ability to comprehend and apply Ayurvedic concepts, drawing from authentic government examinations, institutional assessments, and standardized Ayurvedic education curricula across India.

Key Features

  • Languages: English and Hindi (with plans for more Indic languages including Sanskrit)
  • Exams: 50+ authentic Ayurvedic government and institutional exams across India
  • Domains: 15+ specialized Ayurvedic disciplines spanning classical texts to modern practice
  • Questions: 14,963 rigorously validated, exam-based questions
  • Difficulty Levels: Easy (7,944), Medium (6,314), Hard (705)
  • Question Types: Multiple Choice Questions (MCQ), Fill in the Blanks, Match the Column, Assertion-Reasoning
  • Focus: Traditional knowledge systems, clinical applications, herbal pharmacology, and holistic healthcare approaches

Dataset Statistics

Metric | Count
Total Questions | 14,963
English Questions | 9,348
Hindi Questions | 5,615
Subject Domains | 15+
Government Exams Covered | 50+

Question Type Distribution

Question Type | Count
Multiple Choice (MCQ) | 14,717
Fill in the Blanks | 178
Match the Column | 41
Assertion-Reasoning | 27

Dataset Structure

Test Set

The test set constitutes the full BhashaBench-Ayur (BBA) benchmark: 14,963 questions across two languages (English and Hindi), with planned expansion to Sanskrit and other regional languages.

Subject Domains in BBA

Subject Domain | Count
Kayachikitsa (General Medicine & Internal Medicine) | 3,134
Dravyaguna & Bhaishajya (Pharmacology & Therapeutics) | 2,972
Samhita & Siddhanta (Fundamental Principles) | 1,541
Sharir (Anatomy & Physiology) | 1,346
Panchakarma & Rasayana (Detoxification & Rejuvenation) | 1,308
Stri Roga & Prasuti Tantra (Gynecology & Obstetrics) | 847
Shalakya Tantra (ENT, Ophthalmology & Dentistry) | 734
Kaumarbhritya & Pediatrics (Child Health) | 714
Agad Tantra & Forensic Medicine (Toxicology) | 587
Shalya Tantra (Surgery) | 526
Swasthavritta & Public Health (Preventive Medicine) | 453
Research & Statistics | 210
Ayurvedic Literature & History | 204
Yoga & Psychology | 188
Administration, AYUSH & Miscellaneous | 119
Roga Vigyana (Diagnostics & Pathology) | 80

Difficulty Level Distribution

Difficulty Level | Count
Easy | 7,944
Medium | 6,314
Hard | 705

Usage

Since this is a gated dataset, you must first request access on the dataset page. Once your request has been approved, set your Hugging Face token:

export HF_TOKEN=YOUR_TOKEN_HERE
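
Alternatively, a one-time interactive login caches the token locally so that later calls can pick it up automatically. This is a minimal optional sketch using the standard huggingface_hub client, not a required step:

from huggingface_hub import login

# Prompts for your access token and caches it locally;
# subsequent load_dataset calls can then authenticate without HF_TOKEN.
login()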

To load the BBA dataset for a specific language:

from datasets import load_dataset

# Select the language subset to load: 'English' or 'Hindi'
language = 'Hindi'
# Use the 'test' split for evaluation
split = 'test'

# token=True authenticates with your stored Hugging Face credentials
# (e.g. the HF_TOKEN environment variable set above)
language_data = load_dataset("bharatgenai/BhashaBench-Ayur", data_dir=language, split=split, token=True)
print(language_data[0])
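
Once loaded, the split behaves like any Hugging Face Dataset object and can be inspected or filtered. The snippet below is only a sketch: the column names used for filtering (question_type, difficulty_level) are assumptions for illustration, so check language_data.column_names for the actual schema before relying on them.

# Inspect the schema and size of the split
print(language_data.column_names)
print(len(language_data))

# Example: keep only hard MCQ items.
# NOTE: 'question_type' and 'difficulty_level' are assumed field names;
# replace them with the columns reported above.
hard_mcq = language_data.filter(
    lambda row: row.get("question_type") == "MCQ" and row.get("difficulty_level") == "Hard"
)
print(len(hard_mcq))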

Evaluation Results Summary

  • 29 models evaluated, including state-of-the-art LLMs such as Qwen3-235B, Gemma-2-27B, and the Llama-3.1-8B series, as well as specialized healthcare models like Ayur Param.

  • Overall Performance:

    • Combined (BBA): Best model achieved 58.2% accuracy
    • English subset: 60.25% top performance, showing better comprehension in English
    • Hindi subset: 54.78% top performance, indicating significant challenges in traditional knowledge representation in Indic languages
  • Performance by Domain:

    • Strongest domains:

      • Research & Statistics: 91.43% (top-performing domain)
      • Swasthavritta & Public Health: 82.56%
      • Roga Vigyana (Diagnostics & Pathology): 82.5%
      • Yoga & Psychology: 75.53%
    • Most challenging domains:

      • Dravyaguna & Bhaishajya: 49.43% (lowest-performing domain)
      • Panchakarma & Rasayana: 49.54%
      • Ayurvedic Literature & History: 55.88%
  • Performance by Difficulty:

    • Easy questions: 65.18% accuracy
    • Medium questions: 50.74% accuracy
    • Hard questions: 46.24% accuracy
  • Performance by Question Type:

    • Fill in the Blanks: 62.96% (best-performing format)
    • Assertion-Reasoning: 59.26%
    • Match the Column: 58.34%
    • Multiple Choice Questions (MCQ): 51.69%
  • Key Insights:

    • Significant performance gap between research/statistics domains and traditional therapeutic practices
    • Models perform better on factual recall than on complex reasoning about traditional treatments
    • The language gap remains significant: models consistently score higher on English than on Hindi questions
    • Specialized Ayurvedic models (Ayur Param) are competitive with, but not superior to, general-purpose large models
    • Question difficulty and format significantly affect model performance
  • Implications:

    • Models struggle most with traditional therapeutic knowledge and classical text interpretation
    • Need for enhanced training on Sanskrit terminology and Ayurvedic principles
    • Integration challenges between traditional knowledge systems and modern AI architectures
    • Opportunity for domain-specific fine-tuning to bridge performance gaps

For detailed results and analysis, please refer to our blog.
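
The per-difficulty breakdown reported above can be reproduced for your own model with a few lines of Python. The helper below is only a sketch: predict_fn is a placeholder for your model's answer function, and the 'difficulty_level' and 'answer' column names are assumptions to adapt to the dataset's actual schema.

from collections import defaultdict

def accuracy_by_difficulty(dataset, predict_fn):
    # predict_fn maps a dataset row (a dict) to the model's predicted answer string.
    # 'difficulty_level' and 'answer' are assumed column names; adjust them to
    # match the fields actually present in the BBA split.
    correct, total = defaultdict(int), defaultdict(int)
    for row in dataset:
        level = row.get("difficulty_level", "unknown")
        total[level] += 1
        if predict_fn(row) == row.get("answer"):
            correct[level] += 1
    return {level: correct[level] / total[level] for level in total}

# Example usage with a trivial baseline that always predicts option 'A':
# print(accuracy_by_difficulty(language_data, lambda row: "A"))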

Significance

BhashaBench-Ayur addresses a critical gap in AI evaluation for traditional knowledge systems. As interest in integrative medicine grows globally and India strengthens its traditional healthcare sector through AYUSH initiatives, this benchmark provides essential tools for:

  • Evaluating AI systems for traditional medicine applications
  • Developing culturally aware healthcare AI solutions
  • Preserving and digitizing ancient medical knowledge
  • Supporting evidence-based integration of traditional and modern medicine

Citation

Please cite our benchmark if used in your work:

@misc{bhashabench-ayur-2025,
  title  = {BhashaBench-Ayur (BBA): Pioneering India’s Ayurvedic AI Benchmark},
  author = {BharatGen Research Team},
  year   = {2025},
  howpublished = {\url{https://huggingface.co/datasets/bharatgenai/bhashabench-ayur}},
  note   = {Accessed: YYYY-MM-DD}
}

License

This dataset is released under the CC BY 4.0 license.

Contact

For any questions or feedback, please contact:
