

BhashaBench-Finance (BBF): Benchmarking AI on Indian Financial Knowledge

GitHub ArXiv CC BY 4.0

Overview

BhashaBench-Finance (BBF) is the first comprehensive benchmark designed to evaluate AI models on Indian financial knowledge and practices. Tailored for India's diverse financial ecosystem, regulatory framework, and economic landscape, BBF draws from 25+ official government and institutional financial exams to assess models' ability to provide accurate, policy-compliant, and contextually relevant financial advice and analysis.

Key Features

  • Languages: English and Hindi (with plans for more Indic languages)
  • Exams: 25+ unique financial government and institutional exams across India
  • Domains: 30+ financial and allied domains, spanning comprehensive financial knowledge
  • Questions: 19,433 rigorously validated, exam-based questions
  • Difficulty Levels: Easy (7,111), Medium (9,348), Hard (2,974)
  • Question Types: Multiple Choice, Assertion-Reasoning, Match the Column, Rearrange the Sequence, Fill in the Blanks, Reading Comprehension, Essay
  • Focus: Practical, regulation-aware, India-specific financial knowledge essential for financial professionals and consumers

Dataset Statistics

| Metric | Count |
|---|---|
| Total Questions | 19,433 |
| English Questions | 13,451 |
| Hindi Questions | 5,982 |
| Subject Domains | 30+ |
| Government Exams Covered | 25+ |

Dataset Structure

Test Set

The test set consists of the BhashaBench-Finance (BBF) benchmark, which contains 19,433 questions across two languages, English and Hindi.
Support for more Indic languages will be added in upcoming versions.

Subject Domains in BBF

| Subject Domain | Count |
|---|---|
| Problem Solving | 5,686 |
| Mathematics for Finance | 4,845 |
| Banking Services | 1,171 |
| Governance & Policy | 1,064 |
| Language & Communication | 946 |
| Corporate Finance & Investment | 910 |
| Commerce | 863 |
| Accounting | 773 |
| General Knowledge | 539 |
| Information Technology Finance | 490 |
| Economics & Development Studies | 274 |
| Rural Economics | 261 |
| Environmental Finance | 168 |
| Taxation & Regulatory Compliance | 155 |
| Interdisciplinary Finance | 153 |
| Data & Analytics in Finance | 127 |
| History, Sociology & Cultural Studies of Finance | 127 |
| Finance Education | 118 |
| Healthcare Economics | 114 |
| Science and Technology in Finance | 101 |
| International Finance & Trade | 83 |
| Business Management | 83 |
| Energy, Infrastructure & Finance | 82 |
| Behavioral Finance | 67 |
| Financial Markets | 47 |
| Sports, Media & Finance Linkages | 45 |
| Marketing Finance | 42 |
| Insurance & Risk Management | 42 |
| Legal Finance | 34 |
| Financial Technology | 23 |

Question Distribution

By Difficulty Level

| Level | Count | Percentage |
|---|---|---|
| Easy | 7,111 | 36.6% |
| Medium | 9,348 | 48.1% |
| Hard | 2,974 | 15.3% |

By Question Type

| Type | Count | Percentage |
|---|---|---|
| MCQ | 18,019 | 92.7% |
| Rearrange the sequence | 708 | 3.6% |
| Fill in the blanks | 286 | 1.5% |
| Assertion or Reasoning | 215 | 1.1% |
| Match the column | 119 | 0.6% |
| Reading Comprehension | 86 | 0.4% |

Usage

Since this is a gated dataset, once your access request is approved, set your Hugging Face token:

export HF_TOKEN=YOUR_TOKEN_HERE
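
Alternatively, you can authenticate from Python instead of exporting an environment variable. A minimal sketch using the huggingface_hub client (assumes huggingface_hub is installed):

# Programmatic alternative to exporting HF_TOKEN
from huggingface_hub import login
login(token="YOUR_TOKEN_HERE")  # caches the token for subsequent gated downloads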

To load the BBF dataset for a given language:

from datasets import load_dataset

# Language subset to load, e.g. 'Hindi' or 'English'
language = 'Hindi'
# Use 'test' split for evaluation
split = 'test'
# token=True picks up the HF_TOKEN set above for this gated dataset
language_data = load_dataset("bharatgenai/BhashaBench-Finance", data_dir=language, split=split, token=True)
print(language_data[0])
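
Each record can then be turned into an evaluation prompt. The sketch below is illustrative only; the column names ('question', 'options') are assumptions and should be verified against the output of print(language_data[0]) above:

# Illustrative only: build a zero-shot MCQ prompt from one record.
# NOTE: 'question' and 'options' are assumed column names; check them against the actual schema.
def build_prompt(example):
    lettered = "\n".join(f"{label}. {option}" for label, option in zip("ABCD", example["options"]))
    return f"Question: {example['question']}\n{lettered}\nAnswer with the letter of the correct option."

print(build_prompt(language_data[0]))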

Evaluation Results Summary

  • 40+ models evaluated, including GPT-based models, Gemma, Llama, Qwen, and specialized financial AI systems.

  • Language Performance Gap:

    • Consistent 5-7% performance drop from English to Hindi across most models
    • Best Hindi performance still significantly below English benchmarks
  • Domain Performance Insights:

    • Strongest domains (70%+ by top models):
      • Information Technology Finance (92.24%)
      • Business Management (84.34%)
      • History, Sociology & Cultural Studies of Finance (83.46%)
    • Most challenging domains for top models:
      • Problem Solving (47.12%)
      • Mathematics for Finance (58.04%)
      • Legal Finance (76.47%)
  • Difficulty Level Analysis:

    • Easy questions: Top models achieve 72-73% accuracy
    • Medium questions: Performance drops to 59% for best models
    • Hard questions: Significant challenge with only 40-41% accuracy
  • Question Type Performance:

    • Best: Fill in the blanks (81.82% top performance)
    • Moderate: MCQ (61.7%), Assertion/Reasoning (67.91%)
    • Challenging: Rearrange sequence (49.01%), Match column (69.75%)
  • Model Size vs Performance:

    • Larger models (Qwen3-235B, DeepSeek-v3) significantly outperform smaller models
    • Specialized financial fine-tuning shows mixed results compared to general large models

For detailed results and analysis, please refer to our blog.
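
If you want to reproduce breakdowns like those above for your own model outputs, a minimal grouped-accuracy sketch is given below; it is not the official evaluation harness, and the column names 'answer', 'difficulty_level', and 'subject_domain' are assumptions to verify against the dataset schema:

# Illustrative sketch of per-group accuracy, not the official evaluation pipeline.
# Assumes `language_data` from the Usage section and `predictions`, a list of your
# model's answers aligned with the dataset; 'answer' and the group columns are assumed names.
from collections import defaultdict

def grouped_accuracy(dataset, predictions, group_key="difficulty_level"):
    correct, total = defaultdict(int), defaultdict(int)
    for example, prediction in zip(dataset, predictions):
        group = example[group_key]
        total[group] += 1
        correct[group] += int(prediction == example["answer"])
    return {group: correct[group] / total[group] for group in total}

# e.g. grouped_accuracy(language_data, my_predictions, group_key="subject_domain")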

Applications

BBF enables evaluation of AI systems for:

  • Financial advisory services in the Indian context
  • Banking and insurance customer support systems
  • Regulatory compliance automation tools
  • Financial education platforms for Indian markets
  • Investment analysis systems that understand Indian financial instruments

Citation

Please cite our benchmark if you use it in your work:

@misc{bhashabench-finance-2025,
  title  = {BhashaBench-Finance: Benchmarking AI on Indian Financial Knowledge},
  author = {BharatGen Research Team},
  year   = {2025},
  howpublished = {\url{https://huggingface.co/datasets/bharatgenai/bhashabench-finance}},
  note   = {Accessed: YYYY-MM-DD}
}

License

This dataset is released under the CC BY 4.0 license.

Contact

For any questions or feedback, please contact:

