# JEE/NEET LLM Benchmark Dataset

## Dataset Description
This repository contains a benchmark dataset designed for evaluating the capabilities of Large Language Models (LLMs) on questions from major Indian competitive examinations:
- JEE (Main & Advanced): Joint Entrance Examination for engineering.
- NEET: National Eligibility cum Entrance Test for medical fields.
The questions are presented in image format (`.png`) as they appear in the original papers. The dataset includes metadata linking each image to its corresponding exam details (name, year, subject, question type) and correct answer(s). The benchmark framework supports various question types, including Single Correct MCQs, Multiple Correct MCQs (with partial marking for JEE Advanced), and Integer type questions.
**Current Data:**
- NEET 2024 (Code T3): 200 questions across Physics, Chemistry, Botany, and Zoology
- NEET 2025 (Code 45): 180 questions across Physics, Chemistry, Botany, and Zoology
- JEE Advanced 2024 (Paper 1 & 2): 102 questions across Physics, Chemistry, and Mathematics
- Total: 482 questions with comprehensive metadata
## Key Features

- **Multimodal Reasoning**: Uses images of questions directly, testing the multimodal reasoning capabilities of models
- **Exam-Specific Scoring**: Implements authentic scoring rules for each exam and question type, including partial marking for JEE Advanced
- **Robust API Handling**: Built-in retry mechanism and re-prompting for failed API calls or parsing errors
- **Flexible Filtering**: Filter by exam name, year, or specific question IDs for targeted evaluation
- **Comprehensive Results**: Generates detailed JSON and human-readable Markdown summaries with section-wise breakdowns
- **Easy Configuration**: Simple YAML-based configuration for models and parameters
## How to Use

### Using the `datasets` Library

The dataset is designed to be loaded using the Hugging Face `datasets` library:
```python
from datasets import load_dataset

# Load the evaluation split
dataset = load_dataset("Reja1/jee-neet-benchmark", split="test")  # Replace with your HF repo name

# Example: Access the first question
example = dataset[0]
image = example["image"]
question_id = example["question_id"]
subject = example["subject"]
correct_answers = example["correct_answer"]

print(f"Question ID: {question_id}")
print(f"Subject: {subject}")
print(f"Correct Answer(s): {correct_answers}")

# Display the image (requires Pillow)
# image.show()
```
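If you only need a subset, the loaded split can be filtered on its metadata fields before evaluation; a minimal sketch using the standard `datasets.Dataset.filter` API:

```python
from datasets import load_dataset

dataset = load_dataset("Reja1/jee-neet-benchmark", split="test")

# Keep only NEET 2024 questions, using the exam_name and exam_year fields
neet_2024 = dataset.filter(
    lambda ex: ex["exam_name"] == "NEET" and ex["exam_year"] == 2024
)
print(f"Filtered down to {len(neet_2024)} questions")
```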
### Manual Usage (Benchmark Scripts)

This repository contains scripts to run the benchmark evaluation directly:
**Clone the repository:**

```bash
# Replace with your actual repository URL
git clone https://github.com/your-username/jee-neet-benchmark
cd jee-neet-benchmark

# Ensure Git LFS is installed and pull large files if necessary
# git lfs pull
```
**Install dependencies:**

```bash
# It's recommended to use a virtual environment
python -m venv venv
source venv/bin/activate  # On Windows: venv\Scripts\activate
pip install -r requirements.txt
```
**Configure API Key:**

- Create a file named `.env` in the root directory of the project.
- Add your OpenRouter API key to this file:

  ```
  OPENROUTER_API_KEY=your_actual_openrouter_api_key_here
  ```

- **Important**: The `.gitignore` file is already configured to prevent committing the `.env` file. Never commit your API keys directly.
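For reference, a minimal sketch of how a script can pick up this key (assuming the common `python-dotenv` package; the repository may load configuration differently):

```python
import os

from dotenv import load_dotenv  # assumes the python-dotenv package is installed

load_dotenv()  # reads .env from the project root
api_key = os.environ["OPENROUTER_API_KEY"]
```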
**Configure Models:**

- Edit the `configs/benchmark_config.yaml` file.
- Modify the `openrouter_models` list to include the specific model identifiers you want to evaluate:

  ```yaml
  openrouter_models:
    - "google/gemini-2.5-pro-preview-03-25"
    - "openai/gpt-4o"
    - "anthropic/claude-3-5-sonnet-20241022"
  ```

- Ensure these models support vision input on OpenRouter.
- You can also adjust other parameters such as `max_tokens` and `request_timeout` if needed.
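For illustration, a fuller configuration might look like the sketch below; `openrouter_models`, `max_tokens`, and `request_timeout` are the parameters named in this README, and the values are only examples:

```yaml
openrouter_models:
  - "openai/gpt-4o"
max_tokens: 4096       # upper bound on tokens generated per response
request_timeout: 120   # seconds to wait for an API response
```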
**Run the benchmark:**

Basic usage (run a single model on all questions):

```bash
python src/benchmark_runner.py --config configs/benchmark_config.yaml --model "google/gemini-2.5-pro-preview-03-25"
```

Filter by exam and year:

```bash
# Run only NEET 2024 questions
python src/benchmark_runner.py --config configs/benchmark_config.yaml --model "openai/gpt-4o" --exam_name NEET --exam_year 2024

# Run only JEE Advanced 2024 questions
python src/benchmark_runner.py --config configs/benchmark_config.yaml --model "anthropic/claude-3-5-sonnet-20241022" --exam_name JEE_ADVANCED --exam_year 2024
```

Run specific questions:

```bash
# Run specific question IDs
python src/benchmark_runner.py --config configs/benchmark_config.yaml --model "google/gemini-2.5-pro-preview-03-25" --question_ids "N24T3001,N24T3002,JA24P1M01"
```

Custom output directory:

```bash
python src/benchmark_runner.py --config configs/benchmark_config.yaml --model "openai/gpt-4o" --output_dir my_custom_results
```

Available filtering options:

- `--exam_name`: Choose from `NEET`, `JEE_MAIN`, `JEE_ADVANCED`, or `all` (default)
- `--exam_year`: Choose from available years (`2024`, `2025`, etc.) or `all` (default)
- `--question_ids`: Comma-separated list of specific question IDs to evaluate (e.g., `"N24T3001,JA24P1M01"`)
**Check Results:**

- Results for each model run are saved in timestamped subdirectories within the `results/` folder.
- Each run's folder (e.g., `results/google_gemini-2.5-pro-preview-03-25_NEET_2024_20250524_141230/`) contains:
  - `predictions.jsonl`: Detailed results for each question, including:
    - Model predictions and ground truth
    - Raw LLM responses
    - Evaluation status and marks awarded
    - API call success/failure information
  - `summary.json`: Overall scores and statistics in JSON format
  - `summary.md`: Human-readable Markdown summary with:
    - Overall exam scores
    - Section-wise breakdown (by subject)
    - Detailed statistics on correct/incorrect/skipped questions
## Scoring System

The benchmark implements authentic scoring systems for each exam type:
### NEET Scoring
- Single Correct MCQ: +4 for correct, -1 for incorrect, 0 for skipped
### JEE Main Scoring
- Single Correct MCQ: +4 for correct, -1 for incorrect, 0 for skipped
- Integer Type: +4 for correct, 0 for incorrect, 0 for skipped
### JEE Advanced Scoring
- Single Correct MCQ: +3 for correct, -1 for incorrect, 0 for skipped
- Multiple Correct MCQ: Partial marking system (see the sketch after this list):
  - +4 for all correct options selected
  - +3 for 3 out of 4 correct options (when 4 are correct)
  - +2 for 2 out of 3+ correct options
  - +1 for 1 out of 2+ correct options
  - -2 for any incorrect option selected
  - 0 for skipped
- Integer Type: +4 for correct, 0 for incorrect, 0 for skipped
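As a concrete reading of the multiple-correct rules above, here is a minimal scoring sketch; the function name and signature are illustrative, not the actual API of `src/evaluation.py`:

```python
def score_jee_advanced_multiple(predicted: set[str], correct: set[str]) -> int:
    """Apply the JEE Advanced partial-marking scheme described above."""
    if not predicted:            # skipped question
        return 0
    if predicted - correct:      # any incorrect option selected
        return -2
    if predicted == correct:     # all correct options selected
        return 4
    # Only correct options were chosen, but not all of them: partial credit
    if len(correct) == 4 and len(predicted) == 3:
        return 3
    if len(correct) >= 3 and len(predicted) == 2:
        return 2
    if len(correct) >= 2 and len(predicted) == 1:
        return 1
    return 0
```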
## Advanced Features

### Retry Mechanism
- Automatic retry for failed API calls (up to 3 attempts with exponential backoff)
- Separate retry pass for questions that failed initially
- Comprehensive error tracking and reporting
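A minimal sketch of this retry pattern with exponential backoff (illustrative; the actual logic lives in `src/llm_interface.py`):

```python
import time

def call_with_retry(make_request, max_attempts: int = 3, base_delay: float = 1.0):
    """Retry a flaky API call, doubling the wait after each failure."""
    for attempt in range(1, max_attempts + 1):
        try:
            return make_request()
        except Exception:
            if attempt == max_attempts:
                raise  # give up after the final attempt
            time.sleep(base_delay * 2 ** (attempt - 1))  # 1s, 2s, 4s, ...
```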
### Re-prompting System
- If initial response parsing fails, the system automatically re-prompts the model
- Uses the previous response to ask for properly formatted answers
- Adapts prompts based on question type (MCQ vs Integer)
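Conceptually, re-prompting feeds the unparseable reply back to the model with a stricter formatting instruction; a sketch (the wording is illustrative; the real prompts live in `src/prompts.py`):

```python
def build_reprompt(previous_response: str, question_type: str) -> str:
    """Ask the model to restate its answer in the expected <answer> format."""
    if question_type == "INTEGER":
        fmt = "a number, e.g. <answer>42</answer>"
    else:
        fmt = "option identifiers, e.g. <answer>A</answer> or <answer>B,D</answer>"
    return (
        f"Your previous response was:\n{previous_response}\n\n"
        f"It could not be parsed. Reply with only your final answer as {fmt}."
    )
```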
### Comprehensive Evaluation
- Tracks multiple metrics: correct answers, partial credit, skipped questions, API failures
- Section-wise breakdown by subject
- Detailed logging with color-coded progress indicators
## Dataset Structure

- `data/metadata.jsonl`: Contains metadata for each question image (an example line is shown after this list), with fields:
  - `image_path`: Path to the question image
  - `question_id`: Unique identifier (e.g., "N24T3001")
  - `exam_name`: Exam type ("NEET", "JEE_MAIN", "JEE_ADVANCED")
  - `exam_year`: Year of the exam (integer)
  - `exam_code`: Paper/session code (e.g., "T3", "P1")
  - `subject`: Subject name (e.g., "Physics", "Chemistry", "Mathematics")
  - `question_type`: Question format ("MCQ_SINGLE_CORRECT", "MCQ_MULTIPLE_CORRECT", "INTEGER")
  - `correct_answer`: List of correct answer strings (e.g., ["A"], ["B", "C"], ["42"])
- `images/`: Contains subdirectories for each exam set:
  - `images/NEET_2024_T3/`: NEET 2024 question images
  - `images/NEET_2025_45/`: NEET 2025 question images
  - `images/JEE_ADVANCE_2024/`: JEE Advanced 2024 question images
- `src/`: Python source code for the benchmark system:
  - `benchmark_runner.py`: Main benchmark execution script
  - `llm_interface.py`: OpenRouter API interface with retry logic
  - `evaluation.py`: Scoring and evaluation functions
  - `prompts.py`: LLM prompts for different question types
  - `utils.py`: Utility functions for parsing and configuration
- `configs/`: Configuration files:
  - `benchmark_config.yaml`: Model selection and API parameters
- `results/`: Directory where benchmark results are stored (timestamped subdirectories)
- `jee-neet-benchmark.py`: Hugging Face `datasets` loading script
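Put together, a single line of `data/metadata.jsonl` would look roughly like this (the values, including the file name, are illustrative):

```json
{"image_path": "images/NEET_2024_T3/N24T3001.png", "question_id": "N24T3001", "exam_name": "NEET", "exam_year": 2024, "exam_code": "T3", "subject": "Physics", "question_type": "MCQ_SINGLE_CORRECT", "correct_answer": ["A"]}
```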
## Data Fields

The dataset contains the following fields (accessible via `datasets`):

- `image`: The question image (`datasets.Image`)
- `question_id`: Unique identifier for the question (string)
- `exam_name`: Name of the exam (e.g., "NEET", "JEE_ADVANCED") (string)
- `exam_year`: Year of the exam (int)
- `exam_code`: Paper/session code (e.g., "T3", "P1") (string)
- `subject`: Subject (e.g., "Physics", "Chemistry", "Mathematics") (string)
- `question_type`: Type of question (e.g., "MCQ_SINGLE_CORRECT", "INTEGER") (string)
- `correct_answer`: List containing the correct answer strings (list of strings).
  - For MCQs, these are option identifiers (e.g., `["1"]`, `["A"]`, `["B", "C"]`). The LLM should output the identifier as it appears in the question.
  - For INTEGER type, this is the numerical answer as a string (e.g., `["42"]`, `["12.75"]`). The LLM should output the number.
  - For some `MCQ_SINGLE_CORRECT` questions, this list contains multiple answers; the prediction is considered correct if it matches any one of them.
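The "matches any one of them" rule makes single-correct evaluation a simple membership test; an illustrative sketch (not the repository's actual `evaluation.py` code):

```python
def is_single_correct(prediction: str, correct_answer: list[str]) -> bool:
    """A prediction scores if it matches any accepted answer for the question."""
    return prediction.strip() in {ans.strip() for ans in correct_answer}
```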
## LLM Answer Format

The LLM is expected to return its answer enclosed in `<answer>` tags. For example:
- MCQ Single Correct (Option A): `<answer>A</answer>`
- MCQ Single Correct (Option 2): `<answer>2</answer>`
- MCQ Multiple Correct (Options B and D): `<answer>B,D</answer>`
- Integer Answer: `<answer>42</answer>`
- Decimal Answer: `<answer>12.75</answer>`
- Skipped Question: `SKIP`
The system parses these formats. Prompts are designed to guide the LLM accordingly.
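A minimal sketch of parsing these formats (illustrative; the actual parsing lives in `src/utils.py`):

```python
import re

def parse_answer(response: str) -> list[str] | None:
    """Extract the final answer, [] for a skip, or None if unparseable."""
    match = re.search(r"<answer>(.*?)</answer>", response, re.DOTALL)
    if match is None:
        # No tags found; a bare SKIP still counts as a deliberate skip
        return [] if "SKIP" in response.upper() else None
    content = match.group(1).strip()
    if content.upper() == "SKIP":
        return []
    return [part.strip() for part in content.split(",")]
```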
## Troubleshooting

### Common Issues
**API Key Issues:**

- Ensure your `.env` file is in the root directory
- Verify your OpenRouter API key is valid and has sufficient credits
- Check that the key has access to vision-capable models

**Model Not Found:**

- Verify the model identifier exists on OpenRouter
- Ensure the model supports vision input
- Check that your OpenRouter account has access to the specific model

**Memory Issues:**

- Reduce `max_tokens` in the config file
- Process smaller subsets using the `--question_ids` filter
- Use models with smaller context windows

**Parsing Failures:**

- The system automatically attempts re-prompting for parsing failures
- Check the raw responses in `predictions.jsonl` to debug prompt issues
- Consider adjusting the prompts in `src/prompts.py` for specific models
## Current Limitations
- Dataset Size: While comprehensive, the dataset could benefit from more JEE Main questions and additional years
- Language Support: Currently only supports English questions
- Model Dependencies: Requires models with vision capabilities available through OpenRouter
## Citation

If you use this dataset or benchmark code, please cite:

```bibtex
@misc{rejaullah_2025_jeeneetbenchmark,
  title={JEE/NEET LLM Benchmark},
  author={Md Rejaullah},
  year={2025},
  howpublished={\url{https://huggingface.co/datasets/Reja1/jee-neet-benchmark}},
}
```
## Contact
For questions, suggestions, or collaboration, feel free to reach out:
- X (Twitter): https://x.com/RejaullahmdMd
## License
This dataset and associated code are licensed under the MIT License.