---
dataset_info:
features:
- name: problem
dtype: string
- name: answer
dtype: string
- name: problem_image_1
dtype: image
- name: problem_image_2
dtype: image
- name: answer_image_1
dtype: image
- name: answer_image_2
dtype: image
splits:
- name: LC219ALP038EV_problems
num_bytes: 9123784
num_examples: 50
- name: LC004ALP000IV_problems
num_bytes: 47181
num_examples: 53
- name: LC034ALP000EV_problems
num_bytes: 7750282
num_examples: 66
- name: LC065ALP000IV_problems
num_bytes: 3787936
num_examples: 72
- name: LC023ALP000EV_problems
num_bytes: 106057
num_examples: 189
- name: LC021ALP000IV_problems
num_bytes: 66166
num_examples: 135
- name: LC014ALP000EV_problems
num_bytes: 32705564
num_examples: 20
- name: LC032ALP000IV_problems
num_bytes: 6154578
num_examples: 31
- name: LC033ALP032EV_problems
num_bytes: 2911677
num_examples: 66
- name: LC568ALP000EV_problems
num_bytes: 2202668
num_examples: 28
- name: LC022ALP000EV_problems
num_bytes: 984663
num_examples: 137
- name: LC023ALP000IV_problems
num_bytes: 107414
num_examples: 180
- name: LC021ALP000EV_problems
num_bytes: 58562
num_examples: 135
- name: LC065ALP000EV_problems
num_bytes: 3307395
num_examples: 72
- name: LC014ALP000IV_problems
num_bytes: 32706612
num_examples: 20
- name: LC004ALP000EV_problems
num_bytes: 46230
num_examples: 53
- name: LC034ALP000IV_problems
num_bytes: 5970070
num_examples: 66
- name: LC219ALP038IV_problems
num_bytes: 10978459
num_examples: 50
- name: LC568ALP000IV_problems
num_bytes: 1934458
num_examples: 28
- name: LC033ALP032IV_problems
num_bytes: 2893013
num_examples: 66
- name: LC022ALP000IV_problems
num_bytes: 936247
num_examples: 137
- name: LC032ALP000EV_problems
num_bytes: 5782811
num_examples: 31
- name: LC003ALP100EV_problems
num_bytes: 18540
num_examples: 55
- name: LC003ALP100IV_problems
num_bytes: 20461
num_examples: 55
download_size: 68388663
dataset_size: 130600828
configs:
- config_name: default
data_files:
- split: LC004ALP000IV_problems
path: data/LC004ALP000IV_problems-*
- split: LC034ALP000EV_problems
path: data/LC034ALP000EV_problems-*
- split: LC065ALP000IV_problems
path: data/LC065ALP000IV_problems-*
- split: LC023ALP000EV_problems
path: data/LC023ALP000EV_problems-*
- split: LC021ALP000IV_problems
path: data/LC021ALP000IV_problems-*
- split: LC014ALP000EV_problems
path: data/LC014ALP000EV_problems-*
- split: LC032ALP000IV_problems
path: data/LC032ALP000IV_problems-*
- split: LC033ALP032EV_problems
path: data/LC033ALP032EV_problems-*
- split: LC568ALP000EV_problems
path: data/LC568ALP000EV_problems-*
- split: LC219ALP038EV_problems
path: data/LC219ALP038EV_problems-*
- split: LC022ALP000EV_problems
path: data/LC022ALP000EV_problems-*
- split: LC023ALP000IV_problems
path: data/LC023ALP000IV_problems-*
- split: LC021ALP000EV_problems
path: data/LC021ALP000EV_problems-*
- split: LC065ALP000EV_problems
path: data/LC065ALP000EV_problems-*
- split: LC014ALP000IV_problems
path: data/LC014ALP000IV_problems-*
- split: LC004ALP000EV_problems
path: data/LC004ALP000EV_problems-*
- split: LC034ALP000IV_problems
path: data/LC034ALP000IV_problems-*
- split: LC219ALP038IV_problems
path: data/LC219ALP038IV_problems-*
- split: LC568ALP000IV_problems
path: data/LC568ALP000IV_problems-*
- split: LC033ALP032IV_problems
path: data/LC033ALP032IV_problems-*
- split: LC022ALP000IV_problems
path: data/LC022ALP000IV_problems-*
- split: LC032ALP000EV_problems
path: data/LC032ALP000EV_problems-*
- split: LC003ALP100EV_problems
path: data/LC003ALP100EV_problems-*
- split: LC003ALP100IV_problems
path: data/LC003ALP100IV_problems-*
---

# IRLBench: A Multi-modal, Culturally Grounded, Parallel Irish-English Benchmark for Open-Ended LLM Reasoning Evaluation

## Overview

Recent advances in Large Language Models (LLMs) have demonstrated promising knowledge and reasoning abilities, yet their performance in multilingual and low-resource settings remains underexplored. Existing benchmarks often exhibit cultural bias, restrict evaluation to text-only formats, rely on multiple-choice questions, and, more importantly, offer little coverage of extremely low-resource languages. To address these gaps, we introduce IRLBench, a benchmark presented in parallel English and Irish, a language classified as definitely endangered by UNESCO. Our benchmark consists of 12 representative subjects developed from the 2024 Irish Leaving Certificate exams, enabling fine-grained analysis of model capabilities across domains. By framing the task as long-form generation and leveraging the official marking scheme, it supports comprehensive evaluation not only of correctness but also of language fidelity. Extensive experiments with leading closed-source and open-source LLMs reveal a persistent performance gap between English and Irish: models produce valid Irish responses less than 80% of the time, and the best-performing model answers correctly 55.8% of the time in Irish compared with 76.2% in English. We release IRLBench and an accompanying evaluation codebase to enable future research on robust, culturally aware multilingual AI development.
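
As a rough illustration of the language-fidelity dimension, the sketch below estimates how often a model answers in Irish using the off-the-shelf `langid` package. This is an assumption for illustration only, not the official IRLBench scorer; the ISO 639-1 code for Irish is `ga`.

```python
# Illustrative sketch only (not the official IRLBench evaluation code):
# estimate language fidelity as the fraction of model outputs that an
# off-the-shelf language identifier labels as Irish ('ga').
import langid  # pip install langid


def language_fidelity(outputs, target_lang="ga"):
    """Return the fraction of outputs whose detected language is target_lang."""
    if not outputs:
        return 0.0
    hits = sum(1 for text in outputs if langid.classify(text)[0] == target_lang)
    return hits / len(outputs)


# Hypothetical model outputs: one in Irish, one in English.
sample_outputs = ["Is é 42 an freagra.", "The answer is 42."]
print(language_fidelity(sample_outputs))  # fraction of outputs detected as Irish
```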
## Usage

Load the dataset directly with the `datasets` library:

```python
from datasets import load_dataset

ds = load_dataset("ReliableAI/IRLBench")
```
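
Each split corresponds to one exam; the paired `EV`/`IV` suffixes in the split names presumably distinguish the parallel English and Irish versions. A minimal sketch for loading a single split (using `LC003ALP100EV_problems` from the metadata above as an example) and accessing its fields:

```python
from datasets import load_dataset

# Load one split by name; all split names are listed in the dataset metadata above.
split_ds = load_dataset("ReliableAI/IRLBench", split="LC003ALP100EV_problems")

example = split_ds[0]
print(example["problem"])  # exam question text
print(example["answer"])   # marking-scheme answer text

# Image fields (problem_image_1/2, answer_image_1/2) are decoded as PIL images
# when a figure accompanies the question or answer, and may be None otherwise.
if example["problem_image_1"] is not None:
    example["problem_image_1"].show()
```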