---
dataset_info:
  features:
    - name: question_text
      dtype: string
    - name: choices
      dtype: string
    - name: correct_choice
      dtype: string
    - name: domain
      dtype: string
    - name: difficulty
      dtype: int64
  splits:
    - name: test
      num_bytes: 337397
      num_examples: 865
  download_size: 133986
  dataset_size: 337397
configs:
  - config_name: default
    data_files:
      - split: test
        path: data/test-*
---

# 3LM Native STEM Arabic Benchmark

## Dataset Summary

The 3LM Native STEM dataset contains 865 multiple-choice questions (MCQs) curated from real Arabic educational sources. It targets middle- to high-school-level content in Biology, Chemistry, Physics, Mathematics, and Geography. The benchmark is designed to evaluate Arabic large language models on structured, domain-specific knowledge.
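
A minimal loading sketch using the 🤗 Datasets library. The repo id `basma-b/NativeQA` is an assumption inferred from this page; substitute the dataset's actual Hub path if it differs:

```python
from datasets import load_dataset

# NOTE: the repo id below is an assumption; replace it with the actual Hub path.
ds = load_dataset("basma-b/NativeQA", split="test")

print(len(ds))                 # expected: 865 examples
print(ds.features)             # question_text, choices, correct_choice, domain, difficulty
print(ds[0]["question_text"])  # first question, in Arabic
```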

## Motivation

While Arabic NLP has seen growth in cultural and linguistic tasks, scientific reasoning remains underrepresented. This dataset fills that gap by using authentic, in-domain Arabic materials to evaluate factual and conceptual understanding.

## Dataset Structure

- `question_text`: Arabic text of the MCQ (fully self-contained)
- `choices`: List of four choices labeled "أ", "ب", "ج", "د"
- `correct_choice`: Correct answer (letter only)
- `domain`: Subject area (e.g., biology, physics)
- `difficulty`: Score from 1 (easy) to 10 (hard)
Example record:

```json
{
  "question_text": "ما هو الغاز الذي يتنفسه الإنسان؟",
  "choices": ["أ. الأكسجين", "ب. ثاني أكسيد الكربون", "ج. النيتروجين", "د. الهيدروجين"],
  "correct_choice": "أ",
  "domain": "biology",
  "difficulty": 3
}
```
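
Note that the metadata above declares `choices` with dtype `string`, while the example displays a list, so the field may arrive as a serialized string depending on how the Parquet files were written. A small normalization sketch, assuming JSON or Python-literal encoding (an assumption worth verifying against the actual data):

```python
import ast
import json

def parse_choices(raw):
    """Return the choices as a list, whether stored natively or as an encoded string."""
    if isinstance(raw, list):
        return raw
    try:
        return json.loads(raw)        # JSON-encoded, e.g. '["أ. ...", "ب. ..."]'
    except json.JSONDecodeError:
        return ast.literal_eval(raw)  # fall back to Python-literal syntax

choices = parse_choices('["أ. الأكسجين", "ب. ثاني أكسيد الكربون", "ج. النيتروجين", "د. الهيدروجين"]')
assert len(choices) == 4
```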

## Data Sources

Collected from open-access Arabic textbooks, worksheets, and question banks sourced through web crawling and regex-based filtering.
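
The exact crawling and filtering patterns are not published here; the following is a purely illustrative sketch of regex-based MCQ filtering keyed on the Arabic choice labels أ/ب/ج/د, with every pattern detail an assumption:

```python
import re

# Illustrative only: the pipeline's actual patterns are assumptions here.
LABELS = "أبجد"
MCQ_RE = re.compile(
    rf"(?P<question>[^\n]+؟)\n"                         # question line ending with '؟'
    rf"(?P<choices>(?:[{LABELS}][.)][^\n]+\n?){{4}})"   # four labeled choice lines
)

def extract_mcqs(text):
    """Yield (question, [four choices]) pairs from raw crawled text."""
    for match in MCQ_RE.finditer(text):
        choices = re.findall(rf"[{LABELS}][.)][^\n]+", match.group("choices"))
        yield match.group("question").strip(), choices
```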

## Data Curation

  1. OCR Processing: Dual-stage OCR (text + math) using Pix2Tex for LaTeX support.
  2. Extraction Pipeline: LLMs used to extract Q&A pairs.
  3. Classification: Questions tagged by type, domain, and difficulty.
  4. Standardization: Items reformatted as MCQs, with the position of the correct answer randomized (see the sketch after this list).
  5. Manual Verification: All questions reviewed by Arabic speakers with STEM backgrounds.
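
A minimal sketch of the answer-position randomization in step 4; the label scheme and helper are illustrative, not the authors' actual code:

```python
import random

LABELS = ["أ", "ب", "ج", "د"]

def shuffle_choices(choices, correct_choice, rng=random):
    """Shuffle answer options; return relabeled choices and the new correct label.

    `choices` are strings like "أ. الأكسجين"; `correct_choice` is the original label.
    """
    # Strip the old labels and remember which option text was correct.
    texts = [c.split(".", 1)[1].strip() for c in choices]
    correct_text = texts[LABELS.index(correct_choice)]

    rng.shuffle(texts)

    relabeled = [f"{label}. {text}" for label, text in zip(LABELS, texts)]
    new_correct = LABELS[texts.index(correct_text)]
    return relabeled, new_correct
```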

## Code and Paper

Paper: [3LM: Bridging Arabic, STEM, and Code through Benchmarking](https://arxiv.org/abs/2507.15850)

## Licensing

Falcon LLM License

## Citation

```bibtex
@article{boussaha2025threeLM,
  title={3LM: Bridging Arabic, STEM, and Code through Benchmarking},
  author={Boussaha, Basma El Amel and AlQadi, Leen and Farooq, Mugariya and Alsuwaidi, Shaikha and Campesan, Giulia and Alzubaidi, Ahmed and Alyafeai, Mohammed and Hacid, Hakim},
  journal={arXiv preprint arXiv:2507.15850},
  year={2025}
}
```