---
license: apache-2.0
---

📚 Traditional Chinese Examination Dataset 🤗

This project is a collection of exam-related files (PDFs and MP3s) that can be used to train document/audio understanding models, index question-answer pairs, or serve as input for OCR preprocessing.


🗂️ Project Structure

project/
├── data/                     # Original exam files (PDFs, MP3s)
├── convert.py                # Script to generate metadata
├── main.py                   # Script to load the dataset with the datasets library
├── metadata.jsonl            # Output metadata file
└── README.md                 # Project documentation

🛠️ Requirements

  • Python 3.7+
  • Install required packages:
pip install datasets

🔁 Generate Metadata

Run convert.py to extract metadata from filenames and generate a metadata.jsonl file.

python convert.py

This will read all files in the data/ directory and output a line-delimited JSON file (metadata.jsonl) describing each file with fields like:

{
  "id": "01-1131-2-一公民-題目",
  "serial": "01",
  "grade": "一",
  "subject": "公民",
  "variant": null,
  "type": "題目",
  "path": "data/01-1131-2-一公民-題目.pdf",
  "format": "pdf"
}
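convert.py itself is not reproduced here, but a minimal sketch of how such filenames could be split into these fields might look like the following. The regex, field order, and helper name are illustrative assumptions, not the actual implementation:

```python
import re
from pathlib import Path
from typing import Optional

# Illustrative pattern for names like "01-1131-2-一公民-題目.pdf":
# serial, exam code, grade character, subject, an optional variant in
# parentheses, and type. The real convert.py may parse differently.
PATTERN = re.compile(
    r"^(?P<serial>\d+)-(?P<exam>\d+-\d+)-"
    r"(?P<grade>[一二三])(?P<subject>[^（(-]+)"
    r"(?:[（(](?P<variant>[^）)]+)[）)])?-"
    r"(?P<type>[^.]+)\.(?P<format>pdf|mp3)$"
)

def parse_filename(path: str) -> Optional[dict]:
    """Extract metadata fields from one exam filename, or return None
    if the name does not match the expected pattern."""
    name = Path(path).name
    m = PATTERN.match(name)
    if m is None:
        return None
    record = m.groupdict()          # unmatched groups (variant) become None
    record["id"] = Path(name).stem  # filename without extension
    record["path"] = path
    return record
```

Writing one `json.dumps(record, ensure_ascii=False)` line per file then yields the metadata.jsonl shown above.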

📥 Load the Dataset

Run main.py to load the dataset using the Hugging Face datasets library.

python main.py

This will load and print a few samples from your dataset.


🔍 Dataset Fields

  • id: Unique identifier derived from the filename
  • serial: Serial code from the filename
  • grade: 一 (1st year), 二 (2nd year), 三 (3rd year)
  • subject: e.g., 公民 (civics), 英文 (English), 數學 (mathematics)
  • variant: Optional (e.g., 體, 音), taken from parentheses in the filename
  • type: 題目 (questions), 答案 (answers), 手寫卷 (handwritten)
  • path: File path relative to the project root
  • format: pdf or mp3

🧠 Tips

  • You can use this dataset for training document/audio understanding models, indexing question-answer pairs, or OCR preprocessing.
  • Extend convert.py to extract text from PDFs or audio features if needed.
  • Easily upload the dataset to the Hugging Face Hub via datasets.Dataset.push_to_hub.

📬 Future Work

  • PDF text extraction
  • Audio transcription
  • Tagging with more metadata (exam year, term, difficulty, etc.)
  • Train-test splitting

📝 License

Apache 2.0 License


🤝 Contributions Welcome!

Feel free to fork, improve, and open a pull request!

🙏 Special Thanks

This project was made possible with the help of ChatGPT, an AI assistant by OpenAI, which provided:

  • A filename-parsing and metadata-extraction strategy
  • Dataset integration guidance
  • Project structuring and documentation
  • 🧠 Motivation to keep going when the files got messy

Thanks, AI buddy! 🤖💡