---
license: apache-2.0
---

# 📚 Traditional Chinese Examination Dataset

🤗 This project is a collection of exam-related files (PDFs and MP3s) that can be used for training document/audio understanding models, indexing question-answer pairs, or OCR preprocessing.

---

## 🗂️ Project Structure

```
project/
├── data/            # Original exam files (PDFs, MP3s)
├── convert.py       # Script to generate metadata
├── main.py          # Script to load the dataset with `datasets`
├── metadata.jsonl   # Output metadata file
└── README.md        # Project documentation
```

---

## 🛠️ Requirements

- Python 3.7+
- Install the required packages:

```bash
pip install datasets
```

---

## 🔁 Generate Metadata

Run `convert.py` to extract metadata from filenames and generate a `metadata.jsonl` file.

```bash
python convert.py
```

This reads all files in the `data/` directory and outputs a line-delimited JSON file (`metadata.jsonl`) describing each file with fields like:

```json
{
  "id": "01-1131-2-一公民-題目",
  "serial": "01",
  "grade": "一",
  "subject": "公民",
  "variant": null,
  "type": "題目",
  "path": "data/01-1131-2-一公民-題目.pdf",
  "format": "pdf"
}
```

---

## 📥 Load the Dataset

Run `main.py` to load the dataset using the Hugging Face `datasets` library.

```bash
python main.py
```

This loads and prints a few samples from the dataset.

---

## 🔍 Dataset Fields

- `id`: Unique identifier derived from the filename
- `serial`: Serial code from the filename
- `grade`: 一 (1st year), 二 (2nd year), 三 (3rd year)
- `subject`: e.g. 公民 (civics), 英文 (English), 數學 (mathematics)
- `variant`: Optional variant (e.g. 體, 音) taken from parentheses in the filename
- `type`: 題目 (questions), 答案 (answers), 手寫卷 (handwritten)
- `path`: File path
- `format`: `pdf` or `mp3`

---

## 🧠 Tips

- Use this dataset for training document/audio understanding models, indexing question-answer pairs, or OCR preprocessing.
- Extend `convert.py` to extract text from PDFs or audio features if needed.
- Upload the dataset to the Hugging Face Hub via `datasets.Dataset.push_to_hub`.
---

## 📬 Future Work

- [ ] PDF text extraction
- [ ] Audio transcription
- [ ] Tagging with more metadata (exam year, term, difficulty, etc.)
- [ ] Train/test splitting

---

## 📝 License

Apache 2.0 License

---

## 🤝 Contributions

Welcome! Feel free to fork, improve, and open a PR!

## 🙏 Special Thanks

This project was made possible with the help of **ChatGPT**, an AI assistant by [OpenAI](https://openai.com/), which provided:

- An easy filename-parsing and metadata-extraction strategy
- Dataset integration guidance
- Project structuring and documentation
- 🧠 Motivation to keep going when the files got messy

Thanks, AI buddy! 🤖💡