Dataset Card for ZamAI Pashto Processed Dataset
Dataset Summary
The ZamAI Pashto Processed Dataset provides 28,650 carefully curated Pashto-language records that were collected, cleaned, and normalized through the ZamAI Pashto Data Processing Pipeline. It enables reproducible experimentation for Pashto NLP tasks spanning instruction tuning, summarization, and general sequence-to-sequence modelling.
Dataset Details
- Curated by: ZamAI Team
- Language(s): Pashto (ps)
- License: CC BY-ND 4.0
- Version: v1.0
- Last updated: 2025-06-23
- Source(s): BBC Pashto, Azadi Radio, public Pashto corpora, community submissions
- Pipeline Source: ZamAI Pashto Data Processing Pipeline
Dataset Structure
- Formats:
  - CSV: `pashto_cleaned_full_dataset.csv`, `pashto_cleaned_train.csv`, `pashto_cleaned_val.csv`
  - Instruction-tuning JSONL: `pashto_train_instruction.jsonl`, `pashto_val_instruction.jsonl`
  - Prompt-completion JSONL: `pashto_train_prompt_completion.jsonl`, `pashto_val_prompt_completion.jsonl`
- Fields:
  - CSV: `title`, `text`, `source`, `prompt`, `completion`
  - Instruction JSONL: `instruction`, `input`, `output`
  - Prompt-completion JSONL: `prompt`, `completion`
- Splits:
  - train: 25,785 samples
  - validation: 2,865 samples
  - full: 28,650 samples (the CSV and JSONL variants share the same counts)
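The files listed above can be loaded with standard tooling. A minimal sketch, assuming you run it from the repository root after fetching the LFS payloads:

```python
import json

import pandas as pd

# CSV splits carry the columns title, text, source, prompt, completion.
train_df = pd.read_csv("pashto_cleaned_train.csv")
val_df = pd.read_csv("pashto_cleaned_val.csv")

# JSONL splits store one record per line; here, the instruction-tuning variant.
with open("pashto_train_instruction.jsonl", encoding="utf-8") as f:
    instruction_records = [json.loads(line) for line in f]

print(len(train_df), len(val_df), len(instruction_records))
```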
Accessing the Data
Large files are stored with Git LFS. After cloning, run `git lfs pull` inside the repository to materialise the CSV and JSONL payloads. Without this step you will only see lightweight pointer files.
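Alternatively, `huggingface_hub` resolves LFS pointers for you. A minimal sketch, using the repository id from the citation below:

```python
from huggingface_hub import snapshot_download

# Downloads the dataset repository with LFS pointers resolved to real files.
local_dir = snapshot_download(
    repo_id="tasal9/ZamAI_Pashto_Dataset",
    repo_type="dataset",
)
print("Dataset files available under:", local_dir)
```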
Data Collection Process
Gathering: Pashto-language data was automatically collected from diverse online sources, including news websites (e.g., BBC Pashto), public corpora, and open-access Pashto text repositories. The pipeline uses custom Python scripts to crawl, download, and aggregate raw textual data relevant to natural language processing tasks.
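The actual crawlers live in the pipeline repository; purely as an illustration of this step, a fetch-and-extract helper might look like the sketch below (the URL and the paragraph-only extraction are assumptions, not the pipeline's real configuration):

```python
import requests
from bs4 import BeautifulSoup

def fetch_article_text(url: str) -> str:
    """Download a page and return its concatenated paragraph text."""
    response = requests.get(url, timeout=30)
    response.raise_for_status()
    soup = BeautifulSoup(response.text, "html.parser")
    # Keep only paragraph text; a real crawler would also record title and source.
    return "\n".join(p.get_text(strip=True) for p in soup.find_all("p"))

# Placeholder URL: substitute a concrete article page when crawling.
text = fetch_article_text("https://www.bbc.com/pashto")
print(text[:200])
```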
Cleaning: The cleaning process removes duplicate entries, irrelevant text, corrupted files, and non-Pashto content. Additional steps include eliminating extra whitespace, fixing encoding issues, stripping HTML tags or special symbols, and filtering out samples below a minimum length threshold to ensure quality and consistency.
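A hedged sketch of these cleaning rules follows; the raw file name, column name, and the 50-character threshold are assumptions rather than the pipeline's exact settings:

```python
import re

import pandas as pd

MIN_LENGTH = 50  # assumed threshold; the pipeline's actual minimum may differ

def clean_text(text: str) -> str:
    text = re.sub(r"<[^>]+>", " ", text)      # strip HTML tags and markup
    text = re.sub(r"\s+", " ", text).strip()  # eliminate extra whitespace
    return text

df = pd.read_csv("pashto_raw.csv")            # hypothetical raw input file
df["text"] = df["text"].astype(str).map(clean_text)
df = df.drop_duplicates(subset="text")        # remove duplicate entries
df = df[df["text"].str.len() >= MIN_LENGTH]   # filter samples below the length threshold
```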
Normalization: The text is standardized using Unicode normalization (NFKC), consistent sentence segmentation, and uniform punctuation. Pashto-specific characters and diacritics are normalized, and whitespace is harmonized across samples. The pipeline also optionally standardizes casing and applies consistent formatting to prepare the data for downstream tasks.
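The core of the normalization step can be expressed in a few lines. The character map below is illustrative (two common Arabic-to-Pashto letter foldings), not the pipeline's full table:

```python
import unicodedata

# Illustrative folding of Arabic-preferred letters into Pashto-preferred forms;
# the pipeline's real mapping table is likely broader.
CHAR_MAP = str.maketrans({
    "\u0643": "\u06A9",  # ARABIC LETTER KAF -> KEHEH
    "\u064A": "\u06CC",  # ARABIC LETTER YEH -> FARSI YEH
})

def normalize(text: str) -> str:
    text = unicodedata.normalize("NFKC", text)  # Unicode NFKC normalization
    text = text.translate(CHAR_MAP)             # Pashto-specific character folding
    return " ".join(text.split())               # harmonize whitespace
```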
Tools Used: Python, pandas, regular expressions (`re`), and custom data processing scripts contained within the ZamAI-Pashto-Data-Processing-Pipeline. Jupyter Notebooks are used for exploration, prototyping, and quality assurance.
Intended Use
- Fine-tuning Pashto seq2seq and causal language models
- Training instruction-following Pashto assistants (see the sketch after this list)
- Building evaluation sets for translation, summarisation, and dialogue experiments
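For the instruction-tuning use case, here is a minimal sketch that turns the instruction JSONL into plain training text with the `datasets` library; the concatenation template is an assumption, not part of the dataset:

```python
from datasets import load_dataset

ds = load_dataset("json", data_files={"train": "pashto_train_instruction.jsonl"})

def to_training_text(example):
    # Assumed template; adapt to your model's chat or instruction format.
    prompt = example["instruction"]
    if example["input"]:
        prompt += "\n" + example["input"]
    return {"text": prompt + "\n" + example["output"]}

train_texts = ds["train"].map(to_training_text)
print(train_texts[0]["text"][:200])
```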
Limitations and Considerations
- Coverage is skewed toward news-style prose; conversational utterances remain limited.
- Automated cleaning can occasionally over-trim content such as salutations or leave markup remnants behind; manual spot checks are encouraged for high-stakes use.
- Personally identifiable information (PII) is filtered heuristically. Downstream deployments should still review outputs for sensitive details.
Citation
If you use this dataset, please cite:
```bibtex
@misc{zamai_pashto_processed_2025,
  title        = {ZamAI Pashto Processed Dataset},
  author       = {ZamAI Team},
  year         = {2025},
  howpublished = {\url{https://huggingface.co/datasets/tasal9/ZamAI_Pashto_Dataset}}
}
```