---
language:
- en
- rw
license: apache-2.0
task_categories:
- translation
- text-classification
tags:
- kinyarwanda
- english
- translation-quality
- quality-estimation
- human-annotations
- direct-assessment
- mt-evaluation
- african-languages
- low-resource-languages
size_categories:
- 1K<n<10K
---
# KinyCOMET Dataset: Kinyarwanda-English Translation Quality Estimation

## Dataset Description

This dataset contains 4,323 human-annotated translation quality assessments for Kinyarwanda ↔ English translation pairs. It was specifically created to train the [KinyCOMET model](https://huggingface.co/chrismazii/kinycomet_unbabel), addressing the critical need for reliable automatic evaluation of Kinyarwanda translations.

### Dataset Summary

- **Total Samples**: 4,323 translation pairs with quality scores
- **Languages**: Kinyarwanda (rw) ↔ English (en)
- **Annotation Method**: Direct Assessment (DA) following WMT standards
- **Annotators**: 15 linguistics students
- **Quality Control**: Minimum 3 annotations per sample, high-variance samples removed
- **Splits**: Train (3,497) / Validation (404) / Test (422)

### Why This Dataset Matters

Rwanda's MT ecosystem lacks reliable evaluation data for Kinyarwanda, a morphologically rich language where traditional metrics like BLEU correlate poorly with human judgment. This dataset provides:

- High-quality human judgments aligned with international standards
- Bidirectional coverage (both translation directions)
- Multiple MT systems evaluated (LLMs and traditional neural MT)
- Diverse domains (education, tourism)

## Dataset Structure

### Data Instances

Each sample contains:

```python
{
    'src': 'Umugabo ararya',      # Source text
    'mt': 'The man is eating',    # Machine translation
    'ref': 'The man is eating',   # Reference translation
    'score': 0.89,                # Normalized DA score [0-1]
    'direction': 'kin2eng'        # Translation direction
}
```
### Data Fields

- **src** (string): Source text (either Kinyarwanda or English)
- **mt** (string): Machine translation output
- **ref** (string): Human reference translation
- **score** (float): Quality score normalized to the [0, 1] range
  - Original scores were Direct Assessment ratings on a 0-100 scale
  - Higher scores indicate better translation quality
- **direction** (string): Translation direction
  - `kin2eng`: Kinyarwanda → English
  - `eng2kin`: English → Kinyarwanda
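For reference, the mapping between a raw DA rating and the stored `score` field is a simple division by 100. A minimal sketch (the helper names below are illustrative, not part of the dataset):

```python
def normalize_da(raw_score: float) -> float:
    """Map a raw Direct Assessment rating (0-100) to the stored [0, 1] score."""
    return raw_score / 100.0

def denormalize(score: float) -> float:
    """Map a stored [0, 1] score back to the 0-100 DA scale."""
    return score * 100.0

assert normalize_da(89.0) == 0.89  # matches the example instance above
```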
### Data Splits

| Split | Samples | Percentage |
|-------|---------|------------|
| Train | 3,497 | 80.9% |
| Validation | 404 | 9.3% |
| Test | 422 | 9.8% |
## Dataset Creation

### Source Data

Translation pairs were sourced from three high-quality parallel corpora:

- **Mbaza Education Dataset**: Educational content
- **Mbaza Tourism Dataset**: Tourism and cultural content
- **Digital Umuganda Dataset**: General domain content

### Annotation Process

**Methodology**: Direct Assessment (DA) following WMT evaluation standards

**Annotators**: 15 linguistics students trained in translation quality assessment

**Process**:
1. Each translation pair was annotated by at least 3 different annotators
2. Annotators scored translations on a 0-100 scale based on adequacy and fluency
3. Quality control: 410 samples (9.48%) with a standard deviation above 20 were removed
4. Final scores were averaged across annotators and normalized to the [0, 1] range
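The aggregation and filtering steps above can be reproduced with a few lines of pandas. This is only a sketch: it assumes raw annotations stored as one row per (sample, annotator) pair with `sample_id` and `rating` columns, which are illustrative names and not part of the released files (those already contain the aggregated `score`).

```python
import pandas as pd

# Hypothetical raw annotation table: one row per (sample_id, annotator rating).
raw = pd.DataFrame({
    "sample_id": [1, 1, 1, 2, 2, 2],
    "rating":    [90, 85, 92, 30, 80, 95],  # 0-100 Direct Assessment ratings
})

# Aggregate per sample: mean and standard deviation across annotators.
agg = raw.groupby("sample_id")["rating"].agg(["mean", "std", "count"])

# Quality control: keep samples with >= 3 annotations and std <= 20.
kept = agg[(agg["count"] >= 3) & (agg["std"] <= 20)]

# Normalize the averaged score to [0, 1], matching the released `score` field.
kept = kept.assign(score=kept["mean"] / 100.0)
print(kept)
```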
### Translation Systems

Six diverse MT systems were evaluated to ensure comprehensive coverage:

**LLM-based Systems**:
- Claude 3.7-Sonnet
- GPT-4o
- GPT-4.1
- Gemini Flash 2.0

**Traditional Neural MT**:
- Facebook NLLB-1.3B
- Facebook NLLB-600M
### Data Distribution

**Overall Statistics** (on the original 0-100 DA scale):
- Mean score (μ): 87.73
- Standard deviation (σ): 14.14

**By Direction**:
- **English → Kinyarwanda**: μ = 84.60, σ = 16.28
- **Kinyarwanda → English**: μ = 91.05, σ = 10.47

The score distribution is similar to that of the WMT DA datasets (2017-2022), indicating alignment with international evaluation standards.
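These per-direction statistics can be recomputed from the CSVs once they are downloaded (see Usage below). Note that the released `score` column is on the [0, 1] scale, so it is rescaled by 100 here to compare with the figures above; a minimal sketch using the column names listed under Data Fields:

```python
import pandas as pd

# Assumes train.csv / valid.csv / test.csv have already been downloaded locally.
df = pd.concat(
    [pd.read_csv(f) for f in ("train.csv", "valid.csv", "test.csv")],
    ignore_index=True,
)

# Rescale to the 0-100 DA scale and report mean / std per translation direction.
stats = (df["score"] * 100).groupby(df["direction"]).agg(["mean", "std", "count"])
print(stats.round(2))
```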
## Usage

### Loading the Dataset

```python
from huggingface_hub import hf_hub_download
import pandas as pd

# Download the dataset files from the Hub (repo_type must be "dataset")
train_file = hf_hub_download(
    repo_id="chrismazii/kinycomet_dataset",
    filename="train.csv",
    repo_type="dataset",
)
val_file = hf_hub_download(
    repo_id="chrismazii/kinycomet_dataset",
    filename="valid.csv",
    repo_type="dataset",
)
test_file = hf_hub_download(
    repo_id="chrismazii/kinycomet_dataset",
    filename="test.csv",
    repo_type="dataset",
)

# Load the splits
train_df = pd.read_csv(train_file)
val_df = pd.read_csv(val_file)
test_df = pd.read_csv(test_file)

print(f"Training samples: {len(train_df)}")
print(f"Validation samples: {len(val_df)}")
print(f"Test samples: {len(test_df)}")

# Convert to a list of dictionaries for COMET usage
train_samples = train_df.to_dict('records')

# Example sample structure
print(train_samples[0])
```
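The splits can likely also be loaded through the `datasets` library, which auto-detects CSV files in a Hub repository; whether `valid.csv` is mapped to a `validation` split depends on the library's split-name inference, so check the split names after loading.

```python
from datasets import load_dataset

# Auto-detects the CSV files in the dataset repo; inspect which splits were inferred.
ds = load_dataset("chrismazii/kinycomet_dataset")
print(ds)

# Access one sample from the training split.
print(ds["train"][0])
```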
### Using with COMET

```python
from comet import download_model, load_from_checkpoint

# Download the KinyCOMET checkpoint from the Hub and load it
model_path = download_model("chrismazii/kinycomet_unbabel")
model = load_from_checkpoint(model_path)

# Prepare your data
data = [
    {
        "src": sample['src'],
        "mt": sample['mt'],
        "ref": sample['ref']
    }
    for sample in train_samples[:10]
]

# Get predictions (segment-level scores plus a system-level score)
model_output = model.predict(data, batch_size=8, gpus=0)
print(f"Segment scores: {model_output.scores}")
print(f"System score: {model_output.system_score}")
```
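Because the test split carries human scores, model predictions can be checked against them with standard correlation statistics. A minimal sketch, reusing `test_df` and `model` from the examples above; the choice of Pearson and Spearman here is illustrative rather than the authors' evaluation protocol.

```python
from scipy.stats import pearsonr, spearmanr

# Build COMET-style inputs from the test split loaded earlier.
test_samples = test_df.to_dict('records')
data = [{"src": s["src"], "mt": s["mt"], "ref": s["ref"]} for s in test_samples]

# Predict segment-level quality and compare with the human scores.
output = model.predict(data, batch_size=8, gpus=0)
human = [s["score"] for s in test_samples]

print(f"Pearson:  {pearsonr(output.scores, human)[0]:.3f}")
print(f"Spearman: {spearmanr(output.scores, human)[0]:.3f}")
```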
## Dataset Characteristics

### Domain Coverage

- **Education**: Teaching materials, curriculum content
- **Tourism**: Travel guides, cultural information
- **General**: Mixed content from the Digital Umuganda corpus

### Quality Metrics

- **Inter-annotator Agreement**: High agreement achieved through careful annotator training
- **Variance Filtering**: Samples with σ > 20 removed to ensure quality
- **Multiple Annotators**: Minimum of 3 annotations per sample

### Translation Direction Balance

The dataset includes both translation directions, with careful attention to balance and quality:

- Adequate representation of both Kinyarwanda→English and English→Kinyarwanda
- Direction-specific evaluation is possible
- Reflects real-world translation challenges in both directions
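The actual balance can be inspected directly from the `direction` column; a quick check, reusing the concatenated dataframe `df` from the distribution sketch above (or any single split):

```python
# Count samples per translation direction and show their share of the dataset.
counts = df["direction"].value_counts()
print(counts)
print((counts / len(df)).round(3))
```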
## Considerations for Using the Data

### Strengths

- High-quality human annotations following international standards
- Multiple annotators per sample for reliability
- Diverse MT systems represented
- Rigorous quality control

### Limitations

- **Domain Specificity**: Primarily education and tourism domains
- **Standard Kinyarwanda**: May not capture all dialectal variations
- **MT Systems**: Limited to six specific systems
- **Time Period**: Reflects MT quality as of 2024-2025

### Ethical Considerations

- All source data comes from publicly available parallel corpora
- Annotators were properly compensated for their work
- No personally identifiable information is included
- Open access to support African language technology development

## Additional Information

### Dataset Curators

- Jan Nehring
- Prince Chris Mazimpaka
- 15 linguistics student annotators

### Licensing

Released under the Apache 2.0 License for maximum reusability.

### Citation

```bibtex
@misc{kinycomet_dataset2025,
  title={KinyCOMET Dataset: Human-Annotated Quality Estimation for Kinyarwanda-English Translation},
  author={Prince Chris Mazimpaka and Jan Nehring},
  year={2025},
  publisher={Hugging Face},
  howpublished={\url{https://huggingface.co/datasets/chrismazii/kinycomet_dataset}}
}
```

### Related Resources

- [KinyCOMET Model](https://huggingface.co/chrismazii/kinycomet_unbabel)
- [COMET Framework](https://unbabel.github.io/COMET/html/index.html)
- [WMT Evaluation Standards](https://www.statmt.org/wmt/)