---
license: cc-by-4.0
language: ti
widget:
- text: ""
datasets:
- fgaim/tigrinya-abusive-language-detection
metrics:
- accuracy
- f1
- precision
- recall
model-index:
- name: tiroberta-tiald-all-tasks
  results:
  - task:
      name: Text Classification
      type: text-classification
    metrics:
    - name: Accuracy
      type: accuracy
      value: 0.8666666666666667
    - name: F1
      type: f1
      value: 0.8666502037288554
    - name: Precision
      type: precision
      value: 0.8668478260869565
    - name: Recall
      type: recall
      value: 0.8666666666666667
---

# TiRoBERTa Fine-tuned for Tigrinya Abusive Language Detection

This model is a fine-tuned version of [TiRoBERTa](https://huggingface.co/fgaim/tiroberta-base) on the [TiALD](https://huggingface.co/datasets/fgaim/tigrinya-abusive-language-detection) dataset.

The **Tigrinya Abusive Language Detection (TiALD) Dataset** is a large-scale, multi-task benchmark for abusive language detection in Tigrinya. It consists of **13,717 YouTube comments** annotated for **abusiveness**, **sentiment**, and **topic**. The dataset includes comments written in both the **Ge’ez script** and prevalent non-standard Latin **transliterations** to mirror real-world usage.

> ⚠️ The dataset contains explicit, obscene, and potentially hateful language. It should be used for research purposes only. ⚠️

This work accompanies the paper ["A Multi-Task Benchmark for Abusive Language Detection in Low-Resource Settings"](https://arxiv.org/abs/2505.12116).

## Model Usage

```python
from transformers import pipeline

tiald_pipe = pipeline(
    "text-classification",
    model="fgaim/tiroberta-abusiveness-detection",
)

# Pass a Tigrinya comment (Ge'ez script or Latin transliteration) to classify.
tiald_pipe("")
```

## Performance Metrics

This model achieves the following results on the evaluation set:

```json
"abusiveness_metrics": {
  "accuracy": 0.8666666666666667,
  "macro_f1": 0.8666502037288554,
  "macro_precision": 0.8668478260869565,
  "macro_recall": 0.8666666666666667,
  "weighted_f1": 0.8666502037288554,
  "weighted_precision": 0.8668478260869565,
  "weighted_recall": 0.8666666666666667
}
```

## Training Hyperparameters

The following hyperparameters were used during training:

- learning_rate: 2e-05
- train_batch_size: 16
- optimizer: Adam (betas=0.9, 0.999, epsilon=1e-08)
- lr_scheduler_type: linear
- num_epochs: 4.0
- seed: 42

A minimal sketch of a comparable fine-tuning and evaluation setup is given in the Reproduction Sketch section below.

## Intended Usage

The TiALD dataset and models are designed to support:

- Research in abusive language detection in low-resource languages
- Context-aware abuse, sentiment, and topic modeling
- Multi-task and transfer learning with digraphic scripts
- Evaluation of multilingual and fine-tuned language models

Researchers and developers should avoid using this dataset for direct moderation or enforcement tasks without human oversight.

## Ethical Considerations

- **Sensitive content**: Contains toxic and offensive language. Use for research purposes only.
- **Cultural sensitivity**: Abuse is context-dependent; annotations were made by native speakers to account for cultural nuance.
- **Bias mitigation**: Data sampling and annotation were carefully designed to minimize reinforcement of stereotypes.
- **Privacy**: All the source content for the dataset is publicly available on YouTube.
- **Respect for expression**: The dataset should not be used for automated censorship without human review.

This research received IRB approval (Ref: KH2022-133) and followed ethical data collection and annotation practices, including informed consent of annotators.
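## Reproduction Sketch

The snippet below is a minimal, hedged sketch of how the reported hyperparameters and macro-averaged abusiveness metrics could be reproduced with the Hugging Face `Trainer`. It is not the authors' training script: the dataset column names (`comment`, `abusiveness`), the number of labels, and the split names are assumptions and may differ from the actual TiALD schema, so check the dataset card before running.

```python
import numpy as np
from datasets import load_dataset
from sklearn.metrics import accuracy_score, precision_recall_fscore_support
from transformers import (
    AutoModelForSequenceClassification,
    AutoTokenizer,
    Trainer,
    TrainingArguments,
)

# Assumed column names; verify against the TiALD dataset card.
TEXT_COLUMN = "comment"
LABEL_COLUMN = "abusiveness"

dataset = load_dataset("fgaim/tigrinya-abusive-language-detection")

tokenizer = AutoTokenizer.from_pretrained("fgaim/tiroberta-base")
model = AutoModelForSequenceClassification.from_pretrained(
    "fgaim/tiroberta-base",
    num_labels=2,  # binary abusive / non-abusive, assumed
)

def tokenize(batch):
    return tokenizer(batch[TEXT_COLUMN], truncation=True, max_length=512)

encoded = dataset.map(tokenize, batched=True)
encoded = encoded.rename_column(LABEL_COLUMN, "labels")  # assumes integer class labels

def compute_metrics(eval_pred):
    """Accuracy and macro-averaged precision/recall/F1, as reported above."""
    logits, labels = eval_pred
    preds = np.argmax(logits, axis=-1)
    precision, recall, f1, _ = precision_recall_fscore_support(
        labels, preds, average="macro", zero_division=0
    )
    return {
        "accuracy": accuracy_score(labels, preds),
        "macro_precision": precision,
        "macro_recall": recall,
        "macro_f1": f1,
    }

args = TrainingArguments(
    output_dir="tiroberta-tiald-abusiveness",
    learning_rate=2e-5,              # learning_rate: 2e-05
    per_device_train_batch_size=16,  # train_batch_size: 16
    num_train_epochs=4,              # num_epochs: 4.0
    lr_scheduler_type="linear",      # lr_scheduler_type: linear
    seed=42,                         # seed: 42
    # Adam betas (0.9, 0.999) and epsilon 1e-08 are the optimizer defaults.
)

trainer = Trainer(
    model=model,
    args=args,
    train_dataset=encoded["train"],
    eval_dataset=encoded["validation"],  # split name assumed
    tokenizer=tokenizer,  # enables dynamic padding via DataCollatorWithPadding
    compute_metrics=compute_metrics,
)

trainer.train()
print(trainer.evaluate())
```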
## Citation

If you use this model or the `TiALD` dataset in your work, please cite:

```bibtex
@misc{gaim-etal-2025-tiald-benchmark,
  title         = {A Multi-Task Benchmark for Abusive Language Detection in Low-Resource Settings},
  author        = {Fitsum Gaim and Hoyun Song and Huije Lee and Changgeon Ko and Eui Jun Hwang and Jong C. Park},
  year          = {2025},
  eprint        = {2505.12116},
  archivePrefix = {arXiv},
  primaryClass  = {cs.CL},
  url           = {https://arxiv.org/abs/2505.12116}
}
```

## License

This model and the accompanying TiALD dataset are released under the [Creative Commons Attribution 4.0 International License (CC BY 4.0)](https://creativecommons.org/licenses/by/4.0/).