|
---
language:
- en
- ru
- uk
- de
- es
- am
- zh
- ar
- hi
- it
- fr
- he
- ja
- tt
license: openrail++
library_name: transformers
base_model:
- prajjwal1/bert-tiny
datasets:
- gravitee-io/textdetox-multilingual-toxicity-dataset
pipeline_tag: text-classification
tags:
- toxicity
- bert-tiny
- gravitee-io
- ai-gateway
---
|
# bert-tiny-toxicity |
|
|
|
This is a toxicity classifier fine-tuned from [prajjwal1/bert-tiny](https://huggingface.co/prajjwal1/bert-tiny) on the [gravitee-io/textdetox-multilingual-toxicity-dataset](https://huggingface.co/datasets/gravitee-io/textdetox-multilingual-toxicity-dataset). The model supports a wide range of languages and performs binary toxicity classification ("not-toxic", "toxic").
|
|
|
We perform an 85/15 train-test split per language based on the `textdetox` dataset. All credits go to the authors of the original corpora. |
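
A minimal sketch of how such a per-language split could be reproduced with the 🤗 `datasets` library (the per-language subset layout, column names, and the `seed` below are illustrative assumptions, not the exact training setup):

```python
from datasets import load_dataset

# Load the multilingual toxicity dataset (assumed to expose one subset per language)
ds = load_dataset("gravitee-io/textdetox-multilingual-toxicity-dataset")

# Reproduce an 85/15 train/test split independently for each language
splits = {
    lang: subset.train_test_split(test_size=0.15, seed=42)
    for lang, subset in ds.items()
}
print({lang: (len(s["train"]), len(s["test"])) for lang, s in splits.items()})
```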
|
|
|
## Performance Overview |
|
|
|
While this model does not match the performance of [gravitee-io/distilbert-multilingual-toxicity-classifier](https://huggingface.co/gravitee-io/distilbert-multilingual-toxicity-classifier), several languages still achieve solid F1 scores, even though the base model is lightweight and, per its model card, was pretrained on English only.
|
|
|
### Original model |
|
|
|
| Language | Eval F1 | Train F1 | Δ F1 |
|----------|----------|----------|-----------|
| en | 0.942105 | 0.975587 | -0.033482 |
| fr | 0.876783 | 0.943089 | -0.066306 |
| de | 0.872774 | 0.919155 | -0.046381 |
| hi | 0.845178 | 0.885335 | -0.040157 |
| it | 0.805556 | 0.857527 | -0.051971 |
| es | 0.784119 | 0.856389 | -0.072270 |
| ja | 0.745592 | 0.758249 | -0.012657 |
| uk | 0.689095 | 0.686985 | +0.002110 |
| ru | 0.688372 | 0.724231 | -0.035858 |
| hin | 0.688172 | 0.806429 | -0.118257 |
| am | 0.648816 | 0.691555 | -0.042739 |
| tt | 0.644608 | 0.695892 | -0.051284 |
| ar | 0.644471 | 0.670118 | -0.025647 |
| zh | 0.640371 | 0.660996 | -0.020625 |
| he | 0.514851 | 0.524138 | -0.009286 |
|
|
|
|
|
### Quantized model (ONNX) |
|
|
|
| Language | Eval F1 | Train F1 | Δ F1 |
|----------|----------|----------|-----------|
| en | 0.942257 | 0.974907 | -0.032650 |
| fr | 0.876783 | 0.942214 | -0.065431 |
| de | 0.872636 | 0.918535 | -0.045900 |
| hi | 0.842912 | 0.884449 | -0.041538 |
| it | 0.806574 | 0.858737 | -0.052163 |
| es | 0.782609 | 0.856392 | -0.073784 |
| ja | 0.750317 | 0.756441 | -0.006124 |
| hin | 0.697051 | 0.806604 | -0.109553 |
| ru | 0.693208 | 0.722626 | -0.029418 |
| uk | 0.689095 | 0.684864 | +0.004232 |
| am | 0.647363 | 0.689944 | -0.042581 |
| ar | 0.644471 | 0.669856 | -0.025386 |
| tt | 0.642066 | 0.695060 | -0.052993 |
| zh | 0.640462 | 0.661274 | -0.020811 |
| he | 0.507463 | 0.521815 | -0.014352 |
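
A minimal sketch of how a quantized ONNX file like `model.quant.onnx` could be produced with `optimum`; the actual export settings used for this repository are not documented here, so the dynamic-quantization config below (including the `avx512_vnni` target and the save directory) is an illustrative assumption:

```python
from optimum.onnxruntime import ORTModelForSequenceClassification, ORTQuantizer
from optimum.onnxruntime.configuration import AutoQuantizationConfig

# Export the fine-tuned PyTorch checkpoint to ONNX
model = ORTModelForSequenceClassification.from_pretrained(
    "gravitee-io/bert-tiny-toxicity", export=True
)

# Dynamic (weight-only) int8 quantization; avx512_vnni is one possible CPU target
quantizer = ORTQuantizer.from_pretrained(model)
qconfig = AutoQuantizationConfig.avx512_vnni(is_static=False, per_channel=False)
quantizer.quantize(save_dir="bert-tiny-toxicity-quant", quantization_config=qconfig)
```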
|
|
|
|
|
## 🤗 Usage |
|
|
|
```python
from transformers import AutoTokenizer
from optimum.onnxruntime import ORTModelForSequenceClassification
import numpy as np

# Load the quantized ONNX model and tokenizer using optimum
model = ORTModelForSequenceClassification.from_pretrained(
    "gravitee-io/bert-tiny-toxicity",
    file_name="model.quant.onnx",
)
tokenizer = AutoTokenizer.from_pretrained("gravitee-io/bert-tiny-toxicity")

# Tokenize input
text = "Your text here"
inputs = tokenizer(text, return_tensors="pt", padding=True, truncation=True)

# Run inference
outputs = model(**inputs)
logits = outputs.logits

# Optional: convert logits to probabilities with a sigmoid
# (logits is a torch tensor, so convert to numpy first)
probs = 1 / (1 + np.exp(-logits.numpy()))
print(probs)
```
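
To turn the probabilities into label names, the model's `id2label` mapping can be used. This is a sketch assuming the standard two-logit classification head; check `model.config.id2label` for the actual index-to-label assignment:

```python
# Pick the higher-scoring class per input and map it to its label name
pred_ids = probs.argmax(axis=-1)
labels = [model.config.id2label[int(i)] for i in pred_ids]
print(labels)  # e.g. ["not-toxic"]
```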
|
|
|
## GitHub Repository

Details on how the model was fine-tuned and evaluated are available in the [GitHub repository](https://github.com/gravitee-io-labs/gravitee-distilbert-multilingual-toxicity-classifier).
|
|
|
## License |
|
|
|
This model is licensed under [OpenRAIL++](LICENSE).
|
|
|
## Citation |
|
|
|
```bibtex
@misc{bhargava2021generalization,
  title={Generalization in NLI: Ways (Not) To Go Beyond Simple Heuristics},
  author={Prajjwal Bhargava and Aleksandr Drozd and Anna Rogers},
  year={2021},
  eprint={2110.01518},
  archivePrefix={arXiv},
  primaryClass={cs.CL}
}

@article{DBLP:journals/corr/abs-1908-08962,
  author     = {Iulia Turc and Ming{-}Wei Chang and Kenton Lee and Kristina Toutanova},
  title      = {Well-Read Students Learn Better: The Impact of Student Initialization on Knowledge Distillation},
  journal    = {CoRR},
  volume     = {abs/1908.08962},
  year       = {2019},
  url        = {http://arxiv.org/abs/1908.08962},
  eprinttype = {arXiv},
  eprint     = {1908.08962},
  timestamp  = {Thu, 29 Aug 2019 16:32:34 +0200},
  biburl     = {https://dblp.org/rec/journals/corr/abs-1908-08962.bib},
  bibsource  = {dblp computer science bibliography, https://dblp.org}
}
```
|
|
|
```bibtex
@inproceedings{dementieva2024overview,
  title={Overview of the Multilingual Text Detoxification Task at PAN 2024},
  author={Dementieva, Daryna and Moskovskiy, Daniil and Babakov, Nikolay and Ayele, Abinew Ali and Rizwan, Naquee and Schneider, Florian and Wang, Xintong and Yimam, Seid Muhie and Ustalov, Dmitry and Stakovskii, Elisei and Smirnova, Alisa and Elnagar, Ashraf and Mukherjee, Animesh and Panchenko, Alexander},
  booktitle={Working Notes of CLEF 2024 - Conference and Labs of the Evaluation Forum},
  editor={Guglielmo Faggioli and Nicola Ferro and Petra Galu{\v{s}}{\v{c}}{\'a}kov{\'a} and Alba Garc{\'i}a Seco de Herrera},
  year={2024},
  organization={CEUR-WS.org}
}

@inproceedings{dementieva-etal-2024-toxicity,
  title = "Toxicity Classification in {U}krainian",
  author = "Dementieva, Daryna and Khylenko, Valeriia and Babakov, Nikolay and Groh, Georg",
  booktitle = "Proceedings of the 8th Workshop on Online Abuse and Harms (WOAH 2024)",
  month = jun,
  year = "2024",
  address = "Mexico City, Mexico",
  publisher = "Association for Computational Linguistics",
  url = "https://aclanthology.org/2024.woah-1.19/",
  doi = "10.18653/v1/2024.woah-1.19",
  pages = "244--255"
}

@inproceedings{DBLP:conf/ecir/BevendorffCCDEFFKMMPPRRSSSTUWZ24,
  author    = {Janek Bevendorff and others},
  title     = {Overview of {PAN} 2024: Multi-author Writing Style Analysis, Multilingual Text Detoxification, Oppositional Thinking Analysis, and Generative {AI} Authorship Verification - Extended Abstract},
  booktitle = {ECIR 2024, Glasgow, UK, March 24-28, 2024, Proceedings, Part {VI}},
  series    = {Lecture Notes in Computer Science},
  volume    = {14613},
  pages     = {3--10},
  publisher = {Springer},
  year      = {2024},
  doi       = {10.1007/978-3-031-56072-9_1}
}
```
|
|