---
language:
- en
tags:
- roberta
- toxic
- toxicity
- hate speech
- offensive language
---
# Text Classification Toxicity
This model is a fine-tuned version of nreimers/MiniLMv2-L6-H384-distilled-from-BERT-Large on the 1st Jigsaw Kaggle competition dataset, using unitary/toxic-bert as the teacher model. The quantized version in ONNX format can be found here.

This model predicts only two labels: toxicity and severe toxicity. For the model covering all labels, refer to this page.
## Load the Model

```python
from transformers import pipeline

pipe = pipeline(model='minuva/MiniLMv2-toxic-jigsaw-lite', task='text-classification')

pipe("This is pure trash")
# [{'label': 'toxic', 'score': 0.887}]
```
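In a moderation setting you typically want a yes/no decision rather than a raw score. A minimal sketch of such a decision helper, operating on the `{'label', 'score'}` dicts the pipeline returns; the `is_toxic` name and the 0.5 threshold are illustrative choices, not values recommended by this model card:

```python
# Hypothetical helper: turn a pipeline prediction into a boolean flag.
# Assumes one {'label', 'score'} dict per input, as in the example above.
def is_toxic(prediction: dict, threshold: float = 0.5) -> bool:
    """Flag a text when the top label is 'toxic' and its score
    clears the (illustrative) threshold."""
    return prediction["label"] == "toxic" and prediction["score"] >= threshold

# Using the example output shown above:
print(is_toxic({"label": "toxic", "score": 0.887}))  # True
print(is_toxic({"label": "toxic", "score": 0.300}))  # False
```

Tuning the threshold trades precision for recall; a higher value flags fewer, more clearly toxic texts.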
## Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 6e-05
- train_batch_size: 48
- eval_batch_size: 48
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
- warmup_ratio: 0.1
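The list above can be collected into a single configuration object; a sketch, assuming the corresponding `transformers.TrainingArguments` field names (kept as a plain dict here so it stands alone):

```python
# Hyperparameters from the list above, keyed by the field names used by
# transformers.TrainingArguments (output_dir and other unrelated fields omitted).
training_args = {
    "learning_rate": 6e-5,
    "per_device_train_batch_size": 48,
    "per_device_eval_batch_size": 48,
    "adam_beta1": 0.9,
    "adam_beta2": 0.999,
    "adam_epsilon": 1e-8,
    "lr_scheduler_type": "linear",
    "num_train_epochs": 10,
    "warmup_ratio": 0.1,
}
```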
## Metrics (comparison with the teacher model)

| Teacher (params) | Student (params) | Set (metric) | Score (teacher) | Score (student) |
| --- | --- | --- | --- | --- |
| unitary/toxic-bert (110M) | MiniLMv2-toxic-jigsaw-lite (23M) | Test (ROC_AUC) | 0.982677 | 0.9815 |
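The ROC AUC reported above is the probability that a randomly chosen toxic example receives a higher score than a randomly chosen non-toxic one. A minimal sketch of the metric itself, using toy labels and scores rather than real model outputs:

```python
def roc_auc(labels, scores):
    """ROC AUC via the pairwise (Mann-Whitney) formulation: the fraction of
    (positive, negative) pairs where the positive scores higher; ties count 0.5."""
    pos = [s for y, s in zip(labels, scores) if y == 1]
    neg = [s for y, s in zip(labels, scores) if y == 0]
    wins = sum(1.0 if p > n else 0.5 if p == n else 0.0
               for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

# Toy example: positives are ranked above all negatives -> perfect AUC.
print(roc_auc([1, 1, 0, 0], [0.9, 0.8, 0.3, 0.4]))  # 1.0
# One positive ranked below a negative -> 3 of 4 pairs correct.
print(roc_auc([1, 0, 1, 0], [0.9, 0.8, 0.4, 0.2]))  # 0.75
```

In practice you would feed the held-out Jigsaw test split through the pipeline and use the `toxic` scores as `scores`; `sklearn.metrics.roc_auc_score` computes the same quantity more efficiently.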
## Deployment
Check our repository to see how to easily deploy this model in a serverless environment with fast CPU inference and light resource utilization.