---
language:
- en
- sr
license: mit
library_name: transformers
tags:
- hate-speech-detection
- text-classification
- multilingual
- xlm-roberta
- serbian
- english
- pytorch
datasets:
- hate-speech
pipeline_tag: text-classification
widget:
- text: "I really enjoyed that movie last night!"
  example_title: "Appropriate Content"
- text: "You people are all the same, causing problems everywhere."
  example_title: "Hate Speech Example"
- text: "Ovaj film je bio odličan!"
  example_title: "Serbian Appropriate"
---

# Multilingual Hate Speech Detector (XLM-RoBERTa)

## Model Description

This is a fine-tuned XLM-RoBERTa model for multilingual hate speech detection, trained on English and Serbian text. The model classifies text into 8 categories:

- **Race**: Racial discrimination and slurs
- **Sexual Orientation**: Homophobic content, LGBTQ+ discrimination
- **Gender**: Sexist content, misogyny, gender-based harassment
- **Physical Appearance**: Body shaming, lookism, appearance-based harassment
- **Religion**: Religious discrimination, Islamophobia, antisemitism
- **Class**: Classist content, economic discrimination
- **Disability**: Ableist content, discrimination against disabled people
- **Appropriate**: Non-hateful, normal conversation

## Languages Supported

- **English**: Comprehensive hate speech detection
- **Serbian**: Native Serbian language support (Cyrillic and Latin scripts)

## Usage

```python
from transformers import AutoTokenizer, AutoModelForSequenceClassification
import torch

# Load model and tokenizer
tokenizer = AutoTokenizer.from_pretrained("sadjava/multilingual-hate-speech-xlm-roberta")
model = AutoModelForSequenceClassification.from_pretrained("sadjava/multilingual-hate-speech-xlm-roberta")

# Example prediction
text = "Your text here"
inputs = tokenizer(text, return_tensors="pt", truncation=True, padding=True)

with torch.no_grad():
    outputs = model(**inputs)
    predictions = torch.nn.functional.softmax(outputs.logits, dim=-1)

# Categories (in the label order used during training)
categories = ["Race", "Sexual Orientation", "Gender", "Physical Appearance",
              "Religion", "Class", "Disability", "Appropriate"]

# Get predicted category and its confidence
predicted_class = torch.argmax(predictions, dim=-1).item()
predicted_category = categories[predicted_class]
confidence = float(predictions[0][predicted_class])

print(f"Category: {predicted_category}")
print(f"Confidence: {confidence:.2%}")
```

## Training Data

The model was fine-tuned on multilingual hate speech datasets, including:

- English hate speech datasets
- Serbian hate speech datasets
- Augmented examples for better multilingual performance

## Performance

- **Languages**: English and Serbian, with cross-lingual transfer from XLM-RoBERTa's multilingual pretraining
- **Categories**: 8-class classification (seven hate speech categories plus appropriate content)
- **Output**: a softmax probability distribution over the 8 classes; the highest-probability class is the predicted label, and its probability can be reported as a confidence score

## Ethical Considerations

This model is intended for research and educational purposes. Predictions should be interpreted carefully, and human judgment should always be applied for critical decisions. The system is designed to assist, not replace, human moderation.

## Citation

If you use this model, please cite:

```bibtex
@misc{multilingual-hate-speech-xlm-roberta,
  author = {sadjava},
  title = {Multilingual Hate Speech Detector},
  year = {2024},
  publisher = {Hugging Face},
  url = {https://huggingface.co/sadjava/multilingual-hate-speech-xlm-roberta}
}
```

## Demo

Try the interactive demo: [Multilingual Hate Speech Detector Space](https://huggingface.co/spaces/sadjava/multilingual-hate-speech-detector)
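
## Pipeline Usage

As a quicker alternative to the manual tokenizer/model calls in the Usage section, the model can also be loaded through the `transformers` pipeline API. This is a minimal sketch: the Serbian Cyrillic sentence is simply a transliteration of the Latin-script widget example, and the label strings returned by the pipeline come from the model's `id2label` config, which may be generic (e.g. `LABEL_0`) rather than the category names listed above if it was not set during fine-tuning.

```python
from transformers import pipeline

# Load the model as a text-classification pipeline
classifier = pipeline(
    "text-classification",
    model="sadjava/multilingual-hate-speech-xlm-roberta",
)

examples = [
    "I really enjoyed that movie last night!",  # English
    "Ovaj film je bio odličan!",                # Serbian, Latin script
    "Овај филм је био одличан!",                # Serbian, Cyrillic script (transliterated example)
]

# The pipeline returns one {"label": ..., "score": ...} dict per input
for text, result in zip(examples, classifier(examples)):
    print(f"{text!r} -> {result['label']} ({result['score']:.2%})")
```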