Indonesian Hate Speech Detection Dataset

Dataset Summary

This dataset contains 13,169 Indonesian tweets annotated for hate speech detection and abusive language classification. The dataset provides comprehensive multi-label annotations covering different types of hate speech, target categories, and intensity levels, making it valuable for building robust content moderation systems for Indonesian social media.

Dataset Details

  • Total Samples: 13,169 Indonesian tweets
  • Language: Indonesian (Bahasa Indonesia)
  • Annotation Type: Multi-label binary classification
  • Labels: 12 different hate speech and abusive language categories
  • Format: CSV file
  • Text Length: 4-561 characters (average: 114 characters)

Label Categories

Primary Classifications

Label | Description | Positive Cases | Percentage
HS | Hate Speech - General hate speech detection | 5,561 | 42.2%
Abusive | Abusive Language - Offensive or abusive content | 5,043 | 38.3%

Target-Based Classifications

Label | Description | Positive Cases | Percentage
HS_Individual | Hate speech targeting specific individuals | 3,575 | 27.1%
HS_Group | Hate speech targeting groups/communities | 1,986 | 15.1%
HS_Religion | Religious hate speech | 793 | 6.0%
HS_Race | Racial/ethnic hate speech | 566 | 4.3%
HS_Physical | Physical appearance-based hate speech | 323 | 2.5%
HS_Gender | Gender-based hate speech | 306 | 2.3%
HS_Other | Other types of hate speech | 3,740 | 28.4%

Intensity Classifications

Label | Description | Positive Cases | Percentage
HS_Weak | Weak/mild hate speech | 3,383 | 25.7%
HS_Moderate | Moderate hate speech | 1,705 | 12.9%
HS_Strong | Strong/severe hate speech | 473 | 3.6%

Key Statistics

Text Characteristics:

  • Average tweet length: 114 characters
  • Shortest tweet: 4 characters
  • Longest tweet: 561 characters
  • Language: Indonesian (Bahasa Indonesia)

Label Distribution:

  • Balanced primary labels: ~42% hate speech, ~38% abusive
  • Imbalanced target categories: Physical (2.5%) to Individual (27.1%)
  • Severity pyramid: Weak (25.7%) > Moderate (12.9%) > Strong (3.6%)
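
These counts can be recomputed directly from the CSV. A minimal sketch, assuming the file is named data.csv and uses the column names listed above (adjust the encoding argument if the file is not UTF-8):

import pandas as pd

df = pd.read_csv('data.csv')  # e.g. encoding='latin-1' if UTF-8 decoding fails

label_cols = ['HS', 'Abusive', 'HS_Individual', 'HS_Group', 'HS_Religion',
              'HS_Race', 'HS_Physical', 'HS_Gender', 'HS_Other',
              'HS_Weak', 'HS_Moderate', 'HS_Strong']

# Positive count and percentage per label
counts = df[label_cols].sum()
print(pd.DataFrame({'positive': counts, 'percent': (100 * counts / len(df)).round(1)}))

# Tweet length statistics (min / mean / max characters)
print(df['Tweet'].str.len().describe())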

Use Cases

This dataset is ideal for:

  • Multi-label Text Classification: Train models to detect multiple types of hate speech
  • Indonesian NLP: Develop language-specific content moderation systems
  • Social Media Monitoring: Build automated detection for Indonesian platforms
  • Severity Assessment: Create models that classify hate speech intensity
  • Target Analysis: Understand different targets of hate speech
  • Content Moderation: Deploy real-time filtering systems
  • Research: Study hate speech patterns in Indonesian social media

Quick Start

import pandas as pd
from sklearn.model_selection import train_test_split
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.multioutput import MultiOutputClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import classification_report

# Load dataset (the CSV may not be UTF-8 encoded; pass an explicit encoding
# such as encoding='latin-1' if pd.read_csv raises a UnicodeDecodeError)
df = pd.read_csv('data.csv')

# Prepare features and targets
X = df['Tweet']
y = df[['HS', 'Abusive', 'HS_Individual', 'HS_Group', 'HS_Religion', 
        'HS_Race', 'HS_Physical', 'HS_Gender', 'HS_Other']]

# Split data
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=42
)

# Vectorize text
vectorizer = TfidfVectorizer(max_features=10000, ngram_range=(1, 2))
X_train_vec = vectorizer.fit_transform(X_train)
X_test_vec = vectorizer.transform(X_test)

# Train multi-label classifier
classifier = MultiOutputClassifier(LogisticRegression(random_state=42))
classifier.fit(X_train_vec, y_train)

# Evaluate
y_pred = classifier.predict(X_test_vec)
print("Multi-label Classification Report:")
for i, label in enumerate(y.columns):
    print(f"\n{label}:")
    print(classification_report(y_test.iloc[:, i], y_pred[:, i]))

Advanced Usage Examples

Intensity-Based Classification

# Focus on hate speech intensity levels
intensity_labels = ['HS_Weak', 'HS_Moderate', 'HS_Strong']
hate_speech_data = df[df['HS'] == 1]  # Only hate speech samples

# Multi-class intensity classification
y_intensity = hate_speech_data[intensity_labels]
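
If a single multi-class intensity target is preferred, the three one-hot columns can be collapsed into one label. This sketch assumes each hate speech tweet carries exactly one intensity flag; verify that this holds in your copy of the data:

# idxmax returns the name of the column holding the 1 for each row
hate_speech_data = hate_speech_data.copy()
hate_speech_data['intensity'] = hate_speech_data[intensity_labels].idxmax(axis=1)
print(hate_speech_data['intensity'].value_counts())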

Target-Specific Models

# Build specialized models for different targets
target_labels = ['HS_Individual', 'HS_Group', 'HS_Religion', 'HS_Race', 
                'HS_Physical', 'HS_Gender', 'HS_Other']

# Train one binary classifier per target type, reusing the TF-IDF features
# from the Quick Start; class_weight='balanced' helps with the rarer targets
target_models = {}
for target in target_labels:
    clf = LogisticRegression(class_weight='balanced', max_iter=1000)
    clf.fit(X_train_vec, y_train[target])
    target_models[target] = clf

Indonesian Text Preprocessing

import re

def preprocess_indonesian_text(text):
    # Convert to lowercase
    text = text.lower()
    
    # Remove URLs
    text = re.sub(r'http\S+|www\S+|https\S+', '', text, flags=re.MULTILINE)
    
    # Remove user mentions and the standalone retweet marker "rt"
    text = re.sub(r'@\w+|\brt\b', '', text)
    
    # Remove extra whitespace
    text = re.sub(r'\s+', ' ', text).strip()
    
    return text

# Apply preprocessing
df['Tweet_processed'] = df['Tweet'].apply(preprocess_indonesian_text)

Model Architecture Suggestions

Traditional ML

  • TF-IDF + Logistic Regression: Baseline multi-label classifier
  • TF-IDF + SVM: Better performance on imbalanced classes (see the sketch below)
  • Ensemble Methods: Random Forest or Gradient Boosting
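
A minimal sketch of the SVM baseline, reusing the TF-IDF features and train/test split from the Quick Start; LinearSVC with class_weight='balanced' is one reasonable way to counter the skewed labels, not the only one:

from sklearn.svm import LinearSVC
from sklearn.multioutput import MultiOutputClassifier
from sklearn.metrics import f1_score

# One LinearSVC per label; class_weight='balanced' upweights rare positives
svm_clf = MultiOutputClassifier(LinearSVC(class_weight='balanced'))
svm_clf.fit(X_train_vec, y_train)

y_pred_svm = svm_clf.predict(X_test_vec)
print("Macro F1:", f1_score(y_test, y_pred_svm, average='macro'))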

Deep Learning

  • BERT-based Models: Use Indonesian BERT (IndoBERT) for better performance (see the sketch after this list)
  • Multilingual Models: mBERT or XLM-R for cross-lingual transfer
  • Custom Architecture: BiLSTM + Attention for sequence modeling
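
A minimal multi-label setup with Hugging Face transformers, assuming the indobenchmark/indobert-base-p1 checkpoint (any Indonesian or multilingual encoder can be swapped in); this sketches the model and tokenizer configuration, not a full training loop:

import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

model_name = "indobenchmark/indobert-base-p1"  # assumed checkpoint name
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSequenceClassification.from_pretrained(
    model_name,
    num_labels=12,                              # one logit per label column
    problem_type="multi_label_classification",  # BCEWithLogitsLoss under the hood
)

# Forward pass on a toy batch with float multi-hot labels
texts = ["contoh tweet pertama", "contoh tweet kedua"]
labels = torch.tensor([[0.0] * 12, [1.0] + [0.0] * 11])
batch = tokenizer(texts, padding=True, truncation=True, max_length=128, return_tensors="pt")
outputs = model(**batch, labels=labels)
print(outputs.loss.item(), outputs.logits.shape)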

Multi-task Learning

# Hierarchical classification approach
# 1. First classify: Normal vs Abusive vs Hate Speech
# 2. If Hate Speech: Classify target and intensity
# 3. Multi-task loss combining all objectives
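
One way to realize this hierarchy with the columns in this dataset is a two-stage setup. The sketch below reuses the vectorizer and imports from the Quick Start and assumes HS takes precedence over Abusive when both flags are set:

import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.multioutput import MultiOutputClassifier

# Stage 1: coarse three-way label derived from the HS / Abusive columns
stage1_label = np.where(df['HS'] == 1, 'hate_speech',
                        np.where(df['Abusive'] == 1, 'abusive', 'normal'))
stage1_clf = LogisticRegression(max_iter=1000)
stage1_clf.fit(vectorizer.transform(df['Tweet']), stage1_label)

# Stage 2: target classifiers trained only on hate speech rows
hs_mask = df['HS'] == 1
X_hs = vectorizer.transform(df.loc[hs_mask, 'Tweet'])
target_cols = ['HS_Individual', 'HS_Group', 'HS_Religion', 'HS_Race',
               'HS_Physical', 'HS_Gender', 'HS_Other']
target_clf = MultiOutputClassifier(LogisticRegression(max_iter=1000))
target_clf.fit(X_hs, df.loc[hs_mask, target_cols])

# At inference time a tweet goes through stage 1 first; only tweets predicted
# as hate_speech are passed to the stage 2 models (target shown here; the
# intensity columns can be handled the same way).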

Evaluation Metrics

Given the multi-label and imbalanced nature:

Primary Metrics

  • F1-Score: Macro and micro averages
  • AUC-ROC: For each label separately
  • Hamming Loss: Multi-label specific metric
  • Precision/Recall: Per-label analysis

Specialized Metrics

from sklearn.metrics import multilabel_confusion_matrix, jaccard_score, hamming_loss, f1_score

# Multi-label metrics, using y_test and y_pred from the Quick Start example
jaccard = jaccard_score(y_test, y_pred, average='macro')
hamming = hamming_loss(y_test, y_pred)
macro_f1 = f1_score(y_test, y_pred, average='macro')
print(f"Jaccard: {jaccard:.3f}  Hamming loss: {hamming:.3f}  Macro F1: {macro_f1:.3f}")

Data Quality & Considerations

Strengths

  • Comprehensive Labeling: Multiple dimensions of hate speech
  • Large Scale: 13K+ samples for robust training
  • Real-world Data: Actual Indonesian tweets
  • Intensity Levels: Enables nuanced classification
  • Multiple Targets: Covers various hate speech types

Limitations

  • ⚠️ Class Imbalance: Some categories <5% positive samples
  • ⚠️ Language Specific: Limited to Indonesian context
  • ⚠️ Temporal Bias: Tweet collection timeframe not specified
  • ⚠️ Cultural Context: May not generalize across Indonesian regions

Ethical Considerations

Content Warning: This dataset contains hate speech and abusive language examples.

Responsible Use

  • Research Purpose: Intended for academic and safety research
  • Content Moderation: Building protective systems
  • Bias Awareness: Monitor for demographic biases in predictions
  • Privacy: Tweets should be handled according to platform policies

Not Suitable For

  • Training generative models that could amplify hate speech
  • Creating offensive content detection without human oversight
  • Commercial use without proper ethical review

Related Work & Benchmarks

Indonesian NLP Resources

  • IndoBERT: Pre-trained Indonesian BERT model
  • Indonesian Sentiment: Related sentiment analysis datasets
  • Multilingual Models: Cross-lingual hate speech detection

Benchmark Performance

Consider comparing against:

  • Traditional ML baselines (TF-IDF + SVM)
  • Pre-trained language models (mBERT, IndoBERT)
  • Multi-task learning approaches

Citation

@dataset{indonesian_hate_speech_2025,
  title={Indonesian Hate Speech Detection Dataset},
  year={2025},
  publisher={Dataset From Kaggle},
  url={https://huggingface.co/datasets/nahiar/indonesian-hate-speech},
  note={Multi-label hate speech and abusive language detection for Indonesian social media}
}

Acknowledgments

This dataset contributes to safer Indonesian social media environments and supports research in:

  • Multilingual content moderation
  • Southeast Asian NLP
  • Cross-cultural hate speech patterns
  • Social media safety systems

Note: Handle this sensitive content responsibly and in accordance with ethical AI principles.
