---
license: mit
task_categories:
  - text-classification
  - token-classification
language:
  - en
size_categories:
  - 10K<n<100K
configs:
  - config_name: default
    data_files:
      - split: full_train
        path: full_train.parquet
      - split: balanced_train
        path: balanced_train.parquet
      - split: test
        path: test.parquet
---

# Dataset Card for Jigsaw Toxic Comments

## Dataset Description

The Jigsaw Toxic Comments dataset is a benchmark dataset created for the Toxic Comment Classification Challenge on Kaggle. It is designed to help develop machine learning models that can identify and classify toxic online comments across multiple categories of toxicity.

- **Curated by:** Jigsaw (a technology incubator within Alphabet Inc.)
- **Shared by:** Kaggle
- **Language(s) (NLP):** English
- **License:** CC0 1.0 Universal (Public Domain)

### Dataset Sources

- **Competition:** [Kaggle Competition Page](https://www.kaggle.com/competitions/jigsaw-toxic-comment-classification-challenge)

## Uses

### Direct Use

The dataset is primarily intended for:

- Training machine learning models for toxic comment classification
- Developing natural language processing (NLP) solutions for online content moderation
- Research on detecting different types of toxic language in online communications
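As a minimal illustration of the first use case, the sketch below trains a one-vs-rest logistic-regression classifier over TF-IDF features. The comments and labels here are invented toy data, and only two of the six label columns are shown for brevity; this is a sketch of the general approach, not the competition's reference solution.

```python
import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.multiclass import OneVsRestClassifier
from sklearn.pipeline import make_pipeline

# Toy comments; a real run would use the dataset's comment text.
comments = [
    "you are an idiot",
    "what a lovely day",
    "I will hurt you",
    "thanks for the helpful answer",
    "shut up, moron",
    "great write-up, very clear",
]
# One binary column per toxicity category (here: toxic, insult).
labels = np.array([
    [1, 1],
    [0, 0],
    [1, 0],
    [0, 0],
    [1, 1],
    [0, 0],
])

# One independent binary classifier per label column.
clf = make_pipeline(
    TfidfVectorizer(),
    OneVsRestClassifier(LogisticRegression(max_iter=1000)),
)
clf.fit(comments, labels)

preds = clf.predict(["you are a moron"])
print(preds.shape)  # one row, one 0/1 column per label
```

With all six label columns, `labels` would simply have six columns instead of two; the pipeline is unchanged.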

### Out-of-Scope Use

The dataset should NOT be used for:

- Real-time content moderation without additional validation
- Making definitive judgments about individual comments without human review
- Training models intended to suppress legitimate free speech or diverse opinions

## Dataset Structure

The dataset is distributed here in three splits (`full_train`, `balanced_train`, and `test`) and typically contains the following key features:

- `comment_text`: the raw comment text
- Binary labels for multiple types of toxicity:
  - `toxic`
  - `severe_toxic`
  - `obscene`
  - `threat`
  - `insult`
  - `identity_hate`
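To make the schema concrete, here is a small pandas sketch built from invented toy rows using the assumed column names above. Per-label positive rates like these are what motivate a class-balanced training split such as `balanced_train`: in the real data the positive classes are rare.

```python
import pandas as pd

# Toy rows mimicking the assumed schema: comment_text + six 0/1 labels.
df = pd.DataFrame({
    "comment_text": ["hello there", "you fool", "nice post", "go away, creep"],
    "toxic":         [0, 1, 0, 1],
    "severe_toxic":  [0, 0, 0, 0],
    "obscene":       [0, 1, 0, 0],
    "threat":        [0, 0, 0, 0],
    "insult":        [0, 1, 0, 1],
    "identity_hate": [0, 0, 0, 0],
})

label_cols = ["toxic", "severe_toxic", "obscene",
              "threat", "insult", "identity_hate"]

# Fraction of positive examples per label; the real data is heavily
# skewed toward the non-toxic class.
positive_rate = df[label_cols].mean()
print(positive_rate["toxic"])  # 0.5 in this toy frame
```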

## Dataset Creation

### Curation Rationale

The dataset was created to address the growing challenge of online toxicity and harassment. By providing a labeled dataset, it aims to support the development of AI tools that can help identify and mitigate harmful online communication.

### Source Data

#### Data Collection and Processing

- Sourced from online comment platforms (primarily Wikipedia talk pages)
- Underwent manual annotation by human raters
- Labeled for multiple categories of toxic language
- Preprocessed to ensure data quality and consistency

#### Who are the source data producers?

- Original comments from various online platforms
- Annotations and labels created by human raters
- Curated and published by Jigsaw (Alphabet Inc.)

### Annotations

#### Annotation process

- Manual review by human annotators
- Multiple annotators labeled each comment to ensure reliability
- Comments classified into different toxicity categories
- Likely used guidelines defining various forms of toxic language
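The published dataset does not document its exact aggregation rule, but a common way to combine multiple annotators' ratings per comment is a majority vote, sketched here in plain Python with hypothetical votes:

```python
# Hypothetical 0/1 votes from three annotators for one toxicity label.
votes = {
    "comment_1": [1, 1, 0],
    "comment_2": [0, 0, 0],
    "comment_3": [1, 0, 1],
}

def majority_label(ratings):
    """Return 1 if a strict majority of annotators marked the comment toxic."""
    return int(sum(ratings) > len(ratings) / 2)

labels = {cid: majority_label(r) for cid, r in votes.items()}
print(labels)  # {'comment_1': 1, 'comment_2': 0, 'comment_3': 1}
```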

#### Who are the annotators?

- Professional content moderators
- Trained human raters
- Specific demographic information not publicly disclosed

## Personal and Sensitive Information

- Comments may contain personal or sensitive language
- Efforts were made to anonymize and remove directly identifying information
- Caution is recommended when using or processing the dataset

## Bias, Risks, and Limitations

- Potential cultural and linguistic biases in toxicity interpretation
- Limited to English-language comments
- Annotations may reflect subjective human judgments
- Possible underrepresentation of certain linguistic nuances

### Recommendations

- Use the dataset as a training resource, not as a definitive toxicity classifier
- Supplement with additional context and human review
- Be aware of potential biases in toxicity classification
- Regularly update and validate classification models

## Citation

**BibTeX:**

```bibtex
@misc{jigsaw2018toxiccomments,
  title={Jigsaw Toxic Comment Classification Challenge},
  author={Jigsaw},
  year={2018},
  howpublished={\url{https://www.kaggle.com/competitions/jigsaw-toxic-comment-classification-challenge}}
}
```

**APA:** Jigsaw. (2018). *Toxic Comment Classification Challenge* [Dataset]. Kaggle. https://www.kaggle.com/competitions/jigsaw-toxic-comment-classification-challenge

## Glossary

- **Toxicity:** Language that is rude, disrespectful, or likely to make someone leave a discussion
- **Binary classification:** Labeling comments as either toxic or non-toxic in specific categories
- **NLP:** Natural Language Processing, a field of AI focused on understanding and processing human language

## Dataset Card Authors

- Jigsaw Team
- Kaggle Community

## Dataset Card Contact

For more information, visit the [Kaggle Competition Page](https://www.kaggle.com/competitions/jigsaw-toxic-comment-classification-challenge).