Commit b141ab2 · 1 Parent(s): 063f6d4
remove sub-headers
README.md CHANGED
@@ -21,10 +21,7 @@ size_categories:
 
 The dataset is designed to analyze and address hate speech within online platforms. It consists of two sets: the training and testing sets. The two datasets have been labeled and categorized instances of hate speech into nine distinct categories.
 
-## Dataset
-
-### Dataset Description
-
+## Dataset Description
 <!-- Provide a longer summary of what this dataset is. -->
 The dataset comprises three key features: tweets, labels (with hate speech denoted as 1 and non-hate speech as 0), and categories (behavior, class, disability, ethnicity, gender,
 physical appearance, race, religion, sexual orientation).
@@ -34,12 +31,6 @@ physical appearance, race, religion, sexual orientation).
 
 ## Uses
 
-<!-- Address questions around how the dataset is intended to be used. -->
-
-### Direct Use
-
-<!-- This section describes suitable use cases for the dataset. -->
-
 This dataset can be utilized for various purposes, including but not limited to:
 * Developing and training machine learning models for hate speech detection.
 * Analyzing the prevalence and patterns of hate speech across different categories.
@@ -47,26 +38,17 @@ This dataset can be utilized for various purposes, including but not limited to:
 
 Check it out for the example [project](https://github.com/Wei-Hsi/AI4health)!
 
-
-## Dataset Creation
-
-### Source Data
-
+## Source Data
 <!-- This section describes the source data (e.g. news text and headlines, social media posts, translated sentences, ...). -->
 The dataset utilized in this study is sourced from Kaggle and named the [Hate Speech and Offensive Language dataset](https://www.kaggle.com/datasets/mrmorj/hate-speech-and-offensive-language-dataset/).
 Hate speech instances are identified by selecting tweets within the "class" column.
 
 ### Annotations
-
-<!-- If the dataset contains annotations which are not part of the initial data collection, use this section to describe them. -->
-
-#### Annotation process
-
 <!-- This section describes the annotation process such as annotation tools used in the process, the amount of data annotated, annotation guidelines provided to the annotators, interannotator statistics, annotation validation, etc. -->
-
 Category labels were generated through an OpenAI API call employing the GPT-3.5 model.
 It's important to note the instability in category predictions when utilizing GPT-3.5 for label generation, as it tends to predict different categories each time. However, we have confirmed that these tweets were labeled correctly. If there are any misclassified labels, please feel free to reach out. Thank you in advance for your assistance.
 
 ## Dataset Card Contact
 
-Please feel free to contact me via the [email]([email protected])!
+Please feel free to contact me via the [email]([email protected])!
+Please feel free to contact me via [email protected]!
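The card above describes three features per record: the tweet text, a binary label (1 for hate speech, 0 for non-hate speech), and one of nine categories, split into training and testing sets. Below is a minimal sketch of loading and inspecting such splits; the file names (train.csv, test.csv) and column names (tweet, label, category) are assumptions for illustration, since the card does not spell them out.

```python
# Minimal sketch: load the train/test splits and inspect the three features
# described in the card. File and column names ("train.csv", "test.csv",
# "tweet", "label", "category") are assumptions for illustration only.
import pandas as pd

train = pd.read_csv("train.csv")  # hypothetical path to the training split
test = pd.read_csv("test.csv")    # hypothetical path to the testing split

# label: 1 = hate speech, 0 = non-hate speech (as stated in the card)
print(train["label"].value_counts())

# category: one of the nine categories listed in the card
print(train["category"].value_counts())

# one example tweet per category
print(train.groupby("category")["tweet"].first())
```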
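For the "developing and training machine learning models for hate speech detection" use case, a simple baseline could look like the sketch below. This is an illustrative TF-IDF plus logistic regression baseline under the same assumed file and column names, not the approach of the linked AI4health project.

```python
# Illustrative baseline for the hate speech detection use case: TF-IDF
# features with logistic regression. Not the linked project's method;
# file and column names are the same assumptions as in the loading sketch.
import pandas as pd
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import classification_report

train = pd.read_csv("train.csv")
test = pd.read_csv("test.csv")

vectorizer = TfidfVectorizer(max_features=20000, ngram_range=(1, 2))
X_train = vectorizer.fit_transform(train["tweet"])
X_test = vectorizer.transform(test["tweet"])

clf = LogisticRegression(max_iter=1000)
clf.fit(X_train, train["label"])

print(classification_report(test["label"], clf.predict(X_test)))
```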
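The Source Data section states that hate speech instances are selected from the Kaggle Hate Speech and Offensive Language dataset via its "class" column. In that Kaggle dataset, class 0 marks hate speech, 1 offensive language, and 2 neither; a sketch of such a selection step follows. The CSV file name and the choice of class 2 rows as non-hate (label 0) examples are assumptions, since the card does not specify them.

```python
# Sketch of the selection step: filter the Kaggle source to hate speech rows
# using its "class" column (0 = hate speech, 1 = offensive language,
# 2 = neither). "labeled_data.csv" is an assumed name for the downloaded CSV,
# and taking class 2 rows as non-hate examples is an illustrative choice;
# the card does not say how the label-0 tweets were actually drawn.
import pandas as pd

source = pd.read_csv("labeled_data.csv")

hate = source.loc[source["class"] == 0, ["tweet"]].copy()
hate["label"] = 1       # hate speech, in this card's labeling scheme

non_hate = source.loc[source["class"] == 2, ["tweet"]].copy()
non_hate["label"] = 0   # non-hate speech

print(len(hate), len(non_hate))
```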
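The Annotations section says the category labels were produced with an OpenAI API call to GPT-3.5 and notes that its predictions can vary between runs. A sketch of such a call is below; the prompt wording and parameters are assumptions rather than the authors' actual setup, with temperature=0 shown only as one common way to reduce run-to-run variation.

```python
# Illustrative sketch of the annotation step: asking GPT-3.5 to pick one of
# the nine categories for a tweet via the OpenAI API. The prompt wording and
# parameters are assumptions, not the authors' actual setup; temperature=0
# is shown only as one common way to reduce the run-to-run instability the
# card mentions.
from openai import OpenAI

CATEGORIES = [
    "behavior", "class", "disability", "ethnicity", "gender",
    "physical appearance", "race", "religion", "sexual orientation",
]

client = OpenAI()  # expects OPENAI_API_KEY in the environment

def categorize(tweet: str) -> str:
    prompt = (
        "Classify the hate speech target of the following tweet into exactly "
        f"one of these categories: {', '.join(CATEGORIES)}.\n"
        f"Tweet: {tweet}\n"
        "Answer with the category name only."
    )
    response = client.chat.completions.create(
        model="gpt-3.5-turbo",
        messages=[{"role": "user", "content": prompt}],
        temperature=0,
    )
    return response.choices[0].message.content.strip().lower()

# Example call (placeholder text):
# print(categorize("<tweet text here>"))
```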