Kaala741 committed on
Commit 684ee8a
1 Parent(s): cb41aad

Update README.md

Files changed (1):
  1. README.md +2 -3
README.md CHANGED
@@ -1,17 +1,17 @@
  ---
  license: mit
  language:
- - en
+ - py
  metrics:
  - accuracy
  - f1
  - precision
  - recall
+ dataset : https://github.com/jing-qian/A-Benchmark-Dataset-for-Learning-to-Intervene-in-Online-Hate-Speech
  base_model: google-bert/bert-base-uncased
  pipeline_tag: text-classification
  library_name: transformers
  ---
- # Group-3

  # Hate Speech Detection on the Reddit platform

@@ -101,4 +101,3 @@ BERT</pre>
  The performance metrics for the Simple NN, CNN, and LSTM models were similar, achieving an accuracy of 85% and an F1 score of 71% for hate speech detection. In contrast, the BERT model outperformed the others, achieving an accuracy of 89% and an F1 score of 89%.
  Based on these results, we selected BERT as our final model for hate speech detection and saved this model for further implementation and deployment.</p>

- ### The individual implementations of the models can be found in separate branches of the repository. Each team member experimented with different approaches and models, and we decided to use the best methods from these individual efforts.
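
The updated card metadata declares `pipeline_tag: text-classification`, `library_name: transformers`, and `base_model: google-bert/bert-base-uncased`. Under those assumptions, a minimal sketch of loading the saved BERT classifier with the generic `transformers` text-classification pipeline might look like the following; the repo id is a placeholder, not one confirmed by this commit:

```python
# Minimal usage sketch, assuming the fine-tuned BERT checkpoint described in the
# card has been pushed to a Hugging Face model repo. The repo id is a placeholder.
from transformers import pipeline

MODEL_ID = "<username>/<hate-speech-detection-repo>"  # placeholder, not from this commit

# The card's pipeline_tag is text-classification, so the generic pipeline applies.
classifier = pipeline("text-classification", model=MODEL_ID)

# Classify a Reddit comment; label names depend on how the fine-tuned head was configured.
print(classifier("example Reddit comment to screen for hate speech"))
```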