will-rads committed on
Commit 5e10c5e · verified · 1 Parent(s): 9114a26

Update model card with pipeline_tag, widget examples, and detailed info

Files changed (1): README.md (+32 -7)

README.md CHANGED
```diff
@@ -1,16 +1,41 @@
 ---
+language: en
+license: mit # Or choose another like 'apache-2.0', 'cc-by-sa-4.0', etc.
 library_name: transformers
-license: apache-2.0
-base_model: distilbert-base-uncased
 tags:
-- generated_from_keras_callback
+- text-classification
+- hate-speech
+- offensive-language
+- distilbert
+- tensorflow
+pipeline_tag: text-classification
+widget:
+- text: "I love this beautiful day, it's fantastic!"
+  example_title: "Positive Example"
+- text: "You are a terrible person and I wish you the worst."
+  example_title: "Offensive Example"
+- text: "This is a completely neutral statement about clouds."
+  example_title: "Neutral Example"
+- text: "Kill all of them, they don't belong in our country." # Potentially strong hate speech
+  example_title: "Hate Speech Example"
 model-index:
-- name: distilbert-hatespeech-classifier
-  results: []
+- name: distilbert-hatespeech-classifier # Should match your model name
+  results:
+  - task:
+      type: text-classification
+      name: Text Classification
+    dataset:
+      name: tdavidson/hate_speech_offensive # Or the specific name you used
+      type: hf # Indicates it's from Hugging Face datasets
+    metrics:
+    - name: Validation Accuracy
+      type: accuracy
+      value: 0.7137 # Your best validation accuracy (from Epoch 2)
+    - name: Validation Loss
+      type: loss
+      value: 0.7337 # Your best validation loss (from Epoch 2)
 ---
 
-<!-- This model card has been generated automatically according to the information Keras had access to. You should
-probably proofread and complete it, then remove this comment. -->
 
 # Ethical-Content-Moderation
 Fine-Tuning DistilBERT for Ethical Content Moderation
```
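The metadata this commit adds (`pipeline_tag`, `widget`, `model-index`) lives in the YAML front matter that the Hub parses from between the leading `---` markers of README.md. A minimal stdlib-only sketch of that split, using a shortened hypothetical `README_TEXT` rather than the full card from the diff:

```python
# Sketch: separate the YAML front matter (which the Hub reads as metadata)
# from the markdown body of a model card. README_TEXT is a hypothetical,
# abbreviated stand-in for the card in this commit.
README_TEXT = """---
pipeline_tag: text-classification
license: mit
---

# Ethical-Content-Moderation
Fine-Tuning DistilBERT for Ethical Content Moderation
"""

def split_front_matter(text: str) -> tuple[str, str]:
    """Return (front_matter, body) for a markdown file delimited by ---."""
    if not text.startswith("---\n"):
        return "", text          # no front matter present
    end = text.index("\n---", 4)  # locate the closing delimiter
    front = text[4:end]
    body = text[end + len("\n---\n"):]
    return front, body

front, body = split_front_matter(README_TEXT)
print(front.splitlines()[0])          # first metadata line
print(body.lstrip().splitlines()[0])  # first heading of the body
```

A YAML parser (e.g. PyYAML) would then turn `front` into the `pipeline_tag` / `widget` / `model-index` structure shown in the diff; the splitting step itself needs only the standard library.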
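The `model-index` entry points at the tdavidson/hate_speech_offensive dataset, which distinguishes three classes: hate speech, offensive language, and neither. A hedged sketch of decoding the classifier's raw logits into one of those labels; the `ID2LABEL` mapping is an illustrative assumption, not taken from the repo's config:

```python
import math

# Assumed label mapping for the three classes of
# tdavidson/hate_speech_offensive; the fine-tuned model's config
# may use different label names or ordering.
ID2LABEL = {0: "hate speech", 1: "offensive language", 2: "neither"}

def softmax(logits):
    """Numerically stable softmax over a list of raw scores."""
    m = max(logits)
    exps = [math.exp(x - m) for x in logits]
    total = sum(exps)
    return [e / total for e in exps]

def decode(logits):
    """Return (label, confidence) for one example's logits."""
    probs = softmax(logits)
    best = max(range(len(probs)), key=probs.__getitem__)
    return ID2LABEL[best], probs[best]

label, confidence = decode([0.1, 2.3, -1.0])
print(label, round(confidence, 4))
```

This is the post-processing a `text-classification` pipeline performs internally; with the widget examples above, the Hub's hosted inference widget applies the same argmax-over-softmax step using the model's own `id2label`.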