ek-id committed
Commit d7b490f · verified · 1 Parent(s): 2294eaf

Update hyperparameter and metric info in README.md

Files changed (1)
  1. README.md +7 -7
README.md CHANGED
@@ -20,10 +20,10 @@ model-index:
       type: polite-guard
     metrics:
     - type: accuracy
-      value: 92.2
+      value: 92.4
       name: Accuracy
     - type: f1
-      value: 92.2
+      value: 92.4
       name: F1 Score
 ---
 # Polite Guard
@@ -31,8 +31,8 @@ model-index:
 - **Model type**: BERT* (Bidirectional Encoder Representations from Transformers)
 - **Architecture**: Fine-tuned [BERT-base uncased](https://huggingface.co/bert-base-uncased)
 - **Task**: Text Classification
-- **Source Code**: (https://github.com/intel/polite-guard)
-- **Dataset**: (https://huggingface.co/datasets/Intel/polite-guard)
+- **Source Code**: https://github.com/intel/polite-guard
+- **Dataset**: https://huggingface.co/datasets/Intel/polite-guard
 
 **Polite Guard** is an open-source NLP language model developed by Intel, fine-tuned from BERT for text classification tasks. It is designed to classify text into four categories: polite, somewhat polite, neutral, and impolite. This model, along with its [accompanying datasets](https://huggingface.co/datasets/Intel/polite-guard) and [source code](https://github.com/intel/polite-guard), is available on Hugging Face* and GitHub* to enable both communities to contribute to developing more sophisticated and context-aware AI systems.
 
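For readers skimming this commit, a minimal usage sketch of the classifier described in the card follows. It assumes the model is published on the Hub as `Intel/polite-guard` (the repository this card belongs to) and that the checkpoint's label strings match the four categories named above; it is an illustration, not the card's official "How to Use" snippet.

```python
# Minimal usage sketch: run the fine-tuned classifier through the transformers
# pipeline. The model id "Intel/polite-guard" and the exact label strings are
# assumptions based on this card, not copied from its "How to Use" section.
from transformers import pipeline

classifier = pipeline("text-classification", model="Intel/polite-guard")

text = "Thank you so much for your patience; I'll be right with you!"
print(classifier(text))  # e.g. [{'label': 'polite', 'score': 0.98}]
```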
@@ -62,7 +62,7 @@ By ensuring respectful and polite interactions on various platforms, Polite Guar
 
 |Hypeparameter|Batch size|Learning rate|Learning rate schedule |Max epochs|Optimizer|Weight decay|Precision |
 |-------------|----------|-------------|--------------------------------|----------|---------|------------|----------|
-|Value |32 | 2.90e-5 |Linear warmup (10% of steps) | 2 | AdamW | 0.0003 |bf16-mixed|
+|Value |32 | 4.78e-05 |Linear warmup (10% of steps) | 2 | AdamW | 1.01e-06 |bf16-mixed|
 
 Hyperparameter tuning was performed using Bayesian optimization with the Tree-structured Parzen Estimator (TPE) algorithm through Optuna* with 35 trials to maximize the validation F1-score. The hyperparameter search space included
 
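The tuning setup described in that paragraph (Optuna's TPE sampler, 35 trials, maximizing validation F1) can be sketched roughly as below. The search bounds and the `train_and_validate` helper are hypothetical placeholders, not code from the linked repository; the best trial's learning rate and weight decay are what the table above now records.

```python
# Rough sketch of TPE-based tuning with Optuna as described in the card:
# 35 trials, objective = validation F1-score. The bounds and the
# train_and_validate() helper are illustrative placeholders, not the
# repository's actual search space or training loop.
import optuna


def train_and_validate(learning_rate: float, weight_decay: float) -> float:
    # Hypothetical stand-in: fine-tune BERT-base uncased with these
    # hyperparameters and return the validation F1-score.
    return 0.0


def objective(trial: optuna.Trial) -> float:
    learning_rate = trial.suggest_float("learning_rate", 1e-5, 1e-4, log=True)
    weight_decay = trial.suggest_float("weight_decay", 1e-6, 1e-2, log=True)
    return train_and_validate(learning_rate=learning_rate, weight_decay=weight_decay)


study = optuna.create_study(direction="maximize", sampler=optuna.samplers.TPESampler())
study.optimize(objective, n_trials=35)
print(study.best_params, study.best_value)
```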
@@ -82,8 +82,8 @@ The code for the synthetic data generation and fine-tuning can be found [here](h
 
 Here are the key performance metrics of the model on the test dataset containing both synthetic and manually annotated data:
 
-- **Accuracy**: 92.2% on the Polite Guard test dataset.
-- **F1-Score**: 92.2% on the Polite Guard test dataset.
+- **Accuracy**: 92.4% on the Polite Guard test dataset.
+- **F1-Score**: 92.4% on the Polite Guard test dataset.
 
 ## How to Use
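A sanity-check evaluation along these lines could be run against the published test data to reproduce the two numbers above. The split name, the column names, and the use of macro averaging for F1 are assumptions here, not details confirmed by the card.

```python
# Illustrative evaluation sketch; the split name, column names, and macro-F1
# averaging are assumptions rather than details taken from the model card.
from datasets import load_dataset
from sklearn.metrics import accuracy_score, f1_score
from transformers import pipeline

classifier = pipeline("text-classification", model="Intel/polite-guard")
test = load_dataset("Intel/polite-guard", split="test")  # assumed split name

preds = [classifier(row["text"])[0]["label"] for row in test]  # assumed text column
labels = [row["label"] for row in test]                        # assumed label column

print("accuracy:", accuracy_score(labels, preds))
print("macro F1:", f1_score(labels, preds, average="macro"))
```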
 