Update README.md
## Model description

This model was fine-tuned with Prompt Tuning, a Parameter-Efficient Fine-Tuning (PEFT) method. Our goal was to evaluate bias within Llama 2, and prompt tuning is an efficient way to mitigate those biases while keeping the base model's weights frozen.
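
As a rough illustration of what this setup looks like with the Hugging Face `peft` library, here is a minimal prompt-tuning sketch. The number of virtual tokens, the initialization text, and the training hyperparameters are placeholders, not the exact values used for this checkpoint.

```python
# Minimal prompt-tuning sketch with the `peft` library.
# Placeholder values; not the exact configuration used to train this checkpoint.
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PromptTuningConfig, PromptTuningInit, TaskType, get_peft_model

base = "meta-llama/Llama-2-7b-chat-hf"
tokenizer = AutoTokenizer.from_pretrained(base)
model = AutoModelForCausalLM.from_pretrained(base)

peft_config = PromptTuningConfig(
    task_type=TaskType.CAUSAL_LM,
    prompt_tuning_init=PromptTuningInit.TEXT,
    prompt_tuning_init_text="Classify whether the sentence is biased.",  # placeholder
    num_virtual_tokens=8,  # placeholder; not specified in the card
    tokenizer_name_or_path=base,
)

# Only the virtual prompt embeddings are trainable; the 7B base weights stay frozen.
model = get_peft_model(model, peft_config)
model.print_trainable_parameters()
```

Because only the prompt embeddings are updated, the saved adapter is tiny compared to the frozen Llama 2 weights it sits on top of.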

Classification Report of Llama 2 on the original sentences:

                  precision    recall  f1-score   support

Classification Report of Llama 2 on the perturbed sentences:

                  precision    recall  f1-score   support

        accuracy                           0.77      1792
       macro avg       0.80      0.76      0.76      1792
    weighted avg       0.80      0.77      0.77      1792

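The reports above follow the layout of scikit-learn's `classification_report`. Below is a minimal sketch of how such a report is produced; the label names and predictions are placeholders, since the card does not state the label set or how predictions were parsed from the model's outputs.

```python
# Hypothetical sketch of how a report in the format above is generated.
# The labels and predictions are placeholders, not the actual evaluation data.
from sklearn.metrics import classification_report

y_true = ["biased", "unbiased", "biased", "unbiased"]    # gold labels (placeholder)
y_pred = ["biased", "unbiased", "unbiased", "unbiased"]  # labels parsed from model outputs (placeholder)

# Prints per-class precision/recall/f1 plus the accuracy, macro avg and weighted avg rows.
print(classification_report(y_true, y_pred, digits=2))
```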
## Intended uses & limitations