Update README.md
README.md (CHANGED)
@@ -21,8 +21,8 @@ This model is a fine-tuned version of [meta-llama/Llama-2-7b-chat-hf](https://hu
 This model is fine-tuned using a parameter-efficient method, prompt tuning. Our goal was to evaluate bias within Llama 2, and prompt tuning is an efficient way to weed out the biases while keeping the base weights frozen.
 `
 Classification Report of Llama 2 on the original sentences:
-| |precision| |recall| |f1-score| |support|
 
+              precision    recall  f1-score   support
    negative       1.00      1.00      1.00       576
     neutral       0.92      0.95      0.93       640
    positive       0.94      0.91      0.92       576
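
A rough sketch of the setup the paragraph above describes, using the Hugging Face PEFT library: a small set of trainable virtual tokens is attached to a frozen Llama 2 base model. The number of virtual tokens and the initialization text below are assumptions for illustration, not the values used to train this model.

```python
# Minimal prompt-tuning sketch with Hugging Face PEFT; hyperparameters are illustrative.
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PromptTuningConfig, PromptTuningInit, TaskType, get_peft_model

base_model_id = "meta-llama/Llama-2-7b-chat-hf"
tokenizer = AutoTokenizer.from_pretrained(base_model_id)
base_model = AutoModelForCausalLM.from_pretrained(base_model_id)

peft_config = PromptTuningConfig(
    task_type=TaskType.CAUSAL_LM,
    prompt_tuning_init=PromptTuningInit.TEXT,
    # Assumed initialization text; the actual prompt used for this model is not shown here.
    prompt_tuning_init_text="Classify the sentiment of the sentence as negative, neutral, or positive:",
    num_virtual_tokens=8,  # assumed value
    tokenizer_name_or_path=base_model_id,
)

# Only the soft-prompt embeddings are trainable; the Llama 2 weights stay frozen.
model = get_peft_model(base_model, peft_config)
model.print_trainable_parameters()
```
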
@@ -33,8 +33,8 @@ Classification Report of Llama 2 on the original sentences:
 
 
 Classification Report of Llama 2 on the perturbed sentences:
-precision recall f1-score support
 
+              precision    recall  f1-score   support
    negative       0.93      0.74      0.82       576
     neutral       0.68      0.97      0.80       640
    positive       0.80      0.58      0.67       576
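
The two reports follow the layout of scikit-learn's classification_report, so a comparison of the model on original versus perturbed sentences could be produced along these lines. The label lists below are placeholder data, not the actual evaluation set or the model's real predictions.

```python
# Sketch of producing the two classification reports with scikit-learn.
from sklearn.metrics import classification_report

# Placeholder data: in the real evaluation these would be the gold labels and the
# model's predicted labels for the original and perturbed versions of each sentence.
y_true          = ["negative", "neutral", "positive", "neutral"]
preds_original  = ["negative", "neutral", "positive", "neutral"]
preds_perturbed = ["negative", "neutral", "negative", "positive"]

labels = ["negative", "neutral", "positive"]

print("Classification Report of Llama 2 on the original sentences:")
print(classification_report(y_true, preds_original, labels=labels))

print("Classification Report of Llama 2 on the perturbed sentences:")
print(classification_report(y_true, preds_perturbed, labels=labels))
```
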