nicholasKluge committed
Commit e8d5663
1 Parent(s): a3591c8

Update README.md

Files changed (1)
  1. README.md +6 -4
README.md CHANGED
@@ -47,7 +47,7 @@ co2_eq_emissions:
 ---
 # Aira-2-124M-DPO
 
-`Aira-2` is the second version of the Aira instruction-tuned series. `Aira-2-124M-DPO` is an instruction-tuned model further fine-tuned via DPO based on [Aira-2-124M](https://huggingface.co/nicholasKluge/Aira-2-124M). The model was first trained with supervised fine-tuning (SFT) on a dataset composed of prompts and completions generated synthetically by prompting already-tuned models (ChatGPT, Llama, Open-Assistant, etc.). Then, the model was fine-tuned again via DPO using a reward dataset created by the [`Aira-RewardModel`](https://huggingface.co/nicholasKluge/RewardModel).
+Aira-2 is the second version of the Aira instruction-tuned series. Aira-2-124M-DPO is an instruction-tuned model further fine-tuned via DPO based on [Aira-2-124M](https://huggingface.co/nicholasKluge/Aira-2-124M). The model was first trained with supervised fine-tuning (SFT) on a dataset composed of prompts and completions generated synthetically by prompting already-tuned models (ChatGPT, Llama, Open-Assistant, etc.). Then, the model was fine-tuned again via DPO using a reward dataset created by the [`Aira-RewardModel`](https://huggingface.co/nicholasKluge/RewardModel).
 
 Check our gradio-demo in [Spaces](https://huggingface.co/spaces/nicholasKluge/Aira-Demo).
 
@@ -108,9 +108,11 @@ The model will output something like:
 
 ## Limitations
 
-🤥 Generative models can perpetuate the generation of pseudo-informative content, that is, false information that may appear truthful.
+- **Hallucinations:** This model can produce content that can be mistaken for the truth but is, in fact, misleading or entirely false, i.e., hallucination.
 
-🤬 In certain types of tasks, generative models can produce harmful and discriminatory content inspired by historical stereotypes.
+- **Biases and Toxicity:** This model inherits the social and historical stereotypes from the data used to train it. Given these biases, the model can produce toxic content, i.e., content that is harmful, offensive, or detrimental to individuals, groups, or communities.
+
+- **Repetition and Verbosity:** The model may get stuck in repetition loops (especially if the repetition penalty during generation is set to a low value) or produce verbose responses unrelated to the prompt it was given.
 
 ## Evaluation
 
@@ -146,4 +148,4 @@ The model will output something like:
 
 ## License
 
-The `Aira-2-124M-DPO` is licensed under the Apache License, Version 2.0. See the [LICENSE](LICENSE) file for more details.
+Aira-2-124M-DPO is licensed under the Apache License, Version 2.0. See the [LICENSE](LICENSE) file for more details.
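The updated description keeps the same training story: supervised fine-tuning (SFT) on synthetic instruction data, then a second stage of DPO against a preference dataset scored with the Aira-RewardModel. The card excerpt above does not name the training tooling, so, as orientation only, here is a minimal, hypothetical sketch of that second stage using TRL's `DPOTrainer` (signature as in TRL ~0.7, where `beta` and `tokenizer` are passed directly; newer releases move them into a `DPOConfig`). The preference examples and hyperparameters below are illustrative assumptions, not values from the card.

```python
from datasets import Dataset
from transformers import AutoModelForCausalLM, AutoTokenizer, TrainingArguments
from trl import DPOTrainer

base = "nicholasKluge/Aira-2-124M"  # the SFT checkpoint named in the card
model = AutoModelForCausalLM.from_pretrained(base)      # policy to be tuned
ref_model = AutoModelForCausalLM.from_pretrained(base)  # frozen reference
tokenizer = AutoTokenizer.from_pretrained(base)
tokenizer.pad_token = tokenizer.eos_token  # GPT-2 has no pad token by default

# Hypothetical preference pairs; DPOTrainer expects "prompt",
# "chosen", and "rejected" columns.
train_dataset = Dataset.from_dict({
    "prompt": ["What is AI alignment?"],
    "chosen": ["AI alignment studies how to make AI systems pursue their designers' intended goals."],
    "rejected": ["No idea."],
})

training_args = TrainingArguments(
    output_dir="aira-2-124m-dpo",
    per_device_train_batch_size=4,
    num_train_epochs=1,
    logging_steps=10,
)

trainer = DPOTrainer(
    model=model,
    ref_model=ref_model,
    args=training_args,
    beta=0.1,  # strength of the KL pull toward the reference model
    train_dataset=train_dataset,
    tokenizer=tokenizer,
)
trainer.train()
```

DPO optimizes the policy directly on preference pairs; the frozen reference model keeps the tuned policy close to the SFT checkpoint, with `beta` controlling that trade-off.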
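The new Limitations list also warns about repetition loops when the repetition penalty is set too low. A minimal sketch of querying the released model with the standard 🤗 Transformers API follows; the prompt and sampling values are illustrative assumptions, and the card's own usage section (outside these hunks) may prescribe a model-specific prompt template.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "nicholasKluge/Aira-2-124M-DPO"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

prompt = "What is instruction tuning?"  # hypothetical prompt
inputs = tokenizer(prompt, return_tensors="pt")

outputs = model.generate(
    **inputs,
    max_new_tokens=128,
    do_sample=True,
    top_p=0.95,
    temperature=0.7,
    # Values above 1.0 discourage the repetition loops the
    # Limitations section warns about.
    repetition_penalty=1.2,
    pad_token_id=tokenizer.eos_token_id,
)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```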