---
license: bigscience-openrail-m
datasets:
- lvwerra/stack-exchange-paired
language:
- en
tags:
- trl
- transformers
- rlhf
---

# Stack-Llama-2

[DPO](https://github.com/eric-mitchell/direct-preference-optimization) fine-tuned [Llama-2 7B model](https://huggingface.co/meta-llama/Llama-2-7b). The model is designed to generate human-like responses to questions in Stack Exchange domains such as programming, mathematics, and physics. For more information, check out the [blog post](https://huggingface.co/blog/dpo-trl) and the GitHub [example](https://github.com/lvwerra/trl/tree/main/examples/research_projects/stack_llama_2/scripts).

## Uses

### Direct Use

- Long-form question answering on topics of programming, mathematics, and physics
- Demonstrating a large language model's ability to follow a target behavior: generating answers to a question that would be highly rated on [Stack Exchange](https://stackexchange.com)

### Out of Scope Use

- Replacing human expertise

## Bias, Risks, and Limitations

- Inherits the bias, risks, and limitations of the LLaMA model, as described in the [LLaMA Model Card Bias Evaluation](https://github.com/facebookresearch/llama/blob/main/MODEL_CARD.md#quantitative-analysis) and [Ethical Considerations](https://github.com/facebookresearch/llama/blob/main/MODEL_CARD.md#ethical-considerations).
- Retains the biases present in the Stack Exchange dataset. Per the [latest developer survey for Stack Overflow](https://survey.stackoverflow.co/2022/), which constitutes a significant part of the Stack Exchange data, most users who answered the survey identified themselves as [White or European, men, between 25 and 34 years old, and based in the US (with a significant share of respondents from India)](https://survey.stackoverflow.co/2022/#developer-profile-demographics).
- May generate answers that are incorrect or misleading.
- May copy answers from the training data verbatim.
- May generate language that is hateful or promotes discrimination ([example](https://huggingface.co/trl-lib/llama-7b-se-rl-peft/discussions/7#64376083369f6f907f5bfe4c)).
- May generate language that is offensive to direct or indirect users or to people or groups mentioned.

### Recommendations

- Answers should be validated against external sources.
- Disparities between the data contributors and the direct and indirect users of the technology should inform developers in assessing what constitutes an appropriate use case.
- Further research is needed to attribute model generations to sources in the training data, especially in cases where the model copies answers verbatim.

## Training Details

### Training Data

The original datasets are described in [the LLaMA Model Card](https://github.com/facebookresearch/llama/blob/main/MODEL_CARD.md#training-dataset). Fine-tuning datasets for this model are based on [Stack Exchange Paired](https://huggingface.co/datasets/lvwerra/stack-exchange-paired), which consists of questions and answers from various Stack Exchange domains such as programming, mathematics, and physics. Specifically:

**Traditional Fine-tuning:** [https://huggingface.co/datasets/lvwerra/stack-exchange-paired/tree/main/data/finetune](https://huggingface.co/datasets/lvwerra/stack-exchange-paired/tree/main/data/finetune)

**DPO Training:** [https://huggingface.co/datasets/lvwerra/stack-exchange-paired/tree/main/data/rl](https://huggingface.co/datasets/lvwerra/stack-exchange-paired/tree/main/data/rl)
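As a minimal sketch, these two subsets can be loaded with the `datasets` library by pointing `data_dir` at the subfolders linked above (the `split="train"` argument is an assumption about the dataset layout):

```python
from datasets import load_dataset

# Data for supervised fine-tuning (SFT)
sft_data = load_dataset(
    "lvwerra/stack-exchange-paired", data_dir="data/finetune", split="train"
)

# Preference data for DPO: each row pairs a question with a higher-rated
# answer ("response_j") and a lower-rated one ("response_k")
dpo_data = load_dataset(
    "lvwerra/stack-exchange-paired", data_dir="data/rl", split="train"
)

print(sft_data)
print(dpo_data.column_names)
```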
### Training Procedure

The model was first fine-tuned on the Stack Exchange question-and-answer pairs and then further fine-tuned via the DPO training procedure, using the SFT model as the reference model. It is trained to respond to prompts with the following prompt template:

```
Question:

Answer:
```
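As a rough illustration of the DPO stage (not the exact training script), the sketch below uses `trl`'s `DPOTrainer` with the SFT model as the reference policy. It assumes a hypothetical SFT checkpoint path and the older `DPOTrainer` API in use around the time of the linked example scripts (newer `trl` releases configure the trainer differently); see the linked scripts for the actual hyperparameters.

```python
# Hedged sketch of the DPO stage, assuming an older trl API:
# the policy and frozen reference model are passed positionally,
# together with a standard TrainingArguments object.
from datasets import load_dataset
from transformers import AutoModelForCausalLM, AutoTokenizer, TrainingArguments
from trl import DPOTrainer

sft_path = "path/to/llama-7b-se-sft"  # hypothetical SFT checkpoint

model = AutoModelForCausalLM.from_pretrained(sft_path)
model_ref = AutoModelForCausalLM.from_pretrained(sft_path)  # frozen reference policy
tokenizer = AutoTokenizer.from_pretrained(sft_path)
tokenizer.pad_token = tokenizer.eos_token

# Map the paired answers into the "prompt"/"chosen"/"rejected" columns
# expected by DPOTrainer, using the Question/Answer template above.
raw = load_dataset("lvwerra/stack-exchange-paired", data_dir="data/rl", split="train")
train_dataset = raw.map(
    lambda row: {
        "prompt": "Question: " + row["question"] + "\n\nAnswer: ",
        "chosen": row["response_j"],    # higher-rated answer
        "rejected": row["response_k"],  # lower-rated answer
    },
    remove_columns=raw.column_names,
)

training_args = TrainingArguments(
    output_dir="stack-llama-2-dpo",
    per_device_train_batch_size=4,
    learning_rate=5e-4,
    max_steps=1000,
)

dpo_trainer = DPOTrainer(
    model,
    model_ref,
    args=training_args,
    beta=0.1,  # weight of the implicit KL penalty toward the reference model
    train_dataset=train_dataset,
    tokenizer=tokenizer,
    max_length=512,
    max_prompt_length=256,
)
dpo_trainer.train()
```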
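A minimal inference sketch following this template is shown below. The repository id is illustrative, not verified; if the published weights are a PEFT adapter rather than a full model, load them with `peft` on top of the base model instead.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "trl-lib/stack-llama-2"  # illustrative repo id; substitute the actual one
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.float16, device_map="auto"
)

# Build the prompt with the Question/Answer template used during training
prompt = "Question: How do I check if a Python dictionary contains a given key?\n\nAnswer: "
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output = model.generate(
    **inputs, max_new_tokens=256, do_sample=True, top_p=0.9, temperature=0.7
)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```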