Ablustrund committed
Commit 3e3e15a · 1 Parent(s): a325b3f

Update README.md

Files changed (1):
  1. README.md (+78 -20)
README.md CHANGED
@@ -2,13 +2,40 @@
- ### *MOSS-RLHF & "Secrets of RLHF in Large Language Models Part I: PPO" <br>👉 <a href="https://arxiv.org/abs/2307.04964" target="_blank">[Technical report]</a> <a href="https://github.com/OpenLMLab/MOSS-RLHF" target="_blank">[Open-source code]*
@@ -20,34 +47,24 @@ Contributions are summarized as follows:
- ## 🧾 Open-source List
- - A 7B Chinese reward model based on openChineseLlama.
- - A 7B English reward model based on Llama-7B.
- - Open source code for RL training in large language models.
- - ...
-
- ## ✨ Start training your own model!
-
- Run code in a few steps.
-
- ### 🔩 Requirements & Setup
- #### Step 1: create a new Python virtual environment
- #### Step 2: install PyTorch and TensorBoard
- #### Step 3: install the remaining dependencies
@@ -56,16 +73,57 @@ apt install libaio-dev
- ### 👉 Start Training
- TODO, To be finalised before 15. July 2023
- title={Secrets of RLHF in Large Language Models Part I: PPO},
- author={Rui Zheng and Shihan Dou and Songyang Gao and Yuan Hua and Wei Shen and Binghai Wang and Yan Liu and Senjie Jin and Qin Liu and Yuhao Zhou and Limao Xiong and Lu Chen and Zhiheng Xi and Nuo Xu and Wenbin Lai and Minghao Zhu and Cheng Chang and Zhangyue Yin and Rongxiang Weng and Wensen Cheng and Haoran Huang and Tianxiang Sun and Hang Yan and Tao Gui and Qi Zhang and Xipeng Qiu and Xuanjing Huang},
- year={2023}
 
README.md after this commit (added lines marked with +):

@@ -2,13 +2,40 @@
 license: agpl-3.0
 language:
 - zh
+ tags:
+ - llm
+ - reward model
+ - moss
+ - rlhf
 ---

 # MOSS-RLHF

+ ### *MOSS-RLHF & "Secrets of RLHF in Large Language Models Part I: PPO" <br>👉 <a href="https://arxiv.org/abs/2307.04964" target="_blank">[Technical report]</a> <a href="https://openlmlab.github.io/MOSS-RLHF/" target="_blank">[Home page]*


+ <p align="center" width="100%">
+ <a href="https://arxiv.org/abs/2307.04964" target="_blank"><img src="./assets/img/moss.png" alt="MOSS" style="width: 50%; min-width: 300px; display: block; margin: auto;"></a>
+
+ ## 🌟 News
+ ### 👉 Wed, 12 July 2023: We have released a Chinese reward model based on OpenChineseLlama-7B!
+ [moss-rlhf-reward-model-7B-zh](https://huggingface.co/Ablustrund/moss-rlhf-reward-model-7B-zh/tree/main)
+ <br>
+
+ ### 👉 Thu, 13 July 2023: We have released an English reward model and an SFT model based on Llama-7B!
+ [moss-rlhf-reward-model-7B-en](https://huggingface.co/fnlp/moss-rlhf-reward-model-7B-en)
+
+ [moss-rlhf-sft-model-7B-en](https://huggingface.co/fnlp/moss-rlhf-sft-model-7B-en)
+ <br>
+
+ ## 🧾 Open-source List
+ - [x] Open-source code for RL training in large language models.
+ - [x] A 7B Chinese reward model based on openChineseLlama.
+ - [x] A 7B English reward model based on Llama-7B.
+ - [x] SFT model for English.
+ - [ ] Policy model for English after RLHF.
+ - ...
+
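To fetch the released checkpoints programmatically rather than through the web UI, `huggingface_hub` (installed as a dependency of `transformers`) can mirror a repository locally. A short sketch using the repository IDs listed above; the target directories are arbitrary choices, and a reasonably recent `huggingface_hub` is assumed:

```python
# download_checkpoints.py -- illustrative sketch, not part of this repository
from huggingface_hub import snapshot_download

# Repository IDs from the News / Open-source List above; local_dir values are arbitrary.
snapshot_download(repo_id="Ablustrund/moss-rlhf-reward-model-7B-zh",
                  local_dir="./models/moss-rlhf-reward-model-7B-zh")
snapshot_download(repo_id="fnlp/moss-rlhf-reward-model-7B-en",
                  local_dir="./models/moss-rlhf-reward-model-7B-en")
snapshot_download(repo_id="fnlp/moss-rlhf-sft-model-7B-en",
                  local_dir="./models/moss-rlhf-sft-model-7B-en")
```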
 ## 🌠 Introduction

 Due to the challenges of reward design, environment interaction, and agent training, coupled with the huge trial-and-error cost of large language models, there is a significant barrier for AI researchers to pursue the technical alignment and safe deployment of LLMs. The stable training of RLHF remains a puzzle.
 
@@ -20,34 +47,24 @@ Contributions are summarized as follows:
 3) We release the complete PPO-max codes to ensure that the LLMs in the current SFT stage can be better aligned with humans.


+ ## 🔩 Requirements & Setup

 This repository works on Python 3.8 and PyTorch 1.13.1.

 We recommend using the **conda** virtual environment to run the code.

+ #### Step 1: Create a new Python virtual environment
 ```bash
 conda update conda -n base -c defaults
 conda create -n rlhf python=3.8
 conda activate rlhf
 ```
+ #### Step 2: Install PyTorch and TensorBoard
 ```bash
 conda install pytorch==1.13.1 pytorch-cuda=11.7 tensorboard -c pytorch -c nvidia
 ```

+ #### Step 3: Install the remaining dependencies
 ```bash
 conda install datasets accelerate safetensors chardet cchardet -c huggingface -c conda-forge
 pip3 install transformers sentencepiece einops triton==1.0.0 rouge jionlp==1.4.14 nltk sacrebleu cpm_kernels

@@ -56,16 +73,57 @@ apt install libaio-dev
 DS_BUILD_OPS=1 pip install deepspeed
 ```
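Before moving on, it can help to confirm that the pinned packages resolved and that a GPU is visible to PyTorch. A minimal sanity-check sketch; the file name and the printed expectations are illustrative, not part of this repository:

```python
# check_env.py -- illustrative environment check
import torch
import transformers
import deepspeed

# The setup above pins PyTorch 1.13.1 built against CUDA 11.7.
print("torch:", torch.__version__)
print("transformers:", transformers.__version__)
print("deepspeed:", deepspeed.__version__)

# RLHF training on 7B models needs CUDA; fail early if PyTorch cannot see a GPU.
assert torch.cuda.is_available(), "CUDA is not visible to PyTorch"
print("visible GPUs:", torch.cuda.device_count())
```

DeepSpeed's `ds_report` command additionally summarizes which fused ops were built when `DS_BUILD_OPS=1` was set.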
 
+ ## ✨ Start training your own model!
+ Run the code in a few steps.
+
+ ### Step 1: Recover the reward model weights
+ We cannot directly release the full weights of the reward model because of licensing restrictions.
+ You can merge the diff weights with the original Llama-7B to recover the reward model we used.

+ We upload the weight diffs; thanks to tatsu-lab's weight-diff approach, you can recover the reward model by following these steps:
+ ```bash
+ # 1) Download the weight diff to your local machine. The weight diff is located at:
+ #    For English: TODO
+ #    For Chinese: https://huggingface.co/Ablustrund/moss-rlhf-reward-model-7B-zh/tree/main
+
+ # 2) Merge the weight diff with the original Llama-7B:
+ # For English:
+ # Reward model
+ python merge_weight_en.py recover --path_raw decapoda-research/llama-7b-hf --path_diff ./models/moss-rlhf-reward-model-7B-en/diff --path_tuned ./models/moss-rlhf-reward-model-7B-en/recover --model_type reward
+ # SFT model
+ python merge_weight_en.py recover --path_raw decapoda-research/llama-7b-hf --path_diff ./models/moss-rlhf-sft-model-7B-en/diff --path_tuned ./models/moss-rlhf-sft-model-7B-en/recover --model_type sft
+ # Policy model
+ # TODO
+ # For Chinese:
+ python merge_weight_zh.py recover --path_raw decapoda-research/llama-7b-hf --path_diff ./models/moss-rlhf-reward-model-7B-zh/diff --path_tuned ./models/moss-rlhf-reward-model-7B-zh/recover
+ ```
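For intuition about what the merge scripts do: in a tatsu-lab-style release the published checkpoint stores weight diffs, and recovery is an elementwise addition of those diffs onto the original Llama-7B parameters. The sketch below illustrates that idea for the SFT checkpoint only, assuming the diff loads as an ordinary causal LM whose parameter names match the base model; the reward model needs its own handling (`--model_type reward`), so use the provided `merge_weight_en.py` / `merge_weight_zh.py` scripts for actual recovery:

```python
# merge_sketch.py -- illustration of weight-diff recovery, not the repository's script
import torch
from transformers import AutoModelForCausalLM

# Loading two 7B models in fp32 on CPU needs roughly 2 x 28 GB of RAM.
base = AutoModelForCausalLM.from_pretrained("decapoda-research/llama-7b-hf", torch_dtype=torch.float32)
diff = AutoModelForCausalLM.from_pretrained("./models/moss-rlhf-sft-model-7B-en/diff", torch_dtype=torch.float32)

base_state = base.state_dict()
with torch.no_grad():
    for name, param in diff.named_parameters():
        # recovered weight = original Llama-7B weight + published diff
        param.add_(base_state[name])

diff.save_pretrained("./models/moss-rlhf-sft-model-7B-en/recover")
```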
+ ### Step 2: Select your own SFT model
+ Because of some limitations, we cannot release the **Chinese** SFT model at the moment.
+ You can use your own SFT model, or a strong base model, in its place.
+
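Whichever SFT or base model you substitute, a quick load-and-generate check with the transformers version installed above can catch tokenizer or checkpoint mismatches before a long RL run. The local path and prompt format below are placeholders:

```python
# check_sft_model.py -- illustrative pre-flight check for your own SFT model
from transformers import AutoModelForCausalLM, AutoTokenizer

model_path = "./models/my-sft-model"  # placeholder: point at your own checkpoint

tokenizer = AutoTokenizer.from_pretrained(model_path)
model = AutoModelForCausalLM.from_pretrained(model_path)

# One short generation to confirm the tokenizer and weights are consistent.
inputs = tokenizer("Human: Hello, who are you?\nAssistant:", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=32)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```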
+ ### Step 3: Start training
+ Run the command below.
+ ```bash
+ # For Chinese:
+ # You currently need to use your own SFT model.
+ bash run_zh.sh
+
+ # For English:
+ # We have uploaded the SFT model and reward model to Hugging Face.
+ bash run_en.sh
+ ```

 ## Citation

 ```bibtex
 @article{zheng2023secrets,
+ title={Secrets of RLHF in Large Language Models Part I: PPO},
+ author={Rui Zheng and Shihan Dou and Songyang Gao and Wei Shen and Binghai Wang and Yan Liu and Senjie Jin and Qin Liu and Limao Xiong and Lu Chen and Zhiheng Xi and Yuhao Zhou and Nuo Xu and Wenbin Lai and Minghao Zhu and Rongxiang Weng and Wensen Cheng and Cheng Chang and Zhangyue Yin and Yuan Hua and Haoran Huang and Tianxiang Sun and Hang Yan and Tao Gui and Qi Zhang and Xipeng Qiu and Xuanjing Huang},
+ year={2023},
+ eprint={2307.04964},
+ archivePrefix={arXiv},
+ primaryClass={cs.CL}
 }
 ```