Update README.md

---
library_name: transformers
model_name: Llama-3.1-Argunaut-1-8B-SPIN
pipeline_tag: text-generation
base_model: DebateLabKIT/Llama-3.1-Argunaut-1-8B-SFT
datasets:
- DebateLabKIT/argdown_line-by-line
- DebateLabKIT/argument_mapping_dpo_pairs
- allenai/llama-3.1-tulu-3-70b-preference-mixture
tags:
- logic
- argumentation
- critical-thinking
- argument-mapping
- generated_from_trainer
- trl
- dpo
- spin
license: llama3.1
---

# Model Card for Llama-3.1-Argunaut-1-8B-SPIN

This model is a fine-tuned version of [DebateLabKIT/Llama-3.1-Argunaut-1-8B-SFT](https://huggingface.co/DebateLabKIT/Llama-3.1-Argunaut-1-8B-SFT).
It has been trained using [TRL](https://github.com/huggingface/trl) and [vLLM](https://docs.vllm.ai/). Checkpoints are available as tagged revisions of this repository.
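
A specific checkpoint can be pinned via the `revision` argument; a minimal sketch (the tag name `iter-1` is a hypothetical placeholder, check the repository's tag list on the Hub for the actual names):

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "DebateLabKIT/Llama-3.1-Argunaut-1-8B-SPIN"
# "iter-1" is a placeholder tag; substitute one of the repo's actual tags
model = AutoModelForCausalLM.from_pretrained(model_id, revision="iter-1")
tokenizer = AutoTokenizer.from_pretrained(model_id, revision="iter-1")
```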

## Quick start

```python
from transformers import pipeline

question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="DebateLabKIT/Llama-3.1-Argunaut-1-8B-SPIN", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
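
For higher-throughput inference, the model can also be run with vLLM, which this card mentions alongside TRL; a minimal offline-inference sketch (assumes a recent vLLM release that provides the `LLM.chat` API):

```python
from vllm import LLM, SamplingParams

# Offline chat-style generation with vLLM; prompt reused from the quick-start example
llm = LLM(model="DebateLabKIT/Llama-3.1-Argunaut-1-8B-SPIN")
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
outputs = llm.chat([{"role": "user", "content": question}], SamplingParams(max_tokens=128))
print(outputs[0].outputs[0].text)
```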

## Training procedure

<!--[<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="150" height="24"/>](https://wandb.ai/ggbetz/argunauts-training/runs/s89n820x)-->

This model was trained with Self-Play Fine-Tuning (SPIN), a method introduced in [Self-Play Fine-Tuning Converts Weak Language Models to Strong Language Models](https://huggingface.co/papers/2401.01335).
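
In each SPIN iteration, the ground-truth targets act as "chosen" responses and the current model's own completions for the same prompts act as "rejected" ones, so the update reduces to a DPO-style preference loss. A minimal sketch of one such iteration using TRL's `DPOTrainer` (the dataset contents and hyperparameters below are illustrative, not the actual training configuration of this model):

```python
# Illustrative SPIN iteration via TRL's DPO loss; not the actual training script.
from datasets import Dataset
from transformers import AutoModelForCausalLM, AutoTokenizer
from trl import DPOConfig, DPOTrainer

model_id = "DebateLabKIT/Llama-3.1-Argunaut-1-8B-SFT"
model = AutoModelForCausalLM.from_pretrained(model_id)
tokenizer = AutoTokenizer.from_pretrained(model_id)

# SPIN pairs: "chosen" = ground-truth completion, "rejected" = a completion
# sampled from the current model for the same prompt.
pairs = Dataset.from_dict({
    "prompt": ["Reconstruct the following argument as an Argdown map: ..."],
    "chosen": ["<ground-truth completion>"],
    "rejected": ["<completion sampled from the current model>"],
})

trainer = DPOTrainer(
    model=model,
    args=DPOConfig(output_dir="spin-iter-1", beta=0.1),
    train_dataset=pairs,
    processing_class=tokenizer,
)
trainer.train()
```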

More details about the training procedure will be released in a blog post!

### Framework versions

## Citations

Cite SPIN as:

```bibtex
@misc{chen2024selfplayfinetuningconvertsweak,
      title={Self-Play Fine-Tuning Converts Weak Language Models to Strong Language Models},
      author={Zixiang Chen and Yihe Deng and Huizhuo Yuan and Kaixuan Ji and Quanquan Gu},
      year={2024},
      eprint={2401.01335},
      archivePrefix={arXiv},
      primaryClass={cs.LG},
      url={https://arxiv.org/abs/2401.01335},
}
```