panyupj committed
Commit 1d8b36a · verified · 1 Parent(s): 1c67c74

Model save

README.md ADDED
@@ -0,0 +1,57 @@
+ ---
+ base_model: mistralai/Mistral-7B-v0.1
+ library_name: transformers
+ model_name: zephyr-7b-sft-qlora
+ tags:
+ - generated_from_trainer
+ - trl
+ - sft
+ licence: license
+ ---
+
+ # Model Card for zephyr-7b-sft-qlora
+
+ This model is a fine-tuned version of [mistralai/Mistral-7B-v0.1](https://huggingface.co/mistralai/Mistral-7B-v0.1).
+ It has been trained using [TRL](https://github.com/huggingface/trl).
+
+ ## Quick start
+
+ ```python
+ from transformers import pipeline
+
+ question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
+ generator = pipeline("text-generation", model="panyupj/zephyr-7b-sft-qlora", device="cuda")
+ output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
+ print(output["generated_text"])
+ ```
+
+ ## Training procedure
+
+
+
+ This model was trained with SFT.
+
+ ### Framework versions
+
+ - TRL: 0.12.2
+ - Transformers: 4.46.3
+ - Pytorch: 2.1.2
+ - Datasets: 3.2.0
+ - Tokenizers: 0.20.3
+
+ ## Citations
+
+
+
+ Cite TRL as:
+
+ ```bibtex
+ @misc{vonwerra2022trl,
+ title = {{TRL: Transformer Reinforcement Learning}},
+ author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec},
+ year = 2020,
+ journal = {GitHub repository},
+ publisher = {GitHub},
+ howpublished = {\url{https://github.com/huggingface/trl}}
+ }
+ ```
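The card above only states that the model was trained with SFT. Based on the repo name (zephyr-7b-sft-qlora), the base model, and the framework versions listed, a minimal sketch of such a QLoRA SFT run with TRL's `SFTTrainer` might look like the following. The dataset (`HuggingFaceH4/ultrachat_200k`), chat formatting, LoRA settings, and hyperparameters are illustrative assumptions; none of them are recorded in this commit.

```python
# Hypothetical QLoRA SFT sketch for zephyr-7b-sft-qlora.
# Dataset, template, and hyperparameters below are assumptions, not values from this commit.
import torch
from datasets import load_dataset
from peft import LoraConfig
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig
from trl import SFTConfig, SFTTrainer

base = "mistralai/Mistral-7B-v0.1"

# 4-bit quantization: the "Q" in QLoRA
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.bfloat16,
)
model = AutoModelForCausalLM.from_pretrained(base, quantization_config=bnb_config, device_map="auto")
tokenizer = AutoTokenizer.from_pretrained(base)
tokenizer.pad_token = tokenizer.eos_token

# ultrachat_200k is an assumption based on the zephyr-sft naming convention
dataset = load_dataset("HuggingFaceH4/ultrachat_200k", split="train_sft")

def to_text(example):
    # Flatten each chat into a single training string; the exact template used is not recorded
    turns = [f"<|{m['role']}|>\n{m['content']}{tokenizer.eos_token}" for m in example["messages"]]
    return {"text": "\n".join(turns)}

dataset = dataset.map(to_text, remove_columns=dataset.column_names)

# LoRA adapter config -- rank and target modules are illustrative
peft_config = LoraConfig(
    r=16,
    lora_alpha=32,
    lora_dropout=0.05,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],
    task_type="CAUSAL_LM",
)

args = SFTConfig(
    output_dir="zephyr-7b-sft-qlora",
    dataset_text_field="text",
    max_seq_length=2048,              # assumed
    packing=True,                     # assumed
    num_train_epochs=1,               # all_results.json reports epoch 1.0
    per_device_train_batch_size=4,    # assumed
    gradient_accumulation_steps=2,    # assumed
    learning_rate=2e-4,               # assumed
    bf16=True,
)

trainer = SFTTrainer(
    model=model,
    args=args,
    train_dataset=dataset,
    processing_class=tokenizer,       # `tokenizer=` on TRL releases before 0.12
    peft_config=peft_config,
)
trainer.train()                       # saves only the LoRA adapter weights to output_dir
```

A run like this saves only the LoRA weights, which is consistent with the ~168 MB `adapter_model.safetensors` updated in the diff below rather than full 7B model shards.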
adapter_model.safetensors CHANGED
@@ -1,3 +1,3 @@
  version https://git-lfs.github.com/spec/v1
- oid sha256:5b0625fddba7d15542f18912303ed6c352d96904ff194faa53b9088e206373e0
+ oid sha256:8296bb6e19855ce13d4c7d23059fea02d67d8aad62f1c7153c2ffd7aabcf2a67
  size 167832240
all_results.json ADDED
@@ -0,0 +1,9 @@
+ {
+ "epoch": 1.0,
+ "total_flos": 1.2189453453533118e+19,
+ "train_loss": 0.9622706794551732,
+ "train_runtime": 38294.8783,
+ "train_samples": 207864,
+ "train_samples_per_second": 3.622,
+ "train_steps_per_second": 0.113
+ }
runs/Jun15_22-49-42_c263-gz3-server-iv-002/events.out.tfevents.1749999021.c263-gz3-server-iv-002.2766478.0 CHANGED
@@ -1,3 +1,3 @@
  version https://git-lfs.github.com/spec/v1
- oid sha256:0a8dc161802634b3b299a945ed67fdeeb446cb485f3869064076cd3830d7fb8a
- size 188221
+ oid sha256:f648f458917a12d76b2d1a479304b05b64283ae255558158f8fc074c854e4952
+ size 190112
train_results.json ADDED
@@ -0,0 +1,9 @@
+ {
+ "epoch": 1.0,
+ "total_flos": 1.2189453453533118e+19,
+ "train_loss": 0.9622706794551732,
+ "train_runtime": 38294.8783,
+ "train_samples": 207864,
+ "train_samples_per_second": 3.622,
+ "train_steps_per_second": 0.113
+ }
trainer_state.json ADDED
The diff for this file is too large to render. See raw diff