bjlawson committed
Commit 75a9a12 · verified · 1 parent: d75eafa

End of training
README.md ADDED
@@ -0,0 +1,58 @@
+ ---
+ base_model: google/gemma-3-4b-pt
+ library_name: transformers
+ model_name: gemma-receipt-extractor
+ tags:
+ - generated_from_trainer
+ - trl
+ - sft
+ licence: license
+ ---
+
+ # Model Card for gemma-receipt-extractor
+
+ This model is a fine-tuned version of [google/gemma-3-4b-pt](https://huggingface.co/google/gemma-3-4b-pt).
+ It has been trained using [TRL](https://github.com/huggingface/trl).
+
+ ## Quick start
+
+ ```python
+ from transformers import pipeline
+
+ question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
+ generator = pipeline("text-generation", model="bjlawson/gemma-receipt-extractor", device="cuda")
+ output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
+ print(output["generated_text"])
+ ```
+
+ ## Training procedure
+
+ This model was trained with SFT.
+
+ ### Framework versions
+
+ - TRL: 0.15.2
+ - Transformers: 4.50.0
+ - Pytorch: 2.6.0
+ - Datasets: 3.3.2
+ - Tokenizers: 0.21.1
+
+ ## Citations
+
+ Cite TRL as:
+
+ ```bibtex
+ @misc{vonwerra2022trl,
+     title = {{TRL: Transformer Reinforcement Learning}},
+     author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec},
+     year = 2020,
+     journal = {GitHub repository},
+     publisher = {GitHub},
+     howpublished = {\url{https://github.com/huggingface/trl}}
+ }
+ ```
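
The README records only that the model was trained with SFT via TRL; the dataset, prompt format, and hyperparameters are not part of this commit. For orientation, here is a minimal sketch of what such a run could look like with TRL's `SFTTrainer`. The dataset file `receipts.jsonl` and every setting left at its default are placeholder assumptions; only the base model id and the `target_modules` list (taken from the `adapter_config.json` diff below) come from the repo itself.

```python
# Hypothetical reconstruction of the SFT run described in the README.
# receipts.jsonl is a placeholder; only the base model id and the
# target_modules list are taken from files in this commit.
from datasets import load_dataset
from peft import LoraConfig
from trl import SFTConfig, SFTTrainer

# Placeholder dataset of chat-formatted receipt-extraction examples.
dataset = load_dataset("json", data_files="receipts.jsonl", split="train")

peft_config = LoraConfig(
    task_type="CAUSAL_LM",
    target_modules=["gate_proj", "up_proj", "v_proj", "k_proj",
                    "down_proj", "q_proj", "o_proj"],
)

trainer = SFTTrainer(
    model="google/gemma-3-4b-pt",  # base model named in the README
    train_dataset=dataset,
    peft_config=peft_config,
    args=SFTConfig(output_dir="gemma-receipt-extractor"),
)
trainer.train()
```

Training with a `peft_config` makes TRL save a LoRA adapter rather than full merged weights, which is consistent with the `adapter_config.json` and `adapter_model.safetensors` files changed below.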
adapter_config.json CHANGED
@@ -27,13 +27,13 @@
  "rank_pattern": {},
  "revision": null,
  "target_modules": [
- "v_proj",
- "up_proj",
  "gate_proj",
+ "up_proj",
+ "v_proj",
  "k_proj",
- "o_proj",
  "down_proj",
- "q_proj"
+ "q_proj",
+ "o_proj"
  ],
  "task_type": "CAUSAL_LM",
  "use_dora": false,
adapter_model.safetensors CHANGED
@@ -1,3 +1,3 @@
  version https://git-lfs.github.com/spec/v1
- oid sha256:964b1838563cd6f5fcffcfe90858fb275e11796a60cc1040976d59d712e8cf79
+ oid sha256:839d8837bab840643a7af03b60a8e70891487de08704fbab69647332c8bf5231
  size 3159148567
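
The ~3.2 GB `adapter_model.safetensors` holds the LoRA adapter weights; the diff shows only its Git LFS pointer (new `oid`, unchanged size). A minimal loading sketch with PEFT, assuming the repo hosts an unmerged adapter (which the presence of `adapter_config.json` suggests); the dtype and device settings are illustrative:

```python
# Attach the LoRA adapter from this repo to its Gemma base model.
import torch
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

base = AutoModelForCausalLM.from_pretrained(
    "google/gemma-3-4b-pt",      # base model from the README
    torch_dtype=torch.bfloat16,  # illustrative; pick what fits your hardware
    device_map="auto",
)
model = PeftModel.from_pretrained(base, "bjlawson/gemma-receipt-extractor")
tokenizer = AutoTokenizer.from_pretrained("bjlawson/gemma-receipt-extractor")
```

Calling `model.merge_and_unload()` would fold the adapter into the base weights for standalone use, similar to what the README's `pipeline(...)` quick-start loads.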
runs/Mar22_22-23-01_cloud1/events.out.tfevents.1742682186.cloud1.9957.0 ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:d24a7efaf44a7179bb78dbc03df1b1b37ab46a3413c1fdf8246756134cac45c9
+ size 22274
training_args.bin CHANGED
@@ -1,3 +1,3 @@
  version https://git-lfs.github.com/spec/v1
- oid sha256:168f13ba95528387807c06847dfff1c266a926360d65e10117f0d5426aebc0a6
+ oid sha256:f5a52af1b0543feda3bf8f7c8600f09883a068afd19d44181d11619e418ed1b3
  size 5624