response = generate(model, tokenizer, prompt=prompt, verbose=True)
```

## Fine-tuning

This model has been fine-tuned with [BlossomTuneLLM-MLX](https://github.com/ethicalabs-ai/BlossomTuneLLM-MLX). The server log from the final round of the federated fine-tuning run is shown below:

```
INFO :      aggregate_fit: received 10 results and 0 failures
INFO :      Communication cost: 6.23 MB this round / 124.51 MB total
Server: Saving global adapter for round 10...
Fetching 8 files: 100%|████████████████████████████████| 8/8 [00:00<00:00, 8367.69it/s]
Global adapter and config saved to results/huggingfacetb-smollm2-135m-instruct-q8-mlx/server/2025-09-02_23-16-23/adapter_10
INFO :      fit progress: (10, 0.0, {}, 239.02917212501052)
INFO :      configure_evaluate: no clients selected, skipping evaluation
INFO :
INFO :      [SUMMARY]
INFO :      Run finished 10 round(s) in 239.03s
INFO :      History (loss, centralized):
INFO :          round 0: 0.0
INFO :          round 1: 0.0
INFO :          round 2: 0.0
INFO :          round 3: 0.0
INFO :          round 4: 0.0
INFO :          round 5: 0.0
INFO :          round 6: 0.0
INFO :          round 7: 0.0
INFO :          round 8: 0.0
INFO :          round 9: 0.0
INFO :          round 10: 0.0
INFO :      History (metrics, distributed, fit):
INFO :      {'train_loss': [(1, 2.2529776644706727),
INFO :                      (2, 1.6681898140907288),
INFO :                      (3, 1.5494979882240296),
INFO :                      (4, 1.4766268157958984),
INFO :                      (5, 1.4757164913415908),
INFO :                      (6, 1.387213920354843),
INFO :                      (7, 1.4945470476150513),
INFO :                      (8, 1.464623532295227),
INFO :                      (9, 1.4590632796287537),
INFO :                      (10, 1.4046799695491792)],
INFO :       'val_loss': [(1, 2.0296000242233276),
INFO :                    (2, 1.6557256400585174),
INFO :                    (3, 1.5062924563884734),
INFO :                    (4, 1.4948512375354768),
INFO :                    (5, 1.4645283639431),
INFO :                    (6, 1.4505432009696961),
INFO :                    (7, 1.4502118945121765),
INFO :                    (8, 1.4655221998691559),
INFO :                    (9, 1.4796700835227967),
INFO :                    (10, 1.429529356956482)]}
```
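
For quick reference, the per-round losses reported in the log above can be summarized with a short snippet. This is an illustrative sketch (not part of BlossomTuneLLM-MLX); the loss values are copied verbatim from the run history:

```python
# Per-round (round, loss) pairs copied from the fine-tuning log above.
train_loss = [(1, 2.2529776644706727), (2, 1.6681898140907288),
              (3, 1.5494979882240296), (4, 1.4766268157958984),
              (5, 1.4757164913415908), (6, 1.387213920354843),
              (7, 1.4945470476150513), (8, 1.464623532295227),
              (9, 1.4590632796287537), (10, 1.4046799695491792)]
val_loss = [(1, 2.0296000242233276), (2, 1.6557256400585174),
            (3, 1.5062924563884734), (4, 1.4948512375354768),
            (5, 1.4645283639431), (6, 1.4505432009696961),
            (7, 1.4502118945121765), (8, 1.4655221998691559),
            (9, 1.4796700835227967), (10, 1.429529356956482)]

def relative_improvement(history):
    """Fractional drop in loss from the first round to the last."""
    first, last = history[0][1], history[-1][1]
    return (first - last) / first

print(f"train loss improvement: {relative_improvement(train_loss):.1%}")  # 37.7%
print(f"val loss improvement:   {relative_improvement(val_loss):.1%}")    # 29.6%
```

Over the 10 federated rounds, the distributed training loss drops by roughly 38% and the validation loss by roughly 30%.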