# ethicalabs/BlossomTuneMLX-SmolLM2-135M-Instruct-Q8-NLP
This model, `ethicalabs/BlossomTuneMLX-SmolLM2-135M-Instruct-Q8-NLP`, was converted to MLX format from `HuggingFaceTB/SmolLM2-135M-Instruct-Q8-mlx` using mlx-lm version 0.26.3.
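The card does not record the exact conversion command, so the invocation below is an assumption: it sketches how such a checkpoint is typically produced with the `mlx_lm.convert` CLI, with 8-bit quantization chosen to match the `Q8` suffix and an illustrative output path.

```shell
pip install mlx-lm

# Assumed invocation, not the exact command used for this model:
# quantize the instruct checkpoint to 8-bit MLX format.
mlx_lm.convert \
    --hf-path HuggingFaceTB/SmolLM2-135M-Instruct \
    -q --q-bits 8 \
    --mlx-path SmolLM2-135M-Instruct-Q8-mlx
```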
## Use with mlx
```bash
pip install mlx-lm
```
```python
from mlx_lm import load, generate

model, tokenizer = load("ethicalabs/BlossomTuneMLX-SmolLM2-135M-Instruct-Q8-NLP")

prompt = "hello"

# Wrap the prompt in the model's chat template when one is available.
if tokenizer.chat_template is not None:
    messages = [{"role": "user", "content": prompt}]
    prompt = tokenizer.apply_chat_template(
        messages, add_generation_prompt=True
    )

response = generate(model, tokenizer, prompt=prompt, verbose=True)
```
## Fine-tuning

This model has been fine-tuned with BlossomTuneLLM-MLX. The server-side log from the final federated round is reproduced below.
```text
INFO :      aggregate_fit: received 10 results and 0 failures
INFO :      Communication cost: 6.23 MB this round / 124.51 MB total
Server: Saving global adapter for round 10...
Fetching 8 files: 100%|████████████████████| 8/8 [00:00<00:00, 8367.69it/s]
Global adapter and config saved to results/huggingfacetb-smollm2-135m-instruct-q8-mlx/server/2025-09-02_23-16-23/adapter_10
INFO :      fit progress: (10, 0.0, {}, 239.02917212501052)
INFO :      configure_evaluate: no clients selected, skipping evaluation
INFO :
INFO :      [SUMMARY]
INFO :      Run finished 10 round(s) in 239.03s
INFO :      History (loss, centralized):
INFO :          round 0: 0.0
INFO :          round 1: 0.0
INFO :          round 2: 0.0
INFO :          round 3: 0.0
INFO :          round 4: 0.0
INFO :          round 5: 0.0
INFO :          round 6: 0.0
INFO :          round 7: 0.0
INFO :          round 8: 0.0
INFO :          round 9: 0.0
INFO :          round 10: 0.0
INFO :      History (metrics, distributed, fit):
INFO :      {'train_loss': [(1, 2.2529776644706727),
INFO :                      (2, 1.6681898140907288),
INFO :                      (3, 1.5494979882240296),
INFO :                      (4, 1.4766268157958984),
INFO :                      (5, 1.4757164913415908),
INFO :                      (6, 1.387213920354843),
INFO :                      (7, 1.4945470476150513),
INFO :                      (8, 1.464623532295227),
INFO :                      (9, 1.4590632796287537),
INFO :                      (10, 1.4046799695491792)],
INFO :       'val_loss': [(1, 2.0296000242233276),
INFO :                    (2, 1.6557256400585174),
INFO :                    (3, 1.5062924563884734),
INFO :                    (4, 1.4948512375354768),
INFO :                    (5, 1.4645283639431),
INFO :                    (6, 1.4505432009696961),
INFO :                    (7, 1.4502118945121765),
INFO :                    (8, 1.4655221998691559),
INFO :                    (9, 1.4796700835227967),
INFO :                    (10, 1.429529356956482)]}
```
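As a quick sanity check on the run summary, the snippet below (with the validation losses copied verbatim from the log above) extracts the best round and the relative improvement over round 1:

```python
# Per-round validation losses, copied verbatim from the Flower run summary.
val_loss = [
    (1, 2.0296000242233276), (2, 1.6557256400585174),
    (3, 1.5062924563884734), (4, 1.4948512375354768),
    (5, 1.4645283639431),    (6, 1.4505432009696961),
    (7, 1.4502118945121765), (8, 1.4655221998691559),
    (9, 1.4796700835227967), (10, 1.429529356956482),
]

# Best (lowest) validation loss and the round it occurred in.
best_round, best_loss = min(val_loss, key=lambda rl: rl[1])

# Relative improvement of the best round over round 1.
first_loss = val_loss[0][1]
improvement = 1 - best_loss / first_loss

print(f"best round: {best_round}, val loss {best_loss:.4f}")
print(f"relative improvement over round 1: {improvement:.1%}")
```

The final round (10) is also the best one here, with validation loss down roughly 30% from round 1.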
## Model tree for ethicalabs/BlossomTuneMLX-SmolLM2-135M-Instruct-Q8-NLP

- Base model: `HuggingFaceTB/SmolLM2-135M`
- Quantized: `HuggingFaceTB/SmolLM2-135M-Instruct`