---
license: cc-by-nc-4.0
language:
- en
datasets:
- Gryphe/Opus-WritingPrompts
- Sao10K/Claude-3-Opus-Instruct-15K
- Sao10K/Short-Storygen-v2
- Sao10K/c2-Logs-Filtered
tags:
- mlx
base_model: Sao10K/L3-8B-Stheno-v3.2
---

# YorkieOH10/L3-8B-Stheno-v3.2-Q8-mlx

The model [YorkieOH10/L3-8B-Stheno-v3.2-Q8-mlx](https://huggingface.co/YorkieOH10/L3-8B-Stheno-v3.2-Q8-mlx) was converted to MLX format from [Sao10K/L3-8B-Stheno-v3.2](https://huggingface.co/Sao10K/L3-8B-Stheno-v3.2) using mlx-lm version **0.19.2**.
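
If you want to reproduce the conversion (or quantize another base model yourself), mlx-lm ships a conversion CLI. The sketch below assumes the flags available in recent mlx-lm releases (`--hf-path`, `-q`, `--q-bits`, `--mlx-path`); exact options may vary by version, so check `python -m mlx_lm.convert --help`.

```bash
# Sketch: convert the base model to MLX and quantize to 8-bit.
# Flag names reflect recent mlx-lm releases; verify with --help for your install.
python -m mlx_lm.convert \
    --hf-path Sao10K/L3-8B-Stheno-v3.2 \
    -q --q-bits 8 \
    --mlx-path L3-8B-Stheno-v3.2-Q8-mlx
```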

## Use with mlx

```bash
pip install mlx-lm
```
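
You can also run a one-off generation straight from the command line. This is a hedged sketch using mlx-lm's generate CLI; flag names follow recent releases, so confirm with `python -m mlx_lm.generate --help`.

```bash
# Sketch: generate text from the terminal without writing any Python.
python -m mlx_lm.generate \
    --model YorkieOH10/L3-8B-Stheno-v3.2-Q8-mlx \
    --prompt "hello" \
    --max-tokens 256
```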

```python
from mlx_lm import load, generate

# Download (if necessary) and load the quantized model and its tokenizer.
model, tokenizer = load("YorkieOH10/L3-8B-Stheno-v3.2-Q8-mlx")

prompt = "hello"

# Wrap the prompt in the model's chat template when one is available.
if hasattr(tokenizer, "apply_chat_template") and tokenizer.chat_template is not None:
    messages = [{"role": "user", "content": prompt}]
    prompt = tokenizer.apply_chat_template(
        messages, tokenize=False, add_generation_prompt=True
    )

# Generate a completion; verbose=True streams tokens to stdout as they are produced.
response = generate(model, tokenizer, prompt=prompt, verbose=True)
```