---
base_model: google/gemma-2-2b-jpn-it
language:
- ja
library_name: transformers
license: gemma
pipeline_tag: text-generation
tags:
- conversational
- mlx
extra_gated_heading: Access Gemma on Hugging Face
extra_gated_prompt: To access Gemma on Hugging Face, you’re required to review and
  agree to Google’s usage license. To do this, please ensure you’re logged in to Hugging
  Face and click below. Requests are processed immediately.
extra_gated_button_content: Acknowledge license
---

# thr3a/gemma-2-2b-jpn-it-mlx

The model [thr3a/gemma-2-2b-jpn-it-mlx](https://huggingface.co/thr3a/gemma-2-2b-jpn-it-mlx) was converted to MLX format from [google/gemma-2-2b-jpn-it](https://huggingface.co/google/gemma-2-2b-jpn-it) using mlx-lm version **0.19.0**.
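
If you want to reproduce the conversion, or convert another checkpoint the same way, mlx-lm ships a `convert` helper. The following is a minimal sketch, assuming the `hf_path`/`mlx_path` keyword arguments of mlx-lm 0.19.0; the output directory name is just an example:

```python
from mlx_lm import convert

# Fetch google/gemma-2-2b-jpn-it from the Hub and write MLX-format
# weights to a local directory (the path below is illustrative).
convert(
    hf_path="google/gemma-2-2b-jpn-it",
    mlx_path="gemma-2-2b-jpn-it-mlx",
)
```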

## Use with mlx

```bash
pip install mlx-lm
```

```python
from mlx_lm import load, generate

# Download (if needed) and load the converted weights and tokenizer.
model, tokenizer = load("thr3a/gemma-2-2b-jpn-it-mlx")

prompt = "hello"

# Gemma's instruction-tuned checkpoints expect chat formatting, so wrap
# the prompt with the tokenizer's chat template when one is available.
if hasattr(tokenizer, "apply_chat_template") and tokenizer.chat_template is not None:
    messages = [{"role": "user", "content": prompt}]
    prompt = tokenizer.apply_chat_template(
        messages, tokenize=False, add_generation_prompt=True
    )

response = generate(model, tokenizer, prompt=prompt, verbose=True)
```
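
For interactive use you may prefer streaming output. A sketch, assuming the 0.19-series `stream_generate` API (which yields decoded text segments as they are produced); apply the same chat-template wrapping as above before calling it:

```python
from mlx_lm import load, stream_generate

model, tokenizer = load("thr3a/gemma-2-2b-jpn-it-mlx")

# Print each decoded segment as soon as it is generated rather than
# waiting for the full completion.
for chunk in stream_generate(model, tokenizer, prompt="hello", max_tokens=256):
    print(chunk, end="", flush=True)
print()
```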