Update README.md
README.md (CHANGED)
````diff
@@ -1,26 +1,25 @@
 ---
-…
-…
-license_link: https://huggingface.co/Qwen/Qwen2.5-72B-Instruct/blob/main/LICENSE
-pipeline_tag: image-text-to-text
-library_name: transformers
-base_model:
-- OpenGVLab/InternViT-300M-448px-V2_5
-- Qwen/Qwen2.5-0.5B
-base_model_relation: merge
-datasets:
-- OpenGVLab/MMPR-v1.2
-language:
-- multilingual
+base_model: google/gemma-3-12b-it
+license: gemma
 tags:
-…
-…
+- gemma3
+- gemma
+- google
 - mlx
+pipeline_tag: image-text-to-text
+library_name: transformers
+extra_gated_heading: Access Gemma on Hugging Face
+extra_gated_prompt: >-
+  To access Gemma on Hugging Face, you’re required to review and agree to
+  Google’s usage license. To do this, please ensure you’re logged in to Hugging
+  Face and click below. Requests are processed immediately.
+extra_gated_button_content: Acknowledge license
+
 ---
 
-# mlx-…
-This model was converted to MLX format from [`…
-Refer to the [original model card](https://huggingface.co/…
+# mlx-community/gemma-3-12b-it-qat-3bit
+This model was converted to MLX format from [`google/gemma-3-12b-it-qat-q4_0-unquantized`]() using mlx-vlm version **0.1.23**.
+Refer to the [original model card](https://huggingface.co/google/gemma-3-12b-it-qat-q4_0-unquantized) for more details on the model.
 ## Use with mlx
 
 ```bash
@@ -28,5 +27,5 @@ pip install -U mlx-vlm
 ```
 
 ```bash
-python -m mlx_vlm.generate --model mlx-…
+python -m mlx_vlm.generate --model mlx-community/gemma-3-12b-it-qat-3bit --max-tokens 100 --temperature 0.0 --prompt "Describe this image." --image <path_to_image>
 ```
````
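Beyond the CLI call shown in the updated card, mlx-vlm also exposes a Python API. The sketch below loads the converted model and generates a caption from an image, following the pattern in the mlx-vlm README; the exact `load`/`generate`/`apply_chat_template` signatures have shifted between mlx-vlm releases, so treat the keyword arguments as assumptions to verify against your installed version, not the documented interface for 0.1.23.

```python
# Minimal sketch: run the converted model through mlx-vlm's Python API.
# Function names follow the mlx-vlm README; signatures vary across releases.
from mlx_vlm import load, generate
from mlx_vlm.prompt_utils import apply_chat_template
from mlx_vlm.utils import load_config

model_path = "mlx-community/gemma-3-12b-it-qat-3bit"
model, processor = load(model_path)  # fetches weights from the Hub on first use
config = load_config(model_path)

images = ["path/to/image.jpg"]  # local path or URL; placeholder, supply your own
prompt = "Describe this image."

# Wrap the raw prompt in the model's chat template, declaring one image slot.
formatted = apply_chat_template(processor, config, prompt, num_images=len(images))

output = generate(model, processor, formatted, images,
                  max_tokens=100, temperature=0.0, verbose=False)
print(output)
```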