Update README.md
README.md CHANGED
@@ -1,10 +1,12 @@
-
 ---
 license: apache-2.0
 tags:
 - mlx
 - gemma-3
 - 4-bit
+base_model:
+- ZySec-AI/gemma-3-27b-tools
+new_version: ZySec-AI/gemma-3-27b-tools
 ---

 # Gemma 3 27B Tools - 4-bit MLX Quantization
@@ -15,4 +17,4 @@ This is a 4-bit MLX quantization of the [ZySec-AI/gemma-3-27b-tools](https://hug

 This repository contains a quantized version of Google's Gemma 3 27B model with tools, optimized for running with Apple's MLX framework. The quantization process reduces the model's size and computational requirements, making it suitable for deployment on devices with limited resources, such as Apple Silicon Macs.

-For more details on the original model, please refer to the [original model card](https://huggingface.co/ZySec-AI/gemma-3-27b-tools).
+For more details on the original model, please refer to the [original model card](https://huggingface.co/ZySec-AI/gemma-3-27b-tools).
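Since the README this commit updates describes a 4-bit MLX quantization intended to run on Apple Silicon, a short usage sketch may help readers get started. This is a minimal example, not part of the commit itself: it assumes the `mlx-lm` package is installed and that the quantized weights are published to a Hugging Face repo; the repo id below is a placeholder, not one given by this diff.

```python
# Minimal usage sketch with mlx-lm (pip install mlx-lm).
# NOTE: the repo id below is a placeholder for the quantized model's actual Hub path.
from mlx_lm import load, generate

# Load the 4-bit quantized model and its tokenizer from the Hub (or a local path).
model, tokenizer = load("your-username/gemma-3-27b-tools-4bit-mlx")

# Run a simple generation to verify the model works on an Apple Silicon Mac.
prompt = "Summarize what 4-bit quantization does to a large language model."
response = generate(model, tokenizer, prompt=prompt, max_tokens=128, verbose=True)
print(response)
```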