Upload README.md with huggingface_hub

---
license: apache-2.0
tags:
- mlx
- gemma-3
- 4-bit
---

# Gemma 3 27B Tools - 4-bit MLX Quantization

This is a 4-bit MLX quantization of the [ZySec-AI/gemma-3-27b-tools](https://huggingface.co/ZySec-AI/gemma-3-27b-tools) model.

## Model Description

This repository contains a quantized version of Google's Gemma 3 27B model with tool support, optimized for running with Apple's MLX framework. Quantization reduces the model's size and computational requirements, making it suitable for deployment on devices with limited resources, such as Apple Silicon Macs.
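
For reference, a quantization like this one can typically be produced from the original checkpoint with the `mlx-lm` conversion utility. The snippet below is a sketch rather than a record of how this repository was built: it assumes a recent `mlx-lm` release, and the output directory name is illustrative.

```python
# Sketch: converting the original checkpoint to a 4-bit MLX quantization.
# Requires `pip install mlx-lm` on an Apple Silicon Mac.
from mlx_lm import convert

convert(
    hf_path="ZySec-AI/gemma-3-27b-tools",   # original Hugging Face checkpoint
    mlx_path="gemma-3-27b-tools-4bit-mlx",  # example output directory
    quantize=True,                          # enable weight quantization
    q_bits=4,                               # 4-bit weights
)
```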
For more details on the original model, please refer to the [original model card](https://huggingface.co/ZySec-AI/gemma-3-27b-tools).
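
## Usage

A straightforward way to run the quantized model is through the `mlx-lm` Python package. The snippet below is a minimal sketch: the repository id is a placeholder (substitute this repo's actual path), and it assumes `mlx-lm` is installed (`pip install mlx-lm`) on an Apple Silicon Mac with enough unified memory for a 27B-parameter model at 4 bits.

```python
# Minimal generation example with mlx-lm (sketch; the repo id is a placeholder).
from mlx_lm import load, generate

model, tokenizer = load("<this-repo-id>")  # replace with this repository's path

# Format the request with the tokenizer's chat template (Gemma chat format).
messages = [{"role": "user", "content": "Briefly explain 4-bit quantization."}]
prompt = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, tokenize=False
)

response = generate(model, tokenizer, prompt=prompt, max_tokens=256, verbose=True)
print(response)
```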