Support QLoRA Training on AWQ Quantized Models
#89
opened by s3171103
Hi,
Thank you for the release of the google/gemma-3-27b-it model! I have a couple of questions regarding fine-tuning:
Is there an AWQ (Activation-aware Weight Quantization) version of this model available, or are there plans to release one? A quantized version would be very helpful for reducing memory usage.
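For context on the memory motivation, here is a rough back-of-envelope comparison of weight storage for a 27B-parameter model in fp16 versus 4-bit AWQ. These numbers are illustrative only: they ignore quantization scales/zero-points, activations, the KV cache, and optimizer state.

```python
# Back-of-envelope weight-memory estimate for a 27B-parameter model.
# Illustrative only: real usage adds scales/zero-points, activations,
# KV cache, and (when training) optimizer state on top of this.
params = 27e9

bytes_fp16 = params * 2    # 16 bits per weight
bytes_awq4 = params * 0.5  # 4 bits per weight under AWQ

gib = 1024 ** 3
print(f"fp16 weights:  {bytes_fp16 / gib:.1f} GiB")
print(f"4-bit weights: {bytes_awq4 / gib:.1f} GiB")
```

Even as a lower bound, the 4x reduction in weight memory is what makes a 27B model reachable on a single consumer or prosumer GPU.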
If an AWQ-quantized version is (or becomes) available, is it possible to perform QLoRA fine-tuning directly on it?