This model quantizes the weights of huihui-ai/Huihui-gemma-3n-E4B-it-abliterated to 4 bits using bitsandbytes.
Model tree for Qwe1325/Huihui-gemma-3n-E4B-it-abliterated-bnb-4bit
- Base model: google/gemma-3n-E4B
- Finetuned: google/gemma-3n-E4B-it