Llama-3.2-Kapusta-JapanChibi-3B-v1 GGUF Quantizations 🍲
Please give me a try; I'm small and useful.
I love this model. I don't understand Japanese myself, but it also performs well in other languages.
This model was converted to GGUF format using llama.cpp.
For more information about the model, see the original model card: Khetterman/Llama-3.2-Kapusta-JapanChibi-3B-v1.
Available Quantizations (◕‿◕)
| Type | Quantized GGUF Model | Size |
|---|---|---|
| Q4_0 | Khetterman/Llama-3.2-Kapusta-JapanChibi-3B-v1-Q4_0.gguf | 1.99 GiB |
| Q6_K | Khetterman/Llama-3.2-Kapusta-JapanChibi-3B-v1-Q6_K.gguf | 2.76 GiB |
| Q8_0 | Khetterman/Llama-3.2-Kapusta-JapanChibi-3B-v1-Q8_0.gguf | 3.57 GiB |
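To use one of the quantizations above, you can fetch the GGUF file and run it with llama.cpp. A minimal sketch is shown below; the repo id passed to `huggingface-cli` is an assumption (the file names come from the table), and the download and inference commands are left commented since they require network access and a built `llama-cli` binary.

```shell
# Assumed repo id; the file names below come from the quantization table.
REPO="Khetterman/Llama-3.2-Kapusta-JapanChibi-3B-v1"
QUANT="Q4_0"

# Build the GGUF file name for the chosen quantization.
FILE="${REPO#Khetterman/}-${QUANT}.gguf"
echo "$FILE"

# Download the file and chat with it using llama.cpp (uncomment to run):
# huggingface-cli download "$REPO" "$FILE" --local-dir .
# ./llama-cli -m "$FILE" -p "You are a helpful assistant." -cnv
```

Swap `QUANT` for `Q6_K` or `Q8_0` to trade memory for quality, per the sizes in the table.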
My thanks to the authors of the original models; your work is incredible. Have a good time 🤗