Llama-3.2-Kapusta-JapanChibi-3B-v1 GGUF Quantizations 🗲

Please stop, I'm small but I can be useful. (やめてください、私は小さくて役に立てます)

I love this model. I don't understand Japanese myself, but it also performs well in other languages.

[Model logo: Kapusta-JapanChibi-Logo256.png]

This model was converted to GGUF format using llama.cpp.
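If you want to reproduce a conversion like this yourself, the sketch below shows the usual llama.cpp flow, not the exact commands used for this repository: convert_hf_to_gguf.py produces a full-precision GGUF from the original checkpoint, and llama-quantize shrinks it into the quantized files. Paths and quant types are illustrative assumptions.

```python
# Minimal sketch of the typical llama.cpp conversion flow.
# Assumes a local clone of llama.cpp with its tools built; the exact
# binary/script locations depend on how you built it.
import subprocess

HF_MODEL_DIR = "Llama-3.2-Kapusta-JapanChibi-3B-v1"   # original checkpoint directory
F16_GGUF = "Llama-3.2-Kapusta-JapanChibi-3B-v1-F16.gguf"

# 1. Convert the Hugging Face checkpoint to a full-precision GGUF file.
subprocess.run(
    ["python", "llama.cpp/convert_hf_to_gguf.py", HF_MODEL_DIR,
     "--outfile", F16_GGUF, "--outtype", "f16"],
    check=True,
)

# 2. Quantize the F16 GGUF down to smaller variants (quant names are assumptions).
for quant in ["Q4_K_M", "Q6_K", "Q8_0"]:
    out_path = F16_GGUF.replace("F16", quant)
    subprocess.run(["llama.cpp/llama-quantize", F16_GGUF, out_path, quant], check=True)
```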

For more information about the model, see the original model card: Khetterman/Llama-3.2-Kapusta-JapanChibi-3B-v1.

Available Quantizations (◕‿◕)

My thanks to the authors of the original models; your work is incredible. Have a good time 🖤

Format: GGUF
Model size: 3.61B params
Architecture: llama

Quantizations included: 4-bit, 6-bit, and 8-bit.
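As a quick way to try one of these files, the snippet below downloads a quant with huggingface_hub and runs it through llama-cpp-python. The GGUF filename is a guess at the naming scheme; check the repository's file list for the real names.

```python
# Hedged usage sketch: fetch one quantized file and run a short chat completion.
from huggingface_hub import hf_hub_download
from llama_cpp import Llama

model_path = hf_hub_download(
    repo_id="Khetterman/Llama-3.2-Kapusta-JapanChibi-3B-v1-GGUF",
    filename="Llama-3.2-Kapusta-JapanChibi-3B-v1-Q8_0.gguf",  # assumed filename
)

llm = Llama(model_path=model_path, n_ctx=4096)

response = llm.create_chat_completion(
    messages=[{"role": "user", "content": "Introduce yourself in one sentence."}]
)
print(response["choices"][0]["message"]["content"])
```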

