Mungert/gemma-3-4b-it-qat-q4_0-GGUF
Tags: Image-Text-to-Text, GGUF, vision, gemma, llama.cpp, imatrix, conversational
License: gemma
Files and versions (branch: main)
1 contributor | History: 28 commits
Latest commit: Update README.md (b0a56cb, verified) by Mungert, 9 days ago
| File | Size | Storage | Last commit message | Last updated |
|---|---|---|---|---|
| .gitattributes | 2.57 kB | | Upload google_gemma-3-4b-it-q3_k_l.gguf with huggingface_hub | 9 days ago |
| README.md | 7.55 kB | | Update README.md | 9 days ago |
| gemma-3-4b-it-qat-q4_0-iq2_m.gguf | 1.67 GB | xet | Upload gemma-3-4b-it-qat-q4_0-iq2_m.gguf with huggingface_hub | 9 days ago |
| gemma-3-4b-it-qat-q4_0-iq2_s.gguf | 1.61 GB | xet | Upload gemma-3-4b-it-qat-q4_0-iq2_s.gguf with huggingface_hub | 9 days ago |
| gemma-3-4b-it-qat-q4_0-iq2_xs.gguf | 1.45 GB | xet | Upload gemma-3-4b-it-qat-q4_0-iq2_xs.gguf with huggingface_hub | 9 days ago |
| gemma-3-4b-it-qat-q4_0-iq2_xxs.gguf | 1.38 GB | xet | Upload gemma-3-4b-it-qat-q4_0-iq2_xxs.gguf with huggingface_hub | 9 days ago |
| gemma-3-4b-it-qat-q4_0-iq3_m.gguf | 1.9 GB | xet | Upload gemma-3-4b-it-qat-q4_0-iq3_m.gguf with huggingface_hub | 9 days ago |
| gemma-3-4b-it-qat-q4_0-iq3_s.gguf | 1.85 GB | xet | Upload gemma-3-4b-it-qat-q4_0-iq3_s.gguf with huggingface_hub | 9 days ago |
| gemma-3-4b-it-qat-q4_0-iq3_xs.gguf | 1.77 GB | xet | Upload gemma-3-4b-it-qat-q4_0-iq3_xs.gguf with huggingface_hub | 9 days ago |
| gemma-3-4b-it-qat-q4_0-iq3_xxs.gguf | 1.69 GB | xet | Upload gemma-3-4b-it-qat-q4_0-iq3_xxs.gguf with huggingface_hub | 9 days ago |
| gemma-3-4b-it-qat-q4_0-q2_k_s.gguf | 1.55 GB | xet | Upload gemma-3-4b-it-qat-q4_0-q2_k_s.gguf with huggingface_hub | 9 days ago |
| gemma-3-4b-it-qat-q4_0-q3_k_l.gguf | 2.26 GB | xet | Upload gemma-3-4b-it-qat-q4_0-q3_k_l.gguf with huggingface_hub | 9 days ago |
| gemma-3-4b-it-qat-q4_0-q3_k_m.gguf | 2.01 GB | xet | Upload gemma-3-4b-it-qat-q4_0-q3_k_m.gguf with huggingface_hub | 9 days ago |
| gemma-3-4b-it-qat-q4_0-q3_k_s.gguf | 1.85 GB | xet | Upload gemma-3-4b-it-qat-q4_0-q3_k_s.gguf with huggingface_hub | 9 days ago |
| gemma-3-4b-it-qat-q4_0.imatrix | 3.42 MB | xet | Upload gemma-3-4b-it-qat-q4_0.imatrix with huggingface_hub | 9 days ago |
| google_gemma-3-4b-it-q3_k_l.gguf | 2.26 GB | xet | Upload google_gemma-3-4b-it-q3_k_l.gguf with huggingface_hub | 9 days ago |
| mmproj-model-f16-4B.gguf | 851 MB | xet | Upload mmproj-model-f16-4B.gguf with huggingface_hub | 9 days ago |
| perp_test_2_files.py | 6 kB | | Upload perp_test_2_files.py with huggingface_hub | 9 days ago |
| perplexity_test_data.txt | 608 kB | | Upload perplexity_test_data.txt with huggingface_hub | 9 days ago |