eaddario/gemma-3-12b-it-GGUF

Text Generation · GGUF · English
Dataset: eaddario/imatrix-calibration
Tags: quant, experimental, conversational
arXiv: 2406.17415
License: gemma
1 contributor · History: 24 commits
Latest commit: 157b6ec (verified) by eaddario, "Add GGUF internal file structure", 12 days ago
| Name                       | Size    | Flags     | Last commit message              | Updated     |
|----------------------------|---------|-----------|----------------------------------|-------------|
| imatrix/                   |         |           | Generate imatrices               | 13 days ago |
| logits/                    |         |           | Add README.md                    | 13 days ago |
| scores/                    |         |           | Add GGUF internal file structure | 12 days ago |
| .gitattributes             | 1.6 kB  | Safe      | Update .gitattributes            | 13 days ago |
| .gitignore                 | 6.78 kB | Safe      | Add .gitignore                   | 13 days ago |
| README.md                  | 19.5 kB |           | Update README.md                 | 12 days ago |
| gemma-3-12b-it-F16.gguf    | 23.5 GB | Safe, LFS | Convert safetensor to GGUF @ F16 | 13 days ago |
| gemma-3-12b-it-IQ3_M.gguf  | 5.41 GB | LFS       | Layer-wise quantization IQ3_M    | 13 days ago |
| gemma-3-12b-it-IQ3_S.gguf  | 5.23 GB | LFS       | Layer-wise quantization IQ3_S    | 13 days ago |
| gemma-3-12b-it-IQ4_NL.gguf | 6.39 GB | LFS       | Layer-wise quantization IQ4_NL   | 13 days ago |
| gemma-3-12b-it-Q3_K_L.gguf | 5.52 GB | LFS       | Layer-wise quantization Q3_K_L   | 13 days ago |
| gemma-3-12b-it-Q3_K_M.gguf | 5.22 GB | LFS       | Layer-wise quantization Q3_K_M   | 13 days ago |
| gemma-3-12b-it-Q3_K_S.gguf | 4.99 GB | LFS       | Layer-wise quantization Q3_K_S   | 13 days ago |
| gemma-3-12b-it-Q4_K_M.gguf | 6.43 GB | Safe, LFS | Layer-wise quantization Q4_K_M   | 13 days ago |
| gemma-3-12b-it-Q4_K_S.gguf | 6.4 GB  | Safe, LFS | Layer-wise quantization Q4_K_S   | 13 days ago |
| gemma-3-12b-it-Q5_K_M.gguf | 7.61 GB | Safe, LFS | Layer-wise quantization Q5_K_M   | 13 days ago |
| gemma-3-12b-it-Q5_K_S.gguf | 7.58 GB | Safe, LFS | Layer-wise quantization Q5_K_S   | 13 days ago |
| gemma-3-12b-it-Q6_K.gguf   | 9.37 GB | Safe, LFS | Layer-wise quantization Q6_K     | 13 days ago |
| gemma-3-12b-it-Q8_0.gguf   | 11.4 GB | Safe, LFS | Layer-wise quantization Q8_0     | 13 days ago |
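Each of the GGUF files above is stored via Git LFS and can be fetched directly from the Hub. As a minimal sketch (the `hf_resolve_url` helper is hypothetical, but the `/resolve/<revision>/<filename>` URL pattern is the standard Hugging Face download path), the direct-download URL for any file in this listing can be built like this:

```python
def hf_resolve_url(repo_id: str, filename: str, revision: str = "main") -> str:
    """Build the direct-download URL for a file in a Hugging Face repo.

    The Hub serves raw (LFS-backed) file contents at
    https://huggingface.co/<repo_id>/resolve/<revision>/<filename>.
    """
    return f"https://huggingface.co/{repo_id}/resolve/{revision}/{filename}"

# Example: the Q4_K_M quantization from this repository's file listing.
url = hf_resolve_url("eaddario/gemma-3-12b-it-GGUF", "gemma-3-12b-it-Q4_K_M.gguf")
print(url)
```

In practice, `huggingface_hub.hf_hub_download(repo_id=..., filename=...)` performs the same resolution with caching and resume support, which is preferable for the multi-gigabyte files here.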