Lewdiculous / firefly-gemma-7b-GGUF-IQ-Imatrix
Tags: GGUF, gemma, quantized
License: apache-2.0
1 contributor, history: 5 commits
Latest commit: 1bb19e9 (verified), "Upload 5 files" by Lewdiculous, 8 months ago
File                                       Size      LFS   Last commit        Updated
.gitattributes                             2.3 kB          Upload 5 files     8 months ago
README.md                                  4.4 kB          Update README.md   8 months ago
firefly-gemma-7b-F16.gguf                  17.1 GB   LFS   Upload 7 files     8 months ago
firefly-gemma-7b-IQ2_M-imatrix.gguf        3.13 GB   LFS   Upload 7 files     8 months ago
firefly-gemma-7b-IQ2_S-imatrix.gguf        2.92 GB   LFS   Upload 7 files     8 months ago
firefly-gemma-7b-IQ2_XS-imatrix.gguf       2.81 GB   LFS   Upload 7 files     8 months ago
firefly-gemma-7b-IQ2_XXS-imatrix.gguf      2.59 GB   LFS   Upload 7 files     8 months ago
firefly-gemma-7b-IQ4_XS-imatrix.gguf       4.77 GB   LFS   Upload 5 files     8 months ago
firefly-gemma-7b-Q4_K_S-imatrix.gguf       5.05 GB   LFS   Upload 5 files     8 months ago
firefly-gemma-7b-Q5_K_M-imatrix.gguf       6.14 GB   LFS   Upload 5 files     8 months ago
firefly-gemma-7b-Q5_K_S-imatrix.gguf       5.98 GB   LFS   Upload 5 files     8 months ago
firefly-gemma-7b-Q6_K-imatrix.gguf         7.01 GB   LFS   Upload 5 files     8 months ago
imatrix-firefly-gemma-7b-F16.dat           4.94 MB   LFS   Upload 7 files     8 months ago
kalomaze's-groups_merged.txt               201 kB          Upload 7 files     8 months ago