afrideva/smartyplats-1.1b-v2-GGUF
Tags: Text Generation, GGUF, ggml, quantized
Quantization variants: q2_k, q3_k_m, q4_k_m, q5_k_m, q6_k, q8_0
License: apache-2.0
Branch: main · 1 contributor · 9 commits
Latest commit: cd65f67 by afrideva, "Upload README.md with huggingface_hub" (12 months ago)
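
The commit messages show the files were uploaded with huggingface_hub, and the same library can download them. A minimal sketch, assuming you want the q4_k_m variant (any filename from the table below works; the choice here is only illustrative):

```python
from huggingface_hub import hf_hub_download

# Download one quantized GGUF file from this repo into the local HF cache.
# The q4_k_m variant is an arbitrary example; see the file table below.
model_path = hf_hub_download(
    repo_id="afrideva/smartyplats-1.1b-v2-GGUF",
    filename="smartyplats-1.1b-v2.q4_k_m.gguf",
)
print(model_path)  # local path of the downloaded GGUF file
```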
| File | Size | LFS | Last commit | Updated |
|---|---|---|---|---|
| .gitattributes | 1.99 kB | | Upload smartyplats-1.1b-v2.q8_0.gguf with huggingface_hub | 12 months ago |
| README.md | 1.88 kB | | Upload README.md with huggingface_hub | 12 months ago |
| smartyplats-1.1b-v2.fp16.gguf | 2.2 GB | LFS | Upload smartyplats-1.1b-v2.fp16.gguf with huggingface_hub | 12 months ago |
| smartyplats-1.1b-v2.q2_k.gguf | 483 MB | LFS | Upload smartyplats-1.1b-v2.q2_k.gguf with huggingface_hub | 12 months ago |
| smartyplats-1.1b-v2.q3_k_m.gguf | 551 MB | LFS | Upload smartyplats-1.1b-v2.q3_k_m.gguf with huggingface_hub | 12 months ago |
| smartyplats-1.1b-v2.q4_k_m.gguf | 669 MB | LFS | Upload smartyplats-1.1b-v2.q4_k_m.gguf with huggingface_hub | 12 months ago |
| smartyplats-1.1b-v2.q5_k_m.gguf | 783 MB | LFS | Upload smartyplats-1.1b-v2.q5_k_m.gguf with huggingface_hub | 12 months ago |
| smartyplats-1.1b-v2.q6_k.gguf | 904 MB | LFS | Upload smartyplats-1.1b-v2.q6_k.gguf with huggingface_hub | 12 months ago |
| smartyplats-1.1b-v2.q8_0.gguf | 1.17 GB | LFS | Upload smartyplats-1.1b-v2.q8_0.gguf with huggingface_hub | 12 months ago |
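
GGUF files target llama.cpp-compatible runtimes. A minimal inference sketch, assuming the q4_k_m file downloaded above and the llama-cpp-python bindings (the repo itself does not specify a runtime, context size, or prompt template):

```python
from llama_cpp import Llama

# Path of the quantized model, e.g. the value returned by hf_hub_download above.
model_path = "smartyplats-1.1b-v2.q4_k_m.gguf"

# n_ctx is an assumed context size, not documented by this repo.
llm = Llama(model_path=model_path, n_ctx=2048)

# Plain text completion; no chat template is documented here, so this call
# is only illustrative.
out = llm("Hello, my name is", max_tokens=32)
print(out["choices"][0]["text"])
```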