PowerInfer/SmallThinker-4BA0.6B-Instruct-INT4-QAT
Tags: GGUF, conversational. License: apache-2.0
Files and versions (branch: main)
1 contributor · History: 8 commits
Latest commit: Update README.md by yixinsong (fd25a73, verified), 13 days ago
| File | Size | Last commit | Updated |
|------|------|-------------|---------|
| .gitattributes | 2.01 kB | Upload SmallThinker-4B-A0.6B-Instruct-qat.Q3_K_M.gguf with huggingface_hub | 13 days ago |
| README.md | 82 Bytes | Update README.md | 13 days ago |
| SmallThinker-4B-A0.6B-Instruct-qat.Q3_K_M.gguf | 2.12 GB (LFS) | Upload SmallThinker-4B-A0.6B-Instruct-qat.Q3_K_M.gguf with huggingface_hub | 13 days ago |
| SmallThinker-4B-A0.6B-Instruct-qat.Q3_K_S.gguf | 1.94 GB (LFS) | Upload SmallThinker-4B-A0.6B-Instruct-qat.Q3_K_S.gguf with huggingface_hub | 13 days ago |
| SmallThinker-4B-A0.6B-Instruct-qat.Q4_0.gguf | 2.41 GB (LFS) | Upload SmallThinker-4B-A0.6B-Instruct-qat.Q4_0.gguf with huggingface_hub | 13 days ago |
| SmallThinker-4B-A0.6B-Instruct-qat.Q4_K_M.gguf | 2.63 GB (LFS) | Upload SmallThinker-4B-A0.6B-Instruct-qat.Q4_K_M.gguf with huggingface_hub | 13 days ago |
| SmallThinker-4B-A0.6B-Instruct-qat.Q4_K_S.gguf | 2.49 GB (LFS) | Upload SmallThinker-4B-A0.6B-Instruct-qat.Q4_K_S.gguf with huggingface_hub | 13 days ago |
| SmallThinker-4B-A0.6B-Instruct-qat.Q8_0.gguf | 4.55 GB (LFS) | Upload SmallThinker-4B-A0.6B-Instruct-qat.Q8_0.gguf with huggingface_hub | 13 days ago |
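
The commit messages indicate these GGUF files were uploaded with `huggingface_hub`. As a minimal sketch (assuming the repo id shown above and picking the Q4_K_M file as an example), the same library can download a single quantized file for local use:

```python
# Hedged sketch, not an official usage guide for this repo.
# Repo id and filename are taken from the listing above; the choice of the
# Q4_K_M quantization is an arbitrary example.
from huggingface_hub import hf_hub_download

gguf_path = hf_hub_download(
    repo_id="PowerInfer/SmallThinker-4BA0.6B-Instruct-INT4-QAT",
    filename="SmallThinker-4B-A0.6B-Instruct-qat.Q4_K_M.gguf",
)

# hf_hub_download returns the path of the file in the local Hugging Face cache.
print(gguf_path)
```

The downloaded path can then be passed to any GGUF-compatible runtime; with llama.cpp, for instance, something like `llama-cli -m <path>` would load it (exact invocation depends on the llama.cpp version you have installed).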