IntelligentEstate/Sakura_Warding_H0.5-Qw2.5-7B-Q4_K_M-GGUF

Great all-around functionality. This model was converted to GGUF format from newsbang/Homer-v0.5-Qwen2.5-7B using llama.cpp. It performs well on code tasks and most general use. Refer to the original model card for more details on the model. It took a few quantization passes to get everything right.

The model is named for personal system use; after multiple quants, this one turned out to be the most functional for me.
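A minimal sketch of running this quantized model locally with llama.cpp's CLI. The GGUF filename below is an assumption based on this repo's name; substitute the actual file you download from the repo.

```shell
# Run the Q4_K_M quantized model with llama.cpp.
# -m  path to the downloaded GGUF file (filename assumed here)
# -p  prompt text
# -c  context window size
# -n  maximum number of tokens to generate
llama-cli -m sakura_warding_h0.5-qw2.5-7b-q4_k_m.gguf \
  -p "Write a Python function that reverses a string." \
  -c 4096 -n 256
```

At roughly 4.7 GB, the Q4_K_M quant fits comfortably in 8 GB of RAM or VRAM for local inference.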

GGUF details:
- Model size: 7.62B params
- Architecture: qwen2

- Quantization: 4-bit (Q4_K_M)


Model tree for IntelligentEstate/Sakura_HomerV0.5-Qw2.5-7B-Q4_K_M-GGUF
