IntelligentEstate/Sakura_Warding_H0.5-Qw2.5-7B-Q4_K_M-GGUF

This model was converted to GGUF format from newsbang/Homer-v0.5-Qwen2.5-7B using llama.cpp. It offers great all-around functionality and works well for code work and most other tasks. It took a few quantizations to get everything right; refer to the original model card for more details on the model.

The model is named for personal system use; after multiple quants, this one turned out to be the most functional for me.
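If you want to try the quant locally, below is a minimal sketch using the llama-cpp-python bindings (with huggingface_hub installed for the download). The filename glob, context size, and prompt are assumptions for illustration, not settings documented in this repo; check the repo's file listing for the actual GGUF filename.

```python
# Minimal sketch: load the Q4_K_M quant via llama-cpp-python and run one chat turn.
# Requires: pip install llama-cpp-python huggingface_hub
from llama_cpp import Llama

llm = Llama.from_pretrained(
    repo_id="IntelligentEstate/Sakura_Warding_H0.5-Qw2.5-7B-Q4_K_M-GGUF",
    filename="*q4_k_m.gguf",  # glob pattern; assumes a single Q4_K_M file in the repo
    n_ctx=4096,               # assumed context window; raise it if you have the memory
)

response = llm.create_chat_completion(
    messages=[
        {"role": "user", "content": "Write a Python function that reverses a string."}
    ],
    max_tokens=256,
)
print(response["choices"][0]["message"]["content"])
```

The same model can also be run directly with the llama.cpp CLI or server binaries if you prefer not to use the Python bindings.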

Format: GGUF (4-bit, Q4_K_M)
Model size: 7.62B params
Architecture: qwen2
