IntelligentEstate/HammerHead-7b-Q4_K_M-GGUF
Two 4-bit quants: one built with a business-oriented, normalized importance matrix and one with a plain K_M quantization, for side-by-side testing. Both mimic the multi-turn function calling now popularized by many large-scale services.
This model was converted to GGUF format from MadeAgents/Hammer2.1-7b using llama.cpp.
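As a rough sketch, a similar conversion can be reproduced with the stock llama.cpp tools, assuming the Hammer2.1-7b weights are downloaded to a local directory; the paths, file names, and calibration text below are placeholders, not the exact commands used for this repo:
# convert the HF checkpoint to a full-precision GGUF
python convert_hf_to_gguf.py ./Hammer2.1-7b --outtype f16 --outfile hammer2.1-7b-f16.gguf
# optional: compute an importance matrix from a calibration text for the imatrix quant
./llama-imatrix -m hammer2.1-7b-f16.gguf -f calibration.txt -o imatrix.dat
# quantize to Q4_K_M, once with and once without the importance matrix
./llama-quantize --imatrix imatrix.dat hammer2.1-7b-f16.gguf hammerhead-7b-q4_k_m-imatrix.gguf Q4_K_M
./llama-quantize hammer2.1-7b-f16.gguf hammerhead-7b-q4_k_m.gguf Q4_K_M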
Use with llama.cpp
Install llama.cpp through brew (works on Mac and Linux):
brew install llama.cpp
Invoke the llama.cpp server or the CLI.
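For example (the exact GGUF file name inside this repo is an assumption; check the repo's file list):
CLI:
llama-cli --hf-repo IntelligentEstate/HammerHead-7b-Q4_K_M-GGUF --hf-file hammerhead-7b-q4_k_m.gguf -p "Write a short haiku about sharks"
Server:
llama-server --hf-repo IntelligentEstate/HammerHead-7b-Q4_K_M-GGUF --hf-file hammerhead-7b-q4_k_m.gguf -c 2048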
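To try the multi-turn function calling described above, one option is the server's OpenAI-compatible chat endpoint. This is a hedged sketch: the get_weather tool is a made-up example, and tool-call parsing in llama-server generally requires launching the server with the --jinja flag so the model's chat template is applied.
curl http://localhost:8080/v1/chat/completions -H "Content-Type: application/json" -d '{
  "messages": [{"role": "user", "content": "What is the weather in Paris right now?"}],
  "tools": [{
    "type": "function",
    "function": {
      "name": "get_weather",
      "description": "Return the current weather for a city",
      "parameters": {
        "type": "object",
        "properties": {"city": {"type": "string"}},
        "required": ["city"]
      }
    }
  }]
}'
If the model decides to call the tool, the response contains a tool_calls entry; append the tool's result as a "tool" role message and call the endpoint again to continue the multi-turn exchange.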
Model tree for IntelligentEstate/HammerHead-7b-Q4_k_m.gguf
- Base model: Qwen/Qwen2.5-7B
- Finetuned: Qwen/Qwen2.5-Coder-7B
- Finetuned: Qwen/Qwen2.5-Coder-7B-Instruct
- Finetuned: MadeAgents/Hammer2.1-7b