GGUF
llama-cpp
conversational
fuzzy-mittenz committed · Commit 461d5d2 · verified · 1 Parent(s): f256682

Update README.md

Files changed (1):
  1. README.md +1 -1
README.md CHANGED
@@ -9,7 +9,7 @@ tags:
 ---
 
 # IntelligentEstate/HammerHead-7b-Q4_K_M-GGUF
-
+Two 4-bit quants: one built with a business-oriented, normalized importance matrix and one a standard Q4_K_M quant, for side-by-side testing. Both mimic the multi-turn function calling now popularized by many large-scale services.
 ![hammerhead.png](https://cdn-uploads.huggingface.co/production/uploads/6593502ca2607099284523db/CeiBwM5RTFoDrQti8otFv.png)
 
 This model was converted to GGUF format from [`MadeAgents/Hammer2.1-7b`](https://huggingface.co/MadeAgents/Hammer2.1-7b) using llama.cpp
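
To sanity-check either quant locally, a minimal sketch using the llama-cpp-python bindings (one common way to run llama.cpp GGUF files; this card does not prescribe a runner). The local filename and the sampling settings below are assumptions; point `model_path` at whichever `.gguf` file you download from this repo.

```python
# Minimal sketch: load the Q4_K_M GGUF with llama-cpp-python and run a chat turn.
from llama_cpp import Llama

llm = Llama(
    model_path="hammerhead-7b-q4_k_m.gguf",  # hypothetical local filename
    n_ctx=4096,        # context window; adjust to available RAM
    n_gpu_layers=-1,   # offload all layers if built with GPU support
)

# Hammer 2.1 targets function calling, but a plain chat request is the
# simplest smoke test of the quant.
out = llm.create_chat_completion(
    messages=[
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": "List three uses of a tool-calling model."},
    ],
    max_tokens=256,
)
print(out["choices"][0]["message"]["content"])
```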