ZeroWw committed on
Commit fcaf11e · verified · 1 Parent(s): c8b0e4a

Update README.md

Files changed (1):
  README.md +1 -0
README.md CHANGED
@@ -20,6 +20,7 @@ quantize.exe --allow-requantize --output-tensor-type f16 --token-embedding-type
 quantize.exe --allow-requantize --output-tensor-type f16 --token-embedding-type f16 model.f16.gguf model.f16.q6.gguf q8_0
 and there is also a pure f16 in every directory.
 
+* [ZeroWw/L3-Blackfall-Summanus-v0.1-15B-GGUF](https://huggingface.co/ZeroWw/L3-Blackfall-Summanus-v0.1-15B-GGUF)
 * [ZeroWw/Smegmma-Deluxe-9B-v1-GGUF](https://huggingface.co/ZeroWw/Smegmma-Deluxe-9B-v1-GGUF)
 * [ZeroWw/Smegmma-9B-v1-GGUF](https://huggingface.co/ZeroWw/Smegmma-9B-v1-GGUF)
 * [ZeroWw/internlm2_5-7b-chat-GGUF](https://huggingface.co/ZeroWw/internlm2_5-7b-chat-GGUF)