qwp4w3hyb committed · verified
Commit ede5c98 · Parent(s): ffcf0b6

Update README.md

Files changed (1): README.md +1 −0
README.md CHANGED
@@ -14,6 +14,7 @@ tags:
 # Quant Infos (Uploading in progress ETA 25mins)
 
 - Requires llama.cpp [b4875](https://github.com/ggml-org/llama.cpp/releases/tag/b4875)
+- LLM ONLY (No vision support)
 - quants done with an importance matrix for improved quantization loss
 - Quantized ggufs & imatrix from hf bf16, through bf16. `safetensors bf16 -> gguf bf16 -> quant` for *optimal* quant loss.
 - Wide coverage of different gguf quant types from Q\_8\_0 down to IQ1\_S (WIP)
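The `safetensors bf16 -> gguf bf16 -> quant` pipeline the README describes can be sketched with llama.cpp's own tools. This is a hedged sketch, not the uploader's exact commands: the model directory, calibration file, and output names below are placeholder assumptions, and flags should be checked against the b4875 release.

```shell
# Assumptions: llama.cpp b4875 built locally; ./hf-model-dir, calibration.txt,
# and all output filenames are placeholders, not the uploader's actual paths.

# 1. Convert the HF safetensors checkpoint to a bf16 GGUF
#    (keeps bf16 precision end-to-end before quantizing).
python convert_hf_to_gguf.py ./hf-model-dir \
    --outtype bf16 --outfile model-bf16.gguf

# 2. Compute an importance matrix from a calibration text file;
#    it weights which tensors tolerate aggressive quantization.
./llama-imatrix -m model-bf16.gguf -f calibration.txt -o model.imatrix

# 3. Quantize directly from the bf16 GGUF using the imatrix,
#    e.g. to IQ4_XS (repeat per quant type, down to IQ1_S).
./llama-quantize --imatrix model.imatrix \
    model-bf16.gguf model-IQ4_XS.gguf IQ4_XS
```

Quantizing from the bf16 GGUF rather than an intermediate fp16 conversion is what the README means by "through bf16": the only lossy step is the final quantization itself.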