Nondzu committed (verified)
Commit bc8e89b · Parent: f6c17d9

Update README.md

Files changed (1): README.md (+0 -2)
README.md CHANGED
@@ -31,8 +31,6 @@ Below is a list of available quantized model files along with their quantization
 | [PLLuM-8x7B-nc-instruct-Q3_K_S.gguf](https://huggingface.co/Nondzu/PLLuM-8x7B-chat-GGUF/tree/main) | Q3_K_S | 20 GB | Moderate quality with improved space efficiency. |
 | [PLLuM-8x7B-nc-instruct-Q4_K_M.gguf](https://huggingface.co/Nondzu/PLLuM-8x7B-chat-GGUF/tree/main) | Q4_K_M | 27 GB | Default quality for most use cases – recommended. |
 | [PLLuM-8x7B-nc-instruct-Q4_K_S.gguf](https://huggingface.co/Nondzu/PLLuM-8x7B-chat-GGUF/tree/main) | Q4_K_S | 25 GB | Slightly lower quality with enhanced space savings – recommended when size is a priority. |
-| [PLLuM-8x7B-nc-instruct-Q5_0.gguf](https://huggingface.co/Nondzu/PLLuM-8x7B-chat-GGUF/tree/main) | Q5_0 | 31 GB | Extremely high quality – the maximum quant available. |
-| [PLLuM-8x7B-nc-instruct-Q5_K.gguf](https://huggingface.co/Nondzu/PLLuM-8x7B-chat-GGUF/tree/main) | Q5_K | 31 GB | Very high quality – recommended for demanding use cases. |
 | [PLLuM-8x7B-nc-instruct-Q5_K_M.gguf](https://huggingface.co/Nondzu/PLLuM-8x7B-chat-GGUF/tree/main) | Q5_K_M | 31 GB | High quality – recommended. |
 | [PLLuM-8x7B-nc-instruct-Q5_K_S.gguf](https://huggingface.co/Nondzu/PLLuM-8x7B-chat-GGUF/tree/main) | Q5_K_S | 31 GB | High quality, offered as an alternative with minimal quality loss. |
 | [PLLuM-8x7B-nc-instruct-Q6_K.gguf](https://huggingface.co/Nondzu/PLLuM-8x7B-chat-GGUF/tree/main) | Q6_K | 36 GB | Very high quality with quantized embed/output weights. |
 
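For readers landing on this commit from the model card: a minimal sketch of pulling the recommended Q4_K_M quant from this repo and loading it locally. It assumes `huggingface_hub` and `llama-cpp-python` are installed; the prompt and the `n_ctx`/`n_gpu_layers` settings below are illustrative, not the model's official chat template or tuned values.

```python
# Sketch: download one quant from the table above and run a quick completion.
# Assumes: pip install huggingface_hub llama-cpp-python
from huggingface_hub import hf_hub_download
from llama_cpp import Llama

# Repo and filename taken from the table above (Q4_K_M, the "default quality" pick).
model_path = hf_hub_download(
    repo_id="Nondzu/PLLuM-8x7B-chat-GGUF",
    filename="PLLuM-8x7B-nc-instruct-Q4_K_M.gguf",
)

# Illustrative defaults: 4k context, offload all layers to GPU if available.
llm = Llama(model_path=model_path, n_ctx=4096, n_gpu_layers=-1)

# Plain completion call; the model may expect a specific chat template.
out = llm("Napisz krótkie powitanie po polsku:", max_tokens=64)
print(out["choices"][0]["text"])
```

The Q4_K_M file is about 27 GB per the table, so the download needs matching disk space; any smaller quant listed above swaps in by changing `filename`.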