ZeroWw committed · verified
Commit 4faed22 · Parent: afc6adf

Update README.md

Files changed (1): README.md +1 -1
README.md CHANGED
@@ -25,11 +25,11 @@ quantize.exe --allow-requantize --output-tensor-type f16 --token-embedding-type
 quantize.exe --allow-requantize --pure model.f16.gguf model.f16.q8_p.gguf q8_0
 ```
 
+* [ZeroWw/Mistral-Nemo-Instruct-2407-GGUF](https://huggingface.co/ZeroWw/Mistral-Nemo-Instruct-2407-GGUF)
 * [ZeroWw/llama-3-Nephilim-v3-8B-GGUF](https://huggingface.co/ZeroWw/llama-3-Nephilim-v3-8B-GGUF)
 * [ZeroWw/microsoft_WizardLM-2-7B-GGUF](https://huggingface.co/ZeroWw/microsoft_WizardLM-2-7B-GGUF)
 * [ZeroWw/h2ogpt-4096-llama2-13b-chat-GGUF](https://huggingface.co/ZeroWw/h2ogpt-4096-llama2-13b-chat-GGUF)
 * [ZeroWw/h2o-danube3-4b-chat-GGUF](https://huggingface.co/ZeroWw/h2o-danube3-4b-chat-GGUF)
-* [ZeroWw/Mistral-Nemo-Instruct-2407-GGUF](https://huggingface.co/ZeroWw/Mistral-Nemo-Instruct-2407-GGUF)
 * [ZeroWw/L3-8B-Celeste-V1.2-GGUF](https://huggingface.co/ZeroWw/L3-8B-Celeste-V1.2-GGUF)
 * [ZeroWw/xLAM-1b-fc-r-GGUF](https://huggingface.co/ZeroWw/xLAM-7b-fc-r-GGUF)
 * [ZeroWw/xLAM-1b-fc-r-GGUF](https://huggingface.co/ZeroWw/xLAM-1b-fc-r-GGUF)
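For readers landing on this commit without context: the unchanged context lines in the diff are the quantization recipe the README documents, using llama.cpp's quantize tool. Below is a minimal dry-run sketch of the second command, which the diff shows in full (the first command is truncated in the hunk header, so its trailing arguments are not reproduced here; `quantize.exe` and the model filenames are taken from the diff context, not verified against this repository):

```shell
# Dry run: print the quantize invocation rather than executing it, since the
# llama.cpp binary and the f16 GGUF model are not part of this page.
QUANTIZE=quantize.exe   # llama.cpp quantize binary, as named in the README

# --allow-requantize permits re-quantizing an already-quantized model;
# --pure applies the target type (q8_0) to every tensor, including the
# output and token-embedding tensors.
echo "$QUANTIZE --allow-requantize --pure model.f16.gguf model.f16.q8_p.gguf q8_0"
```

Dropping `--pure` in favor of the `--output-tensor-type f16 --token-embedding-type …` flags visible in the truncated hunk header gives the mixed-precision variant the README pairs with this one.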