---
title: README
emoji: 🔥
colorFrom: purple
colorTo: purple
sdk: static
pinned: true
---

These are my own quantizations (updated almost daily).  
The difference from normal quantizations is that I quantize the output and embedding tensors to f16,  
and the other tensors to q5_k, q6_k, or q8_0.  
This produces models that are barely degraded, if at all, while being smaller in size.  
They run at about 3-6 tokens/sec on CPU only using llama.cpp,  
and obviously faster on machines with potent GPUs.  
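
For example, here is a minimal CPU-only inference sketch using llama-cpp-python (the Python bindings for llama.cpp). The model filename and prompt are placeholders, not files shipped with this README; point it at one of the quantized GGUFs from the repositories listed further down.

```python
# Minimal CPU-only inference sketch with llama-cpp-python (pip install llama-cpp-python).
# The GGUF path below is a placeholder: use one of the q5_k/q6_k/q8_0 files
# downloaded from the repositories listed in this README.
from llama_cpp import Llama

llm = Llama(
    model_path="Mistral-7B-Instruct-v0.3.q5_k.gguf",  # placeholder filename
    n_ctx=4096,      # context window
    n_threads=8,     # CPU threads; tune for your machine
    n_gpu_layers=0,  # 0 = CPU only; raise this if you have a GPU
)

out = llm("Explain GGUF quantization in one sentence.", max_tokens=128)
print(out["choices"][0]["text"])
```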

ALL the models were quantized in this way:  
quantize.exe --allow-requantize --output-tensor-type f16 --token-embedding-type f16 model.f16.gguf model.f16.q5.gguf q5_k  
quantize.exe --allow-requantize --output-tensor-type f16 --token-embedding-type f16 model.f16.gguf model.f16.q6.gguf q6_k  
quantize.exe --allow-requantize --output-tensor-type f16 --token-embedding-type f16 model.f16.gguf model.f16.q8.gguf q8_0  
and there is also a pure f16 in every directory.  
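
The three commands above can also be scripted. Below is a small sketch that shells out to the quantize binary with the same flags; the binary name and the input filename are assumptions about your local llama.cpp build, not part of this repository.

```python
# Sketch: run the three quantizations above in one loop via subprocess.
# "quantize.exe" and "model.f16.gguf" are assumptions; adjust to your llama.cpp build
# (newer builds name the binary "llama-quantize").
import subprocess

QUANTIZE = "quantize.exe"   # llama.cpp quantize binary
SRC = "model.f16.gguf"      # full-precision source GGUF

for qtype in ("q5_k", "q6_k", "q8_0"):
    dst = SRC.replace(".f16.gguf", f".f16.{qtype}.gguf")
    subprocess.run(
        [
            QUANTIZE,
            "--allow-requantize",
            "--output-tensor-type", "f16",    # keep the output tensor at f16
            "--token-embedding-type", "f16",  # keep the token embeddings at f16
            SRC,
            dst,
            qtype,
        ],
        check=True,
    )
    print(f"wrote {dst}")
```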

* [ZeroWw/llama3-8B-DarkIdol-2.2-Uncensored-1048K-GGUF](https://huggingface.co/ZeroWw/llama3-8B-DarkIdol-2.2-Uncensored-1048K-GGUF)
* [ZeroWw/Gemma-2-9B-It-SPPO-Iter3-GGUF](https://huggingface.co/ZeroWw/Gemma-2-9B-It-SPPO-Iter3-GGUF)
* [ZeroWw/Phi-3-mini-4k-geminified-GGUF](https://huggingface.co/ZeroWw/Phi-3-mini-4k-geminified-GGUF)
* [ZeroWw/CodeQwen1.5-7B-Chat-GGUF](https://huggingface.co/ZeroWw/CodeQwen1.5-7B-Chat-GGUF)
* [ZeroWw/NeuralPipe-7B-slerp-GGUF](https://huggingface.co/ZeroWw/NeuralPipe-7B-slerp-GGUF)
* [ZeroWw/Llama-3-8B-Instruct-Gradient-4194k-GGUF](https://huggingface.co/ZeroWw/Llama-3-8B-Instruct-Gradient-4194k-GGUF)
* [ZeroWw/gemma-2-9b-it-GGUF](https://huggingface.co/ZeroWw/gemma-2-9b-it-GGUF)
* [ZeroWw/llama3-8B-DarkIdol-2.1-Uncensored-32K-GGUF](https://huggingface.co/ZeroWw/llama3-8B-DarkIdol-2.1-Uncensored-32K-GGUF)
* [ZeroWw/Meta-Llama-3-8B-Instruct-abliterated-v3-GGUF](https://huggingface.co/ZeroWw/Meta-Llama-3-8B-Instruct-abliterated-v3-GGUF)
* [ZeroWw/Hathor_Stable-v0.2-L3-8B-GGUF](https://huggingface.co/ZeroWw/Hathor_Stable-v0.2-L3-8B-GGUF)
* [ZeroWw/L3-Aethora-15B-V2-GGUF](https://huggingface.co/ZeroWw/L3-Aethora-15B-V2-GGUF)
* [ZeroWw/L3-8B-Stheno-v3.3-32K-GGUF](https://huggingface.co/ZeroWw/L3-8B-Stheno-v3.3-32K-GGUF)
* [ZeroWw/Llama-3-8B-Instruct-Gradient-1048k-GGUF](https://huggingface.co/ZeroWw/Llama-3-8B-Instruct-Gradient-1048k-GGUF)
* [ZeroWw/Pythia-Chat-Base-7B-GGUF](https://huggingface.co/ZeroWw/Pythia-Chat-Base-7B-GGUF)
* [ZeroWw/Yi-1.5-6B-Chat-GGUF](https://huggingface.co/ZeroWw/Yi-1.5-6B-Chat-GGUF)
* [ZeroWw/DeepSeek-Coder-V2-Lite-Base-GGUF](https://huggingface.co/ZeroWw/DeepSeek-Coder-V2-Lite-Base-GGUF)
* [ZeroWw/Yi-1.5-9B-32K-GGUF](https://huggingface.co/ZeroWw/Yi-1.5-9B-32K-GGUF)
* [ZeroWw/aya-23-8B-GGUF](https://huggingface.co/ZeroWw/aya-23-8B-GGUF)
* [ZeroWw/MixTAO-7Bx2-MoE-v8.1-GGUF](https://huggingface.co/ZeroWw/MixTAO-7Bx2-MoE-v8.1-GGUF)
* [ZeroWw/Phi-3-medium-128k-instruct-GGUF](https://huggingface.co/ZeroWw/Phi-3-medium-128k-instruct-GGUF)
* [ZeroWw/Phi-3-mini-128k-instruct-GGUF](https://huggingface.co/ZeroWw/Phi-3-mini-128k-instruct-GGUF)
* [ZeroWw/Qwen1.5-7B-Chat-GGUF](https://huggingface.co/ZeroWw/Qwen1.5-7B-Chat-GGUF)
* [ZeroWw/NeuralDaredevil-8B-abliterated-GGUF](https://huggingface.co/ZeroWw/NeuralDaredevil-8B-abliterated-GGUF)
* [ZeroWw/Mistroll-7B-v2.2-GGUF](https://huggingface.co/ZeroWw/Mistroll-7B-v2.2-GGUF)
* [ZeroWw/Samantha-Qwen-2-7B-GGUF](https://huggingface.co/ZeroWw/Samantha-Qwen-2-7B-GGUF)
* [ZeroWw/Meta-Llama-3-8B-Instruct-GGUF](https://huggingface.co/ZeroWw/Meta-Llama-3-8B-Instruct-GGUF)
* [ZeroWw/NSFW_DPO_Noromaid-7b-Mistral-7B-Instruct-v0.1-GGUF](https://huggingface.co/ZeroWw/NSFW_DPO_Noromaid-7b-Mistral-7B-Instruct-v0.1-GGUF)
* [ZeroWw/microsoft_WizardLM-2-7B-GGUF](https://huggingface.co/ZeroWw/microsoft_WizardLM-2-7B-GGUF)
* [ZeroWw/Mistral-7B-Instruct-v0.3-GGUF](https://huggingface.co/ZeroWw/Mistral-7B-Instruct-v0.3-GGUF)
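
To fetch a single file from one of the repositories above, here is a hedged sketch using huggingface_hub; the repo id is taken from the list, but the exact GGUF filename inside it is an assumption, so list the repo files first.

```python
# Sketch: download one GGUF from a repo listed above (pip install huggingface_hub).
# The filename is an assumption; check the output of list_repo_files first.
from huggingface_hub import hf_hub_download, list_repo_files

repo_id = "ZeroWw/Mistral-7B-Instruct-v0.3-GGUF"
print(list_repo_files(repo_id))  # see which quantizations are available

path = hf_hub_download(
    repo_id=repo_id,
    filename="Mistral-7B-Instruct-v0.3.q5_k.gguf",  # assumed name; pick one from the listing
)
print("downloaded to", path)
```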