TheBloke committed
Commit b4abafa
1 Parent(s): e80c05f

Upload README.md

Files changed (1)
README.md +0 -1
README.md CHANGED
@@ -62,7 +62,6 @@ Note that, at the time of writing, overall throughput is still lower than runnin
 
 * [AWQ model(s) for GPU inference.](https://huggingface.co/TheBloke/StellarBright-AWQ)
 * [GPTQ models for GPU inference, with multiple quantisation parameter options.](https://huggingface.co/TheBloke/StellarBright-GPTQ)
-* [2, 3, 4, 5, 6 and 8-bit GGUF models for CPU+GPU inference](https://huggingface.co/TheBloke/StellarBright-GGUF)
 * [scott's original unquantised fp16 model in pytorch format, for GPU inference and for further conversions](https://huggingface.co/sequelbox/StellarBright)
 <!-- repositories-available end -->
 