TheBloke committed
Commit 98ac5fe
1 Parent(s): b864d6c

Update README.md

Files changed (1)
  1. README.md +1 -2
README.md CHANGED
@@ -24,7 +24,7 @@ These files are GGML format model files for [LmSys' Vicuna 7B v1.3](https://hugg
 GGML files are for CPU + GPU inference using [llama.cpp](https://github.com/ggerganov/llama.cpp) and libraries and UIs which support this format, such as:
 * [text-generation-webui](https://github.com/oobabooga/text-generation-webui)
 * [KoboldCpp](https://github.com/LostRuins/koboldcpp)
-* [ParisNeo/GPT4All-UI](https://github.com/ParisNeo/gpt4all-ui)
+* [LoLLMS Web UI](https://github.com/ParisNeo/lollms-webui)
 * [llama-cpp-python](https://github.com/abetlen/llama-cpp-python)
 * [ctransformers](https://github.com/marella/ctransformers)
 
@@ -89,7 +89,6 @@ Refer to the Provided Files table below to see what files use which methods, and
 | vicuna-7b-v1.3.ggmlv3.q6_K.bin | q6_K | 6 | 5.53 GB | 8.03 GB | New k-quant method. Uses GGML_TYPE_Q8_K - 6-bit quantization - for all tensors |
 | vicuna-7b-v1.3.ggmlv3.q8_0.bin | q8_0 | 8 | 7.16 GB | 9.66 GB | Original llama.cpp quant method, 8-bit. Almost indistinguishable from float16. High resource use and slow. Not recommended for most users. |
 
-
 **Note**: the above RAM figures assume no GPU offloading. If layers are offloaded to the GPU, this will reduce RAM usage and use VRAM instead.
 
 ## How to run in `llama.cpp`
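
The RAM-vs-VRAM note in the diff above can be made concrete with a minimal sketch of a llama.cpp run that offloads layers to the GPU. The layer count, context size, and prompt below are illustrative assumptions, not part of this commit; only the q6_K file name comes from the table above.

```
# Minimal sketch (assumed values): offload 32 layers to the GPU so those
# weights occupy VRAM instead of system RAM. Requires a GPU-enabled
# llama.cpp build (e.g. compiled with cuBLAS or CLBlast).
./main -m vicuna-7b-v1.3.ggmlv3.q6_K.bin \
  -ngl 32 -c 2048 \
  -p "USER: Write a short poem about llamas. ASSISTANT:"
```

Lowering `-ngl` keeps more of the model in system RAM; setting it to 0 reproduces the CPU-only figures quoted in the table.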