Update README.md
README.md CHANGED
@@ -17,7 +17,7 @@ It was created by merging the LoRA provided in the above repo with the original
 
 You will need at least 60GB VRAM to use this model.
 
-For a [GPTQ](https://github.com/qwopqwop200/GPTQ-for-LLaMa) quantized 4bit model, usable on a 24GB GPU, see: [GPT4-Alpaca-LoRA-30B-
+For a [GPTQ](https://github.com/qwopqwop200/GPTQ-for-LLaMa) quantized 4bit model, usable on a 24GB GPU, see: [GPT4-Alpaca-LoRA-30B-GPTQ-4bit-128g](https://huggingface.co/TheBloke/gpt4-alpaca-lora-30B-GPTQ-4bit-128g)
 
 # Original GPT4 Alpaca Lora model card
 
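For context on the 60GB VRAM figure mentioned in the diff: loading a 30B-parameter model in float16 takes roughly 2 bytes per parameter, so the full fp16 merge needs to be sharded across large GPUs. The sketch below is an assumption, not part of the README: the model path is a placeholder for wherever the merged fp16 weights live, and the Alpaca-style instruction prompt is assumed. It uses only standard `transformers` APIs (`device_map="auto"` requires `accelerate` and shards the weights across available GPUs).

```python
# Minimal loading sketch for the merged fp16 model (not from the README).
# "path/to/merged-model" is a placeholder, not a confirmed repo name.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "path/to/merged-model"  # placeholder for the merged fp16 weights
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.float16,  # fp16 weights: ~2 bytes/param, ~60GB at 30B
    device_map="auto",          # shard across available GPUs via accelerate
)

# Assumed Alpaca-style instruction format; adjust to the model's actual template.
prompt = "### Instruction:\nExplain LoRA in one sentence.\n\n### Response:\n"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```

The GPTQ 4-bit variant linked in the diff trades precision for memory (4 bits per weight instead of 16), which is why it fits on a single 24GB GPU; it is loaded through the GPTQ-for-LLaMa tooling rather than this plain `transformers` path.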