TheBloke committed
Commit 8db02f1 · 1 Parent(s): ac17f3c

Update README.md

Files changed (1):
  README.md +1 -1
README.md CHANGED
@@ -17,7 +17,7 @@ It was created by merging the LoRA provided in the above repo with the original
 
 You will need at least 60GB VRAM to use this model.
 
-For a [GPTQ](https://github.com/qwopqwop200/GPTQ-for-LLaMa) quantized 4bit model, usable on a 24GB GPU, see: [GPT4-Alpaca-LoRA-30B-HF](https://huggingface.co/TheBloke/gpt4-alpaca-lora-30B-GPTQ-4bit-128g)
+For a [GPTQ](https://github.com/qwopqwop200/GPTQ-for-LLaMa) quantized 4bit model, usable on a 24GB GPU, see: [GPT4-Alpaca-LoRA-30B-GPTQ-4bit-128g](https://huggingface.co/TheBloke/gpt4-alpaca-lora-30B-GPTQ-4bit-128g)
 
 # Original GPT4 Alpaca Lora model card
 
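The corrected link text now matches the repo it points at: the 4-bit GPTQ checkpoint intended for a single 24GB GPU. As a minimal sketch only (none of this code is part of the commit), loading that checkpoint might look like the following. It assumes the AutoGPTQ library rather than the GPTQ-for-LLaMa scripts linked in the README, and the Alpaca-style prompt format is likewise an assumption:

```python
# Hedged sketch: load TheBloke/gpt4-alpaca-lora-30B-GPTQ-4bit-128g with AutoGPTQ.
# AutoGPTQ is a swapped-in choice here; the README links the GPTQ-for-LLaMa repo.
from transformers import AutoTokenizer
from auto_gptq import AutoGPTQForCausalLM

model_id = "TheBloke/gpt4-alpaca-lora-30B-GPTQ-4bit-128g"

tokenizer = AutoTokenizer.from_pretrained(model_id, use_fast=True)

# use_safetensors (and possibly model_basename) may need adjusting to match
# the checkpoint filename actually present in the repo.
model = AutoGPTQForCausalLM.from_quantized(
    model_id,
    device="cuda:0",
    use_safetensors=True,
)

# Alpaca-style instruction prompt (assumed format, not stated in this commit).
prompt = "### Instruction:\nExplain GPTQ quantization briefly.\n\n### Response:\n"
inputs = tokenizer(prompt, return_tensors="pt").to("cuda:0")
output = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```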