Here is a 4-bit GPTQ quantized version
#5 opened by chplushsieh
https://huggingface.co/chplushsieh/Meta-Llama-3-8B-Instruct-abliterated-v3-GPTQ-4bit
for people who want to use it with GPTQ on an 8 GB VRAM GPU.
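For context on why a 4-bit quant of an 8B model fits in 8 GB of VRAM, here is a rough back-of-the-envelope estimate. The parameter count and overhead figures are assumptions for illustration, not measurements from this checkpoint:

```python
# Rough VRAM estimate for a 4-bit GPTQ quantized 8B model.
# Assumptions (not measured from this repo): ~8.03e9 parameters for
# Llama-3-8B, 4 bits per weight, and ~1.5 GB of extra headroom for
# quantization scales/zeros, KV cache, activations, and CUDA context.

params = 8.03e9
bits_per_weight = 4
weight_gb = params * bits_per_weight / 8 / 1e9  # bits -> bytes -> GB
overhead_gb = 1.5  # ballpark assumption, varies with context length
total_gb = weight_gb + overhead_gb
print(f"weights ~ {weight_gb:.1f} GB, total ~ {total_gb:.1f} GB")
# weights ~ 4.0 GB, total ~ 5.5 GB
```

At FP16 the same weights alone would need roughly 16 GB, which is why the 4-bit version is the one that fits on an 8 GB card.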