---
license: llama3
library_name: transformers
---

# Description

This is a 4-bit GPTQ quantization of [failspy/Meta-Llama-3-8B-Instruct-abliterated-v3](https://huggingface.co/failspy/Meta-Llama-3-8B-Instruct-abliterated-v3). It was quantized using the `wikitext2` calibration dataset. The file size is 5.73 GB, so the model fits on a GPU with 8 GB of VRAM.
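
A minimal loading sketch with `transformers` (assumes `optimum` and a GPTQ backend such as `auto-gptq` are installed; the repo ID below is a placeholder for this model's actual Hub ID):

```python
# Sketch: load the 4-bit GPTQ checkpoint with transformers.
# NOTE: the repo ID is a placeholder -- replace it with this model's Hub ID.
from transformers import AutoModelForCausalLM, AutoTokenizer
import torch

model_id = "<this-repo>/Meta-Llama-3-8B-Instruct-abliterated-v3-GPTQ"  # placeholder

if torch.cuda.is_available():  # the quantized weights need roughly 6 GB of VRAM
    tokenizer = AutoTokenizer.from_pretrained(model_id)
    model = AutoModelForCausalLM.from_pretrained(
        model_id,
        device_map="auto",  # place layers on the available GPU automatically
        torch_dtype=torch.float16,
    )
```

GPTQ metadata is stored in the checkpoint's `quantization_config`, so no extra quantization arguments are needed at load time.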