Meta AI's LLaMA 13B quantized to 4-bit with the GPTQ algorithm (v2).
- `llama13b-4bit-ts-ao-g128-v2.safetensors` (GPTQ implementation: https://github.com/qwopqwop200/GPTQ-for-LLaMa/tree/49efe0b67db4b40eac2ae963819ebc055da64074)
Conversion process:

```sh
CUDA_VISIBLE_DEVICES=0 python llama.py ./llama-13b c4 --wbits 4 --true-sequential --act-order --groupsize 128 --save_safetensors ./q4/llama13b-4bit-ts-ao-g128-v2.safetensors
```
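For a quick sanity check of the resulting file, the packed tensors can be listed with the `safetensors` library. This is a minimal sketch, assuming the usual GPTQ-for-LLaMa tensor names (`qweight`, `qzeros`, `scales`, `g_idx`); verify them against the commit linked above:

```python
# Minimal sketch: list the packed tensors GPTQ writes for each linear layer.
# The tensor names (qweight/qzeros/scales/g_idx) are an assumption based on
# the usual GPTQ-for-LLaMa packing, not taken from this model card.
from safetensors import safe_open

with safe_open("./q4/llama13b-4bit-ts-ao-g128-v2.safetensors", framework="pt") as f:
    for name in sorted(f.keys()):
        tensor = f.get_tensor(name)
        print(f"{name:70s} {tuple(tensor.shape)} {tensor.dtype}")
```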
- `llama13b-4bit-v2.safetensors` (GPTQ implementation: https://github.com/qwopqwop200/GPTQ-for-LLaMa/tree/841feedde876785bc8022ca48fd9c3ff626587e2)
Note: this model (quantized with `--act-order` but without `--groupsize`, see below) will fail to load with the current GPTQ-for-LLaMa implementation.
Conversion process:

```sh
CUDA_VISIBLE_DEVICES=0 python llama.py ./llama-13b c4 --wbits 4 --true-sequential --act-order --save_safetensors ./q4/llama13b-4bit-v2.safetensors
```
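The practical difference between the two files is the grouping: with `--groupsize 128` each layer stores one `scales` row per 128 input columns, while without it a single group spans the whole row. A minimal sketch that infers the effective group size from tensor shapes, under the same naming and packing assumptions as above:

```python
# Minimal sketch: infer the effective group size of a 4-bit GPTQ checkpoint.
# Assumes the usual GPTQ-for-LLaMa packing, where qweight packs eight 4-bit
# values per int32 along dim 0 and scales has one row per quantization group.
from safetensors import safe_open

with safe_open("./q4/llama13b-4bit-v2.safetensors", framework="pt") as f:
    for name in f.keys():
        if name.endswith(".scales"):
            scales = f.get_tensor(name)
            qweight = f.get_tensor(name.replace(".scales", ".qweight"))
            in_features = qweight.shape[0] * 8  # 32 bits / 4 bits per value
            print(f"{name}: group size = {in_features // scales.shape[0]}")
            break  # one layer is enough; all layers share the same setting
```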