---
license: other
inference: false
---

# Quantised GGMLs of alpaca-lora-65B

Quantised 4-bit and 5-bit GGMLs of [chansung's alpaca-lora-65B](https://huggingface.co/chansung/alpaca-lora-65b) for CPU inference with [llama.cpp](https://github.com/ggerganov/llama.cpp).

I also have 4-bit GPTQ files for GPU inference available here: [TheBloke/alpaca-lora-65B-GPTQ-4bit](https://huggingface.co/TheBloke/alpaca-lora-65B-GPTQ-4bit).

## THE FILES IN MAIN BRANCH REQUIRE LATEST LLAMA.CPP (May 19th 2023 - commit 2d5db48)!

llama.cpp recently made another breaking change to its quantisation methods: https://github.com/ggerganov/llama.cpp/pull/1508

I have quantised the GGML files in this repo with the latest version. You will therefore need llama.cpp compiled on May 19th 2023 or later (commit `2d5db48` or later) to use them; a quick way to check your build is sketched at the end of this card.

For files compatible with the previous version of llama.cpp, please see branch `previous_llama_ggmlv2`.

## Provided files

| Name | Quant method | Bits | Size | RAM required | Use case |
| ---- | ---- | ---- | ---- | ---- | ---- |
| `alpaca-lora-65B.ggmlv3.q4_0.bin` | q4_0 | 4-bit | 40.8GB | 43GB | 4-bit. |
| `alpaca-lora-65B.ggmlv3.q4_1.bin` | q4_1 | 4-bit | 44.9GB | 47GB | 4-bit. Higher accuracy than q4_0 but not as high as q5_0, with quicker inference than the q5 models. |
| `alpaca-lora-65B.ggmlv3.q5_0.bin` | q5_0 | 5-bit | 44.9GB | 47GB | 5-bit. Higher quality than 4-bit, at the cost of slightly higher resource usage. |
| `alpaca-lora-65B.ggmlv3.q5_1.bin` | q5_1 | 5-bit | 49GB | 51GB | 5-bit. Slightly higher resource usage and quality than q5_0. |

## How to run in `llama.cpp`

I use the following command line; adjust for your tastes and needs:

```
./main -t 8 -m alpaca-lora-65B.ggmlv3.q4_0.bin --color -c 2048 --temp 0.7 --repeat_penalty 1.1 -n -1 -p "Below is an instruction that describes a task. Write a response that appropriately completes the request. ### Instruction: Write a story about llamas ### Response:"
```

Change `-t 8` to the number of physical CPU cores you have. For example, if your system has 8 cores/16 threads, use `-t 8`.

If you want to have a chat-style conversation, replace the `-p` argument with `-i -ins`; an example interactive invocation is also sketched at the end of this card.

## How to run in `text-generation-webui`

Further instructions here: [text-generation-webui/docs/llama.cpp-models.md](https://github.com/oobabooga/text-generation-webui/blob/main/docs/llama.cpp-models.md).

Note: at this time text-generation-webui may not support the new May 19th llama.cpp quantisation methods for q4_0, q4_1 and q8_0 files.

## Want to support my work?

I've had a lot of people ask if they can contribute. I love providing models and helping people, but it is starting to rack up pretty big cloud computing bills.

So if you're able and willing to contribute, it'd be most gratefully received, and will help me to keep providing models and to work on various AI projects.

Donaters will get priority support on any and all AI/LLM/model questions, and I'll gladly quantise any model you'd like to try.

* Patreon: coming soon! (just awaiting approval)
* Ko-Fi: https://ko-fi.com/TheBlokeAI
* Discord: https://discord.gg/UBgz4VXf

# Original model card not provided

No model card was provided in [chansung's original repository](https://huggingface.co/chansung/alpaca-lora-65b).

Based on the name, I assume this is the result of fine-tuning with the original GPT 3.5 Alpaca dataset. It is unknown whether the original Stanford data was used or the [cleaned tloen/alpaca-lora variant](https://github.com/tloen/alpaca-lora).
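As referenced above, here is a minimal sketch of building llama.cpp from source and checking that your checkout is new enough for the GGMLv3 files in `main`. The commit hash `2d5db48` comes from the compatibility note above; the rest is the standard llama.cpp build procedure, so adjust to taste:

```
git clone https://github.com/ggerganov/llama.cpp
cd llama.cpp
# Check that HEAD includes the May 19th 2023 quantisation change (commit 2d5db48);
# this prints "new enough" only if that commit is an ancestor of your checkout.
git merge-base --is-ancestor 2d5db48 HEAD && echo "new enough"
make
```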
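And here is a sketch of the chat-style invocation described in the llama.cpp section, assuming the q5_0 file and 8 physical cores; substitute whichever model file and thread count suit your system:

```
# -i enables interactive mode and -ins enables Alpaca-style instruction mode,
# replacing the one-shot -p "<PROMPT>" argument.
./main -t 8 -m alpaca-lora-65B.ggmlv3.q5_0.bin --color -c 2048 --temp 0.7 --repeat_penalty 1.1 -i -ins
```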