---
license: other
inference: false
---
# OpenAssistant LLaMA 30B SFT 7 GPTQ
This is a repo of GPTQ format models for [OpenAssistant's LLaMA 30B SFT 7](https://huggingface.co/OpenAssistant/oasst-sft-7-llama-30b-xor).
It is the result of merging the XORs from the above repo with the original Llama 30B weights, and then quantising to 4bit GPTQ for GPU inference using [GPTQ-for-LLaMa](https://github.com/qwopqwop200/GPTQ-for-LLaMa).
This is epoch 7 of OpenAssistant's training of their Llama 30B model.
## Repositories available
* [4bit GPTQ models for GPU inference](https://huggingface.co/TheBloke/OpenAssistant-SFT-7-Llama-30B-GPTQ).
* [4bit and 5bit GGML models for CPU inference](https://huggingface.co/TheBloke/OpenAssistant-SFT-7-Llama-30B-GGML).
* [Unquantised 16bit model in HF format](https://huggingface.co/TheBloke/OpenAssistant-SFT-7-Llama-30B-HF).
## PROMPT TEMPLATE
This model requires the following prompt template:
```
<|prompter|> prompt goes here
<|assistant|>:
```
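To make the template concrete, here is a tiny Python helper (purely illustrative; the function name is my own) that wraps a user message in this format:
```
def build_prompt(user_message: str) -> str:
    # Wrap the user's message in the OpenAssistant prompt template shown above
    return f"<|prompter|> {user_message}\n<|assistant|>:"

print(build_prompt("What is GPTQ quantisation?"))
```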
## How to easily download and use this model in text-generation-webui
Load text-generation-webui as you normally do.
1. Click the **Model tab**.
2. Under **Download custom model or LoRA**, enter this repo name: `TheBloke/OpenAssistant-SFT-7-Llama-30B-GPTQ`.
3. Click **Download**.
4. Wait until it says it's finished downloading.
5. As this is a GPTQ model, fill in the `GPTQ parameters` on the right: `Bits = 4`, `Groupsize = 128`, `model_type = Llama`
6. Now click the **Refresh** icon next to **Model** in the top left.
7. In the **Model drop-down**: choose this model: `OpenAssistant-SFT-7-Llama-30B-GPTQ`.
8. Click **Reload the Model** in the top right.
9. Once it says it's loaded, click the **Text Generation tab** and enter a prompt!
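If you prefer to fetch the files from the command line instead, here is a minimal sketch using the `huggingface_hub` Python library (an assumption on my part; it is not required by text-generation-webui) to download the `main` branch into the webui's models directory:
```
from huggingface_hub import snapshot_download

# Download all files in the repo's main branch into text-generation-webui's models dir
snapshot_download(
    repo_id="TheBloke/OpenAssistant-SFT-7-Llama-30B-GPTQ",
    local_dir="text-generation-webui/models/OpenAssistant-SFT-7-Llama-30B-GPTQ",
)
```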
## Provided files
I have uploaded two versions of the GPTQ model.
**Compatible file - OpenAssistant-SFT-7-Llama-30B-GPTQ-4bit.compat.no-act-order.safetensors**
In the `main` branch - the default one - you will find `OpenAssistant-SFT-7-Llama-30B-GPTQ-4bit.compat.no-act-order.safetensors`.
This file was created without the `--act-order` parameter, giving it maximum compatibility. It may have slightly lower inference quality than the other file, but is guaranteed to work with all versions of GPTQ-for-LLaMa and text-generation-webui.
* `OpenAssistant-SFT-7-Llama-30B-GPTQ-4bit.compat.no-act-order.safetensors`
* Works with all versions of GPTQ-for-LLaMa code, both Triton and CUDA branches
* Works with text-generation-webui one-click-installers
* Parameters: Groupsize = 128. No act-order.
* Command used to create the GPTQ:
```
CUDA_VISIBLE_DEVICES=0 python3 llama.py OpenAssistant-SFT-7-Llama-30B-HF c4 --wbits 4 --true-sequential --groupsize 128 --save_safetensors OpenAssistant-SFT-7-Llama-30B-GPTQ-4bit.compat.no-act-order.safetensors
```
**Latest file - OpenAssistant-SFT-7-Llama-30B-GPTQ-4bit.latest.act-order.safetensors**
Created for more recent versions of GPTQ-for-LLaMa, it uses the `--act-order` flag for maximum quantisation quality.
To access this file, please switch to the `latest` branch of this repo and download from there.
* `OpenAssistant-SFT-7-Llama-30B-GPTQ-4bit.latest.act-order.safetensors`
* Only works with recent GPTQ-for-LLaMa code
* **Does not** work with text-generation-webui one-click-installers
* Parameters: Groupsize = 128. **act-order**.
* Offers highest quality quantisation, but requires recent GPTQ-for-LLaMa code
* Command used to create the GPTQ:
```
CUDA_VISIBLE_DEVICES=0 python3 llama.py OpenAssistant-SFT-7-Llama-30B-HF c4 --wbits 4 --true-sequential --act-order --groupsize 128 --save_safetensors OpenAssistant-SFT-7-Llama-30B-GPTQ-4bit.latest.act-order.safetensors
```
## Manual instructions for `text-generation-webui`
File `OpenAssistant-SFT-7-Llama-30B-GPTQ-4bit.compat.no-act-order.safetensors` can be loaded the same as any other GPTQ file, without requiring any updates to [oobabooga's text-generation-webui](https://github.com/oobabooga/text-generation-webui).
[Instructions on using GPTQ 4bit files in text-generation-webui are here](https://github.com/oobabooga/text-generation-webui/wiki/GPTQ-models-\(4-bit-mode\)).
The other `safetensors` model file was created using `--act-order` to give the maximum possible quantisation quality, but this means it requires that the latest GPTQ-for-LLaMa is used inside the UI.
If you want to use the act-order `safetensors` file and need to update GPTQ-for-LLaMa, here are the commands I used to clone text-generation-webui and install the Triton branch of GPTQ-for-LLaMa inside it:
```
# Clone text-generation-webui, if you don't already have it
git clone https://github.com/oobabooga/text-generation-webui
# Make a repositories directory
mkdir text-generation-webui/repositories
cd text-generation-webui/repositories
# Clone the latest GPTQ-for-LLaMa code inside text-generation-webui
git clone https://github.com/qwopqwop200/GPTQ-for-LLaMa
```
Then install this model into `text-generation-webui/models` and launch the UI as follows:
```
cd text-generation-webui
python server.py --model OpenAssistant-SFT-7-Llama-30B-GPTQ --wbits 4 --groupsize 128 --model_type Llama # add any other command line args you want
```
The above commands assume you have installed all dependencies for GPTQ-for-LLaMa and text-generation-webui. Please see their respective repositories for further information.
If you can't update GPTQ-for-LLaMa, or don't want to, you can use `OpenAssistant-SFT-7-Llama-30B-GPTQ-4bit.compat.no-act-order.safetensors` as mentioned above, which should work without any upgrades to text-generation-webui.
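If you would rather skip text-generation-webui entirely, the compatible file can also be loaded from Python. The following is only a sketch, assuming the AutoGPTQ library (`pip install auto-gptq`) and a GPU with enough VRAM; AutoGPTQ is not the tool that created these files, and it expects the quantisation parameters (bits, groupsize) to be available via a `quantize_config.json` or passed in explicitly:
```
from transformers import AutoTokenizer
from auto_gptq import AutoGPTQForCausalLM

repo = "TheBloke/OpenAssistant-SFT-7-Llama-30B-GPTQ"

tokenizer = AutoTokenizer.from_pretrained(repo)
# Load the 4bit safetensors weights onto the first GPU
model = AutoGPTQForCausalLM.from_quantized(repo, use_safetensors=True, device="cuda:0")

# Use the prompt template described earlier in this card
prompt = "<|prompter|> What is GPTQ quantisation?\n<|assistant|>:"
inputs = tokenizer(prompt, return_tensors="pt").to("cuda:0")
output = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```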
# Original model card
```
llama-30b-sft-7:
dtype: fp16
log_dir: "llama_log_30b"
learning_rate: 1e-5
model_name: /home/ubuntu/Open-Assistant/model/model_training/.saved/llama-30b-super-pretrain/checkpoint-3500
#model_name: OpenAssistant/llama-30b-super-pretrain
output_dir: llama_model_30b
deepspeed_config: configs/zero3_config_sft.json
weight_decay: 0.0
residual_dropout: 0.0
max_length: 2048
use_flash_attention: true
warmup_steps: 20
gradient_checkpointing: true
gradient_accumulation_steps: 12
per_device_train_batch_size: 2
per_device_eval_batch_size: 3
eval_steps: 101
save_steps: 485
num_train_epochs: 4
save_total_limit: 3
use_custom_sampler: true
sort_by_length: false
#save_strategy: steps
save_strategy: epoch
datasets:
  - oasst_export:
      lang: "bg,ca,cs,da,de,en,es,fr,hr,hu,it,nl,pl,pt,ro,ru,sl,sr,sv,uk"
      input_file_path: 2023-04-12_oasst_release_ready_synth.jsonl.gz
      val_split: 0.05
  - vicuna:
      val_split: 0.05
      max_val_set: 800
      fraction: 1.0
  - dolly15k:
      val_split: 0.05
      max_val_set: 300
  - grade_school_math_instructions:
      val_split: 0.05
  - code_alpaca:
      val_split: 0.05
      max_val_set: 250
```
- **OASST dataset paper:** https://arxiv.org/abs/2304.07327