license: other
inference: false

OpenAssistant LLaMA 30B SFT 7 GPTQ

This is a repo of GPTQ format models for OpenAssistant's LLaMA 30B SFT 7.

It is the result of merging the XORs from the above repo with the original Llama 30B weights, and then quantising to 4bit GPTQ for GPU inference using GPTQ-for-LLaMa.

This is epoch 7 of OpenAssistant's training of their Llama 30B model.

Repositories available

Prompt template

This model requires the following prompt template:

<|prompter|> prompt goes here
<|assistant|>:
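
For example, a minimal Python sketch that assembles a prompt in this format (the build_prompt helper name is just for illustration):

def build_prompt(user_message: str) -> str:
    # Wrap the user's message in the prompter/assistant tokens shown above,
    # so that the model's reply begins right after "<|assistant|>:".
    return f"<|prompter|> {user_message}\n<|assistant|>:"

print(build_prompt("Explain what GPTQ quantisation does."))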

How to easily download and use this model in text-generation-webui

Load text-generation-webui as you normally do.

  1. Click the Model tab.
  2. Under Download custom model or LoRA, enter this repo name: TheBloke/stable-vicuna-13B-GPTQ.
  3. Click Download.
  4. Wait until it says it's finished downloading.
  5. As this is a GPTQ model, fill in the GPTQ parameters on the right: Bits = 4, Groupsize = 128, model_type = Llama
  6. Now click the Refresh icon next to Model in the top left.
  7. In the Model drop-down: choose this model: stable-vicuna-13B-GPTQ.
  8. Click Reload the Model in the top right.
  9. Once it says it's loaded, click the Text Generation tab and enter a prompt!
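
If you prefer to fetch the files outside the UI, here is a rough Python sketch using the huggingface_hub library; the repo id is the one entered in step 2 above, and the target directory is an assumption you should adapt to your own install:

from huggingface_hub import snapshot_download

# Download all files in the repo (quantised weights, tokenizer, config)
# into text-generation-webui's models directory.
snapshot_download(
    repo_id="TheBloke/stable-vicuna-13B-GPTQ",
    local_dir="text-generation-webui/models/stable-vicuna-13B-GPTQ",
)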

Provided files

I have uploaded two versions of the GPTQ.

Compatible file - stable-vicuna-13B-GPTQ-4bit.compat.no-act-order.safetensors

In the main branch - the default one - you will find stable-vicuna-13B-GPTQ-4bit.compat.no-act-order.safetensors

This will work with all versions of GPTQ-for-LLaMa. It has maximum compatibility.

It was created without the --act-order parameter. It may have slightly lower inference quality compared to the other file, but is guaranteed to work on all versions of GPTQ-for-LLaMa and text-generation-webui.

  • stable-vicuna-13B-GPTQ-4bit.compat.no-act-order.safetensors
    • Works with all versions of GPTQ-for-LLaMa code, both Triton and CUDA branches
    • Works with text-generation-webui one-click-installers
    • Parameters: Groupsize = 128g. No act-order.
    • Command used to create the GPTQ:
      CUDA_VISIBLE_DEVICES=0 python3 llama.py stable-vicuna-13B-HF c4 --wbits 4 --true-sequential --groupsize 128 --save_safetensors stable-vicuna-13B-GPTQ-4bit.no-act-order.safetensors
      

Latest file - stable-vicuna-13B-GPTQ-4bit.latest.act-order.safetensors

Created for more recent versions of GPTQ-for-LLaMa, and uses the --act-order flag for maximum theoretical performance.

To access this file, please switch to the latest branch of this repo and download from there.

  • stable-vicuna-13B-GPTQ-4bit.latest.act-order.safetensors
    • Only works with recent GPTQ-for-LLaMa code
    • Does not work with text-generation-webui one-click-installers
    • Parameters: Groupsize = 128g. act-order.
    • Offers highest quality quantisation, but requires recent GPTQ-for-LLaMa code
    • Command used to create the GPTQ:
      CUDA_VISIBLE_DEVICES=0 python3 llama.py stable-vicuna-13B-HF c4 --wbits 4 --true-sequential --act-order --groupsize 128 --save_safetensors stable-vicuna-13B-GPTQ-4bit.act-order.safetensors
      

Manual instructions for text-generation-webui

File stable-vicuna-13B-GPTQ-4bit.compat.no-act-order.safetensors can be loaded the same as any other GPTQ file, without requiring any updates to oobabooga's text-generation-webui.

Instructions on using GPTQ 4bit files in text-generation-webui are here.

The other safetensors model file was created using --act-order to give the maximum possible quantisation quality, but this means it requires that the latest GPTQ-for-LLaMa is used inside the UI.

If you want to use the act-order safetensors file and need to update GPTQ-for-LLaMa, here are the commands I used to clone text-generation-webui, clone the Triton branch of GPTQ-for-LLaMa inside it, and install GPTQ into the UI:

# Clone text-generation-webui, if you don't already have it
git clone https://github.com/oobabooga/text-generation-webui
# Make a repositories directory
mkdir text-generation-webui/repositories
cd text-generation-webui/repositories
# Clone the latest GPTQ-for-LLaMa code inside text-generation-webui
git clone https://github.com/qwopqwop200/GPTQ-for-LLaMa

Then install this model into text-generation-webui/models and launch the UI as follows:

cd text-generation-webui
python server.py --model stable-vicuna-13B-GPTQ --wbits 4 --groupsize 128 --model_type Llama # add any other command line args you want

The above commands assume you have installed all dependencies for GPTQ-for-LLaMa and text-generation-webui. Please see their respective repositories for further information.

If you can't update GPTQ-for-LLaMa or don't want to, you can use stable-vicuna-13B-GPTQ-4bit.no-act-order.safetensors as mentioned above, which should work without any upgrades to text-generation-webui.
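
If you want to use the compat file outside text-generation-webui altogether, one option, not covered by this repo's instructions and shown here only as a hedged sketch, is the separate AutoGPTQ library; the model directory, basename, and generation settings below are assumptions to adapt to your setup:

from transformers import AutoTokenizer
from auto_gptq import AutoGPTQForCausalLM

# Directory containing the downloaded model files (tokenizer, config, .safetensors).
model_dir = "models/stable-vicuna-13B-GPTQ"

tokenizer = AutoTokenizer.from_pretrained(model_dir)
model = AutoGPTQForCausalLM.from_quantized(
    model_dir,
    model_basename="stable-vicuna-13B-GPTQ-4bit.compat.no-act-order",
    use_safetensors=True,
    device="cuda:0",
)

# Use the prompt template described earlier in this README.
prompt = "<|prompter|> What is GPTQ quantisation?\n<|assistant|>:"
inputs = tokenizer(prompt, return_tensors="pt").to("cuda:0")
output = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(output[0], skip_special_tokens=True))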

Original model card

llama-30b-sft-7:
  dtype: fp16
  log_dir: "llama_log_30b"
  learning_rate: 1e-5
  model_name: /home/ubuntu/Open-Assistant/model/model_training/.saved/llama-30b-super-pretrain/checkpoint-3500
  #model_name: OpenAssistant/llama-30b-super-pretrain
  output_dir: llama_model_30b
  deepspeed_config: configs/zero3_config_sft.json
  weight_decay: 0.0
  residual_dropout: 0.0
  max_length: 2048
  use_flash_attention: true
  warmup_steps: 20
  gradient_checkpointing: true
  gradient_accumulation_steps: 12
  per_device_train_batch_size: 2
  per_device_eval_batch_size: 3
  eval_steps: 101
  save_steps: 485
  num_train_epochs: 4
  save_total_limit: 3
  use_custom_sampler: true
  sort_by_length: false
  #save_strategy: steps
  save_strategy: epoch
  datasets:
    - oasst_export:
        lang: "bg,ca,cs,da,de,en,es,fr,hr,hu,it,nl,pl,pt,ro,ru,sl,sr,sv,uk"
        input_file_path: 2023-04-12_oasst_release_ready_synth.jsonl.gz
        val_split: 0.05
    - vicuna:
        val_split: 0.05
        max_val_set: 800
        fraction: 1.0
    - dolly15k:
        val_split: 0.05
        max_val_set: 300
    - grade_school_math_instructions:
        val_split: 0.05
    - code_alpaca:
        val_split: 0.05
        max_val_set: 250