---
base_model: meta-llama/Meta-Llama-3-8B-Instruct
inference: false
model_creator: astronomer-io
model_name: Meta-Llama-3-8B-Instruct
model_type: llama
pipeline_tag: text-generation
prompt_template: >-
  {% set loop_messages = messages %}{% for message in loop_messages %}{% set
  content = '<|start_header_id|>' + message['role'] + '<|end_header_id|>


  '+ message['content'] | trim + '<|eot_id|>' %}{% if loop.index0 == 0 %}{% set
  content = bos_token + content %}{% endif %}{{ content }}{% endfor %}{% if
  add_generation_prompt %}{{ '<|start_header_id|>assistant<|end_header_id|>


  ' }}{% endif %}
quantized_by: davidxmle
license: other
license_name: llama-3-community-license
license_link: https://huggingface.co/meta-llama/Meta-Llama-3-8B-Instruct/blob/main/LICENSE
tags:
  - llama
  - llama-3
  - facebook
  - meta
  - astronomer
  - gptq
  - pretrained
  - quantized
  - finetuned
  - autotrain_compatible
  - endpoints_compatible
datasets:
  - wikitext
---
Astronomer

This model was generously created and made open source by Astronomer.

Astronomer is the de facto company for Apache Airflow, the most trusted open-source framework for data orchestration and MLOps.


Llama-3-8B-Instruct-GPTQ-4-Bit

Important Note About Serving with vLLM & oobabooga/text-generation-webui

  • For loading this model onto vLLM, make sure all requests include "stop_token_ids": [128001, 128009] to work around the non-stop generation issue (a minimal sketch follows this list).
  • For oobabooga/text-generation-webui
    • Load the model via AutoGPTQ with no_inject_fused_attention enabled. This works around a bug in the AutoGPTQ library.
    • Under Parameters -> Generation -> Skip special tokens: turn this off (deselect)
    • Under Parameters -> Generation -> Custom stopping strings: add "<|end_of_text|>","<|eot_id|>" to the field
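The same stop-token workaround can also be applied through vLLM's offline Python API instead of the OpenAI-compatible server. The sketch below is illustrative only; the prompt construction and sampling settings are assumptions, not part of this card.

# Minimal sketch: offline vLLM inference with the stop-token workaround.
from transformers import AutoTokenizer
from vllm import LLM, SamplingParams

model_id = "astronomer-io/Llama-3-8B-Instruct-GPTQ-4-Bit"
tokenizer = AutoTokenizer.from_pretrained(model_id)

# stop_token_ids covers <|end_of_text|> (128001) and <|eot_id|> (128009).
llm = LLM(model=model_id, max_model_len=8192, dtype="float16")
params = SamplingParams(max_tokens=512, stop_token_ids=[128001, 128009])

prompt = tokenizer.apply_chat_template(
    [{"role": "user", "content": "Who created Llama 3?"}],
    tokenize=False,
    add_generation_prompt=True,
)
print(llm.generate([prompt], params)[0].outputs[0].text)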

Description

This repo contains 4-bit GPTQ quantized model files for meta-llama/Meta-Llama-3-8B-Instruct.

This model can be loaded with less than 6 GB of VRAM (a huge reduction from the original 16.07 GB) and can be served lightning fast on inexpensive Nvidia GPUs (Nvidia T4, Nvidia K80, RTX 4070, etc.).

The 4-bit GPTQ quant shows a small quality degradation relative to the original bfloat16 model but can be served on much smaller GPUs with substantial gains in latency and throughput.
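For reference, here is a minimal sketch of loading the quantized weights directly with Hugging Face transformers (which dispatches GPTQ checkpoints through optimum/auto-gptq). The repo id matches the one used in the vLLM command further down; the generation settings are illustrative assumptions, not a prescribed configuration.

# Minimal loading sketch; requires transformers, optimum and auto-gptq to be installed.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "astronomer-io/Llama-3-8B-Instruct-GPTQ-4-Bit"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    device_map="auto",        # place the quantized weights (< 6 GB) on the GPU
    torch_dtype=torch.float16,
)

messages = [{"role": "user", "content": "Who created Llama 3?"}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

output = model.generate(
    inputs,
    max_new_tokens=256,
    eos_token_id=[128001, 128009],  # <|end_of_text|> and <|eot_id|>
)
print(tokenizer.decode(output[0][inputs.shape[-1]:], skip_special_tokens=True))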

GPTQ Quantization Method

  • This model was quantized with the AutoGPTQ library, following best practices noted in the GPTQ paper.
  • Quantization is calibrated with random samples from the specified dataset (wikitext for now) to minimize accuracy loss (see the sketch after the table below).
| Branch | Bits | Group Size | Act Order | Damp % | GPTQ Dataset | Sequence Length | VRAM Size | ExLlama | Description |
| ------ | ---- | ---------- | --------- | ------ | ------------ | --------------- | --------- | ------- | ----------- |
| main | 4 | 128 | Yes | 0.1 | wikitext | 8192 | 5.74 GB | Yes | 4-bit, with Act Order and group size 128g. Smallest model possible, with small accuracy loss. |
| More variants to come | TBD | TBD | TBD | TBD | TBD | TBD | TBD | TBD | Additional GPTQ 4-bit variants with different parameters (e.g. other group sizes) may be uploaded in the future. |
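For illustration, here is a rough sketch of how a quantization run with the settings in the main branch might look using AutoGPTQ. The calibration-sample selection and sample count below are assumptions, not the exact recipe used for this repo.

from datasets import load_dataset
from transformers import AutoTokenizer
from auto_gptq import AutoGPTQForCausalLM, BaseQuantizeConfig

base_model = "meta-llama/Meta-Llama-3-8B-Instruct"
tokenizer = AutoTokenizer.from_pretrained(base_model)

# 4-bit, group size 128, act order ("desc_act"), damp 0.1 -- matching the table above.
quantize_config = BaseQuantizeConfig(
    bits=4,
    group_size=128,
    desc_act=True,
    damp_percent=0.1,
)

# Random wikitext samples as calibration data (sample count is an assumption).
wikitext = load_dataset("wikitext", "wikitext-2-raw-v1", split="train")
texts = [t for t in wikitext["text"] if len(t) > 200][:128]
examples = [tokenizer(t, truncation=True, max_length=8192) for t in texts]

model = AutoGPTQForCausalLM.from_pretrained(base_model, quantize_config)
model.quantize(examples)
model.save_quantized("Llama-3-8B-Instruct-GPTQ-4-Bit", use_safetensors=True)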

Serving this GPTQ model using vLLM

This model was tested for serving via vLLM on an Nvidia T4 (16 GB VRAM).

It was served with the following command:

python -m vllm.entrypoints.openai.api_server --model astronomer-io/Llama-3-8B-Instruct-GPTQ-4-Bit --max-model-len 8192 --dtype float16

To address the non-stop token generation bug, make sure requests sent to the vLLM endpoint include "stop_token_ids": [128001, 128009]. Example:

{
    "model": "astronomer-io/Llama-3-8B-Instruct-GPTQ-4-Bit",
    "messages": [
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": "Who created Llama 3?"}
        ],
    "max_tokens": 2000,
    "stop_token_ids":[128001,128009]
}
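For example, assuming the server started above is listening on vLLM's default host and port (localhost:8000), the request can be sent from Python like this:

import requests

payload = {
    "model": "astronomer-io/Llama-3-8B-Instruct-GPTQ-4-Bit",
    "messages": [
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": "Who created Llama 3?"},
    ],
    "max_tokens": 2000,
    "stop_token_ids": [128001, 128009],  # workaround for the non-stop generation issue
}

response = requests.post("http://localhost:8000/v1/chat/completions", json=payload)
print(response.json()["choices"][0]["message"]["content"])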

Prompt Template

<|begin_of_text|><|start_header_id|>user<|end_header_id|>
{{prompt}}<|eot_id|>
<|start_header_id|>assistant<|end_header_id|>
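Rather than assembling the special tokens by hand, the chat template bundled with the tokenizer (shown in the metadata above) produces the same format; a small sketch, assuming the tokenizer from this repo:

from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("astronomer-io/Llama-3-8B-Instruct-GPTQ-4-Bit")
prompt = tokenizer.apply_chat_template(
    [{"role": "user", "content": "Who created Llama 3?"}],
    tokenize=False,
    add_generation_prompt=True,
)
print(prompt)  # prints the fully formatted prompt, ending with the assistant header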

Contributors