|
--- |
|
base_model: meta-llama/Meta-Llama-3-8B-Instruct |
|
inference: false |
|
model_creator: astronomer-io |
|
model_name: Meta-Llama-3-8B-Instruct |
|
model_type: llama |
|
pipeline_tag: text-generation |
|
prompt_template: "{% set loop_messages = messages %}{% for message in loop_messages %}{% set content = '<|start_header_id|>' + message['role'] + '<|end_header_id|>\n\n'+ message['content'] | trim + '<|eot_id|>' %}{% if loop.index0 == 0 %}{% set content = bos_token + content %}{% endif %}{{ content }}{% endfor %}{% if add_generation_prompt %}{{ '<|start_header_id|>assistant<|end_header_id|>\n\n' }}{% endif %}" |
|
quantized_by: davidxmle |
|
license: other |
|
license_name: llama-3-community-license |
|
license_link: https://huggingface.co/meta-llama/Meta-Llama-3-8B-Instruct/blob/main/LICENSE |
|
tags: |
|
- llama |
|
- llama-3 |
|
- facebook |
|
- meta |
|
- astronomer |
|
- gptq |
|
- pretrained |
|
- quantized |
|
- finetuned |
|
- autotrain_compatible |
|
- endpoints_compatible |
|
datasets: |
|
- wikitext |
|
--- |
|
|
|
# Llama-3-8B-Instruct-GPTQ-4-Bit |
|
- Original model creator: [Meta Llama](https://huggingface.co/meta-llama)
|
- Original model: [meta-llama/Meta-Llama-3-8B-Instruct](https://huggingface.co/meta-llama/Meta-Llama-3-8B-Instruct) |
|
- Built with Meta Llama 3 |
|
- Quantized by [Astronomer](https://astronomer.io) |
|
|
|
# Important Note About Serving with vLLM & oobabooga/text-generation-webui |
|
- When serving this model with vLLM, make sure all requests include `"stop_token_ids": [128001, 128009]` to temporarily address the non-stop generation issue.
|
- vLLM does not yet respect `generation_config.json`. |
|
  - The vLLM team is working on a fix: https://github.com/vllm-project/vllm/issues/4180
|
- For oobabooga/text-generation-webui |
|
  - Load the model via AutoGPTQ with `no_inject_fused_attention` enabled; this works around a bug in the AutoGPTQ library (see the loading sketch after this list).
|
- Under `Parameters` -> `Generation` -> `Skip special tokens`: turn this off (deselect) |
|
- Under `Parameters` -> `Generation` -> `Custom stopping strings`: add `"<|end_of_text|>","<|eot_id|>"` to the field |
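
Outside the webui, a minimal AutoGPTQ loading sketch with the same workaround (fused attention injection disabled) might look like the following; the repo id matches the `main` branch in the table below:

```python
# Minimal sketch: loading this quant with AutoGPTQ, with fused attention
# injection disabled to mirror the webui workaround above.
from transformers import AutoTokenizer
from auto_gptq import AutoGPTQForCausalLM

repo_id = "astronomer-io/Llama-3-8B-Instruct-GPTQ-4-Bit"
tokenizer = AutoTokenizer.from_pretrained(repo_id)
model = AutoGPTQForCausalLM.from_quantized(
    repo_id,
    device="cuda:0",
    use_safetensors=True,
    inject_fused_attention=False,  # workaround for the AutoGPTQ bug noted above
)
```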
|
|
|
<!-- description start --> |
|
## Description |
|
|
|
This repo contains 4 Bit quantized GPTQ model files for [meta-llama/Meta-Llama-3-8B-Instruct](https://huggingface.co/meta-llama/Meta-Llama-3-8B-Instruct). |
|
|
|
This model can be loaded in less than 6 GB of VRAM (a large reduction from the original 16.07 GB model) and served quickly on inexpensive Nvidia GPUs (Nvidia T4, Nvidia K80, RTX 4070, etc.).
|
|
|
The 4-bit GPTQ quant incurs a small quality degradation relative to the original `bfloat16` model, but it can be served on much smaller GPUs with significant gains in latency and throughput.
|
|
|
<!-- description end --> |
|
|
|
## GPTQ Quantization Method |
|
- This model was quantized with the AutoGPTQ library, following best practices from the [GPTQ paper](https://arxiv.org/abs/2210.17323).
|
- Quantization is calibrated on random samples from the specified dataset (currently wikitext) to minimize accuracy loss (a calibration sketch follows the table below).
|
|
|
| Branch | Bits | Group Size | Act Order | Damp % | GPTQ Dataset | Sequence Length | VRAM Size | ExLlama | Description | |
|
| ------ | ---- | -- | --------- | ------ | ------------ | ------- | ---- | ------- | ---- | |
|
| [main](https://huggingface.co/astronomer-io/Llama-3-8B-Instruct-GPTQ-4-Bit/tree/main) | 4 | 128 | Yes | 0.1 | [wikitext](https://huggingface.co/datasets/wikitext/viewer/wikitext-2-v1/test) | 8192 | 5.74 GB | Yes | 4-bit, with Act Order and group size 128g. Smallest model possible with small accuracy loss | |
|
| More variants to come | TBD | TBD | TBD | TBD | TBD | TBD | TBD | TBD | Additional GPTQ 4-bit variants with different parameters (e.g., other group sizes) may be uploaded in the future. |
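
For reference, here is a minimal sketch of an AutoGPTQ calibration flow using the parameters from the `main` row above; the exact script used for this repo is not published, and the calibration sample count below is illustrative:

```python
# Sketch: producing a 4-bit GPTQ quant with AutoGPTQ, calibrated on wikitext.
from datasets import load_dataset
from transformers import AutoTokenizer
from auto_gptq import AutoGPTQForCausalLM, BaseQuantizeConfig

model_id = "meta-llama/Meta-Llama-3-8B-Instruct"
tokenizer = AutoTokenizer.from_pretrained(model_id)

quantize_config = BaseQuantizeConfig(
    bits=4,            # Bits
    group_size=128,    # Group Size
    desc_act=True,     # Act Order
    damp_percent=0.1,  # Damp %
)

# Random calibration samples from the GPTQ dataset (wikitext)
data = load_dataset("wikitext", "wikitext-2-v1", split="test")
samples = [tokenizer(t) for t in data["text"] if t.strip()][:128]  # illustrative count

model = AutoGPTQForCausalLM.from_pretrained(model_id, quantize_config)
model.quantize(samples)
model.save_quantized("Llama-3-8B-Instruct-GPTQ-4-Bit", use_safetensors=True)
```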
|
|
|
## Serving this GPTQ model using vLLM |
|
Serving this model via vLLM was tested on an Nvidia T4 (16 GB VRAM) with the command below:
|
``` |
|
python -m vllm.entrypoints.openai.api_server --model astronomer-io/Llama-3-8B-Instruct-GPTQ-4-Bit --max-model-len 8192 --dtype float16 |
|
``` |
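
As an alternative to the HTTP server, the same quant can be exercised through vLLM's offline Python API; a minimal sketch, with the stop token ids passed via `SamplingParams` (in practice, render the chat template first, see the Prompt Template section below):

```python
# Sketch: offline inference with vLLM, passing stop_token_ids directly.
from vllm import LLM, SamplingParams

llm = LLM(
    model="astronomer-io/Llama-3-8B-Instruct-GPTQ-4-Bit",
    max_model_len=8192,
    dtype="float16",
)
params = SamplingParams(max_tokens=2000, stop_token_ids=[128001, 128009])
outputs = llm.generate(["Who created Llama 3?"], params)  # untemplated prompt, for brevity
print(outputs[0].outputs[0].text)
```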
|
For the non-stop token generation bug, make sure to send requests with `"stop_token_ids": [128001, 128009]` to the vLLM endpoint.
|
Example: |
|
```json |
|
{ |
|
"model": "astronomer-io/Llama-3-8B-Instruct-GPTQ-4-Bit", |
|
"messages": [ |
|
{"role": "system", "content": "You are a helpful assistant."}, |
|
{"role": "user", "content": "Who created Llama 3?"} |
|
], |
|
"max_tokens": 2000, |
|
"stop_token_ids":[128001,128009] |
|
} |
|
``` |
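
Equivalently, when using the OpenAI Python client against the vLLM server, the non-standard field can be passed via `extra_body`; a sketch assuming the default local server address:

```python
# Sketch: sending the same request through vLLM's OpenAI-compatible endpoint.
from openai import OpenAI

client = OpenAI(base_url="http://localhost:8000/v1", api_key="EMPTY")
resp = client.chat.completions.create(
    model="astronomer-io/Llama-3-8B-Instruct-GPTQ-4-Bit",
    messages=[
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": "Who created Llama 3?"},
    ],
    max_tokens=2000,
    extra_body={"stop_token_ids": [128001, 128009]},  # vLLM-specific field
)
print(resp.choices[0].message.content)
```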
|
### Prompt Template |
|
``` |
|
<|begin_of_text|><|start_header_id|>user<|end_header_id|> |
|
{{prompt}}<|eot_id|> |
|
<|start_header_id|>assistant<|end_header_id|> |
|
``` |
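
The template in the model metadata can also be rendered programmatically; a small sketch, assuming the tokenizer in this repo ships the chat template shown at the top of this card:

```python
# Sketch: rendering the Llama 3 prompt template via transformers.
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("astronomer-io/Llama-3-8B-Instruct-GPTQ-4-Bit")
prompt = tokenizer.apply_chat_template(
    [{"role": "user", "content": "Who created Llama 3?"}],
    tokenize=False,
    add_generation_prompt=True,
)
print(prompt)  # should match the layout shown above
```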
|
### Contributors |
|
- Quantized by [David Xue, Machine Learning Engineer from Astronomer](https://www.linkedin.com/in/david-xue-uva/) |