
Quantized Model Information

This repository is an AWQ 4-bit quantized version of the nvidia/Llama-3.1-Nemotron-70B-Instruct-HF model, which is an NVIDIA customized version of meta-llama/Meta-Llama-3.1-70B-Instruct, originally released by Meta AI.

This model was quantized using AutoAWQ from FP16 down to INT4 using GEMM kernels, with zero-point quantization and a group size of 128.

Hardware: Intel Xeon CPU E5-2699A v4 @ 2.40GHz, 256GB of RAM, and 2x NVIDIA RTX 3090. I have only tested this with vLLM, but it should work on any platform that supports Llama 3.1 70B Instruct AWQ INT4. The primary limiting factor seems to be whether the platform supports Rotary Positional Embeddings (RoPE).

Model usage (inference) information for Transformers, AutoAWQ, Text Generation Inference (TGI), and vLLM, as well as quantization reproduction details, can be found below.

Original Model Information

Llama-3.1-Nemotron-70B-Instruct is a large language model customized by NVIDIA to improve the helpfulness of LLM-generated responses to user queries.

This model reaches an Arena Hard score of 85.0, an AlpacaEval 2 LC of 57.6, and a GPT-4-Turbo MT-Bench of 8.98, benchmarks known to be predictive of LMSys Chatbot Arena Elo.

As of 1 Oct 2024, this model is #1 on all three automatic alignment benchmarks (verified tab for AlpacaEval 2 LC), edging out strong frontier models such as GPT-4o and Claude 3.5 Sonnet.

As of Oct 24, 2024, the model has an Elo score of 1267 (Β±7), rank 9, and a style-controlled rank of 26 on the Chatbot Arena leaderboard.

The original model was trained using RLHF (specifically, REINFORCE), Llama-3.1-Nemotron-70B-Reward and HelpSteer2-Preference prompts on a Llama-3.1-70B-Instruct model as the initial policy.

nvidia/Llama-3.1-Nemotron-70B-Instruct-HF has been converted from Llama-3.1-Nemotron-70B-Instruct to support it in the Hugging Face Transformers codebase. Please note that evaluation results might differ slightly from those of Llama-3.1-Nemotron-70B-Instruct as evaluated in NeMo-Aligner, which the reported results are based on.

Note from Terrell: Quantization to AWQ 4-bit will further affect evaluation results.

Model Usage

This quantized model can be used with several inference solutions, such as transformers, autoawq, text-generation-inference, and vLLM.

In order to run inference with Llama 3.1 Nemotron 70B Instruct AWQ in INT4, around 35 GiB of VRAM is needed just to load the model checkpoint, not including the KV cache or the CUDA graphs, so a bit more than that should be available.
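As a rough sanity check, that figure follows from storing ~70B parameters at 4 bits each, plus a small overhead for the group-wise scales and zero points. A back-of-the-envelope sketch (the 70.6e9 parameter count and 3% overhead are approximations, not exact values):

# Rough estimate of the VRAM needed for the INT4 weights alone
# (KV cache, activations, and CUDA graphs are not included).
params = 70.6e9                      # approximate parameter count of the 70B model
weight_bytes = params * 4 / 8        # 4 bits per weight
overhead = 0.03                      # rough allowance for scales/zero points at group size 128
total_gib = weight_bytes * (1 + overhead) / 2**30
print(f"~{total_gib:.0f} GiB for weights")   # on the order of ~34 GiB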

πŸ€— Transformers

In order to run inference with Llama 3.1 Nemotron 70B Instruct AWQ in INT4, you need to install the following packages:

pip install -q --upgrade transformers autoawq accelerate

To run inference with Llama 3.1 Nemotron 70B Instruct AWQ in INT4 precision, the AWQ model can be instantiated like any other causal language model via AutoModelForCausalLM, and inference run as usual.

import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, AwqConfig

model_id = "ibnzterrell/Nvidia-Llama-3.1-Nemotron-70B-Instruct-HF-AWQ-INT4"
quantization_config = AwqConfig(
    bits=4,
    fuse_max_seq_len=512, # Note: Update this as per your use-case
    do_fuse=True,
)

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
  model_id,
  torch_dtype=torch.float16,
  low_cpu_mem_usage=True,
  device_map="auto",
  quantization_config=quantization_config
)

prompt = [
  {"role": "system", "content": "You are a helpful assistant, that responds as a pirate."},
  {"role": "user", "content": "What's Deep Learning?"},
]
inputs = tokenizer.apply_chat_template(
  prompt,
  tokenize=True,
  add_generation_prompt=True,
  return_tensors="pt",
  return_dict=True,
).to("cuda")

outputs = model.generate(**inputs, do_sample=True, max_new_tokens=256)
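# Decode only the tokens generated after the prompt (hence the slice below)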
print(tokenizer.batch_decode(outputs[:, inputs['input_ids'].shape[1]:], skip_special_tokens=True)[0])
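
If the fused-module path causes issues (for example when generating beyond fuse_max_seq_len), the checkpoint can also be loaded without passing a custom AwqConfig, since transformers picks up the quantization config stored in the repo. A minimal sketch, untested here:

model = AutoModelForCausalLM.from_pretrained(
  model_id,
  torch_dtype=torch.float16,
  low_cpu_mem_usage=True,
  device_map="auto",
)  # uses the AWQ quantization_config shipped in the model's config.json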

AutoAWQ

In order to run inference with Llama 3.1 Nemotron 70B Instruct AWQ in INT4, you need to install the following packages:

pip install -q --upgrade transformers autoawq accelerate

Alternatively, one may want to run inference via AutoAWQ directly, even though it is built on top of πŸ€— transformers, which is the recommended approach described above.

import torch
from awq import AutoAWQForCausalLM
from transformers import AutoTokenizer

model_id = "ibnzterrell/Nvidia-Llama-3.1-Nemotron-70B-Instruct-HF-AWQ-INT4"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoAWQForCausalLM.from_pretrained(
  model_id,
  torch_dtype=torch.float16,
  low_cpu_mem_usage=True,
  device_map="auto",
)

prompt = [
  {"role": "system", "content": "You are a helpful assistant, that responds as a pirate."},
  {"role": "user", "content": "What's Deep Learning?"},
]
inputs = tokenizer.apply_chat_template(
  prompt,
  tokenize=True,
  add_generation_prompt=True,
  return_tensors="pt",
  return_dict=True,
).to("cuda")

outputs = model.generate(**inputs, do_sample=True, max_new_tokens=256)
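# Decode only the tokens generated after the prompt (hence the slice below)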
print(tokenizer.batch_decode(outputs[:, inputs['input_ids'].shape[1]:], skip_special_tokens=True)[0])

The AutoAWQ script has been adapted from AutoAWQ/examples/generate.py.
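
For interactive use, token-by-token streaming can be layered on top of either loading path. This is a minimal sketch using transformers' TextStreamer, not part of the original example:

from transformers import TextStreamer

# Print tokens to stdout as they are generated, skipping the prompt and special tokens
streamer = TextStreamer(tokenizer, skip_prompt=True, skip_special_tokens=True)
outputs = model.generate(**inputs, do_sample=True, max_new_tokens=256, streamer=streamer)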

πŸ€— Text Generation Inference (TGI)

To run the text-generation-launcher with Llama 3.1 Nemotron 70B Instruct AWQ in INT4 with Marlin kernels for optimized inference speed, you will need to have Docker installed (see installation notes) and the huggingface_hub Python package, since you need to log in to the Hugging Face Hub.

pip install -q --upgrade huggingface_hub
huggingface-cli login

Then you just need to run the TGI v2.2.0 (or higher) Docker container as follows:

docker run --gpus all --shm-size 1g -ti -p 8080:80 \
  -v hf_cache:/data \
  -e MODEL_ID=ibnzterrell/Nvidia-Llama-3.1-Nemotron-70B-Instruct-HF-AWQ-INT4 \
  -e NUM_SHARD=4 \
  -e QUANTIZE=awq \
  -e HF_TOKEN=$(cat ~/.cache/huggingface/token) \
  -e MAX_INPUT_LENGTH=4000 \
  -e MAX_TOTAL_TOKENS=4096 \
  ghcr.io/huggingface/text-generation-inference:2.2.0
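
Once the container is up, you can optionally wait for the model to finish loading before sending requests; TGI exposes a /health endpoint that returns 200 once the server is ready:

curl 0.0.0.0:8080/health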

TGI exposes different endpoints; to see all the available endpoints, check the TGI OpenAPI Specification.

To send a request to the deployed TGI endpoint, which is compatible with the OpenAI OpenAPI specification (i.e. /v1/chat/completions):

curl 0.0.0.0:8080/v1/chat/completions \
  -X POST \
  -H 'Content-Type: application/json' \
  -d '{
    "model": "tgi",
    "messages": [
      {
        "role": "system",
        "content": "You are a helpful assistant."
      },
      {
        "role": "user",
        "content": "What is Deep Learning?"
      }
    ],
    "max_tokens": 128
  }'

Or programmatically via the huggingface_hub Python client as follows:

import os
from huggingface_hub import InferenceClient

client = InferenceClient(base_url="http://0.0.0.0:8080", api_key=os.getenv("HF_TOKEN", "-"))

chat_completion = client.chat.completions.create(
  model="ibnzterrell/Nvidia-Llama-3.1-Nemotron-70B-Instruct-HF-AWQ-INT4",
  messages=[
    {"role": "system", "content": "You are a helpful assistant."},
    {"role": "user", "content": "What is Deep Learning?"},
  ],
  max_tokens=128,
)

Alternatively, the OpenAI Python client can also be used (see installation notes) as follows:

import os
from openai import OpenAI

client = OpenAI(base_url="http://0.0.0.0:8080/v1", api_key=os.getenv("OPENAI_API_KEY", "-"))

chat_completion = client.chat.completions.create(
  model="tgi",
  messages=[
    {"role": "system", "content": "You are a helpful assistant."},
    {"role": "user", "content": "What is Deep Learning?"},
  ],
  max_tokens=128,
)
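
With either client, the generated text can be read from the returned completion object, e.g.:

print(chat_completion.choices[0].message.content)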

vLLM

To run vLLM with Llama 3.1 Nemotron 70B Instruct AWQ in INT4, you will need to have Docker installed (see installation notes) and run the latest vLLM Docker container as follows:

docker run --runtime nvidia --gpus all --ipc=host -p 8000:8000 \
  -v hf_cache:/root/.cache/huggingface \
  vllm/vllm-openai:latest \
  --model ibnzterrell/Nvidia-Llama-3.1-Nemotron-70B-Instruct-HF-AWQ-INT4 \
  --tensor-parallel-size 4 \
  --max-model-len 4096
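
Note that --tensor-parallel-size should match the number of GPUs available; on the 2x RTX 3090 setup described above, a variant like the following (an untested sketch of the same command) would be the natural starting point:

docker run --runtime nvidia --gpus all --ipc=host -p 8000:8000 \
  -v hf_cache:/root/.cache/huggingface \
  vllm/vllm-openai:latest \
  --model ibnzterrell/Nvidia-Llama-3.1-Nemotron-70B-Instruct-HF-AWQ-INT4 \
  --tensor-parallel-size 2 \
  --max-model-len 4096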

To send a request to the deployed vLLM endpoint, which is compatible with the OpenAI OpenAPI specification (i.e. /v1/chat/completions):

curl 0.0.0.0:8000/v1/chat/completions \
  -X POST \
  -H 'Content-Type: application/json' \
  -d '{
    "model": "ibnzterrell/Nvidia-Llama-3.1-Nemotron-70B-Instruct-HF-AWQ-INT4",
    "messages": [
      {
        "role": "system",
        "content": "You are a helpful assistant."
      },
      {
        "role": "user",
        "content": "What is Deep Learning?"
      }
    ],
    "max_tokens": 128
  }'

Or programmatically via the openai Python client (see installation notes) as follows:

import os
from openai import OpenAI

client = OpenAI(base_url="http://0.0.0.0:8000/v1", api_key=os.getenv("VLLM_API_KEY", "-"))

chat_completion = client.chat.completions.create(
  model="ibnzterrell/Nvidia-Llama-3.1-Nemotron-70B-Instruct-HF-AWQ-INT4",
  messages=[
    {"role": "system", "content": "You are a helpful assistant."},
    {"role": "user", "content": "What is Deep Learning?"},
  ],
  max_tokens=128,
)

Quantization Reproduction Information

In order to quantize Llama 3.1 Nemotron 70B Instruct using AutoAWQ, you will need an instance with at least enough CPU RAM to fit the whole model, i.e. ~140 GiB, and an NVIDIA GPU (or GPUs) with around 40 GiB of VRAM to quantize it.

In order to quantize Llama 3.1 Nemotron 70B Instruct, first install the following packages:

pip install -q --upgrade transformers autoawq accelerate

This quantization was produced using a single node with an Intel Xeon CPU E5-2699A v4 @ 2.40GHz, 256GB of RAM, and 2x NVIDIA RTX 3090 (24GB VRAM each, for a total of 48 GB VRAM).

I initially adapted hugging-quants/Meta-Llama-3.1-70B-Instruct-AWQ-INT4, so many thanks to the Hugging Quants team, the AutoAWQ team, and the MIT HAN Lab for LLM-AWQ. I'd also like to thank Professor David Dobolyi over at the University of Colorado Boulder and Marc Sun at Hugging Face for their work, specifically AutoAWQ PR#630.

Adapted from AutoAWQ/examples/quantize.py and hugging-quants/Meta-Llama-3.1-70B-Instruct-AWQ-INT4:

from awq import AutoAWQForCausalLM
from transformers import AutoTokenizer
import torch

# Empty Cache
torch.cuda.empty_cache()

# Memory Limits - Set this according to your hardware limits
max_memory = {0: "22GiB", 1: "22GiB", "cpu": "160GiB"}

model_path = "nvidia/Llama-3.1-Nemotron-70B-Instruct-HF"
quant_path = "ibnzterrell/Nvidia-Llama-3.1-Nemotron-70B-Instruct-HF-AWQ-INT4"
quant_config = {
  "zero_point": True,
  "q_group_size": 128,
  "w_bit": 4,
  "version": "GEMM"
}

# Load model - Note: while this loads the layers into the CPU, the GPUs (and the VRAM) are still required for quantization! (Verified with nvidia-smi)
model = AutoAWQForCausalLM.from_pretrained(
    model_path,
    use_cache=False,
    max_memory=max_memory,
    device_map="cpu"
)

tokenizer = AutoTokenizer.from_pretrained(model_path)

# Quantize
model.quantize(
    tokenizer,
    quant_config=quant_config
)

# Save quantized model
model.save_quantized(quant_path)
tokenizer.save_pretrained(quant_path)

print(f'Model is quantized and saved at "{quant_path}"')
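
To sanity-check the result, the quantized checkpoint can then be reloaded with AutoAWQ's loader for already-quantized weights (a minimal sketch, assuming the local quant_path from above):

from awq import AutoAWQForCausalLM
from transformers import AutoTokenizer

# Reload the freshly quantized checkpoint for a quick smoke test
model = AutoAWQForCausalLM.from_quantized(quant_path, fuse_layers=True)
tokenizer = AutoTokenizer.from_pretrained(quant_path)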