# Llama-3_3-Nemotron-Super-49B-v1_5-FP8-Dynamic

FP8 dynamic quantization of [nvidia/Llama-3_3-Nemotron-Super-49B-v1_5](https://huggingface.co/nvidia/Llama-3_3-Nemotron-Super-49B-v1_5).
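
## Usage

Checkpoints produced by llmcompressor in the compressed-tensors format can be served directly with vLLM. A minimal sketch, assuming a recent vLLM build with compressed-tensors FP8 support; the `tensor_parallel_size` and sampling values are illustrative, not part of this model card:

```python
from vllm import LLM, SamplingParams

# The weights are already quantized, so no extra quantization flags are needed.
# tensor_parallel_size is illustrative -- size it to your GPUs (49B parameters).
llm = LLM(
    model="Ithanil/Llama-3_3-Nemotron-Super-49B-v1_5-FP8-Dynamic",
    tensor_parallel_size=2,
    trust_remote_code=True,  # the base model uses a custom architecture
)

params = SamplingParams(temperature=0.6, top_p=0.95, max_tokens=256)
outputs = llm.generate(["Explain FP8 quantization in one paragraph."], params)
print(outputs[0].outputs[0].text)
```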
## Creation
Created with [llmcompressor](https://github.com/vllm-project/llm-compressor) using the following code:
```python
import sys

from transformers import AutoModelForCausalLM, AutoTokenizer
from llmcompressor.modifiers.quantization import QuantizationModifier
from llmcompressor.transformers import oneshot

MODEL_ID = sys.argv[1]
SAVE_DIR = sys.argv[2]

# Load the model and tokenizer
model = AutoModelForCausalLM.from_pretrained(
    MODEL_ID, device_map="auto", torch_dtype="auto",
    local_files_only=True, trust_remote_code=True,
)
tokenizer = AutoTokenizer.from_pretrained(MODEL_ID, local_files_only=True, trust_remote_code=True)

# Configure simple PTQ: dynamic FP8 for all Linear layers except the output head
recipe = QuantizationModifier(targets="Linear", scheme="FP8_DYNAMIC", ignore=["lm_head"])

# Apply the quantization algorithm
oneshot(model=model, recipe=recipe, trust_remote_code_model=True)

# Save the quantized model and tokenizer
model.save_pretrained(SAVE_DIR)
tokenizer.save_pretrained(SAVE_DIR)
```
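
The `FP8_DYNAMIC` scheme uses static per-channel scales for the weights and dynamic per-token scales for the activations, so the one-shot pass needs no calibration data. Saved as e.g. `quantize.py` (filename hypothetical), the script takes the source model and output directory as arguments: `python quantize.py nvidia/Llama-3_3-Nemotron-Super-49B-v1_5 Llama-3_3-Nemotron-Super-49B-v1_5-FP8-Dynamic`.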