FP8 dynamic quantization recipe for Qwen3-0.6B-Base, produced with llm-compressor.
default_stage:
  default_modifiers:
    QuantizationModifier:
      ignore: [lm_head]
      targets: [Linear]
      scheme: FP8_DYNAMIC
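
A minimal sketch of how a recipe like this could be applied with llm-compressor's oneshot API, assuming the `Qwen/Qwen3-0.6B-Base` checkpoint and an output directory name chosen for illustration; the exact import path for `oneshot` may vary between llm-compressor versions.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

from llmcompressor import oneshot
from llmcompressor.modifiers.quantization import QuantizationModifier

# Assumed base checkpoint; swap in whichever model the recipe targets.
MODEL_ID = "Qwen/Qwen3-0.6B-Base"

model = AutoModelForCausalLM.from_pretrained(MODEL_ID, torch_dtype="auto")
tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)

# Programmatic equivalent of the YAML recipe above: quantize all Linear layers
# to FP8 with dynamic activation scales, leaving lm_head at original precision.
recipe = QuantizationModifier(targets="Linear", scheme="FP8_DYNAMIC", ignore=["lm_head"])

# FP8_DYNAMIC computes activation scales at runtime, so no calibration dataset
# is passed to the oneshot call.
oneshot(model=model, recipe=recipe)

# Save the compressed model and tokenizer (directory name is illustrative).
save_dir = "Qwen3-0.6B-Base-FP8-Dynamic"
model.save_pretrained(save_dir)
tokenizer.save_pretrained(save_dir)
```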