NVFP4 Collection
LLMs quantized with LLM Compressor to NVFP4. Check the JSON files in the model directories for more information.
This is Qwen/Qwen3-4B quantized to 4-bit (NVFP4) with LLM Compressor, with both weights and activations quantized. The calibration step used 512 samples of 16,000 tokens each, with the chat template applied, drawn from open-r1/OpenR1-Math-220k.
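For readers who want to reproduce a similar quantization, the calibration described above might look roughly like the sketch below with LLM Compressor. This is not the exact script used for this model: the `NVFP4` scheme name follows LLM Compressor's published examples, but the dataset field names (`problem`/`solution`), the output directory, and the preprocessing details are assumptions.

```python
from datasets import load_dataset
from llmcompressor import oneshot
from llmcompressor.modifiers.quantization import QuantizationModifier
from transformers import AutoModelForCausalLM, AutoTokenizer

MODEL_ID = "Qwen/Qwen3-4B"

model = AutoModelForCausalLM.from_pretrained(MODEL_ID, torch_dtype="auto")
tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)

# 512 calibration samples with the chat template applied, as described in the card.
ds = load_dataset("open-r1/OpenR1-Math-220k", split="train")
ds = ds.shuffle(seed=42).select(range(512))

def preprocess(example):
    # Field names are an assumption about the dataset schema.
    messages = [
        {"role": "user", "content": example["problem"]},
        {"role": "assistant", "content": example["solution"]},
    ]
    return {"text": tokenizer.apply_chat_template(messages, tokenize=False)}

ds = ds.map(preprocess)

# Quantize weights and activations of Linear layers to NVFP4,
# keeping lm_head in higher precision.
recipe = QuantizationModifier(targets="Linear", scheme="NVFP4", ignore=["lm_head"])

oneshot(
    model=model,
    dataset=ds,
    recipe=recipe,
    max_seq_length=16000,
    num_calibration_samples=512,
)

model.save_pretrained("Qwen3-4B-NVFP4", save_compressed=True)
tokenizer.save_pretrained("Qwen3-4B-NVFP4")
```

Running this requires a GPU and a download of both the base model and the calibration dataset, so it is shown here as a reproduction sketch rather than a drop-in script.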
The quantization was performed, tested, and evaluated by The Kaitchup. The model is compatible with vLLM; on a Blackwell GPU, which supports NVFP4 natively, it reaches more than 2x higher throughput.
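As a minimal deployment sketch, the checkpoint can be served with vLLM's CLI. The repository id below is a placeholder standing in for the actual model name from this collection:

```shell
# Placeholder repo id: substitute the actual NVFP4 checkpoint from this collection.
# Run on a Blackwell GPU for native NVFP4 throughput gains.
vllm serve kaitchup/Qwen3-4B-NVFP4 --max-model-len 16384
```

This starts an OpenAI-compatible server on the default port; requires a CUDA GPU, so it is shown as a config fragment only.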
More details in this article: NVFP4: Same Accuracy with 2.3x Higher Throughput for 4-Bit LLMs
Subscribe to The Kaitchup. Or, for a one-time contribution, here is my ko-fi link: https://ko-fi.com/bnjmn_marie
This greatly helps me continue quantizing and evaluating models for free.