---
language:
- en
library_name: transformers
tags:
- AutoRound
license: apache-2.0
---

**Warning**: This model performs poorly. I ran the quantization three times, but it never produced a good model. I recommend using the asymmetric quantization version instead: [kaitchup/Mistral-NeMo-Minitron-8B-Base-AutoRound-GPTQ-asym-4bit](https://huggingface.co/kaitchup/Mistral-NeMo-Minitron-8B-Base-AutoRound-GPTQ-asym-4bit).


## Model Details

This is [nvidia/Mistral-NeMo-Minitron-8B-Base](https://huggingface.co/nvidia/Mistral-NeMo-Minitron-8B-Base) quantized to 4-bit with AutoRound (symmetric quantization). The model was created, tested, and evaluated by The Kaitchup. It is compatible with the main inference frameworks, e.g., TGI and vLLM.

Details on the quantization process and evaluation:
[Mistral-NeMo: 4.1x Smaller with Quantized Minitron](https://kaitchup.substack.com/p/mistral-nemo-41x-smaller-with-quantized)
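Since the checkpoint is exported in GPTQ format, it can be served directly with vLLM's OpenAI-compatible server. A minimal sketch (the repository id below is an assumption inferred from the asymmetric variant's naming; substitute the actual id of this model):

```shell
# Serve the 4-bit GPTQ checkpoint with vLLM (repo id assumed, adjust as needed)
vllm serve kaitchup/Mistral-NeMo-Minitron-8B-Base-AutoRound-GPTQ-sym-4bit \
  --max-model-len 8192
```

This exposes an OpenAI-compatible API on port 8000 by default; the `--max-model-len` value is only an illustrative choice to bound KV-cache memory.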


- **Developed by:** [The Kaitchup](https://kaitchup.substack.com/)
- **License:** Apache License 2.0