Perplexity test goes way lower than the base model at equivalent quant!
#1 opened by Nexesenex
Impressive results, confirmed on two exl2-2 quants: your model https://huggingface.co/Doctor-Shotgun/Mixtral-8x7B-Instruct-v0.1-limarp-exl2 equals the perplexity of the original Mixtral Instruct quantized by turboderp, and Norobara decreases it beyond the margin of error.
Good job!
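For anyone unfamiliar with the metric: perplexity is the exponential of the mean per-token negative log-likelihood over an evaluation corpus, so lower is better, and a drop beyond the measurement noise is a real quality gain. The sketch below shows a standard way to compute it with Transformers; the model ID, eval file, and chunk size are placeholders, and the numbers in this thread presumably come from ExLlamaV2's own perplexity test on the exl2 quants, not from this script.

```python
# Minimal perplexity sketch (illustration only, not the exl2 test used above).
# Assumptions: MODEL_ID is the base model referenced in this thread, EVAL_FILE
# is any plain-text corpus you choose, CTX is the chunk length per forward pass.
import math

import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

MODEL_ID = "mistralai/Mixtral-8x7B-Instruct-v0.1"
EVAL_FILE = "eval.txt"
CTX = 2048

tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
model = AutoModelForCausalLM.from_pretrained(
    MODEL_ID, torch_dtype=torch.float16, device_map="auto"
)
model.eval()

text = open(EVAL_FILE, encoding="utf-8").read()
ids = tokenizer(text, return_tensors="pt").input_ids[0]

nll_sum, token_count = 0.0, 0
with torch.no_grad():
    # Score the corpus in non-overlapping CTX-sized chunks.
    for start in range(0, ids.numel() - 1, CTX):
        chunk = ids[start:start + CTX + 1].unsqueeze(0).to(model.device)
        if chunk.size(1) < 2:
            break
        inputs, targets = chunk[:, :-1], chunk[:, 1:]
        logits = model(inputs).logits
        loss = torch.nn.functional.cross_entropy(
            logits.reshape(-1, logits.size(-1)),
            targets.reshape(-1),
            reduction="sum",
        )
        nll_sum += loss.item()
        token_count += targets.numel()

# Perplexity = exp(mean negative log-likelihood per token); lower is better.
print(f"perplexity: {math.exp(nll_sum / token_count):.4f}")
```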
Nexesenex changed discussion title from "Perplexity test" to "Perplexity test goes way lower than the base model at equal quant!"
Nexesenex changed discussion title from "Perplexity test goes way lower than the base model at equal quant!" to "Perplexity test goes way lower than the base model at equivalent quant!"