Mostly quanting this to try it out; I didn't see any other EXL2 quants for this model, so here we are.
This is the 8bpw EXL2 version of this model. Find the original here. A quick loading sketch follows the list below.
- For the 6bpw version, go here.
- For the 4bpw version, go here.
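If you'd rather load the quant from Python than through a frontend, here is a minimal sketch using the exllamav2 package, based on its standard example flow. The local path and sampler values are illustrative assumptions, not recommendations from this card.

```python
# Minimal sketch: loading this EXL2 quant with the exllamav2 Python API.
# Assumes the repo has already been downloaded locally (e.g. with
# huggingface-cli download); path and sampler settings are placeholders.
from exllamav2 import ExLlamaV2, ExLlamaV2Config, ExLlamaV2Cache, ExLlamaV2Tokenizer
from exllamav2.generator import ExLlamaV2BaseGenerator, ExLlamaV2Sampler

config = ExLlamaV2Config()
config.model_dir = "./Lyra-Gutenberg-12b-EXL2-8bpw"  # local copy of this repo
config.prepare()

model = ExLlamaV2(config)
cache = ExLlamaV2Cache(model, lazy=True)  # lazy cache lets autosplit size layers
model.load_autosplit(cache)               # spread weights across available GPUs

tokenizer = ExLlamaV2Tokenizer(config)
generator = ExLlamaV2BaseGenerator(model, cache, tokenizer)

settings = ExLlamaV2Sampler.Settings()
settings.temperature = 0.8  # illustrative values only
settings.top_p = 0.9

print(generator.generate_simple("Once upon a midnight dreary,", settings, 200))
```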
# mistral-nemo-gutenberg-12B-v4
Sao10K/MN-12B-Lyra-v1 finetuned on jondurbin/gutenberg-dpo-v0.1.
## Method
Finetuned using an A100 on Google Colab for 3 epochs.
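The actual training script isn't published, so as a rough illustration only: a DPO finetune on this dataset is commonly run with TRL's DPOTrainer. In the sketch below, only the base model, dataset, and epoch count come from this card; every other hyperparameter is an assumed placeholder.

```python
# Rough sketch of a DPO finetune like the one described above, using TRL.
# Base model, dataset, and 3 epochs are from the card; everything else
# (batch size, learning rate, beta, etc.) is a placeholder assumption.
from datasets import load_dataset
from transformers import AutoModelForCausalLM, AutoTokenizer
from trl import DPOConfig, DPOTrainer

base = "Sao10K/MN-12B-Lyra-v1"
model = AutoModelForCausalLM.from_pretrained(base, torch_dtype="bfloat16")
tokenizer = AutoTokenizer.from_pretrained(base)

# gutenberg-dpo-v0.1 already has the prompt/chosen/rejected columns DPO expects.
dataset = load_dataset("jondurbin/gutenberg-dpo-v0.1", split="train")

args = DPOConfig(
    output_dir="lyra-gutenberg-dpo",
    num_train_epochs=3,              # matches the card
    per_device_train_batch_size=1,   # placeholder
    gradient_accumulation_steps=8,   # placeholder
    learning_rate=5e-7,              # placeholder
    beta=0.1,                        # placeholder DPO strength
    bf16=True,
)

trainer = DPOTrainer(
    model=model,
    args=args,
    train_dataset=dataset,
    processing_class=tokenizer,  # "tokenizer=" in older TRL versions
)
trainer.train()
```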