Quantization made by Richard Erkhov.
mistral-dory-12b - GGUF
- Model creator: https://huggingface.co/BeaverAI/
- Original model: https://huggingface.co/BeaverAI/mistral-dory-12b/
Name | Quant method | Size |
---|---|---|
mistral-dory-12b.Q2_K.gguf | Q2_K | 4.46GB |
mistral-dory-12b.IQ3_XS.gguf | IQ3_XS | 4.94GB |
mistral-dory-12b.IQ3_S.gguf | IQ3_S | 5.18GB |
mistral-dory-12b.Q3_K_S.gguf | Q3_K_S | 5.15GB |
mistral-dory-12b.IQ3_M.gguf | IQ3_M | 5.33GB |
mistral-dory-12b.Q3_K.gguf | Q3_K | 5.67GB |
mistral-dory-12b.Q3_K_M.gguf | Q3_K_M | 5.67GB |
mistral-dory-12b.Q3_K_L.gguf | Q3_K_L | 6.11GB |
mistral-dory-12b.IQ4_XS.gguf | IQ4_XS | 6.33GB |
mistral-dory-12b.Q4_0.gguf | Q4_0 | 6.59GB |
mistral-dory-12b.IQ4_NL.gguf | IQ4_NL | 6.65GB |
mistral-dory-12b.Q4_K_S.gguf | Q4_K_S | 6.63GB |
mistral-dory-12b.Q4_K.gguf | Q4_K | 6.96GB |
mistral-dory-12b.Q4_K_M.gguf | Q4_K_M | 6.96GB |
mistral-dory-12b.Q4_1.gguf | Q4_1 | 7.26GB |
mistral-dory-12b.Q5_0.gguf | Q5_0 | 7.93GB |
mistral-dory-12b.Q5_K_S.gguf | Q5_K_S | 7.93GB |
mistral-dory-12b.Q5_K.gguf | Q5_K | 8.13GB |
mistral-dory-12b.Q5_K_M.gguf | Q5_K_M | 8.13GB |
mistral-dory-12b.Q5_1.gguf | Q5_1 | 8.61GB |
mistral-dory-12b.Q6_K.gguf | Q6_K | 9.37GB |
mistral-dory-12b.Q8_0.gguf | Q8_0 | 12.13GB |
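To use one of these files, download it and load it with a GGUF-capable runtime such as llama.cpp. The sketch below is illustrative only: it assumes the quants are hosted in a Hugging Face repo whose id I have guessed as `RichardErkhov/BeaverAI_-_mistral-dory-12b-gguf` (adjust to the actual repo), and that `huggingface_hub` and `llama-cpp-python` are installed.

```python
# Illustrative sketch: fetch the Q4_K_M quant and run one completion with llama-cpp-python.
# The repo_id is an assumption -- replace it with the repo that actually hosts these files.
from huggingface_hub import hf_hub_download
from llama_cpp import Llama

model_path = hf_hub_download(
    repo_id="RichardErkhov/BeaverAI_-_mistral-dory-12b-gguf",  # assumed repo id
    filename="mistral-dory-12b.Q4_K_M.gguf",                   # any file from the table works
)

llm = Llama(model_path=model_path, n_ctx=4096)
out = llm("### Instruction:\nName three deep-sea fish.\n\n### Response:\n", max_tokens=128)
print(out["choices"][0]["text"])
```

Lower-bit quants trade quality for memory; Q4_K_M is a common middle ground.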
Original model description:

- base_model: mistralai/Mistral-Nemo-Base-2407
- license: apache-2.0
- datasets: BeaverAI/Nemo-Inst-Tune-ds
- language: en
- library_name: transformers
Dory 12b
A redone instruct finetune of Mistral Nemo 12B. Not (E)RP-focused; leave that to drummer.
Thanks to twisted for the compute :3
Prompting
alpaca-like:
### System:
[Optional system prompt]
### Instruction:
[Query]
### Response:
[Response]<EOT>
### Instruction:
[...]
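As a convenience, here is a small helper that assembles a single-turn prompt in the format above. The function name and the blank-line separation between sections are my own illustration, not part of the original card:

```python
def build_prompt(instruction: str, system: str | None = None) -> str:
    """Assemble a single-turn prompt in the Alpaca-like format shown above."""
    parts = []
    if system:
        parts.append(f"### System:\n{system}")
    parts.append(f"### Instruction:\n{instruction}")
    parts.append("### Response:\n")  # the model continues from here
    return "\n\n".join(parts)  # blank-line separation is an assumption


print(build_prompt("Summarize the story of Finding Nemo in two sentences."))
```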
Training details
Rank 64 QDoRA, trained on the following data mix:
- All of kalomaze/Opus_Instruct_3k
- All conversations with a reward model rating above 5 in Magpie-Align/Magpie-Gemma2-Pro-Preview-Filtered
- 50k of Gryphe/Sonnet3.5-SlimOrcaDedupCleaned
- All stories above 4.7 rating and published before 2020 in Fizzarolli/FallingThroughTheSkies-592k-Filtered-Filtered
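For readers who want a concrete picture of what a rank-64 QDoRA run looks like, the sketch below shows one plausible setup: a 4-bit quantized base model with DoRA adapters via PEFT. Everything except the rank and the base model name (hyperparameters, target modules, package choices) is a placeholder assumption, not the author's actual training configuration.

```python
# Rough sketch of a rank-64 QDoRA setup: 4-bit quantized base + DoRA adapters via PEFT.
# Everything except r=64 and the base model name is a placeholder assumption.
import torch
from transformers import AutoModelForCausalLM, BitsAndBytesConfig
from peft import LoraConfig, get_peft_model

bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.bfloat16,
)

base = AutoModelForCausalLM.from_pretrained(
    "mistralai/Mistral-Nemo-Base-2407",
    quantization_config=bnb_config,
    device_map="auto",
)

peft_config = LoraConfig(
    r=64,                      # rank 64, as stated above
    lora_alpha=64,             # placeholder
    use_dora=True,             # DoRA on a quantized base ~ QDoRA
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],  # placeholder selection
    task_type="CAUSAL_LM",
)

model = get_peft_model(base, peft_config)
model.print_trainable_parameters()
```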
Additional thanks to @nicoboss for giving me access to his private supercomputer, enabling me to provide many more quants, at much higher speed, than I would otherwise be able to.