Mistral 12B merges
This is a merge of pre-trained language models.
The goal of this merge was to enhance the roleplay capabilities and overall performance of Nomad_12b.
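The card does not include the merge recipe. As a rough illustration only, merges like this are commonly produced with mergekit; the config below is a hypothetical sketch (the merge method, layer ranges, and second source model are assumptions, not the actual recipe used here):

```yaml
# Hypothetical mergekit config sketch - NOT the actual recipe for this model.
# Assumes a SLERP merge between Nomad_12b and an unnamed second 12B model.
merge_method: slerp
base_model: Nomad_12b
models:
  - model: Nomad_12b
  - model: some-other-mistral-12b   # placeholder name
parameters:
  t: 0.5          # interpolation factor between the two models
dtype: bfloat16
```

In practice the chosen method (slerp, ties, dare_ties, model_stock, etc.) and per-layer weights strongly affect how much of each parent model's behavior survives the merge.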
I tested this model for over a week, mostly in roleplay but also for work and as a creative assistant, and the results were good. The model is very attentive to the character card and to context in general; it is smart, creative enough, and follows instructions just fine. It is also surprisingly stable: at 16k context it does not lose much quality, and even on completely wrong sampler settings it will not break.
Of course, it is uncensored (like all Mistral models) and fine for ERP.
Russian performance is good enough to use it as an assistant or for chat; for roleplay, English is better.
Tested with the Mistral v7 template at temperature 1.04.