I'm honestly too tired to write out a full model page. I retrained the model on a cleaner dataset and then ran KTO using a previous preference dataset; performance could probably be improved by generating the KTO set from this same base model.
For the next major version I plan to go Pretrain -> SFT -> KTO, which should improve things further.
Chat Format: ChatML with System Prompt
<|im_start|>system
Your system prompt.
<|im_end|>
<|im_start|>assistant
Some response from the model or some character intro.
<|im_end|>
<|im_start|>user
Your message here.
<|im_end|>
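If you want to assemble the template by hand rather than through a frontend, here is a minimal Python sketch; the helper name and the example strings are just illustrative, not part of the model's tooling:

```python
def build_chatml_prompt(system_prompt, turns):
    """Assemble a ChatML prompt from a system prompt and (role, text) turns."""
    parts = [f"<|im_start|>system\n{system_prompt}<|im_end|>"]
    for role, text in turns:
        parts.append(f"<|im_start|>{role}\n{text}<|im_end|>")
    # Leave an open assistant turn for the model to complete.
    parts.append("<|im_start|>assistant\n")
    return "\n".join(parts)

prompt = build_chatml_prompt(
    "Your system prompt.",
    [
        ("assistant", "Some character intro."),  # optional greeting / first message
        ("user", "Your message here."),
    ],
)
```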
Sampler Settings
The examples above were generated with the sampler settings below. Feel free to experiment.
- Temp: 1.0 - 1.25
- minP: 0.05 - 0.1
For roleplay, use ChatML with a system prompt, and prepend the character name to the start of messages (see the sketch below).
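A rough generation example with these settings via transformers; the repo id, character name, and token budget are placeholders, and min_p sampling needs a reasonably recent transformers release:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "your-org/this-model"  # placeholder: substitute this repo's id
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

# ChatML prompt as shown above; for roleplay, prepend the character name
# to the message content, e.g. "Alice: Hello there."
prompt = (
    "<|im_start|>system\nYour system prompt.<|im_end|>\n"
    "<|im_start|>user\nAlice: Your message here.<|im_end|>\n"
    "<|im_start|>assistant\n"
)

inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output = model.generate(
    **inputs,
    do_sample=True,
    temperature=1.1,   # anywhere in the 1.0 - 1.25 range above
    min_p=0.05,        # within the suggested 0.05 - 0.1 range
    max_new_tokens=256,
)
print(tokenizer.decode(output[0][inputs["input_ids"].shape[-1]:], skip_special_tokens=True))
```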