Fine-tuned on my relatively small sample (about 3,000 examples) of roleplay conversations. The RP conversations in this dataset run 3 to 15+ turns. This is my $60 attempt to force Qwen3 30B to be able to handle RP stuff.

3 epochs at a 4e-4 learning rate, because screw you, Qwen3... more info after I can test it. A Q4_K_S GGUF is incoming soon.
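For reference, the run above boils down to a few hyperparameters. This is a hypothetical sketch of that config, not the actual training script: the card only states the epoch count, learning rate, and rough dataset size, so the base model ID, field names, and everything else here are assumptions in a generic fine-tuning-config style.

```python
# Hypothetical config mirroring the numbers stated in the card
# (3 epochs, 4e-4 LR, ~3,000 RP samples). Field names are illustrative;
# the card does not say which trainer or framework was used.
train_config = {
    "base_model": "Qwen3-30B",      # assumed; card only says "qwen3 30b"
    "num_train_epochs": 3,
    "learning_rate": 4e-4,
    "dataset_size": 3000,           # approximate sample count
    "turns_per_conversation": (3, 15),  # conversations run 3-15+ turns
}

# Rough scale of the run: optimizer steps at an assumed batch size of 1.
total_steps = train_config["dataset_size"] * train_config["num_train_epochs"]
print(total_steps)
```

Nothing here should be read as the author's exact setup; it only pins down the numbers the card does state in one place.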

Reasoning: here is a cool SillyTavern (ST) setting that has produced good results.

(screenshot: ST reasoning/prefill settings)

```
<think> Alright, my thinking should be concise. What are the top 5 things I should keep in mind about the current scene?

1. **
```

It's great because bulleted lists are already an efficient use of tokens, AND you can somewhat control the length of the response by changing the number.
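The trick above can be sketched as a small helper that builds the prefill string with the point count as a parameter. This is a minimal illustration, assuming the exact wording shown in the setting; `build_think_prefill` is a hypothetical name, and how you inject the string depends on your frontend (in SillyTavern it would go in the reply-prefix field, while a raw completion API would append it after the chat template).

```python
def build_think_prefill(n_points: int = 5) -> str:
    """Build the <think> prefill from the card's ST setting.

    The model is started mid-reasoning and asked for a numbered list,
    so raising or lowering n_points loosely controls response length.
    """
    return (
        "<think> Alright, my thinking should be concise. "
        f"What are the top {n_points} things I should keep in mind "
        "about the current scene?\n\n1. **"
    )

# Ask for a shorter (3-point) reasoning block:
print(build_think_prefill(3))
```

The trailing `1. **` matters: it commits the model to continuing the bolded bullet list instead of opening with free-form rambling.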

Safetensors · Model size: 30.5B params · Tensor type: BF16

Model tree for SuperbEmphasis/Black-Eclipse-Test-ERP-RP-V2

Finetuned: this model (1) · Quantizations: 3 models