Rei-12B

Another prototype Magnum... (This time with RL!)

Rei Model

✨ Overview

Building on the previous 12B trained with Subsequence Loss, this model is meant to smooth the base's sharp edges and improve coherence, intelligence, and prose.

Fine-tuned on top of Rei-V3-12B-Base using a prototype Magnum V5 datamix, Rei-12B is designed to replicate the prose quality of Claude 3 models, particularly Sonnet and Opus.

📥 Quantized Models

💬 Prompt Format

Rei-12B uses the ChatML format. A typical conversation should be structured as:

```
<|im_start|>user
Hi there!<|im_end|>
<|im_start|>assistant
Nice to meet you!<|im_end|>
<|im_start|>user
Can I ask a question?<|im_end|>
<|im_start|>assistant
```
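If you're using Hugging Face transformers, the tokenizer's chat template should produce this format for you. A minimal sketch (assuming the repo id matches this card):

```python
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("Delta-Vector/Rei-V3-KTO-12B")

messages = [
    {"role": "user", "content": "Hi there!"},
    {"role": "assistant", "content": "Nice to meet you!"},
    {"role": "user", "content": "Can I ask a question?"},
]

# add_generation_prompt=True appends the trailing "<|im_start|>assistant" turn
prompt = tokenizer.apply_chat_template(
    messages, tokenize=False, add_generation_prompt=True
)
print(prompt)
```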
Recommended System Prompt

View Euryale System Prompt

```
Currently, your role is {{char}}, described in detail below. As {{char}}, continue the narrative exchange with {{user}}.

<Guidelines>
• Maintain the character persona but allow it to evolve with the story.
• Be creative and proactive. Drive the story forward, introducing plotlines and events when relevant.
• All types of outputs are encouraged; respond accordingly to the narrative.
• Include dialogues, actions, and thoughts in each response.
• Utilize all five senses to describe scenarios within {{char}}'s dialogue.
• Use emotional symbols such as "!" and "~" in appropriate contexts.
• Incorporate onomatopoeia when suitable.
• Allow time for {{user}} to respond with their own input, respecting their agency.
• Act as secondary characters and NPCs as needed, and remove them when appropriate.
• When prompted for an Out of Character [OOC:] reply, answer neutrally and in plaintext, not as {{char}}.
</Guidelines>

<Forbidden>
• Using excessive literary embellishments and purple prose unless dictated by {{char}}'s persona.
• Writing for, speaking, thinking, acting, or replying as {{user}} in your response.
• Repetitive and monotonous outputs.
• Positivity bias in your replies.
• Being overly extreme or NSFW when the narrative context is inappropriate.
</Forbidden>

Follow the instructions in <Guidelines></Guidelines>, avoiding the items listed in <Forbidden></Forbidden>.
```
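For reference, a quick sketch of how a system prompt like this slots into the ChatML conversation. The persona names and the truncated prompt string here are hypothetical placeholders, not values from the card:

```python
# The full Euryale prompt above goes here verbatim; truncated for brevity.
SYSTEM_PROMPT = "Currently, your role is {{char}}, described in detail below. ..."

messages = [
    # {{char}}/{{user}} are template slots; "Rei" and "Anon" are made-up examples.
    {
        "role": "system",
        "content": SYSTEM_PROMPT.replace("{{char}}", "Rei").replace("{{user}}", "Anon"),
    },
    {"role": "user", "content": "Hi there!"},
]
```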

⚙️ Training

Hparams

  • For hyperparameters, we used a grad clip of 1e-4, as it proved to be the best value for Mistral-12B-based models and also prevented Rewards/Chosen from flat-lining, since the Hermes-genned data is... the biggest piece of dogshit. (A sketch of where this value plugs in follows below.)
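For illustration only, here is a minimal TRL-style KTO training sketch showing where a max_grad_norm of 1e-4 would be set. The actual run used Axolotl (config linked below); the dataset id, batch size, and every other value here are illustrative assumptions, not the real hyperparameters:

```python
from datasets import load_dataset
from transformers import AutoModelForCausalLM, AutoTokenizer
from trl import KTOConfig, KTOTrainer

# The real datamix is private; trl-lib/kto-mix-14k is a public KTO-format stand-in.
model_name = "Delta-Vector/Rei-V3-12B-Base"
model = AutoModelForCausalLM.from_pretrained(model_name)
tokenizer = AutoTokenizer.from_pretrained(model_name)
dataset = load_dataset("trl-lib/kto-mix-14k", split="train")

args = KTOConfig(
    output_dir="rei-kto-12b",
    num_train_epochs=1,             # matches the single epoch reported below
    max_grad_norm=1e-4,             # the aggressive grad clip discussed above
    per_device_train_batch_size=1,  # illustrative, not the real value
)

trainer = KTOTrainer(
    model=model,
    args=args,
    train_dataset=dataset,
    processing_class=tokenizer,
)
trainer.train()
```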

Configuration

View Axolotl Config

https://wandb.ai/new-eden/KTO/artifacts/axolotl-config/config-eyt7d5i9/v0/files/axolotl_config_jvjuci1x.yml

The model was trained for 1 epoch on 8x NVIDIA H100 GPUs generously provided by @Kalomaze.

⚠️ Credits

I'd like to thank Ruka/Sama twinkman | LucyKnada | Kubernetes Bad | PocketDoc | Tav | Trappu | Alicat | and the rest of Anthracite/Pygmalion for testing, feedback, and support.
