TinyRP
A >tiny< roleplaying model, trained from scratch with a custom tokenizer! Inspired by the success of models like Microsoft's Phi and TinyStories, this is an experiment to see whether reasonable success can be achieved for roleplay with a similar approach.
Roleplay was chosen because keeping a story consistent across multiple turns is harder than generating it once from a simple prompt. Note that the training set was not sanitized, so NSFW output is definitely possible.
Out of scope
Anything other than roleplay in English.
Formatting
Use the ChatML format to "chat" with this model. The entire training dataset was converted to this format, so it is the only format the model understands. Use the "system" prompt to describe the name and character of the AI; the user and assistant tags are used as normal. A rough layout is shown below.
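As an illustration, a conversation is laid out roughly like this (the character and messages are placeholders, and the <|im_start|>/<|im_end|> tokens assume the tokenizer follows standard ChatML):

<|im_start|>system
You are Aria, a witty space pirate. Stay in character and keep replies short.<|im_end|>
<|im_start|>user
We just docked at the station. What's the plan?<|im_end|>
<|im_start|>assistant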
Recommended settings
TBD
Model Details
- Architecture: Mistral (modified for smaller size; a rough config sketch follows this list)
- Parameters: ~90M
- Vocabulary Size: 8192
- Context Length: 2048
- Sliding Window: 4096
- Training Steps: 6000 (6 epochs)
- Hidden Size: 768
- Layers: 12
- Attention Heads: 16
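For reference, the numbers above map onto a Hugging Face MistralConfig roughly as follows. This is only a sketch: intermediate_size and num_key_value_heads are not listed in this card and are assumptions, so the parameter count of this sketch will not exactly match the released ~90M model.

from transformers import MistralConfig, MistralForCausalLM

# Sketch of a small Mistral config using the values above.
# intermediate_size and num_key_value_heads are assumptions, not from this card.
config = MistralConfig(
    vocab_size=8192,
    hidden_size=768,
    num_hidden_layers=12,
    num_attention_heads=16,
    num_key_value_heads=16,        # assumed: standard multi-head attention
    intermediate_size=2048,        # assumed
    max_position_embeddings=2048,
    sliding_window=4096,
)
model = MistralForCausalLM(config)
print(f"~{model.num_parameters() / 1e6:.0f}M parameters in this sketch")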
Usage
from transformers import AutoTokenizer, AutoModelForCausalLM

# Load the custom tokenizer and the model weights
tokenizer = AutoTokenizer.from_pretrained("TinyRP")
model = AutoModelForCausalLM.from_pretrained("TinyRP")

# Generate text (do_sample=True is needed for temperature to take effect)
inputs = tokenizer("Hello there!", return_tensors="pt")
outputs = model.generate(**inputs, max_length=100, do_sample=True, temperature=0.9)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
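Since the model was trained only on ChatML-formatted data, a plain prompt like the one above will likely give weaker output than a properly formatted conversation. Below is a sketch of roleplay generation; the character and messages are placeholders, and stopping on <|im_end|> assumes that token exists in the custom tokenizer.

# Build a ChatML prompt as described in the Formatting section (placeholder character)
prompt = (
    "<|im_start|>system\nYou are Aria, a witty space pirate.<|im_end|>\n"
    "<|im_start|>user\nWe just docked at the station. What's the plan?<|im_end|>\n"
    "<|im_start|>assistant\n"
)
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(
    **inputs,
    max_new_tokens=200,
    do_sample=True,
    temperature=0.9,
    eos_token_id=tokenizer.convert_tokens_to_ids("<|im_end|>"),  # assumes <|im_end|> is in the vocab
)
# Decode only the newly generated assistant reply
print(tokenizer.decode(outputs[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True))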
Training Details
This model was trained from scratch on a custom dataset of roleplay scenarios. Training was stopped after approximately 6 epochs.
Training code
Please see this GitHub repo for the training code.
Limitations
- Optimized for roleplay and creative writing scenarios
- May not perform well on factual or analytical tasks
- Further fine-tuning may improve performance