# Dippy DialoGPT Optimized
A fine-tuned version of microsoft/DialoGPT-medium for conversational AI with a Dippy personality.
## Model Details
- Base model: microsoft/DialoGPT-medium
- Fine-tuned for: Conversational AI, roleplay, helpful assistant interactions
- Optimized for: Bittensor SN11 Dippy subnet
## Usage
```python
from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("Gatescrispy/dippy-dialogpt-optimized")
model = AutoModelForCausalLM.from_pretrained("Gatescrispy/dippy-dialogpt-optimized")

# Encode the user turn; DialoGPT expects an EOS token after each turn
inputs = tokenizer.encode("Hello! How are you today?" + tokenizer.eos_token, return_tensors="pt")

# Generate a response
outputs = model.generate(inputs, max_length=50, pad_token_id=tokenizer.eos_token_id)
response = tokenizer.decode(outputs[0], skip_special_tokens=True)
print(response)
```
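DialoGPT-style models separate conversation turns with the EOS token, so multi-turn chat is handled by concatenating the history into a single prompt before generating. A minimal sketch of that formatting (the `build_prompt` helper is illustrative, not part of this repository):

```python
# DialoGPT uses the GPT-2 EOS token ("<|endoftext|>") as the turn separator;
# the model generates the next turn after the final separator.
EOS = "<|endoftext|>"

def build_prompt(history):
    """Join past turns into a single generation prompt, EOS-terminated."""
    return "".join(turn + EOS for turn in history)

prompt = build_prompt(["Hello! How are you today?", "I'm great, thanks!"])
# Feed tokenizer.encode(prompt, return_tensors="pt") to model.generate(...)
print(prompt)
```

In practice you would append each generated response back onto the history (and usually truncate old turns to stay within the model's context window) before the next generation call.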
## Training
- Dataset: Custom Dippy personality conversations
- Training: 1 epoch with learning rate scheduling
- Hardware: NVIDIA RTX 3090
## Bittensor Integration
This model is designed for Bittensor SN11 Dippy subnet integration.