# 🎨 Stable Diffusion Text2Text

## Architecture

- Base: myT5 (1.2B parameters)
- Trained on 400k samples
## Run the Model

```python
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer

model = AutoModelForSeq2SeqLM.from_pretrained("Nekochu/myt5-large-SD-prompts", device_map="auto")
tokenizer = AutoTokenizer.from_pretrained("Nekochu/myt5-large-SD-prompts")

prompt = "### Instruction:\nCreate stable diffusion metadata based on the given english description. a futuristic city\n\n### Response:\n"
inputs = tokenizer(prompt, return_tensors="pt", max_length=256, truncation=True).to(model.device)
outputs = model.generate(**inputs, max_length=256, num_beams=5, early_stopping=True)
result = tokenizer.decode(outputs[0], skip_special_tokens=True)
print(result)
```
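The instruction template above can be factored into a small reusable helper so other descriptions slot in cleanly; `build_prompt` is a hypothetical name for illustration, and the template string is copied from the snippet above.

```python
# Hypothetical helper wrapping the model's instruction template.
TEMPLATE = (
    "### Instruction:\n"
    "Create stable diffusion metadata based on the given english description. "
    "{description}\n\n"
    "### Response:\n"
)

def build_prompt(description: str) -> str:
    """Fill the instruction template with an English scene description."""
    return TEMPLATE.format(description=description)

print(build_prompt("a futuristic city"))
```

The filled prompt can then be passed to `tokenizer(...)` exactly as in the snippet above.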

๐Ÿ™๏ธ
Cyberpunk City
SFW
Nikon Z9 200mm f_8 ISO 160, (giant rifle structure), flawless ornate architecture, cyberpunk, neon lights, busy street, realistic, ray tracing, hasselblad
๐Ÿ‰
Fantasy Dragon
SFW
masterpiece, best quality, cinematic lighting, 1girl, solo,
๐Ÿ˜ˆ
Anime Succubus
NSFW
masterpiece, best quality, highly detailed background, intricate, 1girl, (full-face blush, aroused:1.3), long hair, medium breasts, nipples
## Experimental Model

- Structured output tasks perform best.
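Since the model emits comma-separated tag lists (as in the examples above), downstream code typically splits the generated text into individual tags. A minimal sketch, assuming plain comma-separated output; `parse_tags` is a hypothetical helper name:

```python
def parse_tags(generated: str) -> list[str]:
    """Split a generated Stable Diffusion prompt into individual tags,
    dropping empty entries left by trailing commas."""
    return [tag.strip() for tag in generated.split(",") if tag.strip()]

tags = parse_tags("masterpiece, best quality, cinematic lighting, 1girl, solo,")
print(tags)  # ['masterpiece', 'best quality', 'cinematic lighting', '1girl', 'solo']
```

Note this naive split would also break apart weighted groups such as `(full-face blush, aroused:1.3)`; handling those would need a parenthesis-aware parser.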
## 🔬 Training Evolution & Alternative Attempts

- torchtune, Qwen3-0.6B → decoder-only
- pytorch-lightning, Zamba2-1.2B → hybrid architecture
- google/byt5 (byte-level) → tokenizer failed
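For context on the byte-level attempt: ByT5-style tokenizers operate directly on UTF-8 bytes, with each token id being the byte value plus a small offset reserved for special tokens (ByT5 reserves ids 0–2 for pad/eos/unk, so the offset is 3). A minimal sketch of that scheme in plain Python, independent of the failed run:

```python
OFFSET = 3  # ids 0-2 are reserved for pad/eos/unk in ByT5
EOS_ID = 1

def byte_tokenize(text: str) -> list[int]:
    """Map text to ByT5-style token ids: UTF-8 bytes shifted by OFFSET, plus EOS."""
    return [b + OFFSET for b in text.encode("utf-8")] + [EOS_ID]

print(byte_tokenize("hi"))  # [107, 108, 1]
```

Because every byte is a token, sequences grow roughly 4x longer than with subword vocabularies, which is one common pain point when swapping a byte-level tokenizer into an existing training setup.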

## Model tree for Nekochu/myt5-large-SD-prompts

Base model: Tomlim/myt5-large (this model is a finetune of it).

Dataset used to train Nekochu/myt5-large-SD-prompts