---
license: mit
tags:
- generated_from_trainer
model-index:
- name: ArtPrompter
  results: []
---

# [ArtPrompter](https://pearsonkyle.github.io/Art-Prompter/)

A [gpt2](https://huggingface.co/gpt2)-powered predictive algorithm for writing descriptive text prompts for A.I. image generators (e.g. MidJourney, Stable Diffusion, ArtBot, etc.). The model was trained on a custom dataset containing 666K unique prompts from MidJourney. Simply start a prompt and let the algorithm suggest ways to finish it.

![](https://huggingface.co/pearsonkyle/ArtPrompter/resolve/main/starry_night.gif)

[![Art Prompter Basic](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/drive/1HQOtD2LENTeXEaxHUfIhDKUaPIGd6oTR?usp=sharing)

```python
from transformers import pipeline

# load the fine-tuned GPT-2 model with the stock gpt2 tokenizer
prompter = pipeline('text-generation', model='pearsonkyle/ArtPrompter', tokenizer='gpt2')

texts = prompter('A portal to a galaxy, view with', max_length=30, num_return_sequences=5)

for text in texts:
    print(text['generated_text'] + '\n')
```

## Intended uses & limitations

Build sick prompts, and lots of them. Use it to [make animations](https://colab.research.google.com/drive/1Ooe7c87xGMa9oG5BDrFVzYqJLvnoKcyZ?usp=sharing) or to power a Discord bot that can interact with MidJourney.

[![](https://pearsonkyle.github.io/Art-Prompter/images/discord_bot.png)](https://discord.gg/3S8Taqa2Xy)

## Examples

- *The entire universe is a simulation,a confessional with a smiling guy fawkes mask, symmetrical, inviting,hyper realistic*
- *a pug disguised as a teacher. Setting is a class room*
- *I wish I had an angel For one moment of love I wish I had your angel Your Virgin Mary undone Im in love with my desire Burning angelwings to dust*
- *The heart of a galaxy, surrounded by stars, magnetic fields, big bang, cinestill 800T,black background, hyper detail, 8k, black*

## Training procedure

~30 hours of fine-tuning on an RTX 3070 with 666K unique prompts.

### Training hyperparameters

The following hyperparameters were used during training (a minimal sketch showing how they map onto the `Trainer` API is given at the end of this card):
- learning_rate: 5e-05
- train_batch_size: 16
- eval_batch_size: 4
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 50

### Framework versions

- Transformers 4.26.0
- Pytorch 1.13.1
- Tokenizers 0.13.2
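
### Reproduction sketch

For reference, here is a minimal sketch of how the hyperparameters listed above could be plugged into the Hugging Face `Trainer` API. The 666K-prompt MidJourney dataset is not published, so the two stand-in prompts (borrowed from the Examples section), the tokenization settings, and the `output_dir` name are placeholder assumptions, not the actual training setup.

```python
from datasets import Dataset
from transformers import (AutoModelForCausalLM, AutoTokenizer,
                          DataCollatorForLanguageModeling,
                          Trainer, TrainingArguments)

tokenizer = AutoTokenizer.from_pretrained('gpt2')
tokenizer.pad_token = tokenizer.eos_token  # gpt2 has no pad token by default
model = AutoModelForCausalLM.from_pretrained('gpt2')

# Stand-in data: the real 666K-prompt dataset is not released, so two
# prompts from the Examples section above act as placeholders.
prompts = [
    'a pug disguised as a teacher. Setting is a class room',
    'The heart of a galaxy, surrounded by stars, magnetic fields',
]
dataset = Dataset.from_dict({'text': prompts}).map(
    lambda ex: tokenizer(ex['text'], truncation=True, max_length=128),
    remove_columns=['text'],
)
# causal LM collator: pads batches and copies input_ids into labels
collator = DataCollatorForLanguageModeling(tokenizer, mlm=False)

# hyperparameters copied from the list above
args = TrainingArguments(
    output_dir='ArtPrompter',
    learning_rate=5e-5,
    per_device_train_batch_size=16,
    per_device_eval_batch_size=4,
    seed=42,
    lr_scheduler_type='linear',
    num_train_epochs=50,
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-8,
)

Trainer(model=model, args=args, train_dataset=dataset,
        data_collator=collator).train()
```

With the real prompt dataset swapped in for the stand-in examples, these arguments reproduce the optimizer, schedule, and batch sizes listed under Training hyperparameters.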