Model Card for UnGPT-v1

Model Details

  • Name: UnGPT-v1
  • Foundation Model: Mistral v0.3 (7B parameters)
  • Recommended Context Length: 16k tokens
  • Fine-tuning Methodology: LoRA-based training with Odds Ratio Preference Optimization (ORPO), using a combination of ebooks and synthetic data.

Usage Instructions

Use the Alpaca format for prompts:

### Instruction:
{instruction}

### Input:
{input}

### Response:
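
For programmatic use, the template can be assembled with a small helper. The sketch below is illustrative (the function name is not part of any shipped tooling):

    def build_alpaca_prompt(instruction: str, input_text: str) -> str:
        # Assemble the Alpaca-style prompt UnGPT-v1 was fine-tuned on.
        # The trailing "### Response:" header cues the model to generate.
        return (
            "### Instruction:\n"
            f"{instruction}\n\n"
            "### Input:\n"
            f"{input_text}\n\n"
            "### Response:\n"
        )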

Example prompts

It is not recommended to deviate from the instructions used in the examples below. For the input, at least 10 sentences are recommended, but longer inputs also work, since the model can handle extended context sizes (thanks to the Mistral 7B v0.3 base model).

  1. Completion Prompt:

    ### Instruction:
    Continue writing the story while retaining writing style. Write about 10 sentences.
    
    ### Input:
    It was a dark and stormy night...
    
    ### Response:
    
  2. Fill-in-the-middle Prompt:

    ### Instruction:
    Fill in the missing part of the story ({{FILL_ME}}) with about 10 sentences while retaining the writing style.
    
    ### Input:
    The bus was speeding down the road, cops chasing after it. 
    {{FILL_ME}}
    She woke up to find herself in an unfamiliar room...
    
    ### Response:
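
As an illustration, the completion prompt above can be run locally against one of the GGUF quantizations with llama-cpp-python. This is a minimal sketch: the file name and sampling parameters are assumptions, not shipped defaults:

    from llama_cpp import Llama

    # Load a local GGUF quantization (the file name is a placeholder).
    llm = Llama(model_path="ungpt-v1.Q4_K_M.gguf", n_ctx=16384)

    prompt = (
        "### Instruction:\n"
        "Continue writing the story while retaining writing style. "
        "Write about 10 sentences.\n\n"
        "### Input:\n"
        "It was a dark and stormy night...\n\n"
        "### Response:\n"
    )

    # Stop before the model starts inventing a new instruction block.
    out = llm(prompt, max_tokens=512, temperature=0.8, stop=["### Instruction:"])
    print(out["choices"][0]["text"])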
    

Dataset Preparation

For dataset acquisition and cleanup, please refer to steps 1 and 2 of my text-completion example, molbal/llm-text-completion-finetune.

Chunking: Split texts into chunks based on sentence boundaries, aiming for 100 sentences per example.

  • For completion examples, 90 sentences were used as input, 10 sentences as response.
  • For fill-in-the-middle examples, 80 + 10 sentences as input (before and after the {{FILL_ME}} placeholder, respectively), and 10 sentences as response.
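
A minimal sketch of this chunking step, using a naive regex sentence splitter (the actual pipeline in molbal/llm-text-completion-finetune may differ):

    import re

    def split_sentences(text: str) -> list[str]:
        # Naive splitter on sentence-ending punctuation; a real pipeline
        # would likely use nltk or spacy for better boundaries.
        return [s.strip() for s in re.split(r"(?<=[.!?])\s+", text) if s.strip()]

    def make_examples(text: str, chunk_size: int = 100):
        sents = split_sentences(text)
        for i in range(0, len(sents) - chunk_size + 1, chunk_size):
            block = sents[i:i + chunk_size]
            # Completion: first 90 sentences as input, last 10 as response.
            yield {"task": "completion",
                   "input": " ".join(block[:90]),
                   "response": " ".join(block[90:])}
            # Fill-in-the-middle: sentences 1-80 and 91-100 surround the
            # placeholder; sentences 81-90 are the expected response.
            yield {"task": "fim",
                   "input": " ".join(block[:80]) + "\n{{FILL_ME}}\n" + " ".join(block[90:]),
                   "response": " ".join(block[80:90])}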

The beauty of the ORPO method is that for a single prompt we can set both a positive and a negative example. I wanted the model to avoid 'GPTisms', so I had gpt-4o-mini generate answers for both the completion and fill-in-the-middle tasks and added them as negative examples.

The dataset contains ~15k examples, each approximately 9,000 characters long including the input, the accepted response, and the refused response. (Note: these are characters, not tokens.)
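
Each row therefore pairs one prompt with an accepted (human-written) and a rejected (synthetic) response, roughly like this; the field names follow TRL's ORPO convention and are an assumption about the exact schema:

    example = {
        "prompt": "### Instruction:\nContinue writing the story while retaining "
                  "writing style. Write about 10 sentences.\n\n"
                  "### Input:\nIt was a dark and stormy night...\n\n### Response:\n",
        "chosen": "The original continuation taken from the ebook...",  # accepted
        "rejected": "A gpt-4o-mini continuation, full of GPTisms...",   # refused
    }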

Training setup

  • Fine-tuned the Mistral v0.3 foundation model using Unsloth and the ORPO trainer; a configuration sketch follows at the end of this section.

  • Training configuration:

    • Batch size: 1
    • Gradient accumulation steps: 4
    • Learning rate scheduler type: Linear
    • Optimizer: AdamW (8-bit)
    • Number of training epochs: 1
  • Hardware

  • Training costs

    • ~5€ for renting a GPU pod (plus ~15€ spent on unsuccessful attempts)
    • ~5€ in OpenAI API costs for generating refusals
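
Under these settings, the run looks roughly like the following Unsloth + TRL sketch. The base-model repo id, LoRA rank, target modules, and file names are assumptions; only the hyperparameters listed above come from the actual run:

    from datasets import load_dataset
    from trl import ORPOConfig, ORPOTrainer
    from unsloth import FastLanguageModel

    # Load the base model in 4-bit with the recommended 16k context window.
    model, tokenizer = FastLanguageModel.from_pretrained(
        model_name="unsloth/mistral-7b-v0.3",  # repo id assumed
        max_seq_length=16384,
        load_in_4bit=True,
    )

    # Attach LoRA adapters (rank and target modules are assumptions).
    model = FastLanguageModel.get_peft_model(
        model,
        r=16,
        lora_alpha=16,
        target_modules=["q_proj", "k_proj", "v_proj", "o_proj",
                        "gate_proj", "up_proj", "down_proj"],
    )

    # The ~15k prompt/chosen/rejected rows described above (placeholder file).
    dataset = load_dataset("json", data_files="orpo_pairs.jsonl", split="train")

    trainer = ORPOTrainer(
        model=model,
        tokenizer=tokenizer,  # newer TRL versions use processing_class= instead
        train_dataset=dataset,
        args=ORPOConfig(
            per_device_train_batch_size=1,
            gradient_accumulation_steps=4,
            lr_scheduler_type="linear",
            optim="adamw_8bit",
            num_train_epochs=1,
            output_dir="outputs",
        ),
    )
    trainer.train()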

Licensing and Citation

  • License: This model is licensed under the Apache License 2.0.
  • Citation:
@misc{ungpt-v1,
  author = {Bálint Molnár-Kaló},
  title = {UnGPT-v1: A Fine-tuned Mistral Model for Story Continuation},
  howpublished = {\url{https://huggingface.co/models/molbal/UnGPT-v1}},
  year = {2024}
}