bart-base-open-instructiongen-v1
Instead of generating questions from text, generate instructions for LLMs! A minimal usage sketch follows the links below.
- Check out a basic demo on Spaces
- An example of how to use instructiongen models in a CLI script can be found here
- You can find other models fine-tuned for instruction generation by searching for the instructiongen tag
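As a quick illustration, the model can be loaded with the transformers text2text-generation pipeline. This is a minimal sketch, assuming the checkpoint is hosted under the repo ID pszemraj/bart-base-open-instructiongen-v1 (inferred from this card's title); the input text and generation settings are illustrative.

```python
from transformers import pipeline

# Assumed repo ID, inferred from the model card title; adjust if it differs.
generator = pipeline(
    "text2text-generation",
    model="pszemraj/bart-base-open-instructiongen-v1",
)

text = (
    "Paris is the capital of France. It is famous for the Eiffel Tower, "
    "the Louvre, and its cafe culture."
)

# The model reads arbitrary text (a hypothetical LLM output) and produces
# an instruction that could have prompted it.
result = generator(text, max_length=96, num_beams=4)
print(result[0]["generated_text"])
```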
Model description
This model is a fine-tuned version of facebook/bart-base on the hakurei/open-instruct-v1 dataset.
- This model only generates the `instruction` for arbitrary text (it does not provide `inputs` as well; look for models with `w-inputs` in the name).
- There was no validation split at the time of training, so no statistics here.
- Comparing the performance of this model with pszemraj/bart-base-instructiongen might give some indication of whether and how much dataset scaling is needed to produce "robust" instruction generators.
- If you notice any trends, feel free to reach out! I would be happy to hear about it.
Training and evaluation data
See hakurei/open-instruct-v1. This model was trained on the dataset "backwards", i.e. the model was given the `output` column as input and trained to predict the `instruction` column.
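A minimal sketch of what this "backwards" preprocessing might look like, assuming the dataset exposes `output` and `instruction` columns as described above (tokenization lengths and other details are illustrative, not the exact training script):

```python
from datasets import load_dataset
from transformers import AutoTokenizer

dataset = load_dataset("hakurei/open-instruct-v1", split="train")
tokenizer = AutoTokenizer.from_pretrained("facebook/bart-base")

def swap_and_tokenize(example):
    # "Backwards" training: the response text (`output`) becomes the model input,
    # and the original `instruction` becomes the target to predict.
    model_inputs = tokenizer(example["output"], truncation=True, max_length=1024)
    labels = tokenizer(text_target=example["instruction"], truncation=True, max_length=128)
    model_inputs["labels"] = labels["input_ids"]
    return model_inputs

tokenized = dataset.map(swap_and_tokenize, remove_columns=dataset.column_names)
```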
Training procedure
Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 8e-05
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- distributed_type: multi-GPU
- gradient_accumulation_steps: 2
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.03
- num_epochs: 2.0
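For reference, these settings roughly map onto a Seq2SeqTrainingArguments configuration like the sketch below (a reconstruction for orientation only; the actual training script and its output directory are not part of this card):

```python
from transformers import Seq2SeqTrainingArguments

training_args = Seq2SeqTrainingArguments(
    output_dir="bart-base-open-instructiongen-v1",  # placeholder path
    learning_rate=8e-5,
    per_device_train_batch_size=16,
    per_device_eval_batch_size=8,
    gradient_accumulation_steps=2,  # effective train batch size of 32
    num_train_epochs=2.0,
    lr_scheduler_type="cosine",
    warmup_ratio=0.03,
    seed=42,
    # The Adam betas (0.9, 0.999) and epsilon (1e-8) listed above are the library defaults.
)
```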
Training results
Framework versions
- Transformers 4.28.0.dev0
- Pytorch 2.0.0+cu118
- Datasets 2.9.0
- Tokenizers 0.12.1