---
library_name: peft
license: apache-2.0
tags:
- facebook/opt-125m
- code
- instruct
- alpaca-instruct
- alpaca
datasets:
- tatsu-lab/alpaca
base_model: facebook/opt-125m
---
We finetuned facebook/opt-125m on the tatsu-lab/alpaca dataset for 10 epochs using [MonsterAPI](https://monsterapi.ai)'s no-code [LLM finetuner](https://docs.monsterapi.ai/fine-tune-a-large-language-model-llm).
The dataset is an unfiltered version of tatsu-lab/alpaca, with 36 instances of blatant alignment removed.
The finetuning run completed in 40 minutes and cost only `$4` in total!
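
Since the adapter was trained with `peft`, it can be loaded on top of the base model for inference. The sketch below is illustrative; the adapter repo id is a placeholder, so substitute this model's actual Hub id:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base = AutoModelForCausalLM.from_pretrained("facebook/opt-125m")
tokenizer = AutoTokenizer.from_pretrained("facebook/opt-125m")

# Placeholder: replace with this adapter's repo id on the Hugging Face Hub.
model = PeftModel.from_pretrained(base, "your-username/opt-125m-alpaca-adapter")

# Alpaca-style prompt format used by tatsu-lab/alpaca.
prompt = "### Instruction:\nExplain what a binary search does.\n\n### Response:\n"
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```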
#### Hyperparameters & Run details: | |
- Model: facebook/opt-125m | |
- Dataset: tatsu-lab/alpaca | |
- Learning rate: 0.0003 | |
- Number of epochs: 10 | |
- Data split: Training: 90% / Validation: 10% | |
- Gradient accumulation steps: 1 | |
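
For reference, here is a rough `transformers` + `peft` equivalent of the run. It is a sketch, not MonsterAPI's actual pipeline: only the learning rate, epoch count, data split, and gradient accumulation steps come from the run details above; the LoRA settings and max sequence length are assumptions.

```python
from datasets import load_dataset
from transformers import (AutoModelForCausalLM, AutoTokenizer,
                          Trainer, TrainingArguments,
                          DataCollatorForLanguageModeling)
from peft import LoraConfig, get_peft_model

tokenizer = AutoTokenizer.from_pretrained("facebook/opt-125m")
model = AutoModelForCausalLM.from_pretrained("facebook/opt-125m")

# Assumed LoRA settings; the no-code finetuner does not publish its exact config.
model = get_peft_model(model, LoraConfig(task_type="CAUSAL_LM", r=8, lora_alpha=16))

# The "text" column holds the fully formatted Alpaca prompt + response.
dataset = load_dataset("tatsu-lab/alpaca", split="train")
dataset = dataset.train_test_split(test_size=0.1)  # 90% train / 10% validation

def tokenize(batch):
    return tokenizer(batch["text"], truncation=True, max_length=512)

tokenized = dataset.map(tokenize, batched=True,
                        remove_columns=dataset["train"].column_names)

args = TrainingArguments(
    output_dir="opt-125m-alpaca",
    learning_rate=3e-4,              # matches the listed 0.0003
    num_train_epochs=10,
    gradient_accumulation_steps=1,
)

trainer = Trainer(
    model=model,
    args=args,
    train_dataset=tokenized["train"],
    eval_dataset=tokenized["test"],
    # mlm=False gives standard causal-LM labels (labels = input_ids).
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
)
trainer.train()
```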