---
license: apache-2.0
tags:
  - autotrain
  - text-generation
base_model: Locutusque/TinyMistral-248M
datasets:
  - tatsu-lab/alpaca
widget:
  - text: |-
      Find me a list of some nice places to visit around the world.
      ### Response:
  - text: |-
      Tell me a story.
      Once upon a time...
      ### Response:
inference:
  parameters:
    max_new_tokens: 32
    repetition_penalty: 1.15
    do_sample: true
    temperature: 0.5
    top_p: 0.5
---
# Locutusque's TinyMistral-248M trained on the Alpaca dataset using AutoTrain
- Base model: [Locutusque/TinyMistral-248M](https://huggingface.co/Locutusque/TinyMistral-248M)
- Dataset: [tatsu-lab/alpaca](https://huggingface.co/datasets/tatsu-lab/alpaca)
- Training: 2 hours, using [these parameters](https://huggingface.co/Felladrin/TinyMistral-248M-Alpaca/blob/93533a5f190f79a8ad5e5a9765ce9ec498dfa5bd/training_params.json)
- Availability in other ML formats:
- GGUF: [afrideva/TinyMistral-248M-Alpaca-GGUF](https://huggingface.co/afrideva/TinyMistral-248M-Alpaca-GGUF)
- ONNX: [Felladrin/onnx-int8-TinyMistral-248M-Alpaca](https://huggingface.co/Felladrin/onnx-int8-TinyMistral-248M-Alpaca)
## Recommended Prompt Format
```
<instruction>
### Response:
```
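As a sketch, the format above can be applied with a small helper function. Note that `format_prompt` is a hypothetical name used here for illustration; it is not part of the model or any library.

```python
# Hypothetical helper that applies the prompt format above:
# the instruction on its own line, followed by "### Response:".
def format_prompt(instruction: str) -> str:
    return f"{instruction}\n### Response:"

print(format_prompt("Tell me a story."))
```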
## Recommended Inference Parameters
```yml
repetition_penalty: 1.15
do_sample: true
temperature: 0.5
top_p: 0.5
```
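Putting the prompt format and the recommended parameters together, a minimal sketch of running this model with the Hugging Face `transformers` library might look like the following (the variable names are illustrative; downloading the model requires network access):

```python
# Minimal sketch: text generation with the recommended parameters above.
from transformers import pipeline

# Recommended sampling settings from this model card.
generation_kwargs = {
    "max_new_tokens": 32,
    "repetition_penalty": 1.15,
    "do_sample": True,
    "temperature": 0.5,
    "top_p": 0.5,
}

generate = pipeline("text-generation", model="Felladrin/TinyMistral-248M-Alpaca")

prompt = "Find me a list of some nice places to visit around the world.\n### Response:"
result = generate(prompt, **generation_kwargs)
print(result[0]["generated_text"])
```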