---
language:
- kv
- vro
- liv
base_model:
- tartuNLP/Llama-SMUGRI-7B
- meta-llama/Llama-2-7b-hf
---

# Llama-SMUGRI-7B-Instruct-MTI

An instruction-tuned version of the [tartuNLP/Llama-SMUGRI-7B](https://huggingface.co/tartuNLP/Llama-SMUGRI-7B) base model, which was continually pre-trained from [meta-llama/Llama-2-7b-hf](https://huggingface.co/meta-llama/Llama-2-7b-hf) to support Võro, Komi, and Livonian. The model additionally supports English, Estonian, Finnish, and Russian, although those languages were not the focus.

The instruction-tuning dataset consists of supporting instructions in Estonian, Finnish, English, and Russian, together with Alpaca-style instructions translated into Võro, Livonian, and Komi with [Neurotõlge](https://neurotolge.ee/). See our [paper](https://arxiv.org/abs/2410.18902) for more details; there the model is referred to as *Llama-SMUGRI-Instruct SupInst+TrAlpaca*.

## Usage

We trained and evaluated our model with `transformers==4.36.2`.
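
If you are on a different `transformers` version and see unexpected behavior, pinning the version we used (`pip install transformers==4.36.2`) is a reasonable first step. The example below also assumes `torch` and `accelerate` are installed; the latter is required for `device_map="auto"`.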

Example usage:
```python
from transformers import pipeline
import torch

pipe = pipeline(
    "text-generation",
    model="tartuNLP/Llama-SMUGRI-7B-Instruct-MTI",
    torch_dtype=torch.bfloat16,
    device_map="auto",
)

# A Võro prompt: "Three tips for staying healthy."
messages = [
    {"role": "user", "content": "Kolm nõvvo, et terveq püssüq."},
]

# Render the conversation with the model's chat template.
prompt = pipe.tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
outputs = pipe(prompt, max_new_tokens=256, do_sample=True, temperature=0.6, top_k=50, top_p=0.9)

# Print only the newly generated text, stripping the prompt prefix.
print(outputs[0]["generated_text"][len(prompt):])
```
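
The same generation can also be run without the `pipeline` helper, using `AutoModelForCausalLM` and `AutoTokenizer` directly. The following is a minimal sketch with the same sampling settings as above; the dtype and `device_map` choices are simply carried over from the pipeline example:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer
import torch

model_id = "tartuNLP/Llama-SMUGRI-7B-Instruct-MTI"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.bfloat16, device_map="auto"
)

messages = [{"role": "user", "content": "Kolm nõvvo, et terveq püssüq."}]

# Tokenize the chat-formatted prompt and move it to the model's device.
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

outputs = model.generate(
    input_ids,
    max_new_tokens=256,
    do_sample=True,
    temperature=0.6,
    top_k=50,
    top_p=0.9,
)

# Decode only the tokens generated after the prompt.
print(tokenizer.decode(outputs[0][input_ids.shape[-1]:], skip_special_tokens=True))
```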

## Citation

```bibtex
@misc{purason2024llmsextremelylowresourcefinnougric,
  title={LLMs for Extremely Low-Resource Finno-Ugric Languages},
  author={Taido Purason and Hele-Andra Kuulmets and Mark Fishel},
  year={2024},
  eprint={2410.18902},
  archivePrefix={arXiv},
  primaryClass={cs.CL},
  url={https://arxiv.org/abs/2410.18902},
}
```