---
base_model: google/gemma-2-9b-it
library_name: peft
license: cc-by-nc-4.0
language:
- uk
pipeline_tag: text-generation
---
Model Card for NLPForUA/gemma-2-it-zno-al
This model is licensed under CC BY-NC 4.0 (non-commercial use only) and should not be used outside of research purposes.
Presented in Empowering Smaller Models: Tuning LLaMA and Gemma with Chain-of-Thought for Ukrainian Exam Tasks (arXiv:2503.13988)
PEFT 4-bit tuning of google/gemma-2-9b-it on the Ukrainian language and literature tasks of the ZNO (EIE) & NMT dataset. The model is trained to generate the letter(s) of the correct answer, using the following prompt format:
```
<bos><start_of_turn>user
Дайте відповідь на завдання, починаючи з ключового слова "Відповідь:" та використовуючи лише наведені нижче варіанти. У якості відповіді наведіть лише літеру, що відповідає правильному варіанту. Якщо правильних відповідей декілька, то перерахуйте їх через ";".
Завдання: З’ясуйте, якими частинами мови є виділені слова в реченні (цифра позначає наступне слово).
Сучасна людина, щоб бути (1)успішною, має вчитися (2)впродовж (3)усього життя, (4)опановуючи нові галузі знань.
Варіанти відповіді:
А – займенник
Б – прикметник
В – форма дієслова (дієприкметник)
Г – форма дієслова (дієприслівник)
Д – прийменник<end_of_turn>
<start_of_turn>model
Відповідь: В;Д;А;Б<end_of_turn>
```
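The prompt above follows Gemma's chat format, so it can also be built programmatically. A minimal sketch using the `transformers` chat-template API; `user_prompt` is a placeholder for the full exam task text from the example:

```python
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("google/gemma-2-9b-it")

# Placeholder: the full task text from the example above goes here.
user_prompt = "Дайте відповідь на завдання, ..."

# Gemma's chat template wraps the message in
# "<bos><start_of_turn>user\n...<end_of_turn>\n<start_of_turn>model\n".
prompt = tokenizer.apply_chat_template(
    [{"role": "user", "content": user_prompt}],
    tokenize=False,
    add_generation_prompt=True,
)
```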
Inference code
```python
import torch
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig

quantization_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_compute_dtype=torch.float16,  # computation in fp16
    bnb_4bit_use_double_quant=True,        # double quantization for better accuracy
    bnb_4bit_quant_type="nf4",             # "nf4" (normal float 4) or other supported types
)

base_model = "google/gemma-2-9b-it"
tokenizer = AutoTokenizer.from_pretrained(base_model, model_max_length=3072)
model_base = AutoModelForCausalLM.from_pretrained(
    base_model,
    quantization_config=quantization_config,
    device_map="auto",
    torch_dtype=torch.float16,
)
model = PeftModel.from_pretrained(model_base, "NLPForUA/gemma-2-it-zno-al")

# `prompt` is the chat-format string from the example above; it already
# contains <bos>, so special tokens are not added again here.
inputs = tokenizer(prompt, add_special_tokens=False, return_tensors="pt").input_ids.to(model.device)

print(tokenizer.decode(
    model.generate(
        input_ids=inputs,
        max_new_tokens=1024,
        use_cache=True,
        do_sample=False,            # greedy decoding, so no temperature is needed
        repetition_penalty=1.0,
        pad_token_id=tokenizer.eos_token_id,
        eos_token_id=[101, 107],    # stop ids used by the authors (107 = <end_of_turn>)
    )[0]
))
```
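The model answers with a line of the form `Відповідь: В;Д;А;Б`. A small helper sketch (not part of the original card) for extracting the answer letters from the decoded output:

```python
import re

def parse_answer(generated: str) -> list[str]:
    # Find "Відповідь:" followed by one or more answer letters separated by ";".
    # In practice, apply this to the generated continuation, not the prompt.
    match = re.search(r"Відповідь:\s*([А-ЯҐЄІЇ](?:\s*;\s*[А-ЯҐЄІЇ])*)", generated)
    return [x.strip() for x in match.group(1).split(";")] if match else []

print(parse_answer("Відповідь: В;Д;А;Б<end_of_turn>"))  # ['В', 'Д', 'А', 'Б']
```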
Model Details
Model Description
- Developed by: NLP for UA
- Model type: Gemma 2 (PEFT adapter for causal language modeling)
- Language(s) (NLP): Ukrainian (uk)
- License: cc-by-nc-4.0
- Finetuned from model: google/gemma-2-9b-it
Model Sources
- Repository: github.com/NLPForUA/ZNO
- Paper: Empowering Smaller Models: Tuning LLaMA and Gemma with Chain-of-Thought for Ukrainian Exam Tasks (arXiv:2503.13988)
Uses
Direct Use
The model can be used directly for generating correct answers to Ukrainian language and literature exam tasks. Input should follow the format shown in the example above.
Downstream Use
The model could be fine-tuned further for other Ukrainian language tasks or integrated into educational applications.
Out-of-Scope Use
This model is specifically trained for Ukrainian exam tasks. It may not perform well on other languages or tasks.
Bias, Risks, and Limitations
The model may exhibit biases present in the training data. It is crucial to critically evaluate its outputs and be aware of potential inaccuracies. Further analysis is needed to fully characterize biases and limitations.
Recommendations
Users should be aware of these potential biases and limitations and critically evaluate the model's output before relying on it.
Training Details
Training Data
[More Information Needed - Link to Dataset Card and description]
Training Procedure
[More Information Needed]
Training Hyperparameters
- Training regime: 4-bit (nf4) quantized base model with PEFT adapters and fp16 compute, as sketched below
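For reference, a minimal QLoRA-style sketch consistent with the 4-bit PEFT regime above. The LoRA hyperparameters (`r`, `lora_alpha`, `target_modules`) are illustrative assumptions, not the values used to train this adapter:

```python
import torch
from peft import LoraConfig, get_peft_model, prepare_model_for_kbit_training
from transformers import AutoModelForCausalLM, BitsAndBytesConfig

# Same 4-bit nf4 quantization as in the inference code above.
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_compute_dtype=torch.float16,
    bnb_4bit_use_double_quant=True,
    bnb_4bit_quant_type="nf4",
)
base = AutoModelForCausalLM.from_pretrained(
    "google/gemma-2-9b-it", quantization_config=bnb_config, device_map="auto"
)
base = prepare_model_for_kbit_training(base)

# Illustrative LoRA settings -- the actual hyperparameters are not documented here.
lora_config = LoraConfig(
    r=16,
    lora_alpha=32,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],
    lora_dropout=0.05,
    task_type="CAUSAL_LM",
)
model = get_peft_model(base, lora_config)
model.print_trainable_parameters()
```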
Speeds, Sizes, Times
[More Information Needed]
Evaluation
Testing Data, Factors & Metrics
Testing Data
[More Information Needed - Link to Dataset Card and description]
Factors
[More Information Needed]
Metrics
[More Information Needed]
Results
Summary
[More Information Needed]
Citation
BibTeX:
```
@article{EmpoweringSmallerModels,
  author  = {Mykyta Syromiatnikov and Victoria Ruvinskaya and Nataliia Komleva},
  title   = {Empowering Smaller Models: Tuning LLaMA and Gemma with Chain-of-Thought for Ukrainian Exam Tasks},
  journal = {arXiv preprint arXiv:2503.13988},
  year    = {2025}
}
```
APA:
Syromiatnikov, M., Ruvinskaya, V., & Komleva, N. (2025). Empowering smaller models: Tuning LLaMA and Gemma with chain-of-thought for Ukrainian exam tasks. arXiv preprint arXiv:2503.13988.
Model Card Contact
[More Information Needed]
Framework versions
- PEFT 0.14.0