Model Card for Metin/gemma-2b-tr-inst

gemma-2b-tr fine-tuned on Turkish instruction-response pairs.

Model Details

Model Description

Metin/gemma-2b-tr-inst is gemma-2b-tr fine-tuned on Turkish instruction-response pairs for instruction following and question answering.

Uses

The model is designed for Turkish instruction following and question answering. Its current response quality is limited, likely due to the small instruction-tuning dataset and the model's size. It is not recommended for real-world applications at this stage.

Restrictions

Gemma is provided under and subject to the Gemma Terms of Use found at ai.google.dev/gemma/terms. Please review the Gemma use restrictions at https://ai.google.dev/gemma/terms#3.2-use before using the model.

How to Get Started with the Model

from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("Metin/gemma-2b-tr-inst")
model = AutoModelForCausalLM.from_pretrained("Metin/gemma-2b-tr-inst")

system_prompt = "You are a helpful assistant. Always reply in Turkish."
instruction = "Ankara hangi ülkenin başkentidir?"  # "Which country is Ankara the capital of?"

# Build the prompt in the structure the model was fine-tuned on (see below).
prompt = f"{system_prompt} [INST] {instruction} [/INST]"
inputs = tokenizer(prompt, return_tensors="pt")

# max_new_tokens is not specified in the original card; without it,
# generate() falls back to a short default and may truncate the answer.
outputs = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))

As can be seen from the example above, instructions should be framed within the following structure:

SYSTEM_PROMPT [INST] <Your instruction here> [/INST]
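
A small helper along these lines (hypothetical, not part of the repository) can be used to build prompts in this structure:

# Hypothetical helper, not part of the repository: formats an instruction
# in the SYSTEM_PROMPT [INST] ... [/INST] structure described above.
def build_prompt(system_prompt: str, instruction: str) -> str:
    return f"{system_prompt} [INST] {instruction} [/INST]"

prompt = build_prompt(
    "You are a helpful assistant. Always reply in Turkish.",
    "Ankara hangi ülkenin başkentidir?",
)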

Training Details

Training Hyperparameters

  • Adapter: QLoRA
  • Epochs: 1
  • Context length: 1024
  • LoRA Rank: 32
  • LoRA Alpha: 32
  • LoRA Dropout: 0.05
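
As a rough sketch, these hyperparameters map onto a peft/bitsandbytes QLoRA setup like the one below. The base model id, target modules, and 4-bit quantization settings are assumptions, as the card does not state them.

import torch
from transformers import AutoModelForCausalLM, BitsAndBytesConfig
from peft import LoraConfig, get_peft_model

# QLoRA: load the base model with 4-bit quantized weights.
# The exact 4-bit settings below are assumptions, not taken from this card.
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.bfloat16,
)
base_model = AutoModelForCausalLM.from_pretrained(
    "Metin/gemma-2b-tr",  # assumed base model id, per the summary above
    quantization_config=bnb_config,
)

# LoRA adapter matching the listed hyperparameters.
lora_config = LoraConfig(
    r=32,               # LoRA Rank
    lora_alpha=32,      # LoRA Alpha
    lora_dropout=0.05,  # LoRA Dropout
    task_type="CAUSAL_LM",
)
model = get_peft_model(base_model, lora_config)
model.print_trainable_parameters()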