# Qwen2-7B-Instruct LoRA TR

This model is based on Qwen/Qwen2-7B-Instruct and fine-tuned on Turkish dialogs using LoRA with the PEFT library.
Fine-tuning was carried out in a Google Colab Pro environment on an A100 GPU, with 8-bit quantization via bitsandbytes.
## Model Details
- Base Model: Qwen/Qwen2-7B-Instruct
- Fine-tuned Model: elifbasboga/qwen2-7b-instruct-lora-tr
- Method: LoRA (parameter-efficient fine-tuning) via PEFT
- Language(s): Turkish
- Libraries: transformers, peft, bitsandbytes
- License: Apache 2.0 (inherited from the base model); please choose a license appropriate to your own use.
## Model Description
This model is a Turkish conversational AI based on Qwen2-7B-Instruct, fine-tuned with LoRA adapters on a custom Turkish dialog dataset.
It is suitable for chatbots, assistants, and other Turkish NLP tasks.
## Uses

### Direct Use
- Turkish conversational tasks
- Chatbot and assistant applications
### Downstream Use
- Can be further fine-tuned for domain-specific Turkish tasks.
### Out-of-Scope Use
- Not suitable for tasks outside Turkish language modeling.
- Not recommended for critical applications without further evaluation.
## Bias, Risks, and Limitations
- May reflect biases present in the training data (Turkish conversations).
- Outputs may sometimes be off-topic, incorrect, or inappropriate.
### Recommendations
- Use responsibly and review outputs, especially in sensitive or production settings.
## How to Get Started
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model = AutoModelForCausalLM.from_pretrained("elifbasboga/qwen2-7b-instruct-lora-tr")
tokenizer = AutoTokenizer.from_pretrained("elifbasboga/qwen2-7b-instruct-lora-tr")
```
## Training Procedure
- Epochs: 3
- Batch Size: 1
- Learning Rate: 2e-4
- Quantization: 8-bit (bitsandbytes)
- Adapter: LoRA
## Hardware
- Google Colab Pro, NVIDIA A100 GPU
## Software
- Python 3.11
- transformers, peft, bitsandbytes, datasets
## Evaluation
- The model was evaluated during training by monitoring loss and qualitatively inspecting sample outputs; no standardized benchmark results are reported.
- For best results, further testing on your own data is recommended.
## Environmental Impact
- GPU: NVIDIA A100
- Cloud Provider: Google Colab
## Citation
If you use this model, please cite the base model and this repository.
```bibtex
@misc{elifbasboga_qwen2_7b_instruct_lora_tr_2025,
  title={Qwen2-7B-Instruct LoRA Turkish},
  author={elifbasboga},
  year={2025},
  howpublished={\url{https://huggingface.co/elifbasboga/qwen2-7b-instruct-lora-tr}},
}
```
## Model Card Authors
- elifbasboga