Uploaded model
- Developed by: Haq Nawa Malik
- License: apache-2.0
Key Features
- Unsloth: Leverages Unsloth for faster, more memory-efficient fine-tuning.
- 4-bit Quantization: Loads the base model in 4-bit to cut GPU memory usage and make training feasible on smaller hardware.
- LoRA Adapters: Employs LoRA adapters so that only a small percentage of parameters is updated during training.
- Hugging Face TRL: Uses the `SFTTrainer` from TRL for supervised fine-tuning.
- Alpaca Dataset: Trains on the `yahma/alpaca-cleaned` dataset (see the fine-tuning sketch after this list).
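
The list above describes the standard Unsloth + TRL recipe. Below is a minimal sketch of that recipe, not the exact training script behind this model: it assumes the `unsloth/Qwen2.5-7B` base checkpoint, a simple Alpaca-style prompt template, typical Qwen target modules, and illustrative batch-size/learning-rate values; only the LoRA parameters, sequence length, and 90-step budget come from this card.

```python
# Minimal fine-tuning sketch. Assumptions: base model "unsloth/Qwen2.5-7B",
# an Alpaca-style prompt template, and illustrative optimizer settings.
from unsloth import FastLanguageModel
from datasets import load_dataset
from trl import SFTTrainer
from transformers import TrainingArguments

# Load the base model in 4-bit to reduce memory usage.
model, tokenizer = FastLanguageModel.from_pretrained(
    model_name="unsloth/Qwen2.5-7B",  # assumed base checkpoint
    max_seq_length=2048,
    load_in_4bit=True,
)

# Attach LoRA adapters: only a small fraction of parameters is trained.
model = FastLanguageModel.get_peft_model(
    model,
    r=16,
    lora_alpha=16,
    lora_dropout=0,
    bias="none",
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj",
                    "gate_proj", "up_proj", "down_proj"],  # typical Qwen modules (assumed)
)

# Format yahma/alpaca-cleaned into single text strings for SFT.
alpaca_prompt = "### Instruction:\n{}\n\n### Input:\n{}\n\n### Response:\n{}"

def format_examples(batch):
    texts = [
        alpaca_prompt.format(ins, inp, out) + tokenizer.eos_token
        for ins, inp, out in zip(batch["instruction"], batch["input"], batch["output"])
    ]
    return {"text": texts}

dataset = load_dataset("yahma/alpaca-cleaned", split="train").map(
    format_examples, batched=True
)

# Supervised fine-tuning with TRL's SFTTrainer (90-step demonstration run).
# Argument names can differ across TRL versions; this follows the older API.
trainer = SFTTrainer(
    model=model,
    tokenizer=tokenizer,
    train_dataset=dataset,
    dataset_text_field="text",
    max_seq_length=2048,
    args=TrainingArguments(
        per_device_train_batch_size=2,   # illustrative value
        gradient_accumulation_steps=4,   # illustrative value
        max_steps=90,
        learning_rate=2e-4,              # illustrative value
        output_dir="outputs",
    ),
)
trainer.train()
```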
Model Card Details
- Model Name: Omarrran/Qwen2_5_7B_hnm
- Dataset: yahma/alpaca-cleaned
- Training Method: Supervised fine-tuning with LoRA adapters.
- Quantization: 4-bit quantization.
- Training Steps: 90 (a short demonstration run; increase for full training).
- LoRA Parameters: r=16, lora_alpha=16, lora_dropout=0, bias="none".
- Maximum Sequence Length: 2048 tokens.
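
Because the repository holds LoRA adapters rather than fully merged weights, the following is a minimal loading sketch. It assumes Unsloth can resolve the adapter repo `Omarrran/Qwen2_5_7B_hnm` against its Qwen2.5-7B base model and that prompts follow the Alpaca-style template assumed above.

```python
# Minimal inference sketch (assumes the unsloth package and that the adapter
# repo resolves against its Qwen2.5-7B base model).
from unsloth import FastLanguageModel

model, tokenizer = FastLanguageModel.from_pretrained(
    model_name="Omarrran/Qwen2_5_7B_hnm",  # LoRA adapter repo from this card
    max_seq_length=2048,
    load_in_4bit=True,
)
FastLanguageModel.for_inference(model)  # enable Unsloth's faster inference path

# Alpaca-style prompt matching the assumed training template.
prompt = (
    "### Instruction:\nExplain what LoRA adapters are.\n\n"
    "### Input:\n\n\n"
    "### Response:\n"
)
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```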