# Model Card for Adishah31/mistral_4bit_lora_model
A 4-bit Mistral 7B model fine-tuned with LoRA using Unsloth on a T4 GPU.
## Model Details

### Model Description
- **Finetuned from model:** unsloth/mistral-7b-bnb-4bit
- **Repository:** https://github.com/unslothai/unsloth
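
To try the adapter, a minimal loading sketch with Unsloth is shown below; it follows the standard Unsloth loading pattern rather than code from this card, and `max_seq_length` is an assumption that should match whatever was used at training time.

```python
# Minimal inference sketch (assumptions noted inline).
from unsloth import FastLanguageModel

model, tokenizer = FastLanguageModel.from_pretrained(
    model_name="Adishah31/mistral_4bit_lora_model",  # this adapter
    max_seq_length=2048,   # assumed; set to the training-time value
    load_in_4bit=True,     # base model is the bnb 4-bit Mistral 7B
)
FastLanguageModel.for_inference(model)  # enable Unsloth's faster inference path
```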
## Training Details

### Training Data

[yahma/alpaca-cleaned](https://huggingface.co/datasets/yahma/alpaca-cleaned)
### Training Procedure

#### Preprocessing

The Alpaca prompt template is used:
```python
alpaca_prompt = """Below is an instruction that describes a task, paired with an input that provides further context. Write a response that appropriately completes the request.

### Instruction:
{}

### Input:
{}

### Response:
{}"""
```
#### Training Hyperparameters

```python
import torch
from transformers import TrainingArguments

training_args = TrainingArguments(
    per_device_train_batch_size=2,
    gradient_accumulation_steps=4,
    warmup_steps=5,
    max_steps=60,
    learning_rate=2e-4,
    fp16=not torch.cuda.is_bf16_supported(),  # fp16 on T4, which lacks bf16
    bf16=torch.cuda.is_bf16_supported(),
    logging_steps=1,
    optim="adamw_8bit",
    weight_decay=0.01,
    lr_scheduler_type="linear",
    seed=3407,
    output_dir="outputs",  # assumed; the card does not state an output dir
)
```
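
These arguments plug into the trainer as in the usual Unsloth notebook; the `SFTTrainer` wiring below is a sketch of that pattern, not code from this card.

```python
from trl import SFTTrainer

# Hypothetical wiring: model/tokenizer from the loading sketch,
# dataset from the preprocessing step, training_args from above.
# (In the full recipe, FastLanguageModel.get_peft_model attaches the
# LoRA adapters before training.)
trainer = SFTTrainer(
    model=model,
    tokenizer=tokenizer,
    train_dataset=dataset,
    dataset_text_field="text",  # column created by formatting_prompts_func
    max_seq_length=2048,        # assumed; match the loading value
    args=training_args,
)
trainer.train()
```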
- **Hardware Type:** NVIDIA T4 GPU
- **Cloud Provider:** Google Colab
### Framework versions

- PEFT 0.7.1