See axolotl config
axolotl version: 0.12.2
base_model: sudoping01/bambara-llm-exp3-v2-merged #google/gemma-3n-E2B-it
hub_model_id: sudoping01/bambara-llm-exp3-continous-v2
plugins:
- axolotl.integrations.cut_cross_entropy.CutCrossEntropyPlugin
cut_cross_entropy: true
load_in_4bit: false # Changed: Use LoRA instead of QLoRA for better quality
gradient_checkpointing: true
gradient_checkpointing_kwargs:
  use_reentrant: false
ddp: true
chat_template: gemma3n
eot_tokens:
- <end_of_turn>
special_tokens:
  eot_token: <end_of_turn>
datasets:
- path: sudoping01/bambara-instructions
  type: chat_template  # chat-format records; see the sample record after the config
  split: train
  name: cleaned
  field_messages: messages
  message_property_mappings:
    role: role
    content: content
val_set_size: 0.01
output_dir: ./outputs/bambara-gemma3n-lora-exp3-continous-v2
adapter: lora # Changed: LoRA instead of QLoRA
lora_r: 64 # Increased: Higher rank for better capacity
lora_alpha: 128 # Increased: 2x the rank is a good starting point
lora_dropout: 0.05
lora_target_modules: 'model.language_model.layers.[\d]+.(mlp|self_attn).(up|down|gate|q|k|v|o)_proj'
sequence_len: 4096
sample_packing: false
pad_to_sequence_len: false
micro_batch_size: 8 # Increased: 8x H100s can handle larger batches
gradient_accumulation_steps: 2
num_epochs: 6 # Reduced: start conservatively with 1M samples
optimizer: adamw_8bit
lr_scheduler: cosine
learning_rate: 1.2e-4 # Changed: suggested rate for ~1M samples on a 7B-scale model
warmup_ratio: 0.03
weight_decay: 0.01
bf16: auto
tf32: false
logging_steps: 10
saves_per_epoch: 2 # Increased: More checkpoints for 1M samples
evals_per_epoch: 2
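With type: chat_template (using the gemma3n template) and field_messages: messages, each training record is expected to provide a messages list whose entries already use role/content keys, which message_property_mappings passes through unchanged. Below is a minimal sketch of one such record as a Python dict, with placeholder text; the actual Bambara content lives in sudoping01/bambara-instructions.

```python
# Hypothetical record illustrating the schema the datasets section above expects;
# the real examples come from the `cleaned` config of sudoping01/bambara-instructions.
sample_record = {
    "messages": [
        {"role": "user", "content": "<instruction in Bambara>"},
        {"role": "assistant", "content": "<model response in Bambara>"},
    ]
}
```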
bambara-llm-exp3-continous-v2
This model is a fine-tuned version of sudoping01/bambara-llm-exp3-v2-merged on the sudoping01/bambara-instructions dataset. It achieves the following results on the evaluation set:
- Loss: 0.2763
- Max memory active (GiB): 57.85
- Max memory allocated (GiB): 57.85
- Device memory reserved (GiB): 59.88
Model description
More information needed
Intended uses & limitations
More information needed
Training and evaluation data
More information needed
Training procedure
Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.00012
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- distributed_type: multi-GPU
- num_devices: 8
- gradient_accumulation_steps: 2
- total_train_batch_size: 128
- total_eval_batch_size: 64
- optimizer: adamw_8bit with betas=(0.9, 0.999), epsilon=1e-08, and no additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 1267
- training_steps: 42251
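These derived values follow directly from the configuration: total_train_batch_size = micro_batch_size × gradient_accumulation_steps × num_devices = 8 × 2 × 8 = 128, total_eval_batch_size = 8 × 8 = 64, and the 1267 warmup steps are warmup_ratio × training_steps = 0.03 × 42251, rounded down. At this effective batch size each epoch takes roughly 7,042 optimizer steps, so 6 epochs give the 42,251 total steps above.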
Training results
Training Loss | Epoch | Step | Validation Loss | Mem Active (GiB) | Mem Allocated (GiB) | Mem Reserved (GiB) |
---|---|---|---|---|---|---|
No log | 0 | 0 | 0.3871 | 18.76 | 18.76 | 19.99 |
0.4401 | 0.5 | 3521 | 0.4003 | 57.52 | 57.52 | 58.42 |
0.4292 | 1.0 | 7042 | 0.3883 | 57.52 | 57.52 | 58.42 |
0.3849 | 1.5 | 10563 | 0.3775 | 57.54 | 57.54 | 59.3 |
0.4088 | 2.0 | 14084 | 0.3677 | 57.54 | 57.54 | 59.3 |
0.3887 | 2.5 | 17605 | 0.3540 | 57.84 | 57.84 | 59.3 |
0.3169 | 3.0 | 21126 | 0.3368 | 57.85 | 57.85 | 59.88 |
0.3384 | 3.5 | 24647 | 0.3221 | 57.85 | 57.85 | 59.88 |
0.3119 | 4.0 | 28168 | 0.3043 | 57.85 | 57.85 | 59.88 |
0.3069 | 4.5 | 31689 | 0.2908 | 57.85 | 57.85 | 59.88 |
0.3314 | 5.0 | 35210 | 0.2807 | 57.85 | 57.85 | 59.88 |
0.273 | 5.5 | 38731 | 0.2763 | 57.85 | 57.85 | 59.88 |
Framework versions
- PEFT 0.17.0
- Transformers 4.55.2
- PyTorch 2.6.0+cu124
- Datasets 4.0.0
- Tokenizers 0.21.4
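Given the framework versions above, the sketch below shows one way to load the adapter for inference: attach the LoRA weights from this repository to the merged base with PEFT, then generate through the chat template. It assumes the merged base loads through the standard causal-LM Auto classes; the prompt and generation settings are illustrative only, not the author's recommended setup.

```python
# Minimal inference sketch (assumption: the merged base works with AutoModelForCausalLM
# and the gemma3n chat template is bundled with its tokenizer).
import torch
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

base_id = "sudoping01/bambara-llm-exp3-v2-merged"
adapter_id = "sudoping01/bambara-llm-exp3-continous-v2"

tokenizer = AutoTokenizer.from_pretrained(base_id)
base = AutoModelForCausalLM.from_pretrained(base_id, torch_dtype=torch.bfloat16, device_map="auto")
model = PeftModel.from_pretrained(base, adapter_id)

messages = [{"role": "user", "content": "<question in Bambara>"}]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

output_ids = model.generate(input_ids, max_new_tokens=256)
print(tokenizer.decode(output_ids[0][input_ids.shape[-1]:], skip_special_tokens=True))
```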