Built with Axolotl

See axolotl config

axolotl version: 0.8.0.dev0

# Base model settings for training
base_model: google/gemma-2-2b
model_type: AutoModelForCausalLM
tokenizer_type: AutoTokenizer

# Settings for uploading the trained model to the Hugging Face Hub
hub_model_id: kazuyamaa/gemma-2-2b-sft-lora
hub_strategy: "end"
push_dataset_to_hub:
hf_use_auth_token: true

# Liger Kernel settings (lighter-weight, faster training)
plugins:
  - axolotl.integrations.liger.LigerPlugin
liger_cross_entropy: false
liger_rope: true
liger_rms_norm: true
liger_swiglu: true
liger_fused_linear_cross_entropy: true

# Quantization settings
load_in_8bit: false
load_in_4bit: true

# Chat template used for SFT
chat_template: gemma

# Training dataset preprocessing settings
datasets:
  - path: kazuyamaa/multi-language-messages-01
    split: train
    type: chat_template
    field_messages: messages
    message_field_role: role
    message_field_content: content
  - path: kazuyamaa/code-translate-google_messages
    split: train
    type: chat_template
    field_messages: messages
    message_field_role: role
    message_field_content: content
  - path: kazuyamaa/code_x_glue_cc_code_refinement_messages
    split: train
    type: chat_template
    field_messages: messages
    message_field_role: role
    message_field_content: content
  - path: kazuyamaa/CodeTranslatorLLM-Code-Translation_messages
    split: train
    type: chat_template
    field_messages: messages
    message_field_role: role
    message_field_content: content
  - path: kazuyamaa/CodeTranslatorLLM-Code-Translation_messages
    split: train
    type: chat_template
    field_messages: messages
    message_field_role: role
    message_field_content: content

# Output locations for the prepared dataset and the model
shuffle_merged_datasets: true
dataset_prepared_path: /workspace/data/sft-data
output_dir: /workspace/data/models/gemma-2-2b-sft

# Validation set size
val_set_size: 0.05

# LoRA settings (leave all of these blank for full fine-tuning)
adapter: qlora
lora_model_dir:
lora_r: 16
lora_alpha: 32
lora_dropout: 0.05
lora_target_linear: true
lora_fan_in_fan_out:

# wandb settings
wandb_project: axolotl
wandb_entity: kazukitakayamas051-securities-companies
wandb_watch:
wandb_name: sft-lora-1
wandb_log_model:

# Miscellaneous training settings
sequence_len: 4096
sample_packing: true
eval_sample_packing: false
pad_to_sequence_len: true

gradient_accumulation_steps: 16
micro_batch_size: 1
num_epochs: 1
optimizer: paged_adamw_8bit
lr_scheduler: cosine
cosine_min_lr_ratio: 0.1
learning_rate: 3e-4

train_on_inputs: false
group_by_length: false
bf16: auto
fp16:
tf32: false

gradient_checkpointing: false
early_stopping_patience:
auto_resume_from_checkpoints: true
local_rank:
logging_steps: 1
xformers_attention:
flash_attention: true

save_strategy: steps
save_steps: 50
save_total_limit: 2

warmup_steps: 10
eval_steps: 50
eval_batch_size: 1
eval_table_size:
eval_max_new_tokens:
debug:
deepspeed: /workspace/axolotl/deepspeed_configs/zero3_bf16.json
weight_decay: 0.01
fsdp:
fsdp_config:
special_tokens:
  pad_token: <pad>
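
For context (not part of the original card): each dataset above is expected to provide a list of messages with role and content keys, which Axolotl renders through the gemma chat template during preprocessing, and a config like this is typically launched with something along the lines of accelerate launch -m axolotl.cli.train config.yaml. Below is a minimal sketch of that formatting step for a hypothetical record; the example text and the template fallback are illustrative assumptions, not taken from the card.

```python
# Illustrative only: a hypothetical record in the shape the dataset config expects
# (field_messages: messages, message_field_role: role, message_field_content: content),
# rendered with a gemma chat template as Axolotl's chat_template: gemma does.
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("google/gemma-2-2b")
if tokenizer.chat_template is None:
    # The base tokenizer may not ship a chat template; borrow the gemma one
    # from the instruction-tuned tokenizer for this illustration.
    tokenizer.chat_template = AutoTokenizer.from_pretrained(
        "google/gemma-2-2b-it"
    ).chat_template

record = {
    "messages": [
        {"role": "user", "content": "Translate this Java method to Python."},
        {"role": "assistant", "content": "Here is an equivalent Python function: ..."},
    ]
}

# Render one example the way the trainer would see it (text only, no tokenization).
print(tokenizer.apply_chat_template(record["messages"], tokenize=False))
```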

gemma-2-2b-sft-lora

This model is a fine-tuned version of google/gemma-2-2b on the kazuyamaa/multi-language-messages-01, the kazuyamaa/code-translate-google_messages, the kazuyamaa/code_x_glue_cc_code_refinement_messages, the kazuyamaa/CodeTranslatorLLM-Code-Translation_messages and the kazuyamaa/CodeTranslatorLLM-Code-Translation_messages datasets. It achieves the following results on the evaluation set:

  • Loss: 0.1289

Model description

More information needed
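
A minimal usage sketch (not part of the original card), assuming the adapter is published under the hub_model_id above; it loads the LoRA adapter on the bf16 base model with PEFT and prompts it in the gemma turn format that chat_template: gemma uses:

```python
# Minimal inference sketch: base model + LoRA adapter via PEFT.
# The prompt text is a made-up example; the repo ids come from the config above.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base = AutoModelForCausalLM.from_pretrained(
    "google/gemma-2-2b", torch_dtype=torch.bfloat16, device_map="auto"
)
model = PeftModel.from_pretrained(base, "kazuyamaa/gemma-2-2b-sft-lora")
tokenizer = AutoTokenizer.from_pretrained("google/gemma-2-2b")

# Gemma turn format: <start_of_turn>user ... <end_of_turn> then an open model turn.
prompt = (
    "<start_of_turn>user\n"
    "Rewrite this C# loop as idiomatic Python.<end_of_turn>\n"
    "<start_of_turn>model\n"
)
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)

with torch.no_grad():
    out = model.generate(**inputs, max_new_tokens=256)

# Print only the newly generated tokens, not the prompt.
print(tokenizer.decode(out[0][inputs["input_ids"].shape[-1]:], skip_special_tokens=True))
```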

Intended uses & limitations

More information needed

Training and evaluation data

More information needed

Training procedure

Training hyperparameters

The following hyperparameters were used during training:

  • learning_rate: 0.0003
  • train_batch_size: 1
  • eval_batch_size: 1
  • seed: 42
  • distributed_type: multi-GPU
  • num_devices: 2
  • gradient_accumulation_steps: 16
  • total_train_batch_size: 32
  • total_eval_batch_size: 2
  • optimizer: paged_adamw_8bit with betas=(0.9, 0.999), epsilon=1e-08, and no additional optimizer arguments
  • lr_scheduler_type: cosine
  • lr_scheduler_warmup_steps: 10
  • num_epochs: 1.0
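
The derived batch sizes above follow directly from the config: one sample per device, 16 gradient-accumulation steps, and 2 GPUs give an effective train batch of 32, and eval_batch_size 1 per device gives an eval batch of 2. A quick sketch of that arithmetic:

```python
# Sketch of how the derived batch sizes follow from the config values
# (micro_batch_size, gradient_accumulation_steps) and the reported 2 GPUs.
micro_batch_size = 1             # per-device train batch
gradient_accumulation_steps = 16
num_devices = 2                  # "num_devices: 2" above
eval_batch_size = 1              # per-device eval batch

total_train_batch_size = micro_batch_size * gradient_accumulation_steps * num_devices
total_eval_batch_size = eval_batch_size * num_devices

assert total_train_batch_size == 32
assert total_eval_batch_size == 2
```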

Training results

| Training Loss | Epoch  | Step | Validation Loss |
|---------------|--------|------|-----------------|
| 0.6413        | 0.0019 | 1    | 0.6653          |
| 0.4044        | 0.0938 | 50   | 0.3591          |
| 0.3151        | 0.1877 | 100  | 0.2964          |
| 0.2916        | 0.2815 | 150  | 0.2508          |
| 0.2053        | 0.3753 | 200  | 0.2177          |
| 0.1833        | 0.4692 | 250  | 0.1907          |
| 0.1789        | 0.5630 | 300  | 0.1711          |
| 0.1414        | 0.6568 | 350  | 0.1529          |
| 0.129         | 0.7506 | 400  | 0.1420          |
| 0.1153        | 0.8445 | 450  | 0.1344          |
| 0.1309        | 0.9383 | 500  | 0.1289          |
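
The Epoch and Step columns are consistent with eval_steps: 50 in the config and roughly 533 optimizer steps per epoch (implied by step 500 landing at epoch 0.9383). A small sketch that reproduces the Epoch column under that assumption:

```python
# Reproduce the Epoch column above from the Step column, assuming ~533 optimizer
# steps per epoch (implied by step 500 corresponding to epoch 0.9383).
steps = [1] + list(range(50, 501, 50))
steps_per_epoch = 500 / 0.9383  # approximately 532.9
for step in steps:
    print(f"step {step:>3} -> epoch {step / steps_per_epoch:.4f}")
```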

Framework versions

  • PEFT 0.14.0
  • Transformers 4.49.0
  • Pytorch 2.5.1+cu124
  • Datasets 3.2.0
  • Tokenizers 0.21.1
Safetensors: 2.61B params, BF16 tensors
