Built with Axolotl

See axolotl config

axolotl version: 0.8.0.dev0

# Base model settings for training
base_model: google/gemma-2-2b
model_type: AutoModelForCausalLM
tokenizer_type: AutoTokenizer

# Settings for uploading the trained model to the Hugging Face Hub
hub_model_id: kazuyamaa/code-trans-gemma-2-2b-sft-lora
hub_strategy: "end"
push_dataset_to_hub:
hf_use_auth_token: true

# Liger Kernel settings (lighter, faster training)
plugins:
  - axolotl.integrations.liger.LigerPlugin
liger_cross_entropy: false
liger_rope: true
liger_rms_norm: true
liger_swiglu: true
liger_fused_linear_cross_entropy: true

# Quantization settings
load_in_8bit: false
load_in_4bit: false

# Chat template used for SFT
chat_template: gemma

# Training dataset preprocessing settings
datasets:
  - path: kazuyamaa/multi-language-messages-01
    split: train
    type: chat_template
    field_messages: messages
    message_field_role: role
    message_field_content: content
  - path: kazuyamaa/code-translate-google_messages
    split: train
    type: chat_template
    field_messages: messages
    message_field_role: role
    message_field_content: content
  - path: kazuyamaa/code_x_glue_cc_code_refinement_messages
    split: train
    type: chat_template
    field_messages: messages
    message_field_role: role
    message_field_content: content
  - path: kazuyamaa/CodeTranslatorLLM-Code-Translation_messages
    split: train
    type: chat_template
    field_messages: messages
    message_field_role: role
    message_field_content: content
  - path: kazuyamaa/CodeTranslatorLLM-Code-Translation_messages
    split: train
    type: chat_template
    field_messages: messages
    message_field_role: role
    message_field_content: content
  - path: kazuyamaa/CodeLlama-34b-Instruct-hf-synthetic-datasets
    split: train
    type: chat_template
    field_messages: messages
    message_field_role: role
    message_field_content: content

# Output locations for the prepared dataset and the trained model
shuffle_merged_datasets: true
dataset_prepared_path: /workspace/data/sft-data
output_dir: /workspace/data/models/code-trans-gemma-2-2b-sft-ver01

# Validation set size
val_set_size: 0.05

# LoRA settings (leave all fields blank for full fine-tuning)
adapter: 
lora_model_dir:
lora_r: 
lora_alpha: 
lora_dropout: 
lora_target_linear: 
lora_fan_in_fan_out:

# wandb (Weights & Biases) settings
wandb_project: axolotl
wandb_entity: kazukitakayamas051-securities-companies
wandb_watch:
wandb_name: sft-lora-2
wandb_log_model:

# Miscellaneous training settings
sequence_len: 4096
sample_packing: true
eval_sample_packing: false
pad_to_sequence_len: true

gradient_accumulation_steps: 16
micro_batch_size: 1
num_epochs: 1
optimizer: paged_adamw_8bit
lr_scheduler: cosine
cosine_min_lr_ratio: 0.1
learning_rate: 3e-4

train_on_inputs: false
group_by_length: false
bf16: auto
fp16:
tf32: false

gradient_checkpointing: false
early_stopping_patience:
auto_resume_from_checkpoints: true
local_rank:
logging_steps: 1
xformers_attention:
flash_attention: true

save_strategy: steps
save_steps: 50
save_total_limit: 2

warmup_steps: 10
eval_steps: 50
eval_batch_size: 1
eval_table_size:
eval_max_new_tokens:
debug:
deepspeed: /workspace/axolotl/deepspeed_configs/zero3_bf16.json
weight_decay: 0.01
fsdp:
fsdp_config:
special_tokens:
  pad_token: <pad>
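
For reference, each dataset entry above uses Axolotl's chat_template type and reads conversations from a messages field whose turns carry role and content keys (per field_messages, message_field_role, and message_field_content). The snippet below is a minimal sketch, not part of the original card, of what one such record is assumed to look like on disk:

```python
# Minimal sketch (assumption, not from the card): the record shape implied by the
# `type: chat_template` dataset configuration above.
import json

record = {
    "messages": [
        {"role": "user", "content": "Translate this Python snippet to Java: print('hi')"},
        {"role": "assistant", "content": 'System.out.println("hi");'},
    ]
}

# One JSON object per line is a common layout for such datasets on the Hub.
print(json.dumps(record, ensure_ascii=False))
```

During preprocessing, Axolotl renders these turns with the Gemma chat template, and because train_on_inputs is false, only the assistant turns contribute to the loss.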

code-trans-gemma-2-2b-sft-lora

This model is a fine-tuned version of google/gemma-2-2b on the kazuyamaa/multi-language-messages-01, the kazuyamaa/code-translate-google_messages, the kazuyamaa/code_x_glue_cc_code_refinement_messages, the kazuyamaa/CodeTranslatorLLM-Code-Translation_messages, the kazuyamaa/CodeTranslatorLLM-Code-Translation_messages and the kazuyamaa/CodeLlama-34b-Instruct-hf-synthetic-datasets datasets. It achieves the following results on the evaluation set:

  • Loss: 0.1038
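
As a rough usage sketch (added here, not part of the original card), the fine-tuned weights can be loaded with the standard transformers API. The repository id is assumed from hub_model_id in the config above, and the availability of a chat template in the uploaded tokenizer is not guaranteed, so the example falls back to a plain prompt:

```python
# Hedged inference sketch; adjust repo_id if the weights live elsewhere.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

repo_id = "kazuyamaa/code-trans-gemma-2-2b-sft-lora"  # assumed from hub_model_id
tokenizer = AutoTokenizer.from_pretrained(repo_id)
model = AutoModelForCausalLM.from_pretrained(
    repo_id, torch_dtype=torch.bfloat16, device_map="auto"
)

prompt = "Translate this Python snippet to Java: print('hello')"
messages = [{"role": "user", "content": prompt}]
if tokenizer.chat_template:
    # The SFT data was rendered with the Gemma chat template, so prefer it when present.
    input_ids = tokenizer.apply_chat_template(
        messages, add_generation_prompt=True, return_tensors="pt"
    ).to(model.device)
else:
    input_ids = tokenizer(prompt, return_tensors="pt").input_ids.to(model.device)

with torch.no_grad():
    output = model.generate(input_ids, max_new_tokens=256)
print(tokenizer.decode(output[0, input_ids.shape[1]:], skip_special_tokens=True))
```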

Model description

More information needed

Intended uses & limitations

More information needed

Training and evaluation data

More information needed

Training procedure

Training hyperparameters

The following hyperparameters were used during training:

  • learning_rate: 0.0003
  • train_batch_size: 1
  • eval_batch_size: 1
  • seed: 42
  • distributed_type: multi-GPU
  • num_devices: 2
  • gradient_accumulation_steps: 16
  • total_train_batch_size: 32
  • total_eval_batch_size: 2
  • optimizer: paged_adamw_8bit with betas=(0.9, 0.999), epsilon=1e-08, and no additional optimizer arguments
  • lr_scheduler_type: cosine
  • lr_scheduler_warmup_steps: 10
  • num_epochs: 1.0
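
As a quick sanity check (added here, not in the original card), the effective batch sizes listed above follow directly from the per-device batch size, gradient accumulation, and device count:

```python
# Effective batch sizes implied by the hyperparameters above.
micro_batch_size = 1           # train_batch_size per device
gradient_accumulation = 16
num_devices = 2

total_train_batch_size = micro_batch_size * gradient_accumulation * num_devices
total_eval_batch_size = 1 * num_devices  # eval_batch_size per device * devices

print(total_train_batch_size)  # 32, matching total_train_batch_size above
print(total_eval_batch_size)   # 2, matching total_eval_batch_size above
```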

Training results

| Training Loss | Epoch  | Step | Validation Loss |
|---------------|--------|------|-----------------|
| 0.6358        | 0.0019 | 1    | 0.6480          |
| 0.7695        | 0.0936 | 50   | 0.6026          |
| 0.5641        | 0.1871 | 100  | 0.4303          |
| 0.3587        | 0.2807 | 150  | 0.3163          |
| 0.2699        | 0.3742 | 200  | 0.2515          |
| 0.3096        | 0.4678 | 250  | 0.2050          |
| 0.1531        | 0.5613 | 300  | 0.1695          |
| 0.1314        | 0.6549 | 350  | 0.1437          |
| 0.1047        | 0.7485 | 400  | 0.1267          |
| 0.0923        | 0.8420 | 450  | 0.1139          |
| 0.0743        | 0.9356 | 500  | 0.1038          |

Framework versions

  • Transformers 4.49.0
  • Pytorch 2.5.1+cu124
  • Datasets 3.4.1
  • Tokenizers 0.21.1