myBit-Llama2-jp-127M-2B4TLike-aozora-sort-3epc

This model is a fine-tuned version of an unspecified base model on an unknown dataset. It achieves the following results on the evaluation set:

  • Loss: 3.2302
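
Assuming this is the standard mean cross-entropy loss in nats (as reported by the Transformers Trainer), it corresponds to a perplexity of exp(3.2302) ≈ 25.3.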

Model description

More information needed

Intended uses & limitations

More information needed

Training and evaluation data

More information needed

Training procedure

Training hyperparameters

The following hyperparameters were used during training (a TrainingArguments sketch follows the list):

  • learning_rate: 0.0024
  • train_batch_size: 24
  • eval_batch_size: 24
  • seed: 42
  • gradient_accumulation_steps: 4
  • total_train_batch_size: 96
  • optimizer: AdamW (adamw_torch) with betas=(0.9, 0.95) and epsilon=1e-08; no additional optimizer arguments
  • lr_scheduler_type: linear
  • lr_scheduler_warmup_steps: 100
  • num_epochs: 3
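
Since the training script itself is not published, the following is a minimal sketch of how the listed values map onto transformers.TrainingArguments; the output_dir and anything not listed above are assumptions.

```python
from transformers import TrainingArguments

# Sketch only: reproduces the reported hyperparameters against
# Transformers 4.47.1. The output_dir is hypothetical.
training_args = TrainingArguments(
    output_dir="myBit-Llama2-jp-127M-2B4TLike-aozora-sort-3epc",  # assumed
    learning_rate=2.4e-3,
    per_device_train_batch_size=24,
    per_device_eval_batch_size=24,
    seed=42,
    gradient_accumulation_steps=4,  # total train batch size: 24 * 4 = 96
    optim="adamw_torch",
    adam_beta1=0.9,
    adam_beta2=0.95,
    adam_epsilon=1e-8,
    lr_scheduler_type="linear",
    warmup_steps=100,
    num_train_epochs=3,
)
```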

Training results

| Training Loss | Epoch  | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 6.9258        | 0.0883 | 100  | 5.2708          |
| 4.7937        | 0.1765 | 200  | 4.4660          |
| 4.2565        | 0.2648 | 300  | 4.1755          |
| 3.9951        | 0.3530 | 400  | 4.0060          |
| 3.8438        | 0.4413 | 500  | 3.8854          |
| 3.7223        | 0.5296 | 600  | 3.7829          |
| 3.6523        | 0.6178 | 700  | 3.7125          |
| 3.5985        | 0.7061 | 800  | 3.6535          |
| 3.5666        | 0.7944 | 900  | 3.6039          |
| 3.5519        | 0.8826 | 1000 | 3.5693          |
| 3.5365        | 0.9709 | 1100 | 3.5404          |
| 3.6085        | 1.0591 | 1200 | 3.5638          |
| 3.4953        | 1.1474 | 1300 | 3.4983          |
| 3.425         | 1.2357 | 1400 | 3.4737          |
| 3.3693        | 1.3239 | 1500 | 3.4579          |
| 3.3396        | 1.4122 | 1600 | 3.4431          |
| 3.3187        | 1.5004 | 1700 | 3.4259          |
| 3.3013        | 1.5887 | 1800 | 3.4121          |
| 3.3036        | 1.6770 | 1900 | 3.4004          |
| 3.2947        | 1.7652 | 2000 | 3.3808          |
| 3.3041        | 1.8535 | 2100 | 3.3653          |
| 3.304         | 1.9417 | 2200 | 3.3541          |
| 3.3582        | 2.0300 | 2300 | 3.4233          |
| 3.3097        | 2.1183 | 2400 | 3.3351          |
| 3.2426        | 2.2065 | 2500 | 3.3234          |
| 3.2034        | 2.2948 | 2600 | 3.3149          |
| 3.1675        | 2.3831 | 2700 | 3.3033          |
| 3.1611        | 2.4713 | 2800 | 3.2953          |
| 3.1344        | 2.5596 | 2900 | 3.2832          |
| 3.1391        | 2.6478 | 3000 | 3.2729          |
| 3.1324        | 2.7361 | 3100 | 3.2572          |
| 3.1355        | 2.8244 | 3200 | 3.2440          |
| 3.1417        | 2.9126 | 3300 | 3.2302          |

Framework versions

  • Transformers 4.47.1
  • Pytorch 2.6.0+cu124
  • Datasets 3.5.1
  • Tokenizers 0.21.1
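
For completeness, a minimal sketch of loading the checkpoint with the Transformers version listed above; the repository id is assumed from the model name, and trust_remote_code=True is only needed if the BitNet-style layers ship as custom code.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

repo_id = "myBit-Llama2-jp-127M-2B4TLike-aozora-sort-3epc"  # assumed repo id

tokenizer = AutoTokenizer.from_pretrained(repo_id)
model = AutoModelForCausalLM.from_pretrained(repo_id, trust_remote_code=True)

# Japanese prompt (the model name suggests Aozora Bunko training text).
inputs = tokenizer("吾輩は猫である。", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=50)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```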