Llama-3.2-3B-SFT-QLoRA / train_results.json
{
"epoch": 0.9994226994573375,
"total_flos": 5.054722820332847e+18,
"train_loss": 1.3049180526756903,
"train_runtime": 62465.0629,
"train_samples": 207864,
"train_samples_per_second": 3.328,
"train_steps_per_second": 0.026
}
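
For reference, a minimal sketch of reading this summary and deriving a couple of sanity-check figures from the logged throughput numbers. The file path and the derived quantities (approximate optimizer steps, effective batch size) are assumptions inferred only from the fields above, not values recorded by the training run itself.

import json

# Load the end-of-training summary (path assumed: train_results.json at the repo root).
with open("train_results.json") as f:
    results = json.load(f)

# Derived figures (assumptions): steps from runtime * steps/sec,
# effective batch size from samples/sec divided by steps/sec.
approx_steps = results["train_runtime"] * results["train_steps_per_second"]
effective_batch = results["train_samples_per_second"] / results["train_steps_per_second"]

print(f"train loss:                   {results['train_loss']:.4f}")
print(f"runtime (hours):              {results['train_runtime'] / 3600:.1f}")
print(f"approx. optimizer steps:      {approx_steps:.0f}")
print(f"approx. effective batch size: {effective_batch:.0f}")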