BxSD-finetuned-Real_RGBCROP_5e6-poly

This model is a fine-tuned version of TanAlexanderlz/BxSD_RGBCROP_Aug16F-8B16F5e6-poly on an unknown dataset. It achieves the following results on the evaluation set:

  • Loss: 1.1354
  • Accuracy: 0.7712
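The card does not include a usage example. Assuming the checkpoint follows the base model's VideoMAE-style video-classification setup (the architecture is not stated here, so this is an assumption, as is the 16-frame clip length suggested by the "16F" in the base model name), inference might look like the following sketch:

```python
import numpy as np
import torch
from transformers import AutoImageProcessor, AutoModelForVideoClassification

ckpt = "TanAlexanderlz/BxSD-finetuned-Real_RGBCROP_5e6-poly"
processor = AutoImageProcessor.from_pretrained(ckpt)
model = AutoModelForVideoClassification.from_pretrained(ckpt)

# Dummy clip: 16 RGB frames of 224x224 (assumed input size);
# replace with frames sampled from a real video.
frames = [np.random.randint(0, 256, (224, 224, 3), dtype=np.uint8)
          for _ in range(16)]
inputs = processor(frames, return_tensors="pt")

with torch.no_grad():
    logits = model(**inputs).logits
print(model.config.id2label[logits.argmax(-1).item()])
```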

Model description

More information needed

Intended uses & limitations

More information needed

Training and evaluation data

More information needed

Training procedure

Training hyperparameters

The following hyperparameters were used during training:

  • learning_rate: 5e-06
  • train_batch_size: 8
  • eval_batch_size: 8
  • seed: 42
  • optimizer: AdamW (torch implementation) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
  • lr_scheduler_type: polynomial
  • lr_scheduler_warmup_ratio: 0.1
  • training_steps: 2112
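The schedule above (linear warmup over the first 10% of the 2112 steps, then polynomial decay) can be sketched in plain Python. The learning rate and warmup ratio mirror the reported hyperparameters; `lr_end=1e-7` and `power=1.0` are the transformers defaults for polynomial decay, which the card does not state, so they are assumptions:

```python
def polynomial_lr(step, lr_init=5e-6, total_steps=2112, warmup_ratio=0.1,
                  lr_end=1e-7, power=1.0):
    """Approximate polynomial-decay schedule with linear warmup.

    lr_end and power are the transformers library defaults, assumed here
    since the card does not report them.
    """
    warmup_steps = int(total_steps * warmup_ratio)  # 211 steps in this run
    if step < warmup_steps:
        # Linear warmup from 0 up to lr_init.
        return lr_init * step / max(1, warmup_steps)
    if step > total_steps:
        return lr_end
    # Polynomial decay from lr_init down to lr_end.
    decay = (total_steps - step) / (total_steps - warmup_steps)
    return lr_end + (lr_init - lr_end) * decay ** power
```

For example, `polynomial_lr(0)` is 0, the rate peaks at 5e-6 once warmup ends at step 211, and it decays to 1e-7 by step 2112.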

Training results

Training Loss   Epoch     Step   Validation Loss   Accuracy
0.4047          0.0838     177   0.6428            0.6667
0.1045          1.0838     354   0.7805            0.7133
0.0827          2.0838     531   0.9707            0.7133
0.0142          3.0838     708   1.2223            0.7100
0.0007          4.0838     885   1.3418            0.7233
0.0007          5.0838    1062   1.4507            0.7333
0.0009          6.0838    1239   1.5195            0.7200
0.0003          7.0838    1416   1.5668            0.7233
0.0003          8.0838    1593   1.5697            0.7400
0.0004          9.0838    1770   1.5862            0.7300
0.0002         10.0838    1947   1.6013            0.7233
0.0002         11.0781    2112   1.6033            0.7267

Framework versions

  • Transformers 4.51.3
  • Pytorch 2.6.0+cu124
  • Datasets 3.6.0
  • Tokenizers 0.21.1