sagittal-b4-v11-finetuned-segments

This model is a fine-tuned version of nvidia/mit-b4 on the jenniferlumeng/MiceSagittal dataset. It achieves the following results on the evaluation set (a sketch for reproducing these metrics follows the list):

  • Loss: 0.2102
  • Mean Iou: 0.7660
  • Mean Accuracy: 0.8966
  • Overall Accuracy: 0.9025
  • Accuracy Background: nan
  • Accuracy Olfactory bulb: 0.9311
  • Accuracy Anterior olfactory nucleus: 0.8576
  • Accuracy Basal ganglia: 0.8881
  • Accuracy Cortex: 0.9547
  • Accuracy Hypothalamus: 0.7991
  • Accuracy Thalamus: 0.8460
  • Accuracy Hippocampus: 0.9531
  • Accuracy Midbrain: 0.8908
  • Accuracy Cerebellum: 0.9482
  • Accuracy Pons and medulla: 0.8971
  • Iou Background: 0.0
  • Iou Olfactory bulb: 0.9039
  • Iou Anterior olfactory nucleus: 0.7535
  • Iou Basal ganglia: 0.8169
  • Iou Cortex: 0.9464
  • Iou Hypothalamus: 0.6482
  • Iou Thalamus: 0.8106
  • Iou Hippocampus: 0.9292
  • Iou Midbrain: 0.8164
  • Iou Cerebellum: 0.9371
  • Iou Pons and medulla: 0.8635

Model description

sagittal-b4-v11-finetuned-segments is a SegFormer-style semantic segmentation model built on the MiT-B4 (Mix Transformer) encoder from nvidia/mit-b4. It assigns each pixel of a sagittal mouse brain section image to background or one of ten anatomical regions: olfactory bulb, anterior olfactory nucleus, basal ganglia, cortex, hypothalamus, thalamus, hippocampus, midbrain, cerebellum, and pons and medulla.

Intended uses & limitations

Based on the base model and training data, this model is intended for automated segmentation of anatomical regions in sagittal mouse brain section images. Note that the background class is excluded from the accuracy metrics (reported as nan) and scores an IoU of 0.0, and per-region quality varies: hypothalamus (~0.65 IoU) and anterior olfactory nucleus (~0.75) are segmented noticeably worse than cortex or hippocampus (~0.93-0.95). A minimal inference sketch is shown below.

Training and evaluation data

The model was fine-tuned and evaluated on the jenniferlumeng/MiceSagittal dataset on the Hugging Face Hub, which pairs sagittal mouse brain section images with pixel-level masks for the ten regions listed above. The train/validation split is not documented here.

Training procedure

Training hyperparameters

The following hyperparameters were used during training (see the Trainer sketch after this list):

  • learning_rate: 6e-05
  • train_batch_size: 2
  • eval_batch_size: 2
  • seed: 42
  • optimizer: AdamW (torch implementation) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
  • lr_scheduler_type: linear
  • num_epochs: 50

Training results

| Training Loss | Epoch | Step | Validation Loss | Mean Iou | Mean Accuracy | Overall Accuracy | Accuracy Background | Accuracy Olfactory bulb | Accuracy Anterior olfactory nucleus | Accuracy Basal ganglia | Accuracy Cortex | Accuracy Hypothalamus | Accuracy Thalamus | Accuracy Hippocampus | Accuracy Midbrain | Accuracy Cerebellum | Accuracy Pons and medulla | Iou Background | Iou Olfactory bulb | Iou Anterior olfactory nucleus | Iou Basal ganglia | Iou Cortex | Iou Hypothalamus | Iou Thalamus | Iou Hippocampus | Iou Midbrain | Iou Cerebellum | Iou Pons and medulla |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
| 0.679 | 2.8571 | 20 | 0.9745 | 0.6058 | 0.7741 | 0.8090 | nan | 0.8511 | 0.3173 | 0.7203 | 0.9142 | 0.7325 | 0.7090 | 0.9034 | 0.7282 | 0.9504 | 0.9147 | 0.0 | 0.7558 | 0.2904 | 0.5982 | 0.7974 | 0.6685 | 0.5337 | 0.7721 | 0.5572 | 0.9323 | 0.7581 |
| 0.3016 | 5.7143 | 40 | 0.3587 | 0.7273 | 0.8768 | 0.8823 | nan | 0.9822 | 0.8005 | 0.9478 | 0.9440 | 0.6822 | 0.8097 | 0.9763 | 0.7227 | 0.9769 | 0.9253 | 0.0 | 0.9301 | 0.7134 | 0.7860 | 0.9003 | 0.6332 | 0.7492 | 0.8546 | 0.6615 | 0.9086 | 0.8635 |
| 0.1624 | 8.5714 | 60 | 0.2448 | 0.7492 | 0.8943 | 0.8995 | nan | 0.9893 | 0.8487 | 0.8965 | 0.9607 | 0.7695 | 0.7685 | 0.9800 | 0.7855 | 0.9873 | 0.9567 | 0.0 | 0.9430 | 0.6882 | 0.8037 | 0.9349 | 0.6812 | 0.7179 | 0.9229 | 0.7106 | 0.9520 | 0.8865 |
| 0.1345 | 11.4286 | 80 | 0.2230 | 0.7558 | 0.8896 | 0.9019 | nan | 0.9778 | 0.8016 | 0.9463 | 0.9728 | 0.6755 | 0.8240 | 0.9634 | 0.8346 | 0.9680 | 0.9321 | 0.0 | 0.9270 | 0.7009 | 0.8164 | 0.9578 | 0.6296 | 0.7628 | 0.9486 | 0.7469 | 0.9470 | 0.8766 |
| 0.0976 | 14.2857 | 100 | 0.1891 | 0.7781 | 0.9114 | 0.9200 | nan | 0.9805 | 0.8293 | 0.9469 | 0.9676 | 0.7870 | 0.8236 | 0.9832 | 0.8752 | 0.9811 | 0.9393 | 0.0 | 0.9437 | 0.7270 | 0.8437 | 0.9653 | 0.7230 | 0.7625 | 0.9627 | 0.7735 | 0.9634 | 0.8941 |
| 0.0881 | 17.1429 | 120 | 0.1893 | 0.7803 | 0.9135 | 0.9214 | nan | 0.9807 | 0.8364 | 0.9315 | 0.9758 | 0.7977 | 0.8714 | 0.9677 | 0.8514 | 0.9779 | 0.9439 | 0.0 | 0.9431 | 0.7231 | 0.8374 | 0.9699 | 0.7346 | 0.7965 | 0.9561 | 0.7745 | 0.9599 | 0.8885 |
| 0.0873 | 20.0 | 140 | 0.1849 | 0.7739 | 0.9054 | 0.9158 | nan | 0.9813 | 0.8181 | 0.9328 | 0.9729 | 0.7932 | 0.8266 | 0.9580 | 0.8558 | 0.9728 | 0.9421 | 0.0 | 0.9382 | 0.7194 | 0.8417 | 0.9664 | 0.7251 | 0.7666 | 0.9478 | 0.7653 | 0.9597 | 0.8831 |
| 0.076 | 22.8571 | 160 | 0.1871 | 0.7765 | 0.9086 | 0.9192 | nan | 0.9823 | 0.8126 | 0.9226 | 0.9777 | 0.8067 | 0.8463 | 0.9570 | 0.8531 | 0.9776 | 0.9498 | 0.0 | 0.9369 | 0.7113 | 0.8346 | 0.9719 | 0.7353 | 0.7829 | 0.9478 | 0.7706 | 0.9621 | 0.8881 |
| 0.0948 | 25.7143 | 180 | 0.1850 | 0.7727 | 0.9047 | 0.9139 | nan | 0.9730 | 0.8330 | 0.9127 | 0.9667 | 0.8194 | 0.8083 | 0.9538 | 0.8653 | 0.9759 | 0.9384 | 0.0 | 0.9374 | 0.7165 | 0.8343 | 0.9599 | 0.7339 | 0.7588 | 0.9442 | 0.7697 | 0.9559 | 0.8895 |
| 0.0646 | 28.5714 | 200 | 0.1813 | 0.7756 | 0.9067 | 0.9137 | nan | 0.9731 | 0.8354 | 0.9258 | 0.9588 | 0.7750 | 0.8675 | 0.9635 | 0.8512 | 0.9749 | 0.9422 | 0.0 | 0.9414 | 0.7175 | 0.8305 | 0.9569 | 0.7235 | 0.7973 | 0.9530 | 0.7725 | 0.9562 | 0.8827 |
| 0.0636 | 31.4286 | 220 | 0.1894 | 0.7729 | 0.9034 | 0.9130 | nan | 0.9754 | 0.8133 | 0.9279 | 0.9673 | 0.7529 | 0.8737 | 0.9581 | 0.8483 | 0.9725 | 0.9448 | 0.0 | 0.9383 | 0.7088 | 0.8227 | 0.9625 | 0.7090 | 0.7979 | 0.9488 | 0.7750 | 0.9556 | 0.8831 |
| 0.0688 | 34.2857 | 240 | 0.1906 | 0.7761 | 0.9079 | 0.9174 | nan | 0.9803 | 0.8062 | 0.9240 | 0.9747 | 0.7894 | 0.8718 | 0.9650 | 0.8394 | 0.9792 | 0.9491 | 0.0 | 0.9350 | 0.7046 | 0.8302 | 0.9689 | 0.7313 | 0.7982 | 0.9527 | 0.7717 | 0.9579 | 0.8864 |
| 0.0635 | 37.1429 | 260 | 0.1874 | 0.7783 | 0.9093 | 0.9189 | nan | 0.9814 | 0.8162 | 0.9153 | 0.9752 | 0.8063 | 0.8667 | 0.9536 | 0.8567 | 0.9783 | 0.9436 | 0.0 | 0.9397 | 0.7071 | 0.8299 | 0.9683 | 0.7441 | 0.7990 | 0.9457 | 0.7800 | 0.9584 | 0.8887 |
| 0.0693 | 40.0 | 280 | 0.1869 | 0.7785 | 0.9094 | 0.9180 | nan | 0.9779 | 0.8151 | 0.9170 | 0.9727 | 0.8047 | 0.8740 | 0.9628 | 0.8455 | 0.9772 | 0.9471 | 0.0 | 0.9398 | 0.7103 | 0.8302 | 0.9678 | 0.7427 | 0.8030 | 0.9514 | 0.7747 | 0.9586 | 0.8853 |
| 0.0585 | 42.8571 | 300 | 0.1889 | 0.7770 | 0.9081 | 0.9168 | nan | 0.9778 | 0.8248 | 0.9108 | 0.9720 | 0.8021 | 0.8638 | 0.9564 | 0.8545 | 0.9768 | 0.9426 | 0.0 | 0.9389 | 0.7094 | 0.8293 | 0.9663 | 0.7435 | 0.7943 | 0.9470 | 0.7763 | 0.9559 | 0.8859 |
| 0.0783 | 45.7143 | 320 | 0.1880 | 0.7772 | 0.9078 | 0.9166 | nan | 0.9768 | 0.8193 | 0.9141 | 0.9708 | 0.7949 | 0.8700 | 0.9590 | 0.8567 | 0.9741 | 0.9426 | 0.0 | 0.9385 | 0.7093 | 0.8312 | 0.9651 | 0.7414 | 0.7979 | 0.9484 | 0.7769 | 0.9560 | 0.8842 |
| 0.0588 | 48.5714 | 340 | 0.1859 | 0.7774 | 0.9076 | 0.9167 | nan | 0.9767 | 0.8169 | 0.9159 | 0.9709 | 0.8001 | 0.8695 | 0.9546 | 0.8556 | 0.9744 | 0.9415 | 0.0 | 0.9380 | 0.7104 | 0.8323 | 0.9656 | 0.7425 | 0.7984 | 0.9455 | 0.7779 | 0.9563 | 0.8848 |

Framework versions

  • Transformers 4.52.2
  • Pytorch 2.6.0+cu124
  • Datasets 2.16.1
  • Tokenizers 0.21.1