# albert-random-3
This model is a fine-tuned version of [deepseek-ai/deepseek-math-7b-base](https://huggingface.co/deepseek-ai/deepseek-math-7b-base) on an unknown dataset. It achieves the following results on the evaluation set:
- Loss: 0.0375
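
Since the card does not yet document usage, here is a minimal inference sketch. It assumes the checkpoint is published under the repository id `woody72/albert-random-3` (the repo this card belongs to) and loads with the same causal-LM classes as the base model; treat it as a starting point rather than an official recipe.

```python
# Minimal inference sketch (assumptions: the fine-tuned weights live at
# woody72/albert-random-3 and load with the standard causal-LM classes,
# as they do for the deepseek-math-7b-base parent model).
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "woody72/albert-random-3"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,  # 7B parameters; half precision keeps memory manageable
    device_map="auto",
)

prompt = "Question: What is 12 * 7?\nAnswer:"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```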
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training (a hedged `TrainingArguments` reconstruction follows the list):
- learning_rate: 3e-06
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 64
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: constant_with_warmup
- lr_scheduler_warmup_steps: 1
- num_epochs: 1
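
The Trainer invocation, dataset, and model wiring are not documented here, so this sketch only mirrors the configuration listed above; `output_dir` is a placeholder.

```python
# Sketch of transformers.TrainingArguments matching the listed hyperparameters.
# Only the values above are grounded; everything else about the run is unknown.
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="albert-random-3",       # placeholder path
    learning_rate=3e-6,
    per_device_train_batch_size=16,
    per_device_eval_batch_size=8,
    seed=42,
    gradient_accumulation_steps=4,      # 16 * 4 = 64 effective train batch
                                        # (matches total_train_batch_size on one device)
    lr_scheduler_type="constant_with_warmup",
    warmup_steps=1,
    num_train_epochs=1,
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-8,
)
```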
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
0.802 | 0.01 | 1 | 0.4394 |
0.8 | 0.01 | 2 | 0.4325 |
0.7741 | 0.02 | 3 | 0.4256 |
0.7764 | 0.03 | 4 | 0.4207 |
0.756 | 0.03 | 5 | 0.4180 |
0.7324 | 0.04 | 6 | 0.4137 |
0.7399 | 0.04 | 7 | 0.4102 |
0.6939 | 0.05 | 8 | 0.4063 |
0.5255 | 0.06 | 9 | 0.4014 |
0.522 | 0.06 | 10 | 0.4000 |
0.5147 | 0.07 | 11 | 0.3937 |
0.5088 | 0.08 | 12 | 0.3893 |
0.5033 | 0.08 | 13 | 0.3856 |
0.5035 | 0.09 | 14 | 0.3826 |
0.494 | 0.1 | 15 | 0.3783 |
0.4919 | 0.1 | 16 | 0.3748 |
0.49 | 0.11 | 17 | 0.3721 |
0.4856 | 0.12 | 18 | 0.3673 |
0.4726 | 0.12 | 19 | 0.3653 |
0.47 | 0.13 | 20 | 0.3611 |
0.4539 | 0.13 | 21 | 0.3561 |
0.4553 | 0.14 | 22 | 0.3524 |
0.4529 | 0.15 | 23 | 0.3486 |
0.3904 | 0.15 | 24 | 0.3453 |
0.3481 | 0.16 | 25 | 0.3412 |
0.3392 | 0.17 | 26 | 0.3377 |
0.3384 | 0.17 | 27 | 0.3330 |
0.3353 | 0.18 | 28 | 0.3303 |
0.3278 | 0.19 | 29 | 0.3266 |
0.3286 | 0.19 | 30 | 0.3230 |
0.3197 | 0.2 | 31 | 0.3201 |
0.3173 | 0.2 | 32 | 0.3161 |
0.3058 | 0.21 | 33 | 0.3131 |
0.3183 | 0.22 | 34 | 0.3081 |
0.3043 | 0.22 | 35 | 0.3043 |
0.3051 | 0.23 | 36 | 0.2996 |
0.2966 | 0.24 | 37 | 0.2979 |
0.2995 | 0.24 | 38 | 0.2936 |
0.2929 | 0.25 | 39 | 0.2896 |
0.2434 | 0.26 | 40 | 0.2858 |
0.24 | 0.26 | 41 | 0.2817 |
0.2339 | 0.27 | 42 | 0.2777 |
0.2353 | 0.28 | 43 | 0.2750 |
0.2355 | 0.28 | 44 | 0.2724 |
0.2285 | 0.29 | 45 | 0.2666 |
0.2264 | 0.29 | 46 | 0.2631 |
0.2215 | 0.3 | 47 | 0.2578 |
0.2151 | 0.31 | 48 | 0.2542 |
0.2178 | 0.31 | 49 | 0.2502 |
0.2139 | 0.32 | 50 | 0.2461 |
0.2028 | 0.33 | 51 | 0.2416 |
0.2046 | 0.33 | 52 | 0.2372 |
0.2071 | 0.34 | 53 | 0.2326 |
0.1921 | 0.35 | 54 | 0.2287 |
0.1912 | 0.35 | 55 | 0.2238 |
0.1724 | 0.36 | 56 | 0.2182 |
0.1675 | 0.36 | 57 | 0.2147 |
0.1653 | 0.37 | 58 | 0.2105 |
0.1585 | 0.38 | 59 | 0.2047 |
0.1546 | 0.38 | 60 | 0.2013 |
0.1507 | 0.39 | 61 | 0.1966 |
0.1552 | 0.4 | 62 | 0.1923 |
0.1417 | 0.4 | 63 | 0.1876 |
0.142 | 0.41 | 64 | 0.1831 |
0.1387 | 0.42 | 65 | 0.1788 |
0.1315 | 0.42 | 66 | 0.1735 |
0.1358 | 0.43 | 67 | 0.1693 |
0.1312 | 0.44 | 68 | 0.1637 |
0.1305 | 0.44 | 69 | 0.1596 |
0.1269 | 0.45 | 70 | 0.1551 |
0.1278 | 0.45 | 71 | 0.1498 |
0.1193 | 0.46 | 72 | 0.1451 |
0.1175 | 0.47 | 73 | 0.1392 |
0.1132 | 0.47 | 74 | 0.1351 |
0.1098 | 0.48 | 75 | 0.1298 |
0.1092 | 0.49 | 76 | 0.1254 |
0.0992 | 0.49 | 77 | 0.1197 |
0.1 | 0.5 | 78 | 0.1142 |
0.0929 | 0.51 | 79 | 0.1090 |
0.0875 | 0.51 | 80 | 0.1048 |
0.091 | 0.52 | 81 | 0.1006 |
0.083 | 0.52 | 82 | 0.0963 |
0.0814 | 0.53 | 83 | 0.0921 |
0.0818 | 0.54 | 84 | 0.0895 |
0.0748 | 0.54 | 85 | 0.0869 |
0.0758 | 0.55 | 86 | 0.0843 |
0.0779 | 0.56 | 87 | 0.0817 |
0.081 | 0.56 | 88 | 0.0797 |
0.0706 | 0.57 | 89 | 0.0778 |
0.0607 | 0.58 | 90 | 0.0762 |
0.0642 | 0.58 | 91 | 0.0744 |
0.0649 | 0.59 | 92 | 0.0729 |
0.0631 | 0.6 | 93 | 0.0719 |
0.0594 | 0.6 | 94 | 0.0698 |
0.0578 | 0.61 | 95 | 0.0692 |
0.0616 | 0.61 | 96 | 0.0680 |
0.0567 | 0.62 | 97 | 0.0669 |
0.0558 | 0.63 | 98 | 0.0655 |
0.0536 | 0.63 | 99 | 0.0642 |
0.0551 | 0.64 | 100 | 0.0639 |
0.0509 | 0.65 | 101 | 0.0627 |
0.06 | 0.65 | 102 | 0.0616 |
0.053 | 0.66 | 103 | 0.0608 |
0.0587 | 0.67 | 104 | 0.0612 |
0.0509 | 0.67 | 105 | 0.0592 |
0.0489 | 0.68 | 106 | 0.0584 |
0.0515 | 0.68 | 107 | 0.0579 |
0.0515 | 0.69 | 108 | 0.0564 |
0.0485 | 0.7 | 109 | 0.0558 |
0.0496 | 0.7 | 110 | 0.0544 |
0.0471 | 0.71 | 111 | 0.0534 |
0.0514 | 0.72 | 112 | 0.0524 |
0.0458 | 0.72 | 113 | 0.0522 |
0.0454 | 0.73 | 114 | 0.0515 |
0.0435 | 0.74 | 115 | 0.0511 |
0.0475 | 0.74 | 116 | 0.0503 |
0.0506 | 0.75 | 117 | 0.0497 |
0.0452 | 0.76 | 118 | 0.0492 |
0.0468 | 0.76 | 119 | 0.0488 |
0.0446 | 0.77 | 120 | 0.0487 |
0.0419 | 0.77 | 121 | 0.0481 |
0.0414 | 0.78 | 122 | 0.0477 |
0.0424 | 0.79 | 123 | 0.0467 |
0.044 | 0.79 | 124 | 0.0465 |
0.0413 | 0.8 | 125 | 0.0460 |
0.0439 | 0.81 | 126 | 0.0455 |
0.0391 | 0.81 | 127 | 0.0453 |
0.0407 | 0.82 | 128 | 0.0451 |
0.0357 | 0.83 | 129 | 0.0446 |
0.0387 | 0.83 | 130 | 0.0443 |
0.0411 | 0.84 | 131 | 0.0442 |
0.0396 | 0.84 | 132 | 0.0438 |
0.039 | 0.85 | 133 | 0.0435 |
0.0427 | 0.86 | 134 | 0.0432 |
0.0417 | 0.86 | 135 | 0.0427 |
0.035 | 0.87 | 136 | 0.0425 |
0.036 | 0.88 | 137 | 0.0425 |
0.0391 | 0.88 | 138 | 0.0419 |
0.0389 | 0.89 | 139 | 0.0414 |
0.042 | 0.9 | 140 | 0.0413 |
0.0368 | 0.9 | 141 | 0.0415 |
0.0336 | 0.91 | 142 | 0.0406 |
0.0356 | 0.92 | 143 | 0.0405 |
0.0349 | 0.92 | 144 | 0.0400 |
0.04 | 0.93 | 145 | 0.0401 |
0.0398 | 0.93 | 146 | 0.0397 |
0.0333 | 0.94 | 147 | 0.0398 |
0.0344 | 0.95 | 148 | 0.0390 |
0.037 | 0.95 | 149 | 0.0396 |
0.0327 | 0.96 | 150 | 0.0382 |
0.0336 | 0.97 | 151 | 0.0383 |
0.0342 | 0.97 | 152 | 0.0378 |
0.0348 | 0.98 | 153 | 0.0373 |
0.0368 | 0.99 | 154 | 0.0380 |
0.0333 | 0.99 | 155 | 0.0376 |
0.0344 | 1.0 | 156 | 0.0375 |
### Framework versions
- Transformers 4.37.2
- Pytorch 2.1.0a0+32f93b1
- Datasets 2.17.1
- Tokenizers 0.15.2