Alijeff1214 committed (verified) · commit f9415f2 · 1 parent: c7b2cb9

Upload folder using huggingface_hub
All Cluster_tokenizer_plot.png ADDED

Git LFS Details

  • SHA256: d3634cdc6d01381228e5bc8499b5b640d6e4bc34dd1fdf13dcc0f1fc86fc46ec
  • Pointer size: 130 Bytes
  • Size of remote file: 61.3 kB
Final_tokenizer_plot.png ADDED

Git LFS Details

  • SHA256: 254a32a5e850bbf5375690e68e22f0ceda25eb5beb9e9892717fae473d597652
  • Pointer size: 130 Bytes
  • Size of remote file: 57.2 kB
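For reference, "Pointer size: 130 Bytes" refers to the small Git LFS pointer file stored in the repository in place of the binary image. A pointer of that kind looks roughly like the sketch below; the oid is the SHA256 listed above for Final_tokenizer_plot.png, and the byte count on the size line is only illustrative, since this page reports the remote file size only as ~57.2 kB:

    version https://git-lfs.github.com/spec/v1
    oid sha256:254a32a5e850bbf5375690e68e22f0ceda25eb5beb9e9892717fae473d597652
    size 57234

With a five-digit size value, those three lines come to exactly 130 bytes, matching the pointer size reported above.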
FineTune_16All1265068.out ADDED
@@ -0,0 +1,648 @@
1
+
2
+ Checking label assignment:
3
+
4
+ Domain: Mathematics
5
+ Categories: cs.IT math.IT
6
+ Abstract: information embedding ie is the transmission of information within a host signal subject to a distor...
7
+
8
+ Domain: Computer Science
9
+ Categories: cs.CY
10
+ Abstract: according to socioconstructivism approach collective situations are promoted to favor learning in cl...
11
+
12
+ Domain: Physics
13
+ Categories: physics.pop-ph physics.optics
14
+ Abstract: a method is presented for generation of a subwavelength lambda longitudinally polarized beam which p...
15
+
16
+ Domain: Chemistry
17
+ Categories: nlin.PS
18
+ Abstract: rolls in finite prandtl number rotating convection with freeslip top and bottom boundary conditions ...
19
+
20
+ Domain: Statistics
21
+ Categories: stat.ME stat.CO
22
+ Abstract: in this paper we introduce a novel particle filter scheme for a class of partiallyobserved multivari...
23
+
24
+ Domain: Biology
25
+ Categories: q-bio.PE q-bio.CB quant-ph
26
+ Abstract: this is a supplement to the paper arxivqbio containing the text of correspondence sent to nature in...
27
+
28
+ Training with All Cluster tokenizer:
29
+ Vocabulary size: 16005
30
+ Could not load pretrained weights from /gpfswork/rech/fmr/uft12cr/finetuneAli/Bert_Model. Starting with random weights. Error: Error while deserializing header: HeaderTooLarge
31
+ Initialized model with vocabulary size: 16005
32
+ Batch 0:
33
+ input_ids shape: torch.Size([16, 256])
34
+ attention_mask shape: torch.Size([16, 256])
35
+ labels shape: torch.Size([16])
36
+ input_ids max value: 16003
37
+ Vocab size: 16005
38
+ Batch 100:
39
+ input_ids shape: torch.Size([16, 256])
40
+ attention_mask shape: torch.Size([16, 256])
41
+ labels shape: torch.Size([16])
42
+ input_ids max value: 16003
43
+ Vocab size: 16005
44
+ Batch 200:
45
+ input_ids shape: torch.Size([16, 256])
46
+ attention_mask shape: torch.Size([16, 256])
47
+ labels shape: torch.Size([16])
48
+ input_ids max value: 16003
49
+ Vocab size: 16005
50
+ Batch 300:
51
+ input_ids shape: torch.Size([16, 256])
52
+ attention_mask shape: torch.Size([16, 256])
53
+ labels shape: torch.Size([16])
54
+ input_ids max value: 16003
55
+ Vocab size: 16005
56
+ Batch 400:
57
+ input_ids shape: torch.Size([16, 256])
58
+ attention_mask shape: torch.Size([16, 256])
59
+ labels shape: torch.Size([16])
60
+ input_ids max value: 16003
61
+ Vocab size: 16005
62
+ Batch 500:
63
+ input_ids shape: torch.Size([16, 256])
64
+ attention_mask shape: torch.Size([16, 256])
65
+ labels shape: torch.Size([16])
66
+ input_ids max value: 16003
67
+ Vocab size: 16005
68
+ Batch 600:
69
+ input_ids shape: torch.Size([16, 256])
70
+ attention_mask shape: torch.Size([16, 256])
71
+ labels shape: torch.Size([16])
72
+ input_ids max value: 16003
73
+ Vocab size: 16005
74
+ Batch 700:
75
+ input_ids shape: torch.Size([16, 256])
76
+ attention_mask shape: torch.Size([16, 256])
77
+ labels shape: torch.Size([16])
78
+ input_ids max value: 16003
79
+ Vocab size: 16005
80
+ Batch 800:
81
+ input_ids shape: torch.Size([16, 256])
82
+ attention_mask shape: torch.Size([16, 256])
83
+ labels shape: torch.Size([16])
84
+ input_ids max value: 16003
85
+ Vocab size: 16005
86
+ Batch 900:
87
+ input_ids shape: torch.Size([16, 256])
88
+ attention_mask shape: torch.Size([16, 256])
89
+ labels shape: torch.Size([16])
90
+ input_ids max value: 16003
91
+ Vocab size: 16005
92
+ Epoch 1/3:
93
+ Train Loss: 0.9143, Train Accuracy: 0.6955
94
+ Val Loss: 0.6986, Val Accuracy: 0.7743, Val F1: 0.7502
95
+ Batch 0:
96
+ input_ids shape: torch.Size([16, 256])
97
+ attention_mask shape: torch.Size([16, 256])
98
+ labels shape: torch.Size([16])
99
+ input_ids max value: 16003
100
+ Vocab size: 16005
101
+ Batch 100:
102
+ input_ids shape: torch.Size([16, 256])
103
+ attention_mask shape: torch.Size([16, 256])
104
+ labels shape: torch.Size([16])
105
+ input_ids max value: 16003
106
+ Vocab size: 16005
107
+ Batch 200:
108
+ input_ids shape: torch.Size([16, 256])
109
+ attention_mask shape: torch.Size([16, 256])
110
+ labels shape: torch.Size([16])
111
+ input_ids max value: 16003
112
+ Vocab size: 16005
113
+ Batch 300:
114
+ input_ids shape: torch.Size([16, 256])
115
+ attention_mask shape: torch.Size([16, 256])
116
+ labels shape: torch.Size([16])
117
+ input_ids max value: 16003
118
+ Vocab size: 16005
119
+ Batch 400:
120
+ input_ids shape: torch.Size([16, 256])
121
+ attention_mask shape: torch.Size([16, 256])
122
+ labels shape: torch.Size([16])
123
+ input_ids max value: 16003
124
+ Vocab size: 16005
125
+ Batch 500:
126
+ input_ids shape: torch.Size([16, 256])
127
+ attention_mask shape: torch.Size([16, 256])
128
+ labels shape: torch.Size([16])
129
+ input_ids max value: 16003
130
+ Vocab size: 16005
131
+ Batch 600:
132
+ input_ids shape: torch.Size([16, 256])
133
+ attention_mask shape: torch.Size([16, 256])
134
+ labels shape: torch.Size([16])
135
+ input_ids max value: 16003
136
+ Vocab size: 16005
137
+ Batch 700:
138
+ input_ids shape: torch.Size([16, 256])
139
+ attention_mask shape: torch.Size([16, 256])
140
+ labels shape: torch.Size([16])
141
+ input_ids max value: 16003
142
+ Vocab size: 16005
143
+ Batch 800:
144
+ input_ids shape: torch.Size([16, 256])
145
+ attention_mask shape: torch.Size([16, 256])
146
+ labels shape: torch.Size([16])
147
+ input_ids max value: 16003
148
+ Vocab size: 16005
149
+ Batch 900:
150
+ input_ids shape: torch.Size([16, 256])
151
+ attention_mask shape: torch.Size([16, 256])
152
+ labels shape: torch.Size([16])
153
+ input_ids max value: 16003
154
+ Vocab size: 16005
155
+ Epoch 2/3:
156
+ Train Loss: 0.6277, Train Accuracy: 0.7987
157
+ Val Loss: 0.6150, Val Accuracy: 0.8002, Val F1: 0.7753
158
+ Batch 0:
159
+ input_ids shape: torch.Size([16, 256])
160
+ attention_mask shape: torch.Size([16, 256])
161
+ labels shape: torch.Size([16])
162
+ input_ids max value: 16003
163
+ Vocab size: 16005
164
+ Batch 100:
165
+ input_ids shape: torch.Size([16, 256])
166
+ attention_mask shape: torch.Size([16, 256])
167
+ labels shape: torch.Size([16])
168
+ input_ids max value: 16003
169
+ Vocab size: 16005
170
+ Batch 200:
171
+ input_ids shape: torch.Size([16, 256])
172
+ attention_mask shape: torch.Size([16, 256])
173
+ labels shape: torch.Size([16])
174
+ input_ids max value: 16003
175
+ Vocab size: 16005
176
+ Batch 300:
177
+ input_ids shape: torch.Size([16, 256])
178
+ attention_mask shape: torch.Size([16, 256])
179
+ labels shape: torch.Size([16])
180
+ input_ids max value: 16003
181
+ Vocab size: 16005
182
+ Batch 400:
183
+ input_ids shape: torch.Size([16, 256])
184
+ attention_mask shape: torch.Size([16, 256])
185
+ labels shape: torch.Size([16])
186
+ input_ids max value: 16003
187
+ Vocab size: 16005
188
+ Batch 500:
189
+ input_ids shape: torch.Size([16, 256])
190
+ attention_mask shape: torch.Size([16, 256])
191
+ labels shape: torch.Size([16])
192
+ input_ids max value: 16003
193
+ Vocab size: 16005
194
+ Batch 600:
195
+ input_ids shape: torch.Size([16, 256])
196
+ attention_mask shape: torch.Size([16, 256])
197
+ labels shape: torch.Size([16])
198
+ input_ids max value: 16003
199
+ Vocab size: 16005
200
+ Batch 700:
201
+ input_ids shape: torch.Size([16, 256])
202
+ attention_mask shape: torch.Size([16, 256])
203
+ labels shape: torch.Size([16])
204
+ input_ids max value: 16003
205
+ Vocab size: 16005
206
+ Batch 800:
207
+ input_ids shape: torch.Size([16, 256])
208
+ attention_mask shape: torch.Size([16, 256])
209
+ labels shape: torch.Size([16])
210
+ input_ids max value: 16003
211
+ Vocab size: 16005
212
+ Batch 900:
213
+ input_ids shape: torch.Size([16, 256])
214
+ attention_mask shape: torch.Size([16, 256])
215
+ labels shape: torch.Size([16])
216
+ input_ids max value: 16003
217
+ Vocab size: 16005
218
+ Epoch 3/3:
219
+ Train Loss: 0.5085, Train Accuracy: 0.8373
220
+ Val Loss: 0.6998, Val Accuracy: 0.7784, Val F1: 0.7468
221
+
222
+ Test Results for All Cluster tokenizer:
223
+ Accuracy: 0.7781
224
+ F1 Score: 0.7465
225
+ AUC-ROC: 0.8821
226
+
227
+ Training with Final tokenizer:
228
+ Vocabulary size: 15047
229
+ Could not load pretrained weights from /gpfswork/rech/fmr/uft12cr/finetuneAli/Bert_Model. Starting with random weights. Error: Error while deserializing header: HeaderTooLarge
230
+ Initialized model with vocabulary size: 15047
231
+ Batch 0:
232
+ input_ids shape: torch.Size([16, 256])
233
+ attention_mask shape: torch.Size([16, 256])
234
+ labels shape: torch.Size([16])
235
+ input_ids max value: 15046
236
+ Vocab size: 15047
237
+ Batch 100:
238
+ input_ids shape: torch.Size([16, 256])
239
+ attention_mask shape: torch.Size([16, 256])
240
+ labels shape: torch.Size([16])
241
+ input_ids max value: 15046
242
+ Vocab size: 15047
243
+ Batch 200:
244
+ input_ids shape: torch.Size([16, 256])
245
+ attention_mask shape: torch.Size([16, 256])
246
+ labels shape: torch.Size([16])
247
+ input_ids max value: 15046
248
+ Vocab size: 15047
249
+ Batch 300:
250
+ input_ids shape: torch.Size([16, 256])
251
+ attention_mask shape: torch.Size([16, 256])
252
+ labels shape: torch.Size([16])
253
+ input_ids max value: 15046
254
+ Vocab size: 15047
255
+ Batch 400:
256
+ input_ids shape: torch.Size([16, 256])
257
+ attention_mask shape: torch.Size([16, 256])
258
+ labels shape: torch.Size([16])
259
+ input_ids max value: 15046
260
+ Vocab size: 15047
261
+ Batch 500:
262
+ input_ids shape: torch.Size([16, 256])
263
+ attention_mask shape: torch.Size([16, 256])
264
+ labels shape: torch.Size([16])
265
+ input_ids max value: 15046
266
+ Vocab size: 15047
267
+ Batch 600:
268
+ input_ids shape: torch.Size([16, 256])
269
+ attention_mask shape: torch.Size([16, 256])
270
+ labels shape: torch.Size([16])
271
+ input_ids max value: 15046
272
+ Vocab size: 15047
273
+ Batch 700:
274
+ input_ids shape: torch.Size([16, 256])
275
+ attention_mask shape: torch.Size([16, 256])
276
+ labels shape: torch.Size([16])
277
+ input_ids max value: 15046
278
+ Vocab size: 15047
279
+ Batch 800:
280
+ input_ids shape: torch.Size([16, 256])
281
+ attention_mask shape: torch.Size([16, 256])
282
+ labels shape: torch.Size([16])
283
+ input_ids max value: 15046
284
+ Vocab size: 15047
285
+ Batch 900:
286
+ input_ids shape: torch.Size([16, 256])
287
+ attention_mask shape: torch.Size([16, 256])
288
+ labels shape: torch.Size([16])
289
+ input_ids max value: 15046
290
+ Vocab size: 15047
291
+ Epoch 1/3:
292
+ Train Loss: 0.9914, Train Accuracy: 0.6629
293
+ Val Loss: 0.8531, Val Accuracy: 0.7224, Val F1: 0.6560
294
+ Batch 0:
295
+ input_ids shape: torch.Size([16, 256])
296
+ attention_mask shape: torch.Size([16, 256])
297
+ labels shape: torch.Size([16])
298
+ input_ids max value: 15046
299
+ Vocab size: 15047
300
+ Batch 100:
301
+ input_ids shape: torch.Size([16, 256])
302
+ attention_mask shape: torch.Size([16, 256])
303
+ labels shape: torch.Size([16])
304
+ input_ids max value: 15046
305
+ Vocab size: 15047
306
+ Batch 200:
307
+ input_ids shape: torch.Size([16, 256])
308
+ attention_mask shape: torch.Size([16, 256])
309
+ labels shape: torch.Size([16])
310
+ input_ids max value: 15046
311
+ Vocab size: 15047
312
+ Batch 300:
313
+ input_ids shape: torch.Size([16, 256])
314
+ attention_mask shape: torch.Size([16, 256])
315
+ labels shape: torch.Size([16])
316
+ input_ids max value: 15046
317
+ Vocab size: 15047
318
+ Batch 400:
319
+ input_ids shape: torch.Size([16, 256])
320
+ attention_mask shape: torch.Size([16, 256])
321
+ labels shape: torch.Size([16])
322
+ input_ids max value: 15046
323
+ Vocab size: 15047
324
+ Batch 500:
325
+ input_ids shape: torch.Size([16, 256])
326
+ attention_mask shape: torch.Size([16, 256])
327
+ labels shape: torch.Size([16])
328
+ input_ids max value: 15046
329
+ Vocab size: 15047
330
+ Batch 600:
331
+ input_ids shape: torch.Size([16, 256])
332
+ attention_mask shape: torch.Size([16, 256])
333
+ labels shape: torch.Size([16])
334
+ input_ids max value: 15046
335
+ Vocab size: 15047
336
+ Batch 700:
337
+ input_ids shape: torch.Size([16, 256])
338
+ attention_mask shape: torch.Size([16, 256])
339
+ labels shape: torch.Size([16])
340
+ input_ids max value: 15046
341
+ Vocab size: 15047
342
+ Batch 800:
343
+ input_ids shape: torch.Size([16, 256])
344
+ attention_mask shape: torch.Size([16, 256])
345
+ labels shape: torch.Size([16])
346
+ input_ids max value: 15046
347
+ Vocab size: 15047
348
+ Batch 900:
349
+ input_ids shape: torch.Size([16, 256])
350
+ attention_mask shape: torch.Size([16, 256])
351
+ labels shape: torch.Size([16])
352
+ input_ids max value: 15046
353
+ Vocab size: 15047
354
+ Epoch 2/3:
355
+ Train Loss: 0.7899, Train Accuracy: 0.7359
356
+ Val Loss: 0.7491, Val Accuracy: 0.7516, Val F1: 0.7260
357
+ Batch 0:
358
+ input_ids shape: torch.Size([16, 256])
359
+ attention_mask shape: torch.Size([16, 256])
360
+ labels shape: torch.Size([16])
361
+ input_ids max value: 15046
362
+ Vocab size: 15047
363
+ Batch 100:
364
+ input_ids shape: torch.Size([16, 256])
365
+ attention_mask shape: torch.Size([16, 256])
366
+ labels shape: torch.Size([16])
367
+ input_ids max value: 15046
368
+ Vocab size: 15047
369
+ Batch 200:
370
+ input_ids shape: torch.Size([16, 256])
371
+ attention_mask shape: torch.Size([16, 256])
372
+ labels shape: torch.Size([16])
373
+ input_ids max value: 15046
374
+ Vocab size: 15047
375
+ Batch 300:
376
+ input_ids shape: torch.Size([16, 256])
377
+ attention_mask shape: torch.Size([16, 256])
378
+ labels shape: torch.Size([16])
379
+ input_ids max value: 15046
380
+ Vocab size: 15047
381
+ Batch 400:
382
+ input_ids shape: torch.Size([16, 256])
383
+ attention_mask shape: torch.Size([16, 256])
384
+ labels shape: torch.Size([16])
385
+ input_ids max value: 15046
386
+ Vocab size: 15047
387
+ Batch 500:
388
+ input_ids shape: torch.Size([16, 256])
389
+ attention_mask shape: torch.Size([16, 256])
390
+ labels shape: torch.Size([16])
391
+ input_ids max value: 15046
392
+ Vocab size: 15047
393
+ Batch 600:
394
+ input_ids shape: torch.Size([16, 256])
395
+ attention_mask shape: torch.Size([16, 256])
396
+ labels shape: torch.Size([16])
397
+ input_ids max value: 15046
398
+ Vocab size: 15047
399
+ Batch 700:
400
+ input_ids shape: torch.Size([16, 256])
401
+ attention_mask shape: torch.Size([16, 256])
402
+ labels shape: torch.Size([16])
403
+ input_ids max value: 15046
404
+ Vocab size: 15047
405
+ Batch 800:
406
+ input_ids shape: torch.Size([16, 256])
407
+ attention_mask shape: torch.Size([16, 256])
408
+ labels shape: torch.Size([16])
409
+ input_ids max value: 15046
410
+ Vocab size: 15047
411
+ Batch 900:
412
+ input_ids shape: torch.Size([16, 256])
413
+ attention_mask shape: torch.Size([16, 256])
414
+ labels shape: torch.Size([16])
415
+ input_ids max value: 15046
416
+ Vocab size: 15047
417
+ Epoch 3/3:
418
+ Train Loss: 0.6774, Train Accuracy: 0.7784
419
+ Val Loss: 0.7340, Val Accuracy: 0.7557, Val F1: 0.7386
420
+
421
+ Test Results for Final tokenizer:
422
+ Accuracy: 0.7560
423
+ F1 Score: 0.7388
424
+ AUC-ROC: 0.8423
425
+
426
+ Training with General tokenizer:
427
+ Vocabulary size: 16000
428
+ Could not load pretrained weights from /gpfswork/rech/fmr/uft12cr/finetuneAli/Bert_Model. Starting with random weights. Error: Error while deserializing header: HeaderTooLarge
429
+ Initialized model with vocabulary size: 16000
430
+ Batch 0:
431
+ input_ids shape: torch.Size([16, 256])
432
+ attention_mask shape: torch.Size([16, 256])
433
+ labels shape: torch.Size([16])
434
+ input_ids max value: 15945
435
+ Vocab size: 16000
436
+ Batch 100:
437
+ input_ids shape: torch.Size([16, 256])
438
+ attention_mask shape: torch.Size([16, 256])
439
+ labels shape: torch.Size([16])
440
+ input_ids max value: 15973
441
+ Vocab size: 16000
442
+ Batch 200:
443
+ input_ids shape: torch.Size([16, 256])
444
+ attention_mask shape: torch.Size([16, 256])
445
+ labels shape: torch.Size([16])
446
+ input_ids max value: 15973
447
+ Vocab size: 16000
448
+ Batch 300:
449
+ input_ids shape: torch.Size([16, 256])
450
+ attention_mask shape: torch.Size([16, 256])
451
+ labels shape: torch.Size([16])
452
+ input_ids max value: 15984
453
+ Vocab size: 16000
454
+ Batch 400:
455
+ input_ids shape: torch.Size([16, 256])
456
+ attention_mask shape: torch.Size([16, 256])
457
+ labels shape: torch.Size([16])
458
+ input_ids max value: 15973
459
+ Vocab size: 16000
460
+ Batch 500:
461
+ input_ids shape: torch.Size([16, 256])
462
+ attention_mask shape: torch.Size([16, 256])
463
+ labels shape: torch.Size([16])
464
+ input_ids max value: 15973
465
+ Vocab size: 16000
466
+ Batch 600:
467
+ input_ids shape: torch.Size([16, 256])
468
+ attention_mask shape: torch.Size([16, 256])
469
+ labels shape: torch.Size([16])
470
+ input_ids max value: 15985
471
+ Vocab size: 16000
472
+ Batch 700:
473
+ input_ids shape: torch.Size([16, 256])
474
+ attention_mask shape: torch.Size([16, 256])
475
+ labels shape: torch.Size([16])
476
+ input_ids max value: 15985
477
+ Vocab size: 16000
478
+ Batch 800:
479
+ input_ids shape: torch.Size([16, 256])
480
+ attention_mask shape: torch.Size([16, 256])
481
+ labels shape: torch.Size([16])
482
+ input_ids max value: 15973
483
+ Vocab size: 16000
484
+ Batch 900:
485
+ input_ids shape: torch.Size([16, 256])
486
+ attention_mask shape: torch.Size([16, 256])
487
+ labels shape: torch.Size([16])
488
+ input_ids max value: 15901
489
+ Vocab size: 16000
490
+ Epoch 1/3:
491
+ Train Loss: 0.8970, Train Accuracy: 0.7058
492
+ Val Loss: 0.7586, Val Accuracy: 0.7604, Val F1: 0.6892
493
+ Batch 0:
494
+ input_ids shape: torch.Size([16, 256])
495
+ attention_mask shape: torch.Size([16, 256])
496
+ labels shape: torch.Size([16])
497
+ input_ids max value: 15873
498
+ Vocab size: 16000
499
+ Batch 100:
500
+ input_ids shape: torch.Size([16, 256])
501
+ attention_mask shape: torch.Size([16, 256])
502
+ labels shape: torch.Size([16])
503
+ input_ids max value: 15950
504
+ Vocab size: 16000
505
+ Batch 200:
506
+ input_ids shape: torch.Size([16, 256])
507
+ attention_mask shape: torch.Size([16, 256])
508
+ labels shape: torch.Size([16])
509
+ input_ids max value: 15985
510
+ Vocab size: 16000
511
+ Batch 300:
512
+ input_ids shape: torch.Size([16, 256])
513
+ attention_mask shape: torch.Size([16, 256])
514
+ labels shape: torch.Size([16])
515
+ input_ids max value: 15973
516
+ Vocab size: 16000
517
+ Batch 400:
518
+ input_ids shape: torch.Size([16, 256])
519
+ attention_mask shape: torch.Size([16, 256])
520
+ labels shape: torch.Size([16])
521
+ input_ids max value: 15985
522
+ Vocab size: 16000
523
+ Batch 500:
524
+ input_ids shape: torch.Size([16, 256])
525
+ attention_mask shape: torch.Size([16, 256])
526
+ labels shape: torch.Size([16])
527
+ input_ids max value: 15992
528
+ Vocab size: 16000
529
+ Batch 600:
530
+ input_ids shape: torch.Size([16, 256])
531
+ attention_mask shape: torch.Size([16, 256])
532
+ labels shape: torch.Size([16])
533
+ input_ids max value: 15928
534
+ Vocab size: 16000
535
+ Batch 700:
536
+ input_ids shape: torch.Size([16, 256])
537
+ attention_mask shape: torch.Size([16, 256])
538
+ labels shape: torch.Size([16])
539
+ input_ids max value: 15980
540
+ Vocab size: 16000
541
+ Batch 800:
542
+ input_ids shape: torch.Size([16, 256])
543
+ attention_mask shape: torch.Size([16, 256])
544
+ labels shape: torch.Size([16])
545
+ input_ids max value: 15973
546
+ Vocab size: 16000
547
+ Batch 900:
548
+ input_ids shape: torch.Size([16, 256])
549
+ attention_mask shape: torch.Size([16, 256])
550
+ labels shape: torch.Size([16])
551
+ input_ids max value: 15973
552
+ Vocab size: 16000
553
+ Epoch 2/3:
554
+ Train Loss: 0.6461, Train Accuracy: 0.7883
555
+ Val Loss: 0.5972, Val Accuracy: 0.8024, Val F1: 0.7585
556
+ Batch 0:
557
+ input_ids shape: torch.Size([16, 256])
558
+ attention_mask shape: torch.Size([16, 256])
559
+ labels shape: torch.Size([16])
560
+ input_ids max value: 15973
561
+ Vocab size: 16000
562
+ Batch 100:
563
+ input_ids shape: torch.Size([16, 256])
564
+ attention_mask shape: torch.Size([16, 256])
565
+ labels shape: torch.Size([16])
566
+ input_ids max value: 15871
567
+ Vocab size: 16000
568
+ Batch 200:
569
+ input_ids shape: torch.Size([16, 256])
570
+ attention_mask shape: torch.Size([16, 256])
571
+ labels shape: torch.Size([16])
572
+ input_ids max value: 15985
573
+ Vocab size: 16000
574
+ Batch 300:
575
+ input_ids shape: torch.Size([16, 256])
576
+ attention_mask shape: torch.Size([16, 256])
577
+ labels shape: torch.Size([16])
578
+ input_ids max value: 15973
579
+ Vocab size: 16000
580
+ Batch 400:
581
+ input_ids shape: torch.Size([16, 256])
582
+ attention_mask shape: torch.Size([16, 256])
583
+ labels shape: torch.Size([16])
584
+ input_ids max value: 15987
585
+ Vocab size: 16000
586
+ Batch 500:
587
+ input_ids shape: torch.Size([16, 256])
588
+ attention_mask shape: torch.Size([16, 256])
589
+ labels shape: torch.Size([16])
590
+ input_ids max value: 15973
591
+ Vocab size: 16000
592
+ Batch 600:
593
+ input_ids shape: torch.Size([16, 256])
594
+ attention_mask shape: torch.Size([16, 256])
595
+ labels shape: torch.Size([16])
596
+ input_ids max value: 15973
597
+ Vocab size: 16000
598
+ Batch 700:
599
+ input_ids shape: torch.Size([16, 256])
600
+ attention_mask shape: torch.Size([16, 256])
601
+ labels shape: torch.Size([16])
602
+ input_ids max value: 15973
603
+ Vocab size: 16000
604
+ Batch 800:
605
+ input_ids shape: torch.Size([16, 256])
606
+ attention_mask shape: torch.Size([16, 256])
607
+ labels shape: torch.Size([16])
608
+ input_ids max value: 15973
609
+ Vocab size: 16000
610
+ Batch 900:
611
+ input_ids shape: torch.Size([16, 256])
612
+ attention_mask shape: torch.Size([16, 256])
613
+ labels shape: torch.Size([16])
614
+ input_ids max value: 15956
615
+ Vocab size: 16000
616
+ Epoch 3/3:
617
+ Train Loss: 0.5426, Train Accuracy: 0.8275
618
+ Val Loss: 0.5413, Val Accuracy: 0.8275, Val F1: 0.7986
619
+
620
+ Test Results for General tokenizer:
621
+ Accuracy: 0.8281
622
+ F1 Score: 0.7992
623
+ AUC-ROC: 0.8504
624
+
625
+ Summary of Results:
626
+
627
+ All Cluster Tokenizer:
628
+ Accuracy: 0.7781
629
+ F1 Score: 0.7465
630
+ AUC-ROC: 0.8821
631
+
632
+ Final Tokenizer:
633
+ Accuracy: 0.7560
634
+ F1 Score: 0.7388
635
+ AUC-ROC: 0.8423
636
+
637
+ General Tokenizer:
638
+ Accuracy: 0.8281
639
+ F1 Score: 0.7992
640
+ AUC-ROC: 0.8504
641
+
642
+ Class distribution in training set:
643
+ Class Biology: 439 samples
644
+ Class Chemistry: 454 samples
645
+ Class Computer Science: 1358 samples
646
+ Class Mathematics: 9480 samples
647
+ Class Physics: 2733 samples
648
+ Class Statistics: 200 samples
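Every run in the log above falls back to random initialization because loading weights from /gpfswork/rech/fmr/uft12cr/finetuneAli/Bert_Model fails with "Error while deserializing header: HeaderTooLarge". That safetensors error usually means the file being read is not a real safetensors archive, most often an un-fetched Git LFS pointer stub or a pickled .bin opened as safetensors. A minimal diagnostic sketch follows; the model.safetensors filename is an assumption, since the log only names the directory:

    # Minimal diagnostic sketch -- assumes the failing weight file is
    # Bert_Model/model.safetensors; only the directory is named in the log.
    from pathlib import Path

    ckpt = Path("/gpfswork/rech/fmr/uft12cr/finetuneAli/Bert_Model/model.safetensors")
    head = ckpt.read_bytes()[:200]
    if head.startswith(b"version https://git-lfs.github.com/spec/v1"):
        # The checkout only has the LFS pointer; fetch the real weights first.
        print("LFS pointer stub -- run `git lfs pull` in the model directory.")
    else:
        # A valid safetensors file starts with an 8-byte little-endian header length
        # followed by a JSON header; anything else can trigger errors like HeaderTooLarge.
        print("First bytes:", head[:16])

This is only a diagnostic; whichever cause applies, placing a valid checkpoint at that path would stop the runs from starting with random weights.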
FineTune_withPlots32k1115474.out ADDED
@@ -0,0 +1,1071 @@
1
+ Loading pytorch-gpu/py3/2.1.1
2
+ Loading requirement: cuda/11.8.0 nccl/2.18.5-1-cuda cudnn/8.7.0.84-cuda
3
+ gcc/8.5.0 openmpi/4.1.5-cuda intel-mkl/2020.4 magma/2.7.1-cuda sox/14.4.2
4
+ sparsehash/2.0.3 libjpeg-turbo/2.1.3 ffmpeg/4.4.4
5
+ + HF_DATASETS_OFFLINE=1
6
+ + TRANSFORMERS_OFFLINE=1
7
+ + python3 FIneTune_withPlots.py
8
+
9
+ Checking label assignment:
10
+
11
+ Domain: Mathematics
12
+ Categories: math.OA math.PR
13
+ Abstract: we study the distributional behavior for products and for sums of boolean independent random variabl...
14
+
15
+ Domain: Computer Science
16
+ Categories: cs.CL physics.soc-ph
17
+ Abstract: zipfs law states that if words of language are ranked in the order of decreasing frequency in texts ...
18
+
19
+ Domain: Physics
20
+ Categories: physics.atom-ph
21
+ Abstract: the effects of parity and time reversal violating potential in particular the tensorpseudotensor ele...
22
+
23
+ Domain: Chemistry
24
+ Categories: nlin.AO
25
+ Abstract: over a period of approximately five years pankaj ghemawat of harvard business school and daniel levi...
26
+
27
+ Domain: Statistics
28
+ Categories: stat.AP
29
+ Abstract: we consider data consisting of photon counts of diffracted xray radiation as a function of the angle...
30
+
31
+ Domain: Biology
32
+ Categories: q-bio.PE q-bio.GN
33
+ Abstract: this paper develops simplified mathematical models describing the mutationselection balance for the ...
34
+ /linkhome/rech/genrug01/uft12cr/.local/lib/python3.11/site-packages/transformers/tokenization_utils_base.py:2057: FutureWarning: Calling BertTokenizer.from_pretrained() with the path to a single file or url is deprecated and won't be possible anymore in v5. Use a model identifier or the path to a directory instead.
35
+ warnings.warn(
36
+
37
+ Training with All Cluster tokenizer:
38
+ Vocabulary size: 29376
39
+ Could not load pretrained weights from /gpfswork/rech/fmr/uft12cr/finetuneAli/Bert_Model. Starting with random weights. Error: Error while deserializing header: HeaderTooLarge
40
+ Initialized model with vocabulary size: 29376
41
+ /gpfsdswork/projects/rech/fmr/uft12cr/finetuneAli/FIneTune_withPlots.py:173: FutureWarning: `torch.cuda.amp.GradScaler(args...)` is deprecated. Please use `torch.amp.GradScaler('cuda', args...)` instead.
42
+ scaler = amp.GradScaler()
43
+ Batch 0:
44
+ input_ids shape: torch.Size([16, 256])
45
+ attention_mask shape: torch.Size([16, 256])
46
+ labels shape: torch.Size([16])
47
+ input_ids max value: 29374
48
+ Vocab size: 29376
49
+ /gpfsdswork/projects/rech/fmr/uft12cr/finetuneAli/FIneTune_withPlots.py:202: FutureWarning: `torch.cuda.amp.autocast(args...)` is deprecated. Please use `torch.amp.autocast('cuda', args...)` instead.
50
+ with amp.autocast():
51
+ Batch 100:
52
+ input_ids shape: torch.Size([16, 256])
53
+ attention_mask shape: torch.Size([16, 256])
54
+ labels shape: torch.Size([16])
55
+ input_ids max value: 29374
56
+ Vocab size: 29376
57
+ Batch 200:
58
+ input_ids shape: torch.Size([16, 256])
59
+ attention_mask shape: torch.Size([16, 256])
60
+ labels shape: torch.Size([16])
61
+ input_ids max value: 29374
62
+ Vocab size: 29376
63
+ Batch 300:
64
+ input_ids shape: torch.Size([16, 256])
65
+ attention_mask shape: torch.Size([16, 256])
66
+ labels shape: torch.Size([16])
67
+ input_ids max value: 29374
68
+ Vocab size: 29376
69
+ Batch 400:
70
+ input_ids shape: torch.Size([16, 256])
71
+ attention_mask shape: torch.Size([16, 256])
72
+ labels shape: torch.Size([16])
73
+ input_ids max value: 29374
74
+ Vocab size: 29376
75
+ Batch 500:
76
+ input_ids shape: torch.Size([16, 256])
77
+ attention_mask shape: torch.Size([16, 256])
78
+ labels shape: torch.Size([16])
79
+ input_ids max value: 29374
80
+ Vocab size: 29376
81
+ Batch 600:
82
+ input_ids shape: torch.Size([16, 256])
83
+ attention_mask shape: torch.Size([16, 256])
84
+ labels shape: torch.Size([16])
85
+ input_ids max value: 29374
86
+ Vocab size: 29376
87
+ Batch 700:
88
+ input_ids shape: torch.Size([16, 256])
89
+ attention_mask shape: torch.Size([16, 256])
90
+ labels shape: torch.Size([16])
91
+ input_ids max value: 29374
92
+ Vocab size: 29376
93
+ Batch 800:
94
+ input_ids shape: torch.Size([16, 256])
95
+ attention_mask shape: torch.Size([16, 256])
96
+ labels shape: torch.Size([16])
97
+ input_ids max value: 29374
98
+ Vocab size: 29376
99
+ Batch 900:
100
+ input_ids shape: torch.Size([16, 256])
101
+ attention_mask shape: torch.Size([16, 256])
102
+ labels shape: torch.Size([16])
103
+ input_ids max value: 29374
104
+ Vocab size: 29376
105
+ Epoch 1/5:
106
+ Train Loss: 0.8540, Train Accuracy: 0.7226
107
+ Val Loss: 0.6542, Val Accuracy: 0.7833, Val F1: 0.7250
108
+ Batch 0:
109
+ input_ids shape: torch.Size([16, 256])
110
+ attention_mask shape: torch.Size([16, 256])
111
+ labels shape: torch.Size([16])
112
+ input_ids max value: 29374
113
+ Vocab size: 29376
114
+ /gpfsdswork/projects/rech/fmr/uft12cr/finetuneAli/FIneTune_withPlots.py:202: FutureWarning: `torch.cuda.amp.autocast(args...)` is deprecated. Please use `torch.amp.autocast('cuda', args...)` instead.
115
+ with amp.autocast():
116
+ Batch 100:
117
+ input_ids shape: torch.Size([16, 256])
118
+ attention_mask shape: torch.Size([16, 256])
119
+ labels shape: torch.Size([16])
120
+ input_ids max value: 29374
121
+ Vocab size: 29376
122
+ Batch 200:
123
+ input_ids shape: torch.Size([16, 256])
124
+ attention_mask shape: torch.Size([16, 256])
125
+ labels shape: torch.Size([16])
126
+ input_ids max value: 29374
127
+ Vocab size: 29376
128
+ Batch 300:
129
+ input_ids shape: torch.Size([16, 256])
130
+ attention_mask shape: torch.Size([16, 256])
131
+ labels shape: torch.Size([16])
132
+ input_ids max value: 29374
133
+ Vocab size: 29376
134
+ Batch 400:
135
+ input_ids shape: torch.Size([16, 256])
136
+ attention_mask shape: torch.Size([16, 256])
137
+ labels shape: torch.Size([16])
138
+ input_ids max value: 29374
139
+ Vocab size: 29376
140
+ Batch 500:
141
+ input_ids shape: torch.Size([16, 256])
142
+ attention_mask shape: torch.Size([16, 256])
143
+ labels shape: torch.Size([16])
144
+ input_ids max value: 29374
145
+ Vocab size: 29376
146
+ Batch 600:
147
+ input_ids shape: torch.Size([16, 256])
148
+ attention_mask shape: torch.Size([16, 256])
149
+ labels shape: torch.Size([16])
150
+ input_ids max value: 29374
151
+ Vocab size: 29376
152
+ Batch 700:
153
+ input_ids shape: torch.Size([16, 256])
154
+ attention_mask shape: torch.Size([16, 256])
155
+ labels shape: torch.Size([16])
156
+ input_ids max value: 29374
157
+ Vocab size: 29376
158
+ Batch 800:
159
+ input_ids shape: torch.Size([16, 256])
160
+ attention_mask shape: torch.Size([16, 256])
161
+ labels shape: torch.Size([16])
162
+ input_ids max value: 29374
163
+ Vocab size: 29376
164
+ Batch 900:
165
+ input_ids shape: torch.Size([16, 256])
166
+ attention_mask shape: torch.Size([16, 256])
167
+ labels shape: torch.Size([16])
168
+ input_ids max value: 29374
169
+ Vocab size: 29376
170
+ Epoch 2/5:
171
+ Train Loss: 0.6120, Train Accuracy: 0.8040
172
+ Val Loss: 0.6541, Val Accuracy: 0.7765, Val F1: 0.7610
173
+ Batch 0:
174
+ input_ids shape: torch.Size([16, 256])
175
+ attention_mask shape: torch.Size([16, 256])
176
+ labels shape: torch.Size([16])
177
+ input_ids max value: 29374
178
+ Vocab size: 29376
179
+ /gpfsdswork/projects/rech/fmr/uft12cr/finetuneAli/FIneTune_withPlots.py:202: FutureWarning: `torch.cuda.amp.autocast(args...)` is deprecated. Please use `torch.amp.autocast('cuda', args...)` instead.
180
+ with amp.autocast():
181
+ Batch 100:
182
+ input_ids shape: torch.Size([16, 256])
183
+ attention_mask shape: torch.Size([16, 256])
184
+ labels shape: torch.Size([16])
185
+ input_ids max value: 29374
186
+ Vocab size: 29376
187
+ Batch 200:
188
+ input_ids shape: torch.Size([16, 256])
189
+ attention_mask shape: torch.Size([16, 256])
190
+ labels shape: torch.Size([16])
191
+ input_ids max value: 29374
192
+ Vocab size: 29376
193
+ Batch 300:
194
+ input_ids shape: torch.Size([16, 256])
195
+ attention_mask shape: torch.Size([16, 256])
196
+ labels shape: torch.Size([16])
197
+ input_ids max value: 29374
198
+ Vocab size: 29376
199
+ Batch 400:
200
+ input_ids shape: torch.Size([16, 256])
201
+ attention_mask shape: torch.Size([16, 256])
202
+ labels shape: torch.Size([16])
203
+ input_ids max value: 29374
204
+ Vocab size: 29376
205
+ Batch 500:
206
+ input_ids shape: torch.Size([16, 256])
207
+ attention_mask shape: torch.Size([16, 256])
208
+ labels shape: torch.Size([16])
209
+ input_ids max value: 29374
210
+ Vocab size: 29376
211
+ Batch 600:
212
+ input_ids shape: torch.Size([16, 256])
213
+ attention_mask shape: torch.Size([16, 256])
214
+ labels shape: torch.Size([16])
215
+ input_ids max value: 29374
216
+ Vocab size: 29376
217
+ Batch 700:
218
+ input_ids shape: torch.Size([16, 256])
219
+ attention_mask shape: torch.Size([16, 256])
220
+ labels shape: torch.Size([16])
221
+ input_ids max value: 29374
222
+ Vocab size: 29376
223
+ Batch 800:
224
+ input_ids shape: torch.Size([16, 256])
225
+ attention_mask shape: torch.Size([16, 256])
226
+ labels shape: torch.Size([16])
227
+ input_ids max value: 29374
228
+ Vocab size: 29376
229
+ Batch 900:
230
+ input_ids shape: torch.Size([16, 256])
231
+ attention_mask shape: torch.Size([16, 256])
232
+ labels shape: torch.Size([16])
233
+ input_ids max value: 29374
234
+ Vocab size: 29376
235
+ Epoch 3/5:
236
+ Train Loss: 0.5221, Train Accuracy: 0.8347
237
+ Val Loss: 0.6959, Val Accuracy: 0.7582, Val F1: 0.7540
238
+ Batch 0:
239
+ input_ids shape: torch.Size([16, 256])
240
+ attention_mask shape: torch.Size([16, 256])
241
+ labels shape: torch.Size([16])
242
+ input_ids max value: 29374
243
+ Vocab size: 29376
244
+ /gpfsdswork/projects/rech/fmr/uft12cr/finetuneAli/FIneTune_withPlots.py:202: FutureWarning: `torch.cuda.amp.autocast(args...)` is deprecated. Please use `torch.amp.autocast('cuda', args...)` instead.
245
+ with amp.autocast():
246
+ Batch 100:
247
+ input_ids shape: torch.Size([16, 256])
248
+ attention_mask shape: torch.Size([16, 256])
249
+ labels shape: torch.Size([16])
250
+ input_ids max value: 29374
251
+ Vocab size: 29376
252
+ Batch 200:
253
+ input_ids shape: torch.Size([16, 256])
254
+ attention_mask shape: torch.Size([16, 256])
255
+ labels shape: torch.Size([16])
256
+ input_ids max value: 29374
257
+ Vocab size: 29376
258
+ Batch 300:
259
+ input_ids shape: torch.Size([16, 256])
260
+ attention_mask shape: torch.Size([16, 256])
261
+ labels shape: torch.Size([16])
262
+ input_ids max value: 29374
263
+ Vocab size: 29376
264
+ Batch 400:
265
+ input_ids shape: torch.Size([16, 256])
266
+ attention_mask shape: torch.Size([16, 256])
267
+ labels shape: torch.Size([16])
268
+ input_ids max value: 29374
269
+ Vocab size: 29376
270
+ Batch 500:
271
+ input_ids shape: torch.Size([16, 256])
272
+ attention_mask shape: torch.Size([16, 256])
273
+ labels shape: torch.Size([16])
274
+ input_ids max value: 29374
275
+ Vocab size: 29376
276
+ Batch 600:
277
+ input_ids shape: torch.Size([16, 256])
278
+ attention_mask shape: torch.Size([16, 256])
279
+ labels shape: torch.Size([16])
280
+ input_ids max value: 29374
281
+ Vocab size: 29376
282
+ Batch 700:
283
+ input_ids shape: torch.Size([16, 256])
284
+ attention_mask shape: torch.Size([16, 256])
285
+ labels shape: torch.Size([16])
286
+ input_ids max value: 29374
287
+ Vocab size: 29376
288
+ Batch 800:
289
+ input_ids shape: torch.Size([16, 256])
290
+ attention_mask shape: torch.Size([16, 256])
291
+ labels shape: torch.Size([16])
292
+ input_ids max value: 29374
293
+ Vocab size: 29376
294
+ Batch 900:
295
+ input_ids shape: torch.Size([16, 256])
296
+ attention_mask shape: torch.Size([16, 256])
297
+ labels shape: torch.Size([16])
298
+ input_ids max value: 29374
299
+ Vocab size: 29376
300
+ Epoch 4/5:
301
+ Train Loss: 0.4214, Train Accuracy: 0.8676
302
+ Val Loss: 0.5618, Val Accuracy: 0.8204, Val F1: 0.7935
303
+ Batch 0:
304
+ input_ids shape: torch.Size([16, 256])
305
+ attention_mask shape: torch.Size([16, 256])
306
+ labels shape: torch.Size([16])
307
+ input_ids max value: 29374
308
+ Vocab size: 29376
309
+ /gpfsdswork/projects/rech/fmr/uft12cr/finetuneAli/FIneTune_withPlots.py:202: FutureWarning: `torch.cuda.amp.autocast(args...)` is deprecated. Please use `torch.amp.autocast('cuda', args...)` instead.
310
+ with amp.autocast():
311
+ Batch 100:
312
+ input_ids shape: torch.Size([16, 256])
313
+ attention_mask shape: torch.Size([16, 256])
314
+ labels shape: torch.Size([16])
315
+ input_ids max value: 29374
316
+ Vocab size: 29376
317
+ Batch 200:
318
+ input_ids shape: torch.Size([16, 256])
319
+ attention_mask shape: torch.Size([16, 256])
320
+ labels shape: torch.Size([16])
321
+ input_ids max value: 29374
322
+ Vocab size: 29376
323
+ Batch 300:
324
+ input_ids shape: torch.Size([16, 256])
325
+ attention_mask shape: torch.Size([16, 256])
326
+ labels shape: torch.Size([16])
327
+ input_ids max value: 29374
328
+ Vocab size: 29376
329
+ Batch 400:
330
+ input_ids shape: torch.Size([16, 256])
331
+ attention_mask shape: torch.Size([16, 256])
332
+ labels shape: torch.Size([16])
333
+ input_ids max value: 29374
334
+ Vocab size: 29376
335
+ Batch 500:
336
+ input_ids shape: torch.Size([16, 256])
337
+ attention_mask shape: torch.Size([16, 256])
338
+ labels shape: torch.Size([16])
339
+ input_ids max value: 29374
340
+ Vocab size: 29376
341
+ Batch 600:
342
+ input_ids shape: torch.Size([16, 256])
343
+ attention_mask shape: torch.Size([16, 256])
344
+ labels shape: torch.Size([16])
345
+ input_ids max value: 29374
346
+ Vocab size: 29376
347
+ Batch 700:
348
+ input_ids shape: torch.Size([16, 256])
349
+ attention_mask shape: torch.Size([16, 256])
350
+ labels shape: torch.Size([16])
351
+ input_ids max value: 29374
352
+ Vocab size: 29376
353
+ Batch 800:
354
+ input_ids shape: torch.Size([16, 256])
355
+ attention_mask shape: torch.Size([16, 256])
356
+ labels shape: torch.Size([16])
357
+ input_ids max value: 29374
358
+ Vocab size: 29376
359
+ Batch 900:
360
+ input_ids shape: torch.Size([16, 256])
361
+ attention_mask shape: torch.Size([16, 256])
362
+ labels shape: torch.Size([16])
363
+ input_ids max value: 29374
364
+ Vocab size: 29376
365
+ Epoch 5/5:
366
+ Train Loss: 0.3263, Train Accuracy: 0.8953
367
+ Val Loss: 0.5990, Val Accuracy: 0.8125, Val F1: 0.8073
368
+
369
+ Test Results for All Cluster tokenizer:
370
+ Accuracy: 0.8125
371
+ F1 Score: 0.8071
372
+ AUC-ROC: 0.8733
373
+
374
+ Training with Final tokenizer:
375
+ Vocabulary size: 27998
376
+ Could not load pretrained weights from /gpfswork/rech/fmr/uft12cr/finetuneAli/Bert_Model. Starting with random weights. Error: Error while deserializing header: HeaderTooLarge
377
+ Initialized model with vocabulary size: 27998
378
+ /gpfsdswork/projects/rech/fmr/uft12cr/finetuneAli/FIneTune_withPlots.py:173: FutureWarning: `torch.cuda.amp.GradScaler(args...)` is deprecated. Please use `torch.amp.GradScaler('cuda', args...)` instead.
379
+ scaler = amp.GradScaler()
380
+ Batch 0:
381
+ input_ids shape: torch.Size([16, 256])
382
+ attention_mask shape: torch.Size([16, 256])
383
+ labels shape: torch.Size([16])
384
+ input_ids max value: 27997
385
+ Vocab size: 27998
386
+ /gpfsdswork/projects/rech/fmr/uft12cr/finetuneAli/FIneTune_withPlots.py:202: FutureWarning: `torch.cuda.amp.autocast(args...)` is deprecated. Please use `torch.amp.autocast('cuda', args...)` instead.
387
+ with amp.autocast():
388
+ Batch 100:
389
+ input_ids shape: torch.Size([16, 256])
390
+ attention_mask shape: torch.Size([16, 256])
391
+ labels shape: torch.Size([16])
392
+ input_ids max value: 27997
393
+ Vocab size: 27998
394
+ Batch 200:
395
+ input_ids shape: torch.Size([16, 256])
396
+ attention_mask shape: torch.Size([16, 256])
397
+ labels shape: torch.Size([16])
398
+ input_ids max value: 27997
399
+ Vocab size: 27998
400
+ Batch 300:
401
+ input_ids shape: torch.Size([16, 256])
402
+ attention_mask shape: torch.Size([16, 256])
403
+ labels shape: torch.Size([16])
404
+ input_ids max value: 27997
405
+ Vocab size: 27998
406
+ Batch 400:
407
+ input_ids shape: torch.Size([16, 256])
408
+ attention_mask shape: torch.Size([16, 256])
409
+ labels shape: torch.Size([16])
410
+ input_ids max value: 27997
411
+ Vocab size: 27998
412
+ Batch 500:
413
+ input_ids shape: torch.Size([16, 256])
414
+ attention_mask shape: torch.Size([16, 256])
415
+ labels shape: torch.Size([16])
416
+ input_ids max value: 27997
417
+ Vocab size: 27998
418
+ Batch 600:
419
+ input_ids shape: torch.Size([16, 256])
420
+ attention_mask shape: torch.Size([16, 256])
421
+ labels shape: torch.Size([16])
422
+ input_ids max value: 27997
423
+ Vocab size: 27998
424
+ Batch 700:
425
+ input_ids shape: torch.Size([16, 256])
426
+ attention_mask shape: torch.Size([16, 256])
427
+ labels shape: torch.Size([16])
428
+ input_ids max value: 27997
429
+ Vocab size: 27998
430
+ Batch 800:
431
+ input_ids shape: torch.Size([16, 256])
432
+ attention_mask shape: torch.Size([16, 256])
433
+ labels shape: torch.Size([16])
434
+ input_ids max value: 27997
435
+ Vocab size: 27998
436
+ Batch 900:
437
+ input_ids shape: torch.Size([16, 256])
438
+ attention_mask shape: torch.Size([16, 256])
439
+ labels shape: torch.Size([16])
440
+ input_ids max value: 27997
441
+ Vocab size: 27998
442
+ Epoch 1/5:
443
+ Train Loss: 0.8917, Train Accuracy: 0.7102
444
+ Val Loss: 0.7550, Val Accuracy: 0.7533, Val F1: 0.7130
445
+ Batch 0:
446
+ input_ids shape: torch.Size([16, 256])
447
+ attention_mask shape: torch.Size([16, 256])
448
+ labels shape: torch.Size([16])
449
+ input_ids max value: 27997
450
+ Vocab size: 27998
451
+ /gpfsdswork/projects/rech/fmr/uft12cr/finetuneAli/FIneTune_withPlots.py:202: FutureWarning: `torch.cuda.amp.autocast(args...)` is deprecated. Please use `torch.amp.autocast('cuda', args...)` instead.
452
+ with amp.autocast():
453
+ Batch 100:
454
+ input_ids shape: torch.Size([16, 256])
455
+ attention_mask shape: torch.Size([16, 256])
456
+ labels shape: torch.Size([16])
457
+ input_ids max value: 27997
458
+ Vocab size: 27998
459
+ Batch 200:
460
+ input_ids shape: torch.Size([16, 256])
461
+ attention_mask shape: torch.Size([16, 256])
462
+ labels shape: torch.Size([16])
463
+ input_ids max value: 27997
464
+ Vocab size: 27998
465
+ Batch 300:
466
+ input_ids shape: torch.Size([16, 256])
467
+ attention_mask shape: torch.Size([16, 256])
468
+ labels shape: torch.Size([16])
469
+ input_ids max value: 27997
470
+ Vocab size: 27998
471
+ Batch 400:
472
+ input_ids shape: torch.Size([16, 256])
473
+ attention_mask shape: torch.Size([16, 256])
474
+ labels shape: torch.Size([16])
475
+ input_ids max value: 27997
476
+ Vocab size: 27998
477
+ Batch 500:
478
+ input_ids shape: torch.Size([16, 256])
479
+ attention_mask shape: torch.Size([16, 256])
480
+ labels shape: torch.Size([16])
481
+ input_ids max value: 27997
482
+ Vocab size: 27998
483
+ Batch 600:
484
+ input_ids shape: torch.Size([16, 256])
485
+ attention_mask shape: torch.Size([16, 256])
486
+ labels shape: torch.Size([16])
487
+ input_ids max value: 27997
488
+ Vocab size: 27998
489
+ Batch 700:
490
+ input_ids shape: torch.Size([16, 256])
491
+ attention_mask shape: torch.Size([16, 256])
492
+ labels shape: torch.Size([16])
493
+ input_ids max value: 27997
494
+ Vocab size: 27998
495
+ Batch 800:
496
+ input_ids shape: torch.Size([16, 256])
497
+ attention_mask shape: torch.Size([16, 256])
498
+ labels shape: torch.Size([16])
499
+ input_ids max value: 27997
500
+ Vocab size: 27998
501
+ Batch 900:
502
+ input_ids shape: torch.Size([16, 256])
503
+ attention_mask shape: torch.Size([16, 256])
504
+ labels shape: torch.Size([16])
505
+ input_ids max value: 27997
506
+ Vocab size: 27998
507
+ Epoch 2/5:
508
+ Train Loss: 0.6483, Train Accuracy: 0.7855
509
+ Val Loss: 0.6702, Val Accuracy: 0.7822, Val F1: 0.7506
510
+ Batch 0:
511
+ input_ids shape: torch.Size([16, 256])
512
+ attention_mask shape: torch.Size([16, 256])
513
+ labels shape: torch.Size([16])
514
+ input_ids max value: 27997
515
+ Vocab size: 27998
516
+ /gpfsdswork/projects/rech/fmr/uft12cr/finetuneAli/FIneTune_withPlots.py:202: FutureWarning: `torch.cuda.amp.autocast(args...)` is deprecated. Please use `torch.amp.autocast('cuda', args...)` instead.
517
+ with amp.autocast():
518
+ Batch 100:
519
+ input_ids shape: torch.Size([16, 256])
520
+ attention_mask shape: torch.Size([16, 256])
521
+ labels shape: torch.Size([16])
522
+ input_ids max value: 27997
523
+ Vocab size: 27998
524
+ Batch 200:
525
+ input_ids shape: torch.Size([16, 256])
526
+ attention_mask shape: torch.Size([16, 256])
527
+ labels shape: torch.Size([16])
528
+ input_ids max value: 27997
529
+ Vocab size: 27998
530
+ Batch 300:
531
+ input_ids shape: torch.Size([16, 256])
532
+ attention_mask shape: torch.Size([16, 256])
533
+ labels shape: torch.Size([16])
534
+ input_ids max value: 27997
535
+ Vocab size: 27998
536
+ Batch 400:
537
+ input_ids shape: torch.Size([16, 256])
538
+ attention_mask shape: torch.Size([16, 256])
539
+ labels shape: torch.Size([16])
540
+ input_ids max value: 27997
541
+ Vocab size: 27998
542
+ Batch 500:
543
+ input_ids shape: torch.Size([16, 256])
544
+ attention_mask shape: torch.Size([16, 256])
545
+ labels shape: torch.Size([16])
546
+ input_ids max value: 27997
547
+ Vocab size: 27998
548
+ Batch 600:
549
+ input_ids shape: torch.Size([16, 256])
550
+ attention_mask shape: torch.Size([16, 256])
551
+ labels shape: torch.Size([16])
552
+ input_ids max value: 27997
553
+ Vocab size: 27998
554
+ Batch 700:
555
+ input_ids shape: torch.Size([16, 256])
556
+ attention_mask shape: torch.Size([16, 256])
557
+ labels shape: torch.Size([16])
558
+ input_ids max value: 27997
559
+ Vocab size: 27998
560
+ Batch 800:
561
+ input_ids shape: torch.Size([16, 256])
562
+ attention_mask shape: torch.Size([16, 256])
563
+ labels shape: torch.Size([16])
564
+ input_ids max value: 27997
565
+ Vocab size: 27998
566
+ Batch 900:
567
+ input_ids shape: torch.Size([16, 256])
568
+ attention_mask shape: torch.Size([16, 256])
569
+ labels shape: torch.Size([16])
570
+ input_ids max value: 27997
571
+ Vocab size: 27998
572
+ Epoch 3/5:
573
+ Train Loss: 0.5660, Train Accuracy: 0.8135
574
+ Val Loss: 0.6397, Val Accuracy: 0.7983, Val F1: 0.7548
575
+ Batch 0:
576
+ input_ids shape: torch.Size([16, 256])
577
+ attention_mask shape: torch.Size([16, 256])
578
+ labels shape: torch.Size([16])
579
+ input_ids max value: 27997
580
+ Vocab size: 27998
581
+ /gpfsdswork/projects/rech/fmr/uft12cr/finetuneAli/FIneTune_withPlots.py:202: FutureWarning: `torch.cuda.amp.autocast(args...)` is deprecated. Please use `torch.amp.autocast('cuda', args...)` instead.
582
+ with amp.autocast():
583
+ Batch 100:
584
+ input_ids shape: torch.Size([16, 256])
585
+ attention_mask shape: torch.Size([16, 256])
586
+ labels shape: torch.Size([16])
587
+ input_ids max value: 27997
588
+ Vocab size: 27998
589
+ Batch 200:
590
+ input_ids shape: torch.Size([16, 256])
591
+ attention_mask shape: torch.Size([16, 256])
592
+ labels shape: torch.Size([16])
593
+ input_ids max value: 27997
594
+ Vocab size: 27998
595
+ Batch 300:
596
+ input_ids shape: torch.Size([16, 256])
597
+ attention_mask shape: torch.Size([16, 256])
598
+ labels shape: torch.Size([16])
599
+ input_ids max value: 27997
600
+ Vocab size: 27998
601
+ Batch 400:
602
+ input_ids shape: torch.Size([16, 256])
603
+ attention_mask shape: torch.Size([16, 256])
604
+ labels shape: torch.Size([16])
605
+ input_ids max value: 27997
606
+ Vocab size: 27998
607
+ Batch 500:
608
+ input_ids shape: torch.Size([16, 256])
609
+ attention_mask shape: torch.Size([16, 256])
610
+ labels shape: torch.Size([16])
611
+ input_ids max value: 27997
612
+ Vocab size: 27998
613
+ Batch 600:
614
+ input_ids shape: torch.Size([16, 256])
615
+ attention_mask shape: torch.Size([16, 256])
616
+ labels shape: torch.Size([16])
617
+ input_ids max value: 27997
618
+ Vocab size: 27998
619
+ Batch 700:
620
+ input_ids shape: torch.Size([16, 256])
621
+ attention_mask shape: torch.Size([16, 256])
622
+ labels shape: torch.Size([16])
623
+ input_ids max value: 27997
624
+ Vocab size: 27998
625
+ Batch 800:
626
+ input_ids shape: torch.Size([16, 256])
627
+ attention_mask shape: torch.Size([16, 256])
628
+ labels shape: torch.Size([16])
629
+ input_ids max value: 27997
630
+ Vocab size: 27998
631
+ Batch 900:
632
+ input_ids shape: torch.Size([16, 256])
633
+ attention_mask shape: torch.Size([16, 256])
634
+ labels shape: torch.Size([16])
635
+ input_ids max value: 27997
636
+ Vocab size: 27998
637
+ Epoch 4/5:
638
+ Train Loss: 0.4725, Train Accuracy: 0.8545
639
+ Val Loss: 0.7259, Val Accuracy: 0.7707, Val F1: 0.7672
640
+ Batch 0:
641
+ input_ids shape: torch.Size([16, 256])
642
+ attention_mask shape: torch.Size([16, 256])
643
+ labels shape: torch.Size([16])
644
+ input_ids max value: 27997
645
+ Vocab size: 27998
646
+ /gpfsdswork/projects/rech/fmr/uft12cr/finetuneAli/FIneTune_withPlots.py:202: FutureWarning: `torch.cuda.amp.autocast(args...)` is deprecated. Please use `torch.amp.autocast('cuda', args...)` instead.
647
+ with amp.autocast():
648
+ Batch 100:
649
+ input_ids shape: torch.Size([16, 256])
650
+ attention_mask shape: torch.Size([16, 256])
651
+ labels shape: torch.Size([16])
652
+ input_ids max value: 27997
653
+ Vocab size: 27998
654
+ Batch 200:
655
+ input_ids shape: torch.Size([16, 256])
656
+ attention_mask shape: torch.Size([16, 256])
657
+ labels shape: torch.Size([16])
658
+ input_ids max value: 27997
659
+ Vocab size: 27998
660
+ Batch 300:
661
+ input_ids shape: torch.Size([16, 256])
662
+ attention_mask shape: torch.Size([16, 256])
663
+ labels shape: torch.Size([16])
664
+ input_ids max value: 27997
665
+ Vocab size: 27998
666
+ Batch 400:
667
+ input_ids shape: torch.Size([16, 256])
668
+ attention_mask shape: torch.Size([16, 256])
669
+ labels shape: torch.Size([16])
670
+ input_ids max value: 27997
671
+ Vocab size: 27998
672
+ Batch 500:
673
+ input_ids shape: torch.Size([16, 256])
674
+ attention_mask shape: torch.Size([16, 256])
675
+ labels shape: torch.Size([16])
676
+ input_ids max value: 27997
677
+ Vocab size: 27998
678
+ Batch 600:
679
+ input_ids shape: torch.Size([16, 256])
680
+ attention_mask shape: torch.Size([16, 256])
681
+ labels shape: torch.Size([16])
682
+ input_ids max value: 27997
683
+ Vocab size: 27998
684
+ Batch 700:
685
+ input_ids shape: torch.Size([16, 256])
686
+ attention_mask shape: torch.Size([16, 256])
687
+ labels shape: torch.Size([16])
688
+ input_ids max value: 27997
689
+ Vocab size: 27998
690
+ Batch 800:
691
+ input_ids shape: torch.Size([16, 256])
692
+ attention_mask shape: torch.Size([16, 256])
693
+ labels shape: torch.Size([16])
694
+ input_ids max value: 27997
695
+ Vocab size: 27998
696
+ Batch 900:
697
+ input_ids shape: torch.Size([16, 256])
698
+ attention_mask shape: torch.Size([16, 256])
699
+ labels shape: torch.Size([16])
700
+ input_ids max value: 27997
701
+ Vocab size: 27998
702
+ Epoch 5/5:
703
+ Train Loss: 0.3889, Train Accuracy: 0.8792
704
+ Val Loss: 0.5967, Val Accuracy: 0.8174, Val F1: 0.7926
705
+
706
+ Test Results for Final tokenizer:
707
+ Accuracy: 0.8174
708
+ F1 Score: 0.7925
709
+ AUC-ROC: 0.8663
710
+
711
+ Training with General tokenizer:
712
+ Vocabulary size: 30522
713
+ Could not load pretrained weights from /gpfswork/rech/fmr/uft12cr/finetuneAli/Bert_Model. Starting with random weights. Error: Error while deserializing header: HeaderTooLarge
714
+ Initialized model with vocabulary size: 30522
715
+ /gpfsdswork/projects/rech/fmr/uft12cr/finetuneAli/FIneTune_withPlots.py:173: FutureWarning: `torch.cuda.amp.GradScaler(args...)` is deprecated. Please use `torch.amp.GradScaler('cuda', args...)` instead.
716
+ scaler = amp.GradScaler()
717
+ Batch 0:
718
+ input_ids shape: torch.Size([16, 256])
719
+ attention_mask shape: torch.Size([16, 256])
720
+ labels shape: torch.Size([16])
721
+ input_ids max value: 29605
722
+ Vocab size: 30522
723
+ /gpfsdswork/projects/rech/fmr/uft12cr/finetuneAli/FIneTune_withPlots.py:202: FutureWarning: `torch.cuda.amp.autocast(args...)` is deprecated. Please use `torch.amp.autocast('cuda', args...)` instead.
724
+ with amp.autocast():
725
Batch 100:
input_ids shape: torch.Size([16, 256])
attention_mask shape: torch.Size([16, 256])
labels shape: torch.Size([16])
input_ids max value: 29438
Vocab size: 30522
Batch 200:
input_ids shape: torch.Size([16, 256])
attention_mask shape: torch.Size([16, 256])
labels shape: torch.Size([16])
input_ids max value: 29300
Vocab size: 30522
Batch 300:
input_ids shape: torch.Size([16, 256])
attention_mask shape: torch.Size([16, 256])
labels shape: torch.Size([16])
input_ids max value: 29464
Vocab size: 30522
Batch 400:
input_ids shape: torch.Size([16, 256])
attention_mask shape: torch.Size([16, 256])
labels shape: torch.Size([16])
input_ids max value: 29494
Vocab size: 30522
Batch 500:
input_ids shape: torch.Size([16, 256])
attention_mask shape: torch.Size([16, 256])
labels shape: torch.Size([16])
input_ids max value: 29464
Vocab size: 30522
Batch 600:
input_ids shape: torch.Size([16, 256])
attention_mask shape: torch.Size([16, 256])
labels shape: torch.Size([16])
input_ids max value: 29464
Vocab size: 30522
Batch 700:
input_ids shape: torch.Size([16, 256])
attention_mask shape: torch.Size([16, 256])
labels shape: torch.Size([16])
input_ids max value: 29464
Vocab size: 30522
Batch 800:
input_ids shape: torch.Size([16, 256])
attention_mask shape: torch.Size([16, 256])
labels shape: torch.Size([16])
input_ids max value: 29340
Vocab size: 30522
Batch 900:
input_ids shape: torch.Size([16, 256])
attention_mask shape: torch.Size([16, 256])
labels shape: torch.Size([16])
input_ids max value: 29454
Vocab size: 30522
Epoch 1/5:
Train Loss: 0.8557, Train Accuracy: 0.7257
Val Loss: 0.6864, Val Accuracy: 0.7724, Val F1: 0.7309
Batch 0:
input_ids shape: torch.Size([16, 256])
attention_mask shape: torch.Size([16, 256])
labels shape: torch.Size([16])
input_ids max value: 29300
Vocab size: 30522
/gpfsdswork/projects/rech/fmr/uft12cr/finetuneAli/FIneTune_withPlots.py:202: FutureWarning: `torch.cuda.amp.autocast(args...)` is deprecated. Please use `torch.amp.autocast('cuda', args...)` instead.
with amp.autocast():
Batch 100:
input_ids shape: torch.Size([16, 256])
attention_mask shape: torch.Size([16, 256])
labels shape: torch.Size([16])
input_ids max value: 29464
Vocab size: 30522
Batch 200:
input_ids shape: torch.Size([16, 256])
attention_mask shape: torch.Size([16, 256])
labels shape: torch.Size([16])
input_ids max value: 29494
Vocab size: 30522
Batch 300:
input_ids shape: torch.Size([16, 256])
attention_mask shape: torch.Size([16, 256])
labels shape: torch.Size([16])
input_ids max value: 29474
Vocab size: 30522
Batch 400:
input_ids shape: torch.Size([16, 256])
attention_mask shape: torch.Size([16, 256])
labels shape: torch.Size([16])
input_ids max value: 29535
Vocab size: 30522
Batch 500:
input_ids shape: torch.Size([16, 256])
attention_mask shape: torch.Size([16, 256])
labels shape: torch.Size([16])
input_ids max value: 29577
Vocab size: 30522
Batch 600:
input_ids shape: torch.Size([16, 256])
attention_mask shape: torch.Size([16, 256])
labels shape: torch.Size([16])
input_ids max value: 29598
Vocab size: 30522
Batch 700:
input_ids shape: torch.Size([16, 256])
attention_mask shape: torch.Size([16, 256])
labels shape: torch.Size([16])
input_ids max value: 29605
Vocab size: 30522
Batch 800:
input_ids shape: torch.Size([16, 256])
attention_mask shape: torch.Size([16, 256])
labels shape: torch.Size([16])
input_ids max value: 29160
Vocab size: 30522
Batch 900:
input_ids shape: torch.Size([16, 256])
attention_mask shape: torch.Size([16, 256])
labels shape: torch.Size([16])
input_ids max value: 29532
Vocab size: 30522
Epoch 2/5:
Train Loss: 0.5995, Train Accuracy: 0.8029
Val Loss: 0.6449, Val Accuracy: 0.7882, Val F1: 0.7366
Batch 0:
input_ids shape: torch.Size([16, 256])
attention_mask shape: torch.Size([16, 256])
labels shape: torch.Size([16])
input_ids max value: 29536
Vocab size: 30522
/gpfsdswork/projects/rech/fmr/uft12cr/finetuneAli/FIneTune_withPlots.py:202: FutureWarning: `torch.cuda.amp.autocast(args...)` is deprecated. Please use `torch.amp.autocast('cuda', args...)` instead.
with amp.autocast():
Batch 100:
input_ids shape: torch.Size([16, 256])
attention_mask shape: torch.Size([16, 256])
labels shape: torch.Size([16])
input_ids max value: 29464
Vocab size: 30522
Batch 200:
input_ids shape: torch.Size([16, 256])
attention_mask shape: torch.Size([16, 256])
labels shape: torch.Size([16])
input_ids max value: 29536
Vocab size: 30522
Batch 300:
input_ids shape: torch.Size([16, 256])
attention_mask shape: torch.Size([16, 256])
labels shape: torch.Size([16])
input_ids max value: 29464
Vocab size: 30522
Batch 400:
input_ids shape: torch.Size([16, 256])
attention_mask shape: torch.Size([16, 256])
labels shape: torch.Size([16])
input_ids max value: 29464
Vocab size: 30522
Batch 500:
input_ids shape: torch.Size([16, 256])
attention_mask shape: torch.Size([16, 256])
labels shape: torch.Size([16])
input_ids max value: 29464
Vocab size: 30522
Batch 600:
input_ids shape: torch.Size([16, 256])
attention_mask shape: torch.Size([16, 256])
labels shape: torch.Size([16])
input_ids max value: 29413
Vocab size: 30522
Batch 700:
input_ids shape: torch.Size([16, 256])
attention_mask shape: torch.Size([16, 256])
labels shape: torch.Size([16])
input_ids max value: 29346
Vocab size: 30522
Batch 800:
input_ids shape: torch.Size([16, 256])
attention_mask shape: torch.Size([16, 256])
labels shape: torch.Size([16])
input_ids max value: 29451
Vocab size: 30522
Batch 900:
input_ids shape: torch.Size([16, 256])
attention_mask shape: torch.Size([16, 256])
labels shape: torch.Size([16])
input_ids max value: 29280
Vocab size: 30522
Epoch 3/5:
Train Loss: 0.5332, Train Accuracy: 0.8291
Val Loss: 0.6577, Val Accuracy: 0.7942, Val F1: 0.7687
Batch 0:
input_ids shape: torch.Size([16, 256])
attention_mask shape: torch.Size([16, 256])
labels shape: torch.Size([16])
input_ids max value: 29464
Vocab size: 30522
/gpfsdswork/projects/rech/fmr/uft12cr/finetuneAli/FIneTune_withPlots.py:202: FutureWarning: `torch.cuda.amp.autocast(args...)` is deprecated. Please use `torch.amp.autocast('cuda', args...)` instead.
with amp.autocast():
Batch 100:
input_ids shape: torch.Size([16, 256])
attention_mask shape: torch.Size([16, 256])
labels shape: torch.Size([16])
input_ids max value: 29464
Vocab size: 30522
Batch 200:
input_ids shape: torch.Size([16, 256])
attention_mask shape: torch.Size([16, 256])
labels shape: torch.Size([16])
input_ids max value: 29535
Vocab size: 30522
Batch 300:
input_ids shape: torch.Size([16, 256])
attention_mask shape: torch.Size([16, 256])
labels shape: torch.Size([16])
input_ids max value: 29413
Vocab size: 30522
Batch 400:
input_ids shape: torch.Size([16, 256])
attention_mask shape: torch.Size([16, 256])
labels shape: torch.Size([16])
input_ids max value: 29461
Vocab size: 30522
Batch 500:
input_ids shape: torch.Size([16, 256])
attention_mask shape: torch.Size([16, 256])
labels shape: torch.Size([16])
input_ids max value: 29536
Vocab size: 30522
Batch 600:
input_ids shape: torch.Size([16, 256])
attention_mask shape: torch.Size([16, 256])
labels shape: torch.Size([16])
input_ids max value: 29300
Vocab size: 30522
Batch 700:
input_ids shape: torch.Size([16, 256])
attention_mask shape: torch.Size([16, 256])
labels shape: torch.Size([16])
input_ids max value: 29536
Vocab size: 30522
Batch 800:
input_ids shape: torch.Size([16, 256])
attention_mask shape: torch.Size([16, 256])
labels shape: torch.Size([16])
input_ids max value: 29513
Vocab size: 30522
Batch 900:
input_ids shape: torch.Size([16, 256])
attention_mask shape: torch.Size([16, 256])
labels shape: torch.Size([16])
input_ids max value: 29536
Vocab size: 30522
Epoch 4/5:
Train Loss: 0.4665, Train Accuracy: 0.8555
Val Loss: 0.6495, Val Accuracy: 0.7931, Val F1: 0.7709
Batch 0:
input_ids shape: torch.Size([16, 256])
attention_mask shape: torch.Size([16, 256])
labels shape: torch.Size([16])
input_ids max value: 29454
Vocab size: 30522
/gpfsdswork/projects/rech/fmr/uft12cr/finetuneAli/FIneTune_withPlots.py:202: FutureWarning: `torch.cuda.amp.autocast(args...)` is deprecated. Please use `torch.amp.autocast('cuda', args...)` instead.
with amp.autocast():
Batch 100:
input_ids shape: torch.Size([16, 256])
attention_mask shape: torch.Size([16, 256])
labels shape: torch.Size([16])
input_ids max value: 29598
Vocab size: 30522
Batch 200:
input_ids shape: torch.Size([16, 256])
attention_mask shape: torch.Size([16, 256])
labels shape: torch.Size([16])
input_ids max value: 29336
Vocab size: 30522
Batch 300:
input_ids shape: torch.Size([16, 256])
attention_mask shape: torch.Size([16, 256])
labels shape: torch.Size([16])
input_ids max value: 29602
Vocab size: 30522
Batch 400:
input_ids shape: torch.Size([16, 256])
attention_mask shape: torch.Size([16, 256])
labels shape: torch.Size([16])
input_ids max value: 29598
Vocab size: 30522
Batch 500:
input_ids shape: torch.Size([16, 256])
attention_mask shape: torch.Size([16, 256])
labels shape: torch.Size([16])
input_ids max value: 29464
Vocab size: 30522
Batch 600:
input_ids shape: torch.Size([16, 256])
attention_mask shape: torch.Size([16, 256])
labels shape: torch.Size([16])
input_ids max value: 29513
Vocab size: 30522
Batch 700:
input_ids shape: torch.Size([16, 256])
attention_mask shape: torch.Size([16, 256])
labels shape: torch.Size([16])
input_ids max value: 29464
Vocab size: 30522
Batch 800:
input_ids shape: torch.Size([16, 256])
attention_mask shape: torch.Size([16, 256])
labels shape: torch.Size([16])
input_ids max value: 29536
Vocab size: 30522
Batch 900:
input_ids shape: torch.Size([16, 256])
attention_mask shape: torch.Size([16, 256])
labels shape: torch.Size([16])
input_ids max value: 29535
Vocab size: 30522
Epoch 5/5:
Train Loss: 0.3991, Train Accuracy: 0.8781
Val Loss: 0.6572, Val Accuracy: 0.7948, Val F1: 0.7804

Test Results for General tokenizer:
Accuracy: 0.7945
F1 Score: 0.7802
AUC-ROC: 0.8825

Summary of Results:

All Cluster Tokenizer:
Accuracy: 0.8125
F1 Score: 0.8071
AUC-ROC: 0.8733

Final Tokenizer:
Accuracy: 0.8174
F1 Score: 0.7925
AUC-ROC: 0.8663

General Tokenizer:
Accuracy: 0.7945
F1 Score: 0.7802
AUC-ROC: 0.8825
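
Note: a minimal sketch of how the Accuracy, F1 Score and AUC-ROC figures above are conventionally computed with scikit-learn; the weighted averaging and one-vs-rest settings are assumptions, not values read from FIneTune_withPlots.py:

    # Sketch only: typical metric computation for a multi-class test set.
    from sklearn.metrics import accuracy_score, f1_score, roc_auc_score

    def summarize(y_true, y_pred, y_prob):
        # y_true: (N,) class ids, y_pred: (N,) argmax predictions,
        # y_prob: (N, num_classes) softmax probabilities.
        return {
            "Accuracy": accuracy_score(y_true, y_pred),
            "F1 Score": f1_score(y_true, y_pred, average="weighted"),
            "AUC-ROC": roc_auc_score(y_true, y_prob, multi_class="ovr", average="weighted"),
        }
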
1064

Class distribution in training set:
Class Biology: 439 samples
Class Chemistry: 454 samples
Class Computer Science: 1358 samples
Class Mathematics: 9480 samples
Class Physics: 2733 samples
Class Statistics: 200 samples
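
Note: the distribution above is heavily imbalanced (Mathematics 9480 samples vs. Statistics 200). One common mitigation, shown only as a sketch and not something this log indicates the script does, is an inverse-frequency class-weighted loss:

    # Sketch only: inverse-frequency class weights derived from the counts above.
    import torch

    counts = {"Biology": 439, "Chemistry": 454, "Computer Science": 1358,
              "Mathematics": 9480, "Physics": 2733, "Statistics": 200}
    total, k = sum(counts.values()), len(counts)
    # weight_c = total / (k * count_c), the "balanced" heuristic
    weights = torch.tensor([total / (k * c) for c in counts.values()], dtype=torch.float)
    criterion = torch.nn.CrossEntropyLoss(weight=weights)  # optional weighted loss
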
General_tokenizer_plot.png ADDED

Git LFS Details

  • SHA256: 7ded448b6424ef58f395e4776cf11f10332a50a7a1525216b256663d1f93788c
  • Pointer size: 130 Bytes
  • Size of remote file: 61.4 kB