Loading pytorch-gpu/py3/2.1.1
  Loading requirement: cuda/11.8.0 nccl/2.18.5-1-cuda cudnn/8.7.0.84-cuda
    gcc/8.5.0 openmpi/4.1.5-cuda intel-mkl/2020.4 magma/2.7.1-cuda sox/14.4.2
    sparsehash/2.0.3 libjpeg-turbo/2.1.3 ffmpeg/4.4.4
+ HF_DATASETS_OFFLINE=1
+ TRANSFORMERS_OFFLINE=1
+ python3 FIneTune_withPlots.py
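Note: the two exported flags force the Hugging Face libraries to work entirely from local caches, as required on compute nodes without network access. A minimal sketch of the equivalent in-Python setup (the model name is only illustrative):

    import os

    # Must be set before importing datasets/transformers, matching the
    # shell exports above.
    os.environ["HF_DATASETS_OFFLINE"] = "1"
    os.environ["TRANSFORMERS_OFFLINE"] = "1"

    from transformers import AutoTokenizer

    # Resolves only against the local cache or a local path; raises
    # instead of downloading if the files are absent.
    tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")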

Checking label assignment:

Domain: Mathematics
Categories: math.KT math.RT
Abstract: we compute the hochschild cohomology and homology of a class of quantum exterior algebras with coeff...

Domain: Computer Science
Categories: cs.AI cs.LO
Abstract: this paper presents experiments on common knowledge logic conducted with the help of the proof assis...

Domain: Physics
Categories: physics.ins-det physics.gen-ph
Abstract: soil bulk density affects water storage water and nutrient movement and plant root activity in the s...

Domain: Chemistry
Categories: nlin.CD
Abstract: two chaotic systems which interact by mutually exchanging a signal built from their delayed internal...

Domain: Statistics
Categories: stat.ME stat.AP
Abstract: it is difficult to accurately estimate the rates of rape and domestic violence due to the sensitive ...

Domain: Biology
Categories: q-bio.PE
Abstract: the distribution of genetic polymorphisms in a population contains information about the mutation ra...
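A spot-check like the one above is presumably printed by a small loop over one example per domain; a hedged reconstruction (the dataset object and its field names `domain`, `categories`, `abstract` are assumptions about the script's schema):

    # Hypothetical sketch of the label spot-check: one example per domain,
    # with the abstract truncated to 100 characters for readability.
    for domain in ["Mathematics", "Computer Science", "Physics",
                   "Chemistry", "Statistics", "Biology"]:
        example = next(ex for ex in train_data if ex["domain"] == domain)
        print(f"\nDomain: {example['domain']}")
        print(f"Categories: {example['categories']}")
        print(f"Abstract: {example['abstract'][:100]}...")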

/linkhome/rech/genrug01/uft12cr/.local/lib/python3.11/site-packages/transformers/tokenization_utils_base.py:2057: FutureWarning: Calling BertTokenizer.from_pretrained() with the path to a single file or url is deprecated and won't be possible anymore in v5. Use a model identifier or the path to a directory instead.
  warnings.warn(
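The FutureWarning fires because the tokenizer was loaded from a bare vocab file; pointing `from_pretrained` at a directory avoids it. A sketch of the forward-compatible pattern (both paths are hypothetical):

    from transformers import BertTokenizer

    # Deprecated and slated for removal in transformers v5:
    # tokenizer = BertTokenizer.from_pretrained("/path/to/vocab.txt")

    # Forward-compatible: build from the vocab once, save, then always
    # load from the directory.
    tokenizer = BertTokenizer(vocab_file="/path/to/vocab.txt")
    tokenizer.save_pretrained("/path/to/tokenizer_dir")
    tokenizer = BertTokenizer.from_pretrained("/path/to/tokenizer_dir")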

Training with All Cluster tokenizer:
Vocabulary size: 16005
Could not load pretrained weights from /gpfswork/rech/fmr/uft12cr/finetuneAli/Bert_Model. Starting with random weights. Error: Error while deserializing header: HeaderTooLarge
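"HeaderTooLarge" comes from the safetensors parser and usually means the file at that path is not a valid safetensors checkpoint (a git-lfs pointer stub or a renamed pickle produces exactly this error). The fallback message suggests guarded loading along these lines (a sketch; the constructor details are assumptions, not the script's actual code):

    from transformers import BertConfig, BertForSequenceClassification

    MODEL_PATH = "/gpfswork/rech/fmr/uft12cr/finetuneAli/Bert_Model"

    try:
        model = BertForSequenceClassification.from_pretrained(
            MODEL_PATH, num_labels=6)
    except Exception as e:  # safetensors raises "HeaderTooLarge" here
        print(f"Could not load pretrained weights from {MODEL_PATH}. "
              f"Starting with random weights. Error: {e}")
        config = BertConfig(vocab_size=16005, num_labels=6)
        model = BertForSequenceClassification(config)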
Initialized model with vocabulary size: 16005
/gpfsdswork/projects/rech/fmr/uft12cr/finetuneAli/FIneTune_withPlots.py:173: FutureWarning: `torch.cuda.amp.GradScaler(args...)` is deprecated. Please use `torch.amp.GradScaler('cuda', args...)` instead.
  scaler = amp.GradScaler()
Batch 0:
input_ids shape: torch.Size([16, 256])
attention_mask shape: torch.Size([16, 256])
labels shape: torch.Size([16])
input_ids max value: 16003
Vocab size: 16005
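The per-batch lines above read like a sanity check that the largest token id stays below the embedding-table size (an out-of-range id would crash the CUDA embedding lookup). A sketch of such a check, with tensor names following the log:

    def debug_batch(batch, vocab_size, batch_idx):
        # Print tensor shapes and confirm every token id is a valid row
        # in the embedding table.
        print(f"Batch {batch_idx}:")
        print(f"input_ids shape: {batch['input_ids'].shape}")
        print(f"attention_mask shape: {batch['attention_mask'].shape}")
        print(f"labels shape: {batch['labels'].shape}")
        print(f"input_ids max value: {batch['input_ids'].max().item()}")
        print(f"Vocab size: {vocab_size}")
        assert batch["input_ids"].max().item() < vocab_size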
/gpfsdswork/projects/rech/fmr/uft12cr/finetuneAli/FIneTune_withPlots.py:202: FutureWarning: `torch.cuda.amp.autocast(args...)` is deprecated. Please use `torch.amp.autocast('cuda', args...)` instead.
  with amp.autocast():
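Both FutureWarnings (lines 173 and 202 of the script) point at the same migration: the `torch.cuda.amp` entry points moved to `torch.amp` and now take the device as the first argument. A sketch of the updated training step (model, optimizer, and loader names are placeholders):

    import torch

    scaler = torch.amp.GradScaler("cuda")      # was: torch.cuda.amp.GradScaler()

    for batch in train_loader:
        optimizer.zero_grad()
        with torch.amp.autocast("cuda"):       # was: torch.cuda.amp.autocast()
            outputs = model(input_ids=batch["input_ids"],
                            attention_mask=batch["attention_mask"],
                            labels=batch["labels"])
        scaler.scale(outputs.loss).backward()  # scale to avoid fp16 underflow
        scaler.step(optimizer)                 # unscales grads, then steps
        scaler.update()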
[Batches 100-900: debug output identical to Batch 0 (shapes unchanged, input_ids max value 16003, vocab size 16005).]

Epoch 1/5:
Train Loss: 0.8860, Train Accuracy: 0.7123
Val Loss: 0.6624, Val Accuracy: 0.7811, Val F1: 0.7137
[Epoch 2: per-batch debug output (batches 0-900) and autocast FutureWarning identical to epoch 1.]

Epoch 2/5:
Train Loss: 0.6292, Train Accuracy: 0.7928
Val Loss: 0.6377, Val Accuracy: 0.7942, Val F1: 0.7572
[Epoch 3: per-batch debug output and autocast FutureWarning identical to epoch 1.]

Epoch 3/5:
Train Loss: 0.5420, Train Accuracy: 0.8283
Val Loss: 0.6224, Val Accuracy: 0.7983, Val F1: 0.7744
[Epoch 4: per-batch debug output and autocast FutureWarning identical to epoch 1.]

Epoch 4/5:
Train Loss: 0.4496, Train Accuracy: 0.8583
Val Loss: 0.6285, Val Accuracy: 0.8109, Val F1: 0.7863
[Epoch 5: per-batch debug output and autocast FutureWarning identical to epoch 1.]

Epoch 5/5:
Train Loss: 0.3687, Train Accuracy: 0.8816
Val Loss: 0.6460, Val Accuracy: 0.8111, Val F1: 0.7860

Test Results for All Cluster tokenizer:
Accuracy: 0.8111
F1 Score: 0.7860
AUC-ROC: 0.8681
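With six classes, AUC-ROC has to be computed one-vs-rest from predicted probabilities rather than from hard labels. A sketch of how all three test metrics can be derived (sklearn shown; macro averaging is an assumption, not confirmed by the log):

    import numpy as np
    from sklearn.metrics import accuracy_score, f1_score, roc_auc_score

    # y_true: (N,) integer labels; y_prob: (N, 6) softmax probabilities.
    y_pred = np.argmax(y_prob, axis=1)
    print(f"Accuracy: {accuracy_score(y_true, y_pred):.4f}")
    print(f"F1 Score: {f1_score(y_true, y_pred, average='macro'):.4f}")
    print(f"AUC-ROC: {roc_auc_score(y_true, y_prob, multi_class='ovr'):.4f}")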

Training with Final tokenizer:
Vocabulary size: 18524
Could not load pretrained weights from /gpfswork/rech/fmr/uft12cr/finetuneAli/Bert_Model. Starting with random weights. Error: Error while deserializing header: HeaderTooLarge
Initialized model with vocabulary size: 18524
/gpfsdswork/projects/rech/fmr/uft12cr/finetuneAli/FIneTune_withPlots.py:173: FutureWarning: `torch.cuda.amp.GradScaler(args...)` is deprecated. Please use `torch.amp.GradScaler('cuda', args...)` instead.
  scaler = amp.GradScaler()
Batch 0:
input_ids shape: torch.Size([16, 256])
attention_mask shape: torch.Size([16, 256])
labels shape: torch.Size([16])
input_ids max value: 18523
Vocab size: 18524
/gpfsdswork/projects/rech/fmr/uft12cr/finetuneAli/FIneTune_withPlots.py:202: FutureWarning: `torch.cuda.amp.autocast(args...)` is deprecated. Please use `torch.amp.autocast('cuda', args...)` instead.
  with amp.autocast():
[Batches 100-900: debug output identical to Batch 0 (shapes unchanged, input_ids max value 18523, vocab size 18524).]

Epoch 1/5:
Train Loss: 0.9291, Train Accuracy: 0.6943
Val Loss: 0.7526, Val Accuracy: 0.7593, Val F1: 0.6923
[Epoch 2: per-batch debug output and autocast FutureWarning identical to epoch 1.]

Epoch 2/5:
Train Loss: 0.6952, Train Accuracy: 0.7752
Val Loss: 0.6884, Val Accuracy: 0.7705, Val F1: 0.7291
[Epoch 3: per-batch debug output and autocast FutureWarning identical to epoch 1.]

Epoch 3/5:
Train Loss: 0.6147, Train Accuracy: 0.7993
Val Loss: 0.6780, Val Accuracy: 0.7874, Val F1: 0.7596
[Epoch 4: per-batch debug output and autocast FutureWarning identical to epoch 1.]

Epoch 4/5:
Train Loss: 0.5494, Train Accuracy: 0.8242
Val Loss: 0.6878, Val Accuracy: 0.7920, Val F1: 0.7655
[Epoch 5: per-batch debug output and autocast FutureWarning identical to epoch 1.]

Epoch 5/5:
Train Loss: 0.4703, Train Accuracy: 0.8558
Val Loss: 0.7217, Val Accuracy: 0.8046, Val F1: 0.7712

Test Results for Final tokenizer:
Accuracy: 0.8043
F1 Score: 0.7709
AUC-ROC: 0.8254

Training with General tokenizer:
Vocabulary size: 30522
Could not load pretrained weights from /gpfswork/rech/fmr/uft12cr/finetuneAli/Bert_Model. Starting with random weights. Error: Error while deserializing header: HeaderTooLarge
Initialized model with vocabulary size: 30522
/gpfsdswork/projects/rech/fmr/uft12cr/finetuneAli/FIneTune_withPlots.py:173: FutureWarning: `torch.cuda.amp.GradScaler(args...)` is deprecated. Please use `torch.amp.GradScaler('cuda', args...)` instead.
  scaler = amp.GradScaler()
Batch 0:
input_ids shape: torch.Size([16, 256])
attention_mask shape: torch.Size([16, 256])
labels shape: torch.Size([16])
input_ids max value: 29464
Vocab size: 30522
/gpfsdswork/projects/rech/fmr/uft12cr/finetuneAli/FIneTune_withPlots.py:202: FutureWarning: `torch.cuda.amp.autocast(args...)` is deprecated. Please use `torch.amp.autocast('cuda', args...)` instead.
  with amp.autocast():
[Batches 100-900: shapes identical to Batch 0; input_ids max value per batch: 29521, 29446, 29320, 29336, 29280, 29130, 29536, 29445, 29469; vocab size 30522.]

Epoch 1/5:
Train Loss: 0.9230, Train Accuracy: 0.6966
Val Loss: 0.7881, Val Accuracy: 0.7465, Val F1: 0.6718
[Epoch 2 per-batch debug: shapes unchanged; input_ids max value for batches 0, 100, ..., 900: 29462, 29464, 29477, 29464, 29402, 28993, 29238, 29558, 29433, 29339. Autocast FutureWarning repeated.]

Epoch 2/5:
Train Loss: 0.6269, Train Accuracy: 0.7939
Val Loss: 0.6425, Val Accuracy: 0.7959, Val F1: 0.7705
[Epoch 3 per-batch debug: shapes unchanged; input_ids max value for batches 0, 100, ..., 900: 29160, 29464, 29535, 29160, 29536, 29458, 29560, 29605, 29513, 29532. Autocast FutureWarning repeated.]

Epoch 3/5:
Train Loss: 0.5377, Train Accuracy: 0.8242
Val Loss: 0.6742, Val Accuracy: 0.7797, Val F1: 0.7674
[Epoch 4 per-batch debug: shapes unchanged; input_ids max value for batches 0, 100, ..., 900: 29494, 29461, 29454, 29536, 29602, 29238, 29536, 29292, 29390, 29464. Autocast FutureWarning repeated.]

Epoch 4/5:
Train Loss: 0.4776, Train Accuracy: 0.8478
Val Loss: 0.5951, Val Accuracy: 0.8095, Val F1: 0.7732
[Epoch 5 per-batch debug: shapes unchanged; input_ids max value for batches 0, 100, ..., 900: 28987, 29605, 29083, 29532, 29605, 29417, 29280, 29464, 29390, 29441. Autocast FutureWarning repeated.]

Epoch 5/5:
Train Loss: 0.3833, Train Accuracy: 0.8814
Val Loss: 0.6523, Val Accuracy: 0.7882, Val F1: 0.7792

Test Results for General tokenizer:
Accuracy: 0.7885
F1 Score: 0.7796
AUC-ROC: 0.8664

Summary of Results:

All Cluster Tokenizer:
Accuracy: 0.8111
F1 Score: 0.7860
AUC-ROC: 0.8681

Final Tokenizer:
Accuracy: 0.8043
F1 Score: 0.7709
AUC-ROC: 0.8254

General Tokenizer:
Accuracy: 0.7885
F1 Score: 0.7796
AUC-ROC: 0.8664

Class distribution in training set:
Class Biology: 439 samples
Class Chemistry: 454 samples
Class Computer Science: 1358 samples
Class Mathematics: 9480 samples
Class Physics: 2733 samples
Class Statistics: 200 samples
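The training set is heavily skewed (Mathematics outnumbers Statistics by roughly 47:1), which is consistent with F1 trailing accuracy in every run above. One common mitigation, sketched here as an assumption rather than something the script does, is inverse-frequency class weights in the loss:

    import torch

    # Counts taken from the distribution above, in label order.
    counts = torch.tensor([439., 454., 1358., 9480., 2733., 200.])
    weights = counts.sum() / (len(counts) * counts)  # "balanced" weighting
    loss_fn = torch.nn.CrossEntropyLoss(weight=weights.cuda())

    # Inside the training step: loss = loss_fn(logits, labels)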