Loading pytorch-gpu/py3/2.1.1
Loading requirement: cuda/11.8.0 nccl/2.18.5-1-cuda cudnn/8.7.0.84-cuda
gcc/8.5.0 openmpi/4.1.5-cuda intel-mkl/2020.4 magma/2.7.1-cuda sox/14.4.2
sparsehash/2.0.3 libjpeg-turbo/2.1.3 ffmpeg/4.4.4
+ HF_DATASETS_OFFLINE=1
+ TRANSFORMERS_OFFLINE=1
+ python3 OnlyGeneralTokenizer.py

Checking label assignment:

Domain: Mathematics
Categories: math.OA
Abstract: a result of akemann anderson and pedersen states that if a sequence of pure states of a calgebra a a...

Domain: Computer Science
Categories: cs.PL
Abstract: a rigid loop is a forloop with a counter not accessible to the loop body or any other part of a prog...

Domain: Physics
Categories: physics.gen-ph
Abstract: fractional calculus and qdeformed lie algebras are closely related both concepts expand the scope of...

Domain: Chemistry
Categories: quant-ph nlin.CD
Abstract: we study scarring phenomena in open quantum systems we show numerical evidence that individual reson...

Domain: Statistics
Categories: stat.ME
Abstract: chess and chance are seemingly strange bedfellows luck andor randomness have no apparent role in mov...

Domain: Biology
Categories: q-bio.MN
Abstract: in the simplest view of transcriptional regulation the expression of a gene is turned on or off by c...
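The spot check above pairs each coarse domain label with the raw arXiv category string of one sampled abstract (note that the "Chemistry" example carries quant-ph and nlin.CD tags, so the real assignment is evidently not a simple one-to-one lookup). For reference, a minimal sketch of the kind of prefix-based mapping such a check could rely on is given below; the dictionary, function name, and rules are illustrative assumptions, not the actual logic of OnlyGeneralTokenizer.py.

# Hypothetical prefix-based category-to-domain mapping; names and rules are illustrative only.
CATEGORY_PREFIX_TO_DOMAIN = {
    "math": "Mathematics",
    "cs": "Computer Science",
    "physics": "Physics",
    "stat": "Statistics",
    "q-bio": "Biology",
}

def infer_domain(categories: str) -> str:
    # Take the primary category (first tag) and map its prefix, e.g. "math.OA" -> "Mathematics".
    prefix = categories.split()[0].split(".")[0]
    return CATEGORY_PREFIX_TO_DOMAIN.get(prefix, "Other")

print(infer_domain("math.OA"))           # Mathematics
print(infer_domain("quant-ph nlin.CD"))  # Other -- an explicit rule would be needed to map this to Chemistry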
/linkhome/rech/genrug01/uft12cr/.local/lib/python3.11/site-packages/transformers/tokenization_utils_base.py:2057: FutureWarning: Calling BertTokenizer.from_pretrained() with the path to a single file or url is deprecated and won
warnings.warn(
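This FutureWarning indicates that BertTokenizer.from_pretrained() was called with the path to a single file (or a URL); the supported pattern is to pass a directory containing the tokenizer files, or a model identifier. A minimal sketch of the non-deprecated call, with an illustrative path:

from transformers import BertTokenizer

# Deprecated: BertTokenizer.from_pretrained("/path/to/vocab.txt")
# Supported: pass a directory holding vocab.txt (and tokenizer_config.json), or a hub model id.
tokenizer = BertTokenizer.from_pretrained("/path/to/tokenizer_dir")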

Training with All Cluster tokenizer:
Vocabulary size: 16005
Could not load pretrained weights from /linkhome/rech/genrug01/uft12cr/bert_Model. Starting with random weights. Error: It looks like the config file at
Initialized model with vocabulary size: 16005
/gpfsdswork/projects/rech/fmr/uft12cr/finetuneAli/OnlyGeneralTokenizer.py:172: FutureWarning: `torch.cuda.amp.GradScaler(args...)` is deprecated. Please use `torch.amp.GradScaler(
scaler = amp.GradScaler()
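The deprecation above concerns the CUDA-specific GradScaler constructor; in recent PyTorch releases the device-agnostic torch.amp.GradScaler takes the device as its first argument. A minimal sketch of the suggested replacement, assuming a CUDA device:

import torch

# Replacement for the deprecated torch.cuda.amp.GradScaler()
scaler = torch.amp.GradScaler("cuda")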
Batch 0:
input_ids shape: torch.Size([16, 256])
attention_mask shape: torch.Size([16, 256])
labels shape: torch.Size([16])
input_ids max value: 16003
Vocab size: 16005
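These per-batch lines are sanity checks on the collated tensors: each batch holds 16 sequences of 256 token ids, the labels are one id per sequence, and the maximum token id (16003) stays below the vocabulary size (16005), so no embedding lookup can go out of range. A hypothetical helper that would produce this kind of diagnostic (the function name and batch keys are assumptions about the script):

def log_batch(step, batch, vocab_size):
    # Print shape and range diagnostics for one collated batch.
    print(f"Batch {step}:")
    print(f"input_ids shape: {batch['input_ids'].shape}")
    print(f"attention_mask shape: {batch['attention_mask'].shape}")
    print(f"labels shape: {batch['labels'].shape}")
    print(f"input_ids max value: {batch['input_ids'].max().item()}")
    print(f"Vocab size: {vocab_size}")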
/gpfsdswork/projects/rech/fmr/uft12cr/finetuneAli/OnlyGeneralTokenizer.py:192: FutureWarning: `torch.cuda.amp.autocast(args...)` is deprecated. Please use `torch.amp.autocast(
with amp.autocast():
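As with the scaler, the autocast context manager has moved to the device-agnostic torch.amp namespace. A minimal, self-contained sketch of the replacement (the tiny linear model here is only a stand-in for the script's BERT classifier):

import torch

model = torch.nn.Linear(8, 2).cuda()      # stand-in for the real classifier
x = torch.randn(4, 8, device="cuda")

# Replacement for the deprecated torch.cuda.amp.autocast()
with torch.amp.autocast(device_type="cuda", dtype=torch.float16):
    logits = model(x)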
Batch 100:
input_ids shape: torch.Size([16, 256])
attention_mask shape: torch.Size([16, 256])
labels shape: torch.Size([16])
input_ids max value: 16003
Vocab size: 16005
Batch 200:
input_ids shape: torch.Size([16, 256])
attention_mask shape: torch.Size([16, 256])
labels shape: torch.Size([16])
input_ids max value: 16003
Vocab size: 16005
Batch 300:
input_ids shape: torch.Size([16, 256])
attention_mask shape: torch.Size([16, 256])
labels shape: torch.Size([16])
input_ids max value: 16003
Vocab size: 16005
Batch 400:
input_ids shape: torch.Size([16, 256])
attention_mask shape: torch.Size([16, 256])
labels shape: torch.Size([16])
input_ids max value: 16003
Vocab size: 16005
Batch 500:
input_ids shape: torch.Size([16, 256])
attention_mask shape: torch.Size([16, 256])
labels shape: torch.Size([16])
input_ids max value: 16003
Vocab size: 16005
Batch 600:
input_ids shape: torch.Size([16, 256])
attention_mask shape: torch.Size([16, 256])
labels shape: torch.Size([16])
input_ids max value: 16003
Vocab size: 16005
Batch 700:
input_ids shape: torch.Size([16, 256])
attention_mask shape: torch.Size([16, 256])
labels shape: torch.Size([16])
input_ids max value: 16003
Vocab size: 16005
Batch 800:
input_ids shape: torch.Size([16, 256])
attention_mask shape: torch.Size([16, 256])
labels shape: torch.Size([16])
input_ids max value: 16003
Vocab size: 16005
Batch 900:
input_ids shape: torch.Size([16, 256])
attention_mask shape: torch.Size([16, 256])
labels shape: torch.Size([16])
input_ids max value: 16003
Vocab size: 16005
Epoch 1/3:
Val Accuracy: 0.7549, Val F1: 0.6896
Batch 0:
input_ids shape: torch.Size([16, 256])
attention_mask shape: torch.Size([16, 256])
labels shape: torch.Size([16])
input_ids max value: 16003
Vocab size: 16005
/gpfsdswork/projects/rech/fmr/uft12cr/finetuneAli/OnlyGeneralTokenizer.py:192: FutureWarning: `torch.cuda.amp.autocast(args...)` is deprecated. Please use `torch.amp.autocast(
with amp.autocast():
Batch 100:
input_ids shape: torch.Size([16, 256])
attention_mask shape: torch.Size([16, 256])
labels shape: torch.Size([16])
input_ids max value: 16003
Vocab size: 16005
Batch 200:
input_ids shape: torch.Size([16, 256])
attention_mask shape: torch.Size([16, 256])
labels shape: torch.Size([16])
input_ids max value: 16003
Vocab size: 16005
Batch 300:
input_ids shape: torch.Size([16, 256])
attention_mask shape: torch.Size([16, 256])
labels shape: torch.Size([16])
input_ids max value: 16003
Vocab size: 16005
Batch 400:
input_ids shape: torch.Size([16, 256])
attention_mask shape: torch.Size([16, 256])
labels shape: torch.Size([16])
input_ids max value: 16003
Vocab size: 16005
Batch 500:
input_ids shape: torch.Size([16, 256])
attention_mask shape: torch.Size([16, 256])
labels shape: torch.Size([16])
input_ids max value: 16003
Vocab size: 16005
Batch 600:
input_ids shape: torch.Size([16, 256])
attention_mask shape: torch.Size([16, 256])
labels shape: torch.Size([16])
input_ids max value: 16003
Vocab size: 16005
Batch 700:
input_ids shape: torch.Size([16, 256])
attention_mask shape: torch.Size([16, 256])
labels shape: torch.Size([16])
input_ids max value: 16003
Vocab size: 16005
Batch 800:
input_ids shape: torch.Size([16, 256])
attention_mask shape: torch.Size([16, 256])
labels shape: torch.Size([16])
input_ids max value: 16003
Vocab size: 16005
Batch 900:
input_ids shape: torch.Size([16, 256])
attention_mask shape: torch.Size([16, 256])
labels shape: torch.Size([16])
input_ids max value: 16003
Vocab size: 16005
Epoch 2/3:
Val Accuracy: 0.7473, Val F1: 0.7221
Batch 0:
input_ids shape: torch.Size([16, 256])
attention_mask shape: torch.Size([16, 256])
labels shape: torch.Size([16])
input_ids max value: 16003
Vocab size: 16005
/gpfsdswork/projects/rech/fmr/uft12cr/finetuneAli/OnlyGeneralTokenizer.py:192: FutureWarning: `torch.cuda.amp.autocast(args...)` is deprecated. Please use `torch.amp.autocast(
with amp.autocast():
Batch 100:
input_ids shape: torch.Size([16, 256])
attention_mask shape: torch.Size([16, 256])
labels shape: torch.Size([16])
input_ids max value: 16003
Vocab size: 16005
Batch 200:
input_ids shape: torch.Size([16, 256])
attention_mask shape: torch.Size([16, 256])
labels shape: torch.Size([16])
input_ids max value: 16003
Vocab size: 16005
Batch 300:
input_ids shape: torch.Size([16, 256])
attention_mask shape: torch.Size([16, 256])
labels shape: torch.Size([16])
input_ids max value: 16003
Vocab size: 16005
Batch 400:
input_ids shape: torch.Size([16, 256])
attention_mask shape: torch.Size([16, 256])
labels shape: torch.Size([16])
input_ids max value: 16003
Vocab size: 16005
Batch 500:
input_ids shape: torch.Size([16, 256])
attention_mask shape: torch.Size([16, 256])
labels shape: torch.Size([16])
input_ids max value: 16003
Vocab size: 16005
Batch 600:
input_ids shape: torch.Size([16, 256])
attention_mask shape: torch.Size([16, 256])
labels shape: torch.Size([16])
input_ids max value: 16003
Vocab size: 16005
Batch 700:
input_ids shape: torch.Size([16, 256])
attention_mask shape: torch.Size([16, 256])
labels shape: torch.Size([16])
input_ids max value: 16003
Vocab size: 16005
Batch 800:
input_ids shape: torch.Size([16, 256])
attention_mask shape: torch.Size([16, 256])
labels shape: torch.Size([16])
input_ids max value: 16003
Vocab size: 16005
Batch 900:
input_ids shape: torch.Size([16, 256])
attention_mask shape: torch.Size([16, 256])
labels shape: torch.Size([16])
input_ids max value: 16003
Vocab size: 16005
Epoch 3/3:
Val Accuracy: 0.8081, Val F1: 0.7870

Test Results for All Cluster tokenizer:
Accuracy: 0.8084
F1 Score: 0.7874
AUC-ROC: 0.8421

Training with Final tokenizer:
Vocabulary size: 15253
Could not load pretrained weights from /linkhome/rech/genrug01/uft12cr/bert_Model. Starting with random weights. Error: It looks like the config file at
Initialized model with vocabulary size: 15253
/gpfsdswork/projects/rech/fmr/uft12cr/finetuneAli/OnlyGeneralTokenizer.py:172: FutureWarning: `torch.cuda.amp.GradScaler(args...)` is deprecated. Please use `torch.amp.GradScaler(
scaler = amp.GradScaler()
Batch 0:
input_ids shape: torch.Size([16, 256])
attention_mask shape: torch.Size([16, 256])
labels shape: torch.Size([16])
input_ids max value: 15252
Vocab size: 15253
/gpfsdswork/projects/rech/fmr/uft12cr/finetuneAli/OnlyGeneralTokenizer.py:192: FutureWarning: `torch.cuda.amp.autocast(args...)` is deprecated. Please use `torch.amp.autocast(
with amp.autocast():
Batch 100:
input_ids shape: torch.Size([16, 256])
attention_mask shape: torch.Size([16, 256])
labels shape: torch.Size([16])
input_ids max value: 15252
Vocab size: 15253
Batch 200:
input_ids shape: torch.Size([16, 256])
attention_mask shape: torch.Size([16, 256])
labels shape: torch.Size([16])
input_ids max value: 15252
Vocab size: 15253
Batch 300:
input_ids shape: torch.Size([16, 256])
attention_mask shape: torch.Size([16, 256])
labels shape: torch.Size([16])
input_ids max value: 15252
Vocab size: 15253
Batch 400:
input_ids shape: torch.Size([16, 256])
attention_mask shape: torch.Size([16, 256])
labels shape: torch.Size([16])
input_ids max value: 15252
Vocab size: 15253
Batch 500:
input_ids shape: torch.Size([16, 256])
attention_mask shape: torch.Size([16, 256])
labels shape: torch.Size([16])
input_ids max value: 15252
Vocab size: 15253
Batch 600:
input_ids shape: torch.Size([16, 256])
attention_mask shape: torch.Size([16, 256])
labels shape: torch.Size([16])
input_ids max value: 15252
Vocab size: 15253
Batch 700:
input_ids shape: torch.Size([16, 256])
attention_mask shape: torch.Size([16, 256])
labels shape: torch.Size([16])
input_ids max value: 15252
Vocab size: 15253
Batch 800:
input_ids shape: torch.Size([16, 256])
attention_mask shape: torch.Size([16, 256])
labels shape: torch.Size([16])
input_ids max value: 15252
Vocab size: 15253
Batch 900:
input_ids shape: torch.Size([16, 256])
attention_mask shape: torch.Size([16, 256])
labels shape: torch.Size([16])
input_ids max value: 15252
Vocab size: 15253
Epoch 1/3:
Val Accuracy: 0.7096, Val F1: 0.6564
Batch 0:
input_ids shape: torch.Size([16, 256])
attention_mask shape: torch.Size([16, 256])
labels shape: torch.Size([16])
input_ids max value: 15252
Vocab size: 15253
/gpfsdswork/projects/rech/fmr/uft12cr/finetuneAli/OnlyGeneralTokenizer.py:192: FutureWarning: `torch.cuda.amp.autocast(args...)` is deprecated. Please use `torch.amp.autocast(
with amp.autocast():
Batch 100:
input_ids shape: torch.Size([16, 256])
attention_mask shape: torch.Size([16, 256])
labels shape: torch.Size([16])
input_ids max value: 15252
Vocab size: 15253
Batch 200:
input_ids shape: torch.Size([16, 256])
attention_mask shape: torch.Size([16, 256])
labels shape: torch.Size([16])
input_ids max value: 15252
Vocab size: 15253
Batch 300:
input_ids shape: torch.Size([16, 256])
attention_mask shape: torch.Size([16, 256])
labels shape: torch.Size([16])
input_ids max value: 15252
Vocab size: 15253
Batch 400:
input_ids shape: torch.Size([16, 256])
attention_mask shape: torch.Size([16, 256])
labels shape: torch.Size([16])
input_ids max value: 15252
Vocab size: 15253
Batch 500:
input_ids shape: torch.Size([16, 256])
attention_mask shape: torch.Size([16, 256])
labels shape: torch.Size([16])
input_ids max value: 15252
Vocab size: 15253
Batch 600:
input_ids shape: torch.Size([16, 256])
attention_mask shape: torch.Size([16, 256])
labels shape: torch.Size([16])
input_ids max value: 15252
Vocab size: 15253
Batch 700:
input_ids shape: torch.Size([16, 256])
attention_mask shape: torch.Size([16, 256])
labels shape: torch.Size([16])
input_ids max value: 15252
Vocab size: 15253
Batch 800:
input_ids shape: torch.Size([16, 256])
attention_mask shape: torch.Size([16, 256])
labels shape: torch.Size([16])
input_ids max value: 15252
Vocab size: 15253
Batch 900:
input_ids shape: torch.Size([16, 256])
attention_mask shape: torch.Size([16, 256])
labels shape: torch.Size([16])
input_ids max value: 15252
Vocab size: 15253
Epoch 2/3:
Val Accuracy: 0.7246, Val F1: 0.6799
Batch 0:
input_ids shape: torch.Size([16, 256])
attention_mask shape: torch.Size([16, 256])
labels shape: torch.Size([16])
input_ids max value: 15252
Vocab size: 15253
/gpfsdswork/projects/rech/fmr/uft12cr/finetuneAli/OnlyGeneralTokenizer.py:192: FutureWarning: `torch.cuda.amp.autocast(args...)` is deprecated. Please use `torch.amp.autocast(
with amp.autocast():
Batch 100:
input_ids shape: torch.Size([16, 256])
attention_mask shape: torch.Size([16, 256])
labels shape: torch.Size([16])
input_ids max value: 15252
Vocab size: 15253
Batch 200:
input_ids shape: torch.Size([16, 256])
attention_mask shape: torch.Size([16, 256])
labels shape: torch.Size([16])
input_ids max value: 15252
Vocab size: 15253
Batch 300:
input_ids shape: torch.Size([16, 256])
attention_mask shape: torch.Size([16, 256])
labels shape: torch.Size([16])
input_ids max value: 15252
Vocab size: 15253
Batch 400:
input_ids shape: torch.Size([16, 256])
attention_mask shape: torch.Size([16, 256])
labels shape: torch.Size([16])
input_ids max value: 15252
Vocab size: 15253
Batch 500:
input_ids shape: torch.Size([16, 256])
attention_mask shape: torch.Size([16, 256])
labels shape: torch.Size([16])
input_ids max value: 15252
Vocab size: 15253
Batch 600:
input_ids shape: torch.Size([16, 256])
attention_mask shape: torch.Size([16, 256])
labels shape: torch.Size([16])
input_ids max value: 15252
Vocab size: 15253
Batch 700:
input_ids shape: torch.Size([16, 256])
attention_mask shape: torch.Size([16, 256])
labels shape: torch.Size([16])
input_ids max value: 15252
Vocab size: 15253
Batch 800:
input_ids shape: torch.Size([16, 256])
attention_mask shape: torch.Size([16, 256])
labels shape: torch.Size([16])
input_ids max value: 15252
Vocab size: 15253
Batch 900:
input_ids shape: torch.Size([16, 256])
attention_mask shape: torch.Size([16, 256])
labels shape: torch.Size([16])
input_ids max value: 15252
Vocab size: 15253
Epoch 3/3:
Val Accuracy: 0.7661, Val F1: 0.7440

Test Results for Final tokenizer:
Accuracy: 0.7661
F1 Score: 0.7441
AUC-ROC: 0.8256

Training with General tokenizer:
Vocabulary size: 30522
Could not load pretrained weights from /linkhome/rech/genrug01/uft12cr/bert_Model. Starting with random weights. Error: It looks like the config file at
Initialized model with vocabulary size: 30522
/gpfsdswork/projects/rech/fmr/uft12cr/finetuneAli/OnlyGeneralTokenizer.py:172: FutureWarning: `torch.cuda.amp.GradScaler(args...)` is deprecated. Please use `torch.amp.GradScaler(
scaler = amp.GradScaler()
Batch 0:
input_ids shape: torch.Size([16, 256])
attention_mask shape: torch.Size([16, 256])
labels shape: torch.Size([16])
input_ids max value: 29464
Vocab size: 30522
/gpfsdswork/projects/rech/fmr/uft12cr/finetuneAli/OnlyGeneralTokenizer.py:192: FutureWarning: `torch.cuda.amp.autocast(args...)` is deprecated. Please use `torch.amp.autocast(
with amp.autocast():
Batch 100:
input_ids shape: torch.Size([16, 256])
attention_mask shape: torch.Size([16, 256])
labels shape: torch.Size([16])
input_ids max value: 29536
Vocab size: 30522
Batch 200:
input_ids shape: torch.Size([16, 256])
attention_mask shape: torch.Size([16, 256])
labels shape: torch.Size([16])
input_ids max value: 29464
Vocab size: 30522
Batch 300:
input_ids shape: torch.Size([16, 256])
attention_mask shape: torch.Size([16, 256])
labels shape: torch.Size([16])
input_ids max value: 29402
Vocab size: 30522
Batch 400:
input_ids shape: torch.Size([16, 256])
attention_mask shape: torch.Size([16, 256])
labels shape: torch.Size([16])
input_ids max value: 29535
Vocab size: 30522
Batch 500:
input_ids shape: torch.Size([16, 256])
attention_mask shape: torch.Size([16, 256])
labels shape: torch.Size([16])
input_ids max value: 29494
Vocab size: 30522
Batch 600:
input_ids shape: torch.Size([16, 256])
attention_mask shape: torch.Size([16, 256])
labels shape: torch.Size([16])
input_ids max value: 29454
Vocab size: 30522
Batch 700:
input_ids shape: torch.Size([16, 256])
attention_mask shape: torch.Size([16, 256])
labels shape: torch.Size([16])
input_ids max value: 29413
Vocab size: 30522
Batch 800:
input_ids shape: torch.Size([16, 256])
attention_mask shape: torch.Size([16, 256])
labels shape: torch.Size([16])
input_ids max value: 28993
Vocab size: 30522
Batch 900:
input_ids shape: torch.Size([16, 256])
attention_mask shape: torch.Size([16, 256])
labels shape: torch.Size([16])
input_ids max value: 29602
Vocab size: 30522
Epoch 1/3:
Val Accuracy: 0.7601, Val F1: 0.7079
Batch 0:
input_ids shape: torch.Size([16, 256])
attention_mask shape: torch.Size([16, 256])
labels shape: torch.Size([16])
input_ids max value: 29413
Vocab size: 30522
/gpfsdswork/projects/rech/fmr/uft12cr/finetuneAli/OnlyGeneralTokenizer.py:192: FutureWarning: `torch.cuda.amp.autocast(args...)` is deprecated. Please use `torch.amp.autocast(
with amp.autocast():
Batch 100:
input_ids shape: torch.Size([16, 256])
attention_mask shape: torch.Size([16, 256])
labels shape: torch.Size([16])
input_ids max value: 29413
Vocab size: 30522
Batch 200:
input_ids shape: torch.Size([16, 256])
attention_mask shape: torch.Size([16, 256])
labels shape: torch.Size([16])
input_ids max value: 29464
Vocab size: 30522
Batch 300:
input_ids shape: torch.Size([16, 256])
attention_mask shape: torch.Size([16, 256])
labels shape: torch.Size([16])
input_ids max value: 29098
Vocab size: 30522
Batch 400:
input_ids shape: torch.Size([16, 256])
attention_mask shape: torch.Size([16, 256])
labels shape: torch.Size([16])
input_ids max value: 29339
Vocab size: 30522
Batch 500:
input_ids shape: torch.Size([16, 256])
attention_mask shape: torch.Size([16, 256])
labels shape: torch.Size([16])
input_ids max value: 29560
Vocab size: 30522
Batch 600:
input_ids shape: torch.Size([16, 256])
attention_mask shape: torch.Size([16, 256])
labels shape: torch.Size([16])
input_ids max value: 29464
Vocab size: 30522
Batch 700:
input_ids shape: torch.Size([16, 256])
attention_mask shape: torch.Size([16, 256])
labels shape: torch.Size([16])
input_ids max value: 29536
Vocab size: 30522
Batch 800:
input_ids shape: torch.Size([16, 256])
attention_mask shape: torch.Size([16, 256])
labels shape: torch.Size([16])
input_ids max value: 29458
Vocab size: 30522
Batch 900:
input_ids shape: torch.Size([16, 256])
attention_mask shape: torch.Size([16, 256])
labels shape: torch.Size([16])
input_ids max value: 29413
Vocab size: 30522
Epoch 2/3:
Val Accuracy: 0.8002, Val F1: 0.7716
Batch 0:
input_ids shape: torch.Size([16, 256])
attention_mask shape: torch.Size([16, 256])
labels shape: torch.Size([16])
input_ids max value: 29536
Vocab size: 30522
/gpfsdswork/projects/rech/fmr/uft12cr/finetuneAli/OnlyGeneralTokenizer.py:192: FutureWarning: `torch.cuda.amp.autocast(args...)` is deprecated. Please use `torch.amp.autocast(
with amp.autocast():
Batch 100:
input_ids shape: torch.Size([16, 256])
attention_mask shape: torch.Size([16, 256])
labels shape: torch.Size([16])
input_ids max value: 29413
Vocab size: 30522
Batch 200:
input_ids shape: torch.Size([16, 256])
attention_mask shape: torch.Size([16, 256])
labels shape: torch.Size([16])
input_ids max value: 29605
Vocab size: 30522
Batch 300:
input_ids shape: torch.Size([16, 256])
attention_mask shape: torch.Size([16, 256])
labels shape: torch.Size([16])
input_ids max value: 29464
Vocab size: 30522
Batch 400:
input_ids shape: torch.Size([16, 256])
attention_mask shape: torch.Size([16, 256])
labels shape: torch.Size([16])
input_ids max value: 29237
Vocab size: 30522
Batch 500:
input_ids shape: torch.Size([16, 256])
attention_mask shape: torch.Size([16, 256])
labels shape: torch.Size([16])
input_ids max value: 29292
Vocab size: 30522
Batch 600:
input_ids shape: torch.Size([16, 256])
attention_mask shape: torch.Size([16, 256])
labels shape: torch.Size([16])
input_ids max value: 29461
Vocab size: 30522
Batch 700:
input_ids shape: torch.Size([16, 256])
attention_mask shape: torch.Size([16, 256])
labels shape: torch.Size([16])
input_ids max value: 29536
Vocab size: 30522
Batch 800:
input_ids shape: torch.Size([16, 256])
attention_mask shape: torch.Size([16, 256])
labels shape: torch.Size([16])
input_ids max value: 29536
Vocab size: 30522
Batch 900:
input_ids shape: torch.Size([16, 256])
attention_mask shape: torch.Size([16, 256])
labels shape: torch.Size([16])
input_ids max value: 29566
Vocab size: 30522
Epoch 3/3:
Val Accuracy: 0.8160, Val F1: 0.7785

Test Results for General tokenizer:
Accuracy: 0.8160
F1 Score: 0.7785
AUC-ROC: 0.8630

Summary of Results:

All Cluster Tokenizer:
Accuracy: 0.8084
F1 Score: 0.7874
AUC-ROC: 0.8421

Final Tokenizer:
Accuracy: 0.7661
F1 Score: 0.7441
AUC-ROC: 0.8256

General Tokenizer:
Accuracy: 0.8160
F1 Score: 0.7785
AUC-ROC: 0.8630
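The log does not state how the F1 and AUC-ROC values are averaged over the six classes. A sketch of one plausible way to compute such metrics with scikit-learn (weighted F1 and one-vs-rest AUC-ROC are assumptions, and the arrays below are toy stand-ins for the real test outputs):

import numpy as np
from sklearn.metrics import accuracy_score, f1_score, roc_auc_score

y_true = np.array([0, 1, 2, 3, 4, 5, 3, 3])                      # true class ids (6 classes)
y_prob = np.random.default_rng(0).dirichlet(np.ones(6), size=8)  # softmax-style class scores
y_pred = y_prob.argmax(axis=1)

print("Accuracy:", accuracy_score(y_true, y_pred))
print("F1 Score:", f1_score(y_true, y_pred, average="weighted"))
print("AUC-ROC :", roc_auc_score(y_true, y_prob, multi_class="ovr"))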

Class distribution in training set:
Class Biology: 439 samples
Class Chemistry: 454 samples
Class Computer Science: 1358 samples
Class Mathematics: 9480 samples
Class Physics: 2733 samples
Class Statistics: 200 samples
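The training set is heavily skewed (9480 Mathematics samples versus 200 for Statistics), which is consistent with F1 trailing accuracy in the results above. One standard mitigation, not necessarily what OnlyGeneralTokenizer.py does, is to weight the cross-entropy loss by inverse class frequency; a minimal sketch using the counts listed above:

import torch

# Counts from the distribution above, in a fixed class order:
# Biology, Chemistry, Computer Science, Mathematics, Physics, Statistics
counts = torch.tensor([439.0, 454.0, 1358.0, 9480.0, 2733.0, 200.0])
weights = counts.sum() / (len(counts) * counts)      # inverse-frequency class weights
loss_fn = torch.nn.CrossEntropyLoss(weight=weights)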