---
license: apache-2.0
tags:
  - generated_from_trainer
  - medical
model-index:
  - name: stop_reasons_classificator_multilabel
    results: []
datasets:
  - opentargets/clinical_trial_reason_to_stop
language:
  - en
metrics:
  - accuracy
library_name: transformers
---

# clinical_trial_stop_reasons

This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) for multilabel classification of the reasons why a clinical trial was stopped early.
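A minimal inference sketch, assuming the model is hosted on the Hub under the id `opentargets/clinical_trial_stop_reasons` (the repository id, the example sentence, and the 0.5 threshold are assumptions, not stated in this card); multilabel heads are typically scored with an independent sigmoid per label:

```python
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

# Repository id is an assumption based on this card's title.
model_id = "opentargets/clinical_trial_stop_reasons"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSequenceClassification.from_pretrained(model_id)

text = "Study was terminated early due to slow patient accrual."
inputs = tokenizer(text, return_tensors="pt", truncation=True)
with torch.no_grad():
    logits = model(**inputs).logits

# Multilabel classification: sigmoid per label, keep labels above a threshold.
probs = torch.sigmoid(logits).squeeze(0)
predicted = [
    model.config.id2label[i]
    for i, p in enumerate(probs.tolist())
    if p > 0.5  # assumed threshold, not taken from the card
]
print(predicted)
```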

The dataset of 3,747 manually curated stop reasons used for fine-tuning is available on the Hugging Face Hub as opentargets/clinical_trial_reason_to_stop.
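As a sketch, the dataset can be inspected with the `datasets` library (the dataset id comes from this card's metadata; splits and columns should be inspected rather than assumed):

```python
from datasets import load_dataset

# Dataset id taken from the card metadata; inspect splits/columns before use.
ds = load_dataset("opentargets/clinical_trial_reason_to_stop")
print(ds)
```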

More details on model training are available in the GitHub project (link) and in the associated publication (TBC).

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training (see the `TrainingArguments` sketch after this list):

- learning_rate: 5e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 7
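A minimal sketch of how these values map onto `transformers.TrainingArguments`; the `output_dir` name and the per-epoch evaluation strategy are assumptions (the results table below reports one validation pass per epoch), and the listed Adam settings are the `Trainer` defaults:

```python
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="stop_reasons_classificator_multilabel",  # hypothetical name
    learning_rate=5e-5,
    per_device_train_batch_size=32,
    per_device_eval_batch_size=32,
    seed=42,
    lr_scheduler_type="linear",
    num_train_epochs=7,
    evaluation_strategy="epoch",  # assumed from the per-epoch results below
)
```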

### Training results

| Training Loss | Epoch | Step | Validation Loss | Accuracy Thresh |
|:-------------:|:-----:|:----:|:---------------:|:---------------:|
| No log        | 1.0   | 106  | 0.1824          | 0.9475          |
| No log        | 2.0   | 212  | 0.1339          | 0.9630          |
| No log        | 3.0   | 318  | 0.1109          | 0.9689          |
| No log        | 4.0   | 424  | 0.0988          | 0.9741          |
| 0.1439        | 5.0   | 530  | 0.0943          | 0.9743          |
| 0.1439        | 6.0   | 636  | 0.0891          | 0.9763          |
| 0.1439        | 7.0   | 742  | 0.0899          | 0.9760          |

### Framework versions

- Transformers 4.26.0
- Pytorch 1.12.1+cu102
- Datasets 2.9.0
- Tokenizers 0.13.2