
bert-finetuned-math-prob-classification

This model is a fine-tuned version of bert-base-uncased on part of the competition_math dataset. Specifically, it was trained as a multi-class (single-label) classification model on the problem text. The seven problem types (labels) are "Counting & Probability", "Prealgebra", "Algebra", "Number Theory", "Geometry", "Intermediate Algebra", and "Precalculus".
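
As a quick usage sketch (assuming the checkpoint's config stores the human-readable label names; if not, the pipeline returns raw LABEL_i ids instead):

```python
from transformers import pipeline

# Load the fine-tuned checkpoint from the Hub.
classifier = pipeline(
    "text-classification",
    model="lschlessinger/bert-finetuned-math-prob-classification",
)

# Classify a sample problem statement (the sample text is illustrative).
result = classifier("Two fair dice are rolled. What is the probability that the sum is 7?")
print(result)  # e.g. [{'label': 'Counting & Probability', 'score': ...}]
```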

Model description

See the bert-base-uncased model card for more details. The only architectural modification was to the classification head, which was resized to output the 7 problem-type classes.
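
A minimal sketch of how such a head can be attached when reproducing the fine-tuning (the label ordering below is an assumption; the card does not state the id-to-label mapping used in training):

```python
from transformers import AutoModelForSequenceClassification, AutoTokenizer

# Label ordering here is assumed, not taken from the original training run.
labels = [
    "Counting & Probability", "Prealgebra", "Algebra", "Number Theory",
    "Geometry", "Intermediate Algebra", "Precalculus",
]

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModelForSequenceClassification.from_pretrained(
    "bert-base-uncased",
    num_labels=len(labels),  # swaps in a randomly initialized 7-way classification head
    id2label=dict(enumerate(labels)),
    label2id={label: i for i, label in enumerate(labels)},
)
```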

Intended uses & limitations

This model is intended for demonstration purposes only. The problem text is in English and contains many LaTeX tokens.

Training and evaluation data

The problem field of the competition_math dataset was used as the input for training and evaluation. The target labels were taken from the type field.
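
A sketch of loading and preparing this data (depending on your datasets version, this script-based dataset may require trust_remote_code=True; the label-id ordering is an assumed convention):

```python
from datasets import load_dataset

ds = load_dataset("competition_math")

# Each example carries a "problem" (input text) and a "type" (target label).
example = ds["train"][0]
print(example["problem"][:80], "->", example["type"])

# Map the string labels to integer ids for training.
labels = sorted(set(ds["train"]["type"]))
label2id = {label: i for i, label in enumerate(labels)}
ds = ds.map(lambda ex: {"label": label2id[ex["type"]]})
```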

Training procedure

Training hyperparameters

The following hyperparameters were used during training (a TrainingArguments sketch follows the list):

  • learning_rate: 5e-05
  • train_batch_size: 8
  • eval_batch_size: 8
  • seed: 42
  • optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
  • lr_scheduler_type: linear
  • num_epochs: 3.0
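
Expressed as transformers TrainingArguments, these settings would look roughly as follows (output_dir is a placeholder; the Adam settings below simply restate the listed values, which are also the Trainer defaults):

```python
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="bert-finetuned-math-prob-classification",  # placeholder path
    learning_rate=5e-5,
    per_device_train_batch_size=8,
    per_device_eval_batch_size=8,
    seed=42,
    lr_scheduler_type="linear",
    num_train_epochs=3.0,
    # Adam with betas=(0.9, 0.999) and epsilon=1e-08:
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-8,
)
```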

Training results

This fine-tuned model achieves the following results on the competition_math test set (a sketch for regenerating the report follows the table):

                        precision    recall  f1-score   support

               Algebra       0.78      0.79      0.79      1187
Counting & Probability       0.75      0.81      0.78       474
              Geometry       0.76      0.83      0.79       479
  Intermediate Algebra       0.86      0.84      0.85       903
         Number Theory       0.79      0.82      0.80       540
            Prealgebra       0.66      0.61      0.63       871
           Precalculus       0.95      0.89      0.92       546

              accuracy                           0.79      5000
             macro avg       0.79      0.80      0.79      5000
          weighted avg       0.79      0.79      0.79      5000
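
The report above is standard scikit-learn classification_report output. A sketch of regenerating it, assuming the checkpoint's predicted label strings match the dataset's type field exactly:

```python
from datasets import load_dataset
from sklearn.metrics import classification_report
from transformers import pipeline

test = load_dataset("competition_math", split="test")
classifier = pipeline(
    "text-classification",
    model="lschlessinger/bert-finetuned-math-prob-classification",
)

# Truncation guards against problems longer than BERT's 512-token limit.
preds = [p["label"] for p in classifier(test["problem"], truncation=True, batch_size=8)]
print(classification_report(test["type"], preds, digits=2))
```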

Framework versions

  • Transformers 4.22.2
  • Pytorch 1.12.1+cu113
  • Datasets 2.5.1
  • Tokenizers 0.12.1