
Image classification using a fine-tuned ViT for historical document sorting

Goal: sort archive page images for their further content-based processing

Scope: processing of images, training and evaluation of the ViT model, input file/directory processing, output of class 🏷️ (category) results for the top-N predictions, summarizing predictions into a tabular format, and HF 😊 hub support for the model

Versions 🏁

Several versions of the model are available for download; all of them share the same set of categories but differ in data annotations and base models. The latest approved version, v2.2, is the default and can be found in the main branch of the HF 😊 hub ^1 πŸ”—
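Other versions live in the same hub repository. A minimal sketch of loading one of the ViT-based versions with transformers (the mapping of version tags to branch names is an assumption; the CNN-based versions are timm models and load differently):

```python
from transformers import AutoImageProcessor, AutoModelForImageClassification

REPO = "ufal/vit-historical-page"

# "main" holds the default v2.2; other versions are assumed to sit in
# same-named branches of the hub repo (assumption, not confirmed here).
processor = AutoImageProcessor.from_pretrained(REPO, revision="main")
model = AutoModelForImageClassification.from_pretrained(REPO, revision="main")
```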

| Version | Base | Pages | PDFs | Description |
|---------|------|-------|------|-------------|
| v2.0 | vit-base-patch16-224 | 10073 | 3896 | annotations with mistakes, more heterogeneous data |
| v2.1 | vit-base-patch16-224 | 11940 | 5002 | main: more diverse pages in each category, fewer annotation mistakes |
| v2.2 | vit-base-patch16-224 | 15855 | 5730 | same data as v2.1 + some restored pages from v2.0 |
| v3.2 | vit-base-patch16-384 | 15855 | 5730 | same data as v2.2, but a slightly larger model base with higher resolution |
| v5.2 | vit-large-patch16-384 | 15855 | 5730 | same data as v2.2, but the largest model base with higher resolution |
| v1.2 | efficientnetv2_s.in21k | 15855 | 5730 | same data as v2.2, but the smallest model base (CNN) |
| v4.2 | efficientnetv2_l.in21k_ft_in1k | 15855 | 5730 | same data as v2.2, CNN base model smaller than the largest, may be more accurate |
| Base model | Parameters (M) | Resolution (px) |
|------------|----------------|-----------------|
| efficientnetv2_s.in21k | 48 | 300 |
| vit-base-patch16-224 | 87 | 224 |
| vit-base-patch16-384 | 87 | 384 |
| efficientnetv2_l.in21k_ft_in1k | 119 | 384 |
| vit-large-patch16-384 | 305 | 384 |

Model description πŸ“‡

![Model architecture](architecture.png)

πŸ”² Fine-tuned model repository: vit-historical-page ^1 πŸ”—

πŸ”³ Base model repository:

  • Google's vit-base-patch16-224, vit-base-patch16-384, and vit-large-patch16-384 ^2 ^6 ^7 πŸ”—
  • timm's efficientnetv2_s.in21k and efficientnetv2_l.in21k_ft_in1k ^8 ^9 πŸ”—

Data πŸ“œ

Training set sizes:

  • v2.0 - 8950 images
  • v2.1 - 10745 images
  • v2.X - 14565 images

The dataset is provided under a Public Domain license and consists of 15855 PNG images of pages from archival documents. The source image files and their annotations can be found in the LINDAT repository ^10 πŸ”—.

Categories 🏷️

| Label | Description |
|-------|-------------|
| DRAW πŸ“ˆ | drawings, maps, paintings, schematics, or graphics, potentially containing some text labels or captions |
| DRAW_L πŸ“ˆπŸ“ | drawings, etc., presented within a table-like layout or including a legend formatted as a table |
| LINE_HW βœοΈπŸ“ | handwritten text organized in a tabular or form-like structure |
| LINE_P πŸ“ | printed text organized in a tabular or form-like structure |
| LINE_T πŸ“ | machine-typed text organized in a tabular or form-like structure |
| PHOTO πŸŒ„ | photographs or photographic cutouts, potentially with text captions |
| PHOTO_L πŸŒ„πŸ“ | photos presented within a table-like layout or accompanied by tabular annotations |
| TEXT πŸ“° | mixtures of printed, handwritten, and/or typed text, potentially with minor graphical elements |
| TEXT_HW βœοΈπŸ“„ | only handwritten text in paragraph or block form (non-tabular) |
| TEXT_P πŸ“„ | only printed text in paragraph or block form (non-tabular) |
| TEXT_T πŸ“„ | only machine-typed text in paragraph or block form (non-tabular) |
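The labels above are the model's output classes. A minimal sketch of reading them back from the hub config, assuming the standard transformers id2label mapping is populated:

```python
from transformers import AutoConfig

# Read the category labels straight from the model config on the HF hub.
config = AutoConfig.from_pretrained("ufal/vit-historical-page")
print(config.id2label)  # expected: {0: "DRAW", 1: "DRAW_L", ...}
```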

Evaluation set: 1586 images (taken from v2.2 annotations)

![Dataset timeline](dataset_timeline.png)

Data preprocessing

During training, each of the following transforms was applied independently with a 50% chance (combined in the sketch after this list):

  • transforms.ColorJitter(brightness=0.5)
  • transforms.ColorJitter(contrast=0.5)
  • transforms.ColorJitter(saturation=0.5)
  • transforms.ColorJitter(hue=0.5)
  • transforms.Lambda(lambda img: ImageEnhance.Sharpness(img).enhance(random.uniform(0.5, 1.5)))
  • transforms.Lambda(lambda img: img.filter(ImageFilter.GaussianBlur(radius=random.uniform(0, 2))))
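Combined, the augmentation stage could look like the following sketch, using torchvision's RandomApply to get the independent 50% application chance; the Compose wiring is an assumption, only the individual transforms come from the list above:

```python
import random

from PIL import ImageEnhance, ImageFilter
from torchvision import transforms

# Each augmentation fires independently with probability 0.5.
train_augment = transforms.Compose([
    transforms.RandomApply([transforms.ColorJitter(brightness=0.5)], p=0.5),
    transforms.RandomApply([transforms.ColorJitter(contrast=0.5)], p=0.5),
    transforms.RandomApply([transforms.ColorJitter(saturation=0.5)], p=0.5),
    transforms.RandomApply([transforms.ColorJitter(hue=0.5)], p=0.5),
    transforms.RandomApply([transforms.Lambda(
        lambda img: ImageEnhance.Sharpness(img).enhance(random.uniform(0.5, 1.5)))], p=0.5),
    transforms.RandomApply([transforms.Lambda(
        lambda img: img.filter(ImageFilter.GaussianBlur(radius=random.uniform(0, 2))))], p=0.5),
])
```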

Training Hyperparameters

  • eval_strategy="epoch"
  • save_strategy="epoch"
  • learning_rate=5e-5
  • per_device_train_batch_size=8
  • per_device_eval_batch_size=8
  • num_train_epochs=3
  • warmup_ratio=0.1
  • logging_steps=10
  • load_best_model_at_end=True
  • metric_for_best_model="accuracy"
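These values map one-to-one onto the HF Trainer configuration. A minimal sketch, where output_dir is a hypothetical path and every argument not listed above is left at its default:

```python
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="vit-historical-page-checkpoints",  # hypothetical path
    eval_strategy="epoch",
    save_strategy="epoch",
    learning_rate=5e-5,
    per_device_train_batch_size=8,
    per_device_eval_batch_size=8,
    num_train_epochs=3,
    warmup_ratio=0.1,
    logging_steps=10,
    load_best_model_at_end=True,
    metric_for_best_model="accuracy",
)
```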

Results πŸ“Š

| Revision | Top-1 (%) | Top-3 (%) |
|----------|-----------|-----------|
| v1.2 | 97.73 | 99.87 |
| v2.2 | 97.54 | 99.94 |
| v3.2 | 96.49 | 99.94 |
| v4.2 | 97.73 | 99.87 |
| v5.2 | 97.86 | 99.87 |
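The Top-1/Top-3 numbers above are standard top-k accuracies over the evaluation set. A minimal NumPy sketch of the metric, where logits (shape [n_images, n_classes]) and labels (shape [n_images]) are hypothetical arrays:

```python
import numpy as np

def top_k_accuracy(logits: np.ndarray, labels: np.ndarray, k: int) -> float:
    """Percentage of images whose true label is among the k best-scored classes."""
    top_k = np.argsort(logits, axis=1)[:, -k:]     # indices of the k highest scores
    hits = (top_k == labels[:, None]).any(axis=1)  # is the true label among them?
    return float(hits.mean()) * 100.0
```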

v2.2 evaluation-set accuracy (Top-1): 97.54%

Figure: Top-1 confusion matrix (v2.2)

v3.2 evaluation-set accuracy (Top-1): 96.49%

Figure: Top-1 confusion matrix (v3.2)

v5.2 evaluation-set accuracy (Top-1): 97.73%

Figure: Top-1 confusion matrix (v5.2)

v1.2 evaluation-set accuracy (Top-1): 97.73%

Figure: Top-1 confusion matrix (v1.2)

v4.2 evaluation-set accuracy (Top-1): 97.86%

Figure: Top-1 confusion matrix (v4.2)

Result tables

Table columns

  • FILE - name of the input file
  • PAGE - page number within the file
  • CLASS-N - label of the top-N predicted category 🏷️
  • SCORE-N - confidence score of the top-N predicted category 🏷️
  • TRUE - actual (ground-truth) category 🏷️ label
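A minimal sketch of producing such a table with the transformers image-classification pipeline; the file names and the PAGE values are hypothetical, and the actual tooling in this repository may differ:

```python
import pandas as pd
from transformers import pipeline

classifier = pipeline("image-classification", model="ufal/vit-historical-page")

rows = []
for page_no, path in enumerate(["page_001.png", "page_002.png"], start=1):  # hypothetical inputs
    preds = classifier(path, top_k=3)        # top-3 category guesses per page
    row = {"FILE": path, "PAGE": page_no}
    for n, p in enumerate(preds, start=1):
        row[f"CLASS-{n}"] = p["label"]
        row[f"SCORE-{n}"] = round(p["score"], 4)
    rows.append(row)

print(pd.DataFrame(rows).to_string(index=False))
```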

Contacts πŸ“§

For support, write to πŸ“§ [email protected]

Official repository: UFAL ^3

Acknowledgements πŸ™

  • Developed by UFAL ^5 πŸ‘₯
  • Funded by ATRIUM ^4 πŸ’°
  • Shared by ATRIUM ^4 & UFAL ^5
  • Model type:
    • fine-tuned ViT with 224x224 ^2 πŸ”— or 384x384 ^6 ^7 πŸ”— input resolution
    • fine-tuned EffNetV2 with 300x300 ^8 πŸ”— or 384x384 ^9 πŸ”— input resolution

©️ 2022 UFAL & ATRIUM
