Image classification using a fine-tuned ViT for sorting historical documents
Goal: solve the task of sorting archive page images (for their further content-based processing)
Scope: image processing, training and evaluation of the ViT model, input file/directory handling, output of the top-N predicted classes (categories), summarizing predictions into a tabular format, and HF hub support for the model
Versions
There are currently several versions of the model available for download. All of them share the same set of categories but differ in their data annotations. The latest approved version, v2.2, is the default and can be found in the main branch of the HF hub ^1.
Version | Base | Pages | PDFs | Description
---|---|---|---|---
v2.0 | vit-base-patch16-224 | 10073 | 3896 | annotations with mistakes, more heterogeneous data
v2.1 | vit-base-patch16-224 | 11940 | 5002 | main: more diverse pages in each category, fewer annotation mistakes
v2.2 | vit-base-patch16-224 | 15855 | 5730 | same data as v2.1 plus some restored pages from v2.0
v3.2 | vit-base-patch16-384 | 15855 | 5730 | same data as v2.2, but a slightly larger model base with higher resolution
v5.2 | vit-large-patch16-384 | 15855 | 5730 | same data as v2.2, but the largest model base with higher resolution
v1.2 | efficientnetv2_s.in21k | 15855 | 5730 | same data as v2.2, but the smallest model base (CNN)
v4.2 | efficientnetv2_l.in21k_ft_in1k | 15855 | 5730 | same data as v2.2, a CNN base smaller than the largest ViT, may be more accurate
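The default v2.2 checkpoint can be loaded directly from the HF hub. Below is a minimal sketch using the `transformers` Auto classes; the repository id `ufal/vit-historical-page` comes from this card, while the assumption that other versions live on correspondingly named branches (selectable via the `revision` argument) is not confirmed here.

```python
# Sketch: load the fine-tuned page classifier from the HF hub.
# "ufal/vit-historical-page" is the repo id from this card; passing
# revision="main" (the default) yields the approved v2.2 checkpoint.
from transformers import AutoImageProcessor, AutoModelForImageClassification

repo_id = "ufal/vit-historical-page"
processor = AutoImageProcessor.from_pretrained(repo_id)
model = AutoModelForImageClassification.from_pretrained(repo_id)

# id2label maps class indices to the category labels listed below
print(model.config.id2label)
```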
Base model | Parameters (M) | Resolution (px)
---|---|---
efficientnetv2_s.in21k | 48 | 300
vit-base-patch16-224 | 87 | 224
vit-base-patch16-384 | 87 | 384
efficientnetv2_l.in21k_ft_in1k | 119 | 384
vit-large-patch16-384 | 305 | 384
Model description
Fine-tuned model repository: vit-historical-page ^1
Base model repositories:
- Google's vit-base-patch16-224, vit-base-patch16-384, and vit-large-patch16-384 ^2 ^6 ^7
- timm's efficientnetv2_s.in21k and efficientnetv2_l.in21k_ft_in1k ^8 ^9
Data
Training set sizes:
- v2.0: 8950 images
- v2.1: 10745 images
- v2.X: 14565 images

The dataset is provided under a Public Domain license and consists of 15855 PNG images of pages from archival documents. The source image files and their annotations can be found in the LINDAT repository ^10.
Categories
Label | Description
---|---
DRAW | drawings, maps, paintings, schematics, or graphics, potentially containing some text labels or captions
DRAW_L | drawings, etc., but presented within a table-like layout or including a legend formatted as a table
LINE_HW | handwritten text organized in a tabular or form-like structure
LINE_P | printed text organized in a tabular or form-like structure
LINE_T | machine-typed text organized in a tabular or form-like structure
PHOTO | photographs or photographic cutouts, potentially with text captions
PHOTO_L | photos presented within a table-like layout or accompanied by tabular annotations
TEXT | mixtures of printed, handwritten, and/or typed text, potentially with minor graphical elements
TEXT_HW | only handwritten text in paragraph or block form (non-tabular)
TEXT_P | only printed text in paragraph or block form (non-tabular)
TEXT_T | only machine-typed text in paragraph or block form (non-tabular)
Evaluation set: 1586 images (taken from the v2.2 annotations)
Data preprocessing
During training, each of the following transforms was applied with a 50% probability:
- transforms.ColorJitter(brightness=0.5)
- transforms.ColorJitter(contrast=0.5)
- transforms.ColorJitter(saturation=0.5)
- transforms.ColorJitter(hue=0.5)
- transforms.Lambda(lambda img: ImageEnhance.Sharpness(img).enhance(random.uniform(0.5, 1.5)))
- transforms.Lambda(lambda img: img.filter(ImageFilter.GaussianBlur(radius=random.uniform(0, 2))))
Training hyperparameters
- eval_strategy: "epoch"
- save_strategy: "epoch"
- learning_rate: 5e-5
- per_device_train_batch_size: 8
- per_device_eval_batch_size: 8
- num_train_epochs: 3
- warmup_ratio: 0.1
- logging_steps: 10
- load_best_model_at_end: True
- metric_for_best_model: "accuracy"
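These hyperparameters correspond directly to a `transformers.TrainingArguments` configuration. A sketch (the `output_dir` value is a placeholder; the `eval_strategy` keyword requires a recent transformers release):

```python
from transformers import TrainingArguments

# Sketch of the hyperparameters listed above; output_dir is a placeholder.
training_args = TrainingArguments(
    output_dir="vit-historical-page-checkpoints",
    eval_strategy="epoch",
    save_strategy="epoch",
    learning_rate=5e-5,
    per_device_train_batch_size=8,
    per_device_eval_batch_size=8,
    num_train_epochs=3,
    warmup_ratio=0.1,
    logging_steps=10,
    load_best_model_at_end=True,
    metric_for_best_model="accuracy",
)
```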
Results
Revision | Top-1 (%) | Top-3 (%)
---|---|---
v1.2 | 97.73 | 99.87
v2.2 | 97.54 | 99.94
v3.2 | 96.49 | 99.94
v4.2 | 97.73 | 99.87
v5.2 | 97.86 | 99.87
Result tables
- v2.2 manually checked evaluation dataset results (Top-1): model_TOP-3_EVAL.csv
- v2.2 manually checked evaluation dataset results (Top-3): model_TOP-3_EVAL.csv
- v3.2 manually checked evaluation dataset results (Top-1): model_TOP-3_EVAL.csv
- v3.2 manually checked evaluation dataset results (Top-3): model_TOP-3_EVAL.csv
- v5.2 manually checked evaluation dataset results (Top-1): model_TOP-3_EVAL.csv
- v5.2 manually checked evaluation dataset results (Top-3): model_TOP-3_EVAL.csv
- v1.2 manually checked evaluation dataset results (Top-1): model_TOP-3_EVAL.csv
- v1.2 manually checked evaluation dataset results (Top-3): model_TOP-3_EVAL.csv
- v4.2 manually checked evaluation dataset results (Top-1): model_TOP-3_EVAL.csv
- v4.2 manually checked evaluation dataset results (Top-3): model_TOP-3_EVAL.csv
Table columns
- FILE - name of the input file
- PAGE - page number
- CLASS-N - label of the top-N predicted category
- SCORE-N - confidence score of the top-N predicted category
- TRUE - actual (ground-truth) category label
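A sketch of how per-page top-N predictions could be flattened into rows with these columns. The `predictions` structure, labels, and scores here are illustrative placeholders, not output of the actual model:

```python
import csv

# Hypothetical per-page predictions: (file, page, [(label, score), ...]),
# sorted by descending score. Values are placeholders for illustration.
predictions = [
    ("box01.pdf", 1, [("TEXT", 0.91), ("TEXT_T", 0.06), ("LINE_T", 0.02)]),
]

# Write the top-3 guesses per page in the tabular format described above.
with open("result_TOP-3.csv", "w", newline="") as f:
    writer = csv.writer(f)
    writer.writerow(["FILE", "PAGE", "CLASS-1", "SCORE-1",
                     "CLASS-2", "SCORE-2", "CLASS-3", "SCORE-3"])
    for file, page, top3 in predictions:
        row = [file, page]
        for label, score in top3:
            row += [label, round(score, 4)]
        writer.writerow(row)
```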
Contacts
For support, write to: [email protected]
Official repository: UFAL ^3
Acknowledgements
© 2022 UFAL & ATRIUM