---
library_name: transformers
base_model:
- facebook/detr-resnet-50
tags:
- generated_from_trainer
- industry
- construction
model-index:
- name: finetuned-ViT-model
  results: []
license: mit
datasets:
- hf-vision/hardhat
language:
- en
pipeline_tag: object-detection
---

# finetuned-ViT-model

This model is a fine-tuned version of [facebook/detr-resnet-50-dc5](https://huggingface.co/facebook/detr-resnet-50-dc5) on the [Hard Hat dataset](https://huggingface.co/datasets/hf-vision/hardhat).
It achieves the following results on the evaluation set:
- Loss: 0.9937

## Model description

This model is a demonstration project for the Hugging Face Certification assignment and was created for educational purposes.
It is a transformer-based object detection model fine-tuned to detect hard hats, heads, and people in images. It uses the `facebook/detr-resnet-50-dc5` checkpoint as a base and is further trained on the `hf-vision/hardhat` dataset.

The model combines a convolutional backbone for feature extraction with a transformer encoder-decoder that predicts bounding boxes and class labels for the objects of interest.
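
A minimal inference sketch using the `object-detection` pipeline. The repository id below is a placeholder for wherever this checkpoint is hosted; the image path and confidence threshold are illustrative:

```python
from transformers import pipeline

# Minimal inference sketch; "your-username/finetuned-ViT-model" is a
# placeholder repository id, not the confirmed Hub location of this checkpoint.
detector = pipeline("object-detection", model="your-username/finetuned-ViT-model")

# Run detection on a local image; path and threshold are illustrative.
for detection in detector("construction_site.jpg", threshold=0.5):
    print(detection["label"], round(detection["score"], 3), detection["box"])
```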

## Intended uses & limitations

- **Intended Uses:** This model can be used to demonstrate transformer-based object detection. It can potentially support safety applications that identify individuals wearing or not wearing hard hats on construction sites or in industrial environments.
- **Limitations:** The model was trained on a limited amount of data for a small number of steps and may not generalize well to images with significantly different characteristics, viewpoints, or lighting conditions. It is not intended for production use without further evaluation and validation.

## Training and evaluation data

- **Dataset:** The model was trained on the `hf-vision/hardhat` dataset from Hugging Face Datasets. This dataset contains images of construction sites and industrial settings with annotations for hardhats, heads, and people.
- **Data splits:** The dataset is divided into "train" and "test" splits.  
- **Data augmentation:** Data augmentation was applied during training with `albumentations` to improve model generalization, including random horizontal flipping and random brightness/contrast adjustments (a minimal sketch follows this list).
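
A minimal sketch of the data and augmentation setup described above, assuming `albumentations` with COCO-format bounding boxes; the probabilities and field names are illustrative rather than values taken from the actual training run:

```python
import albumentations as A
from datasets import load_dataset

# Load the dataset named in this card; split names follow the card's
# description ("train" / "test").
dataset = load_dataset("hf-vision/hardhat")

# Augmentations described above; probabilities and bbox parameters are
# assumptions, not values recovered from the original training script.
train_transform = A.Compose(
    [
        A.HorizontalFlip(p=0.5),
        A.RandomBrightnessContrast(p=0.5),
    ],
    bbox_params=A.BboxParams(format="coco", label_fields=["category"]),
)
```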


## Training procedure

- **Base model:** The model was initialized from the `facebook/detr-resnet-50-dc5` checkpoint, a pre-trained DETR model with a ResNet-50 backbone.
- **Fine-tuning:** The model was fine-tuned using the Hugging Face `Trainer` with the following hyperparameters:
    - Learning rate: 1e-6
    - Weight decay: 1e-4
    - Batch size: 1
    - Epochs: 3
    - Max steps: 2500
    - Optimizer: AdamW 
- **Evaluation:** The model was evaluated on the test set using standard object detection metrics, including COCO-style Average Precision and Average Recall (a minimal metric sketch follows this list).
- **Hardware:** Training was performed on Google Colab using GPU acceleration.
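
The card does not specify the exact evaluation tooling; the sketch below uses `torchmetrics` as one common way to compute COCO-style AP/AR from predictions and ground-truth boxes. The box, score, and label values are dummy data:

```python
import torch
from torchmetrics.detection import MeanAveragePrecision

# COCO-style mAP/mAR computation; the tensors below are dummy examples.
metric = MeanAveragePrecision(box_format="xyxy")

preds = [{
    "boxes": torch.tensor([[10.0, 20.0, 100.0, 200.0]]),
    "scores": torch.tensor([0.9]),
    "labels": torch.tensor([1]),
}]
targets = [{
    "boxes": torch.tensor([[12.0, 22.0, 98.0, 195.0]]),
    "labels": torch.tensor([1]),
}]

metric.update(preds, targets)
print(metric.compute())  # Average Precision / Average Recall summary
```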


### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 1e-06
- train_batch_size: 1
- eval_batch_size: 8
- seed: 42
- optimizer: AdamW (`adamw_torch`) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: linear
- training_steps: 500
- mixed_precision_training: Native AMP
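
For reference, the listed values map onto `TrainingArguments` roughly as follows; `output_dir` and any argument not listed in this card are placeholders, not values from the original run:

```python
from transformers import TrainingArguments

# Rough mapping of the hyperparameters listed above; output_dir is a
# placeholder and unlisted arguments are left at their defaults.
training_args = TrainingArguments(
    output_dir="finetuned-ViT-model",
    learning_rate=1e-6,
    weight_decay=1e-4,              # from the "Training procedure" section
    per_device_train_batch_size=1,
    per_device_eval_batch_size=8,
    max_steps=500,                  # "training_steps" above
    lr_scheduler_type="linear",
    seed=42,
    optim="adamw_torch",
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-8,
    fp16=True,                      # Native AMP mixed precision
)
```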

### Training results



### Framework versions

- Transformers 4.50.1
- Pytorch 2.5.1+cu121
- Datasets 3.4.1
- Tokenizers 0.21.0