Tony Fang committed
Commit 2dcb6dd
1 Parent(s): 900cef8

edited README.md

Files changed (1)
  1. README.md +29 -2
README.md CHANGED
@@ -139,9 +139,36 @@ cd transformer_benchmark
python train.py --config Configs/conditional_detr.yaml
```

- ### Identity Classification
+ ### Temporal Classification
- Use `tracklet_id` (1-8) from the PKL file as labels.
- - **Temporal Split**: 30% train / 30% val / 40% test (chronological order).
+ - **Temporal Split**: 30% train / 30% val / 40% test (chronological order; see the split sketch below).
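For reference, a minimal sketch of the chronological 30% / 30% / 40% split; the `frame_ids` array and function name are illustrative, not the repository's actual code:
```
import numpy as np

def chronological_split(frame_ids, train=0.3, val=0.3):
    """Split sample indices by time: first 30% train, next 30% val, last 40% test."""
    order = np.argsort(frame_ids)              # sort samples by frame number (time)
    n = len(order)
    n_train = int(n * train)
    n_val = int(n * val)
    return order[:n_train], order[n_train:n_train + n_val], order[n_train + n_val:]

# Example: frame ids loaded alongside the tracklet_id labels from the PKL file.
frame_ids = np.arange(1000)
train_idx, val_idx, test_idx = chronological_split(frame_ids)
```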
+
+ ### Benchmarking vision models for temporal classification
+
+ Step 1: Crop the bounding boxes from `pmfeed_4_3_16.mp4` using the ground-truth labels in `pmfeed_4_3_16_bboxes_and_labels.pkl`, then convert the folder of cropped images into an LMDB dataset for fast loading:
+ ```
+ cd identification_benchmark
+ python crop_pmfeed_4_3_16.py
+ python construct_lmdb.py
+ ```
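For orientation, a hedged sketch of what this step amounts to. The PKL schema (`frame_id`, `tracklet_id`, `x1`/`y1`/`x2`/`y2` columns), the LMDB path, and the key format are assumptions; the actual `crop_pmfeed_4_3_16.py` and `construct_lmdb.py` may differ:
```
import pickle
import cv2
import lmdb

# Load per-frame boxes and identities (assumed to be a pandas DataFrame).
with open("pmfeed_4_3_16_bboxes_and_labels.pkl", "rb") as f:
    df = pickle.load(f)  # assumed columns: frame_id, tracklet_id, x1, y1, x2, y2

cap = cv2.VideoCapture("pmfeed_4_3_16.mp4")
env = lmdb.open("pmfeed_4_3_16_lmdb", map_size=1 << 40)  # ~1 TiB virtual map size

frame_id = 0
with env.begin(write=True) as txn:
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        frame_id += 1
        for _, row in df[df["frame_id"] == frame_id].iterrows():
            x1, y1, x2, y2 = map(int, (row["x1"], row["y1"], row["x2"], row["y2"]))
            crop = frame[y1:y2, x1:x2]
            ok_enc, buf = cv2.imencode(".jpg", crop)
            if ok_enc:
                # Key encodes frame and identity so labels can be recovered later.
                key = f"{frame_id:06d}_{int(row['tracklet_id'])}".encode()
                txn.put(key, buf.tobytes())
cap.release()
env.close()
```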
+
+ Step 2: Extract embeddings with a vision model:
+ ```
+ cd big_model_inference
+ ```
+ Use `inference_resnet.py` to get embeddings from a ResNet and `inference_transformers.py` to get embeddings from transformer checkpoints available on Hugging Face:
+ ```
+ python inference_resnet.py --resnet_type resnet18
+ python inference_transformers.py --model_name facebook/convnextv2-nano-1k-224
+ ```
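As a rough illustration of the ResNet path (a sketch under assumptions, not the repository's `inference_resnet.py`): it assumes torchvision ≥ 0.13, a plain folder of cropped JPEGs rather than the LMDB reader, and filenames that end in the tracklet id:
```
import torch
import torch.nn as nn
from torchvision import models, transforms
from PIL import Image
from pathlib import Path

# ImageNet-pretrained ResNet-18 with the classification head removed -> 512-d embeddings.
model = models.resnet18(weights=models.ResNet18_Weights.IMAGENET1K_V1)
model.fc = nn.Identity()
model.eval()

preprocess = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]),
])

embeddings, labels = [], []
with torch.no_grad():
    for path in sorted(Path("crops").glob("*.jpg")):      # hypothetical folder of cropped images
        x = preprocess(Image.open(path).convert("RGB")).unsqueeze(0)
        embeddings.append(model(x).squeeze(0))
        labels.append(int(path.stem.split("_")[-1]))       # tracklet_id encoded in the filename (assumption)

torch.save({"embeddings": torch.stack(embeddings), "labels": torch.tensor(labels)},
           "resnet18_embeddings.pt")
```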
+
+ Step 3: Use the embeddings and labels from Step 2 to run kNN evaluation and linear classification:
+
+ ```
+ cd ../classification
+ python train.py
+ python knn_evaluation.py
+ ```
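A minimal sketch of this evaluation with scikit-learn, assuming embeddings and labels stored as in the previous sketch and samples kept in temporal order; the repository's `train.py` and `knn_evaluation.py` may be implemented differently:
```
import torch
from sklearn.neighbors import KNeighborsClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.preprocessing import StandardScaler

data = torch.load("resnet18_embeddings.pt")   # file name from the sketch above (assumption)
X = data["embeddings"].numpy()
y = data["labels"].numpy()

# Chronological 30% / 30% / 40% split.
n = len(y)
n_train, n_val = int(0.3 * n), int(0.3 * n)
X_train, y_train = X[:n_train], y[:n_train]
X_test, y_test = X[n_train + n_val:], y[n_train + n_val:]

# kNN evaluation directly on the frozen embeddings.
knn = KNeighborsClassifier(n_neighbors=5)
knn.fit(X_train, y_train)
print("kNN accuracy:", knn.score(X_test, y_test))

# Linear probe: logistic regression on standardized embeddings.
scaler = StandardScaler().fit(X_train)
clf = LogisticRegression(max_iter=1000)
clf.fit(scaler.transform(X_train), y_train)
print("Linear accuracy:", clf.score(scaler.transform(X_test), y_test))
```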

## Key Results