---
license: other
license_name: sla0044
license_link: >-
  https://github.com/STMicroelectronics/stm32aimodelzoo/object_detection/tiny_yolo_v2/ST_pretrainedmodel_custom_dataset/LICENSE.md
pipeline_tag: object-detection
---
# Tiny Yolo v2 quantized

## **Use case** : `Object detection`

# Model description

Tiny Yolo v2 is a real-time object detection model implemented in TensorFlow.

The model is quantized in int8 format using the TensorFlow Lite converter.

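As an illustration, the snippet below sketches int8 post-training quantization with the TensorFlow Lite converter. It is a minimal sketch, not the model zoo's exact recipe: the float model path, the 224x224 resolution and the random calibration data are hypothetical placeholders.

```python
# Minimal sketch of int8 post-training quantization with the TensorFlow Lite converter.
# The float model path, resolution and calibration data below are illustrative placeholders.
import numpy as np
import tensorflow as tf

def representative_data_gen():
    # Yield a few calibration samples shaped like the network input, float32 in [0, 1].
    # Replace the random data with real preprocessed images for meaningful calibration.
    for _ in range(100):
        yield [np.random.rand(1, 224, 224, 3).astype(np.float32)]

float_model = tf.keras.models.load_model("tiny_yolo_v2_224.h5")  # hypothetical float model
converter = tf.lite.TFLiteConverter.from_keras_model(float_model)
converter.optimizations = [tf.lite.Optimize.DEFAULT]
converter.representative_dataset = representative_data_gen
converter.target_spec.supported_ops = [tf.lite.OpsSet.TFLITE_BUILTINS_INT8]
converter.inference_input_type = tf.uint8  # matches the UINT8 input described below
# The output tensor is left in float32, as described in the network outputs table.

with open("tiny_yolo_v2_224_int8.tflite", "wb") as f:
    f.write(converter.convert())
```
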
## Network information

| Network information | Value |
|---------------------|-------|
| Framework | TensorFlow Lite |
| Quantization | int8 |
| Provenance | https://github.com/AlexeyAB/darknet |
| Paper | https://pjreddie.com/media/files/papers/YOLO9000.pdf |

The models are quantized using the TensorFlow Lite converter.

## Network inputs / outputs

For an image resolution of NxM and NC classes:

| Input Shape | Description |
| ----- | ----------- |
| (1, W, H, 3) | Single NxM RGB image with UINT8 values between 0 and 255 |

| Output Shape | Description |
| ----- | ----------- |
| (1, WxH, NAx(5+NC)) | FLOAT values, where WxH is the resolution of the output grid, NA is the number of anchors and NC is the number of classes |

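For reference, here is a minimal sketch of loading one of the quantized models with the TensorFlow Lite interpreter on a host machine, checking that the tensors match the shapes described above, and running a single inference. The model file name and the random input image are placeholders.

```python
# Minimal sketch of host-side inference with the TensorFlow Lite interpreter.
# Model file name and input image are illustrative placeholders.
import numpy as np
import tensorflow as tf

interpreter = tf.lite.Interpreter(model_path="tiny_yolo_v2_224_int8.tflite")
interpreter.allocate_tensors()

input_details = interpreter.get_input_details()[0]
output_details = interpreter.get_output_details()[0]
print("input :", input_details["shape"], input_details["dtype"])   # e.g. [1 224 224 3], uint8
print("output:", output_details["shape"], output_details["dtype"])  # e.g. (1, 7*7, NA*(5+NC)), float32 for a 224x224 input

# Single RGB image with UINT8 values between 0 and 255, resized to the network resolution.
image = np.random.randint(0, 256, size=input_details["shape"], dtype=np.uint8)  # replace with a real image
interpreter.set_tensor(input_details["index"], image)
interpreter.invoke()
raw_grid = interpreter.get_tensor(output_details["index"])  # anchor predictions, still to be decoded
```
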
## Recommended Platforms

| Platform | Supported | Recommended |
|----------|-----------|-------------|
| STM32L0  | []  | []  |
| STM32L4  | []  | []  |
| STM32U5  | []  | []  |
| STM32H7  | [x] | []  |
| STM32MP1 | [x] | [x] |
| STM32MP2 | [x] | [x] |
| STM32N6  | [x] | [x] |

# Performances

## Metrics

Measurements are done with the default STM32Cube.AI configuration, with the input / output allocated option enabled.

### Reference **NPU** memory footprint based on COCO Person dataset (see Accuracy for details on dataset)

| Model | Dataset | Format | Resolution | Series | Internal RAM (KiB) | External RAM (KiB) | Weights Flash (KiB) | STM32Cube.AI version | STEdgeAI Core version |
|-------|---------|--------|------------|--------|--------------------|--------------------|---------------------|----------------------|-----------------------|
| [tiny_yolo_v2](https://github.com/STMicroelectronics/stm32ai-modelzoo/object_detection/tiny_yolo_v2/ST_pretrainedmodel_public_dataset/coco_2017_person/tiny_yolo_v2_224/tiny_yolo_v2_224_int8.tflite) | COCO-Person | Int8 | 224x224x3 | STM32N6 | 392 | 0.0 | 10804.81 | 10.0.0 | 2.0.0 |
| [tiny_yolo_v2](https://github.com/STMicroelectronics/stm32ai-modelzoo/object_detection/tiny_yolo_v2/ST_pretrainedmodel_custom_dataset/st_person/tiny_yolo_v2_224/tiny_yolo_v2_224_int8.tflite) | ST-Person | Int8 | 224x224x3 | STM32N6 | 392 | 0.0 | 10804.81 | 10.0.0 | 2.0.0 |
| [tiny_yolo_v2](https://github.com/STMicroelectronics/stm32ai-modelzoo/object_detection/tiny_yolo_v2/ST_pretrainedmodel_public_dataset/coco_2017_person/tiny_yolo_v2_416/tiny_yolo_v2_416_int8.tflite) | COCO-Person | Int8 | 416x416x3 | STM32N6 | 1880.12 | 0.0 | 10829 | 10.0.0 | 2.0.0 |

### Reference **NPU** inference time based on COCO Person dataset (see Accuracy for details on dataset)

| Model | Dataset | Format | Resolution | Board | Execution Engine | Inference time (ms) | Inf / sec | STM32Cube.AI version | STEdgeAI Core version |
|-------|---------|--------|------------|-------|------------------|---------------------|-----------|----------------------|-----------------------|
| [tiny_yolo_v2](https://github.com/STMicroelectronics/stm32ai-modelzoo/object_detection/tiny_yolo_v2/ST_pretrainedmodel_public_dataset/coco_2017_person/tiny_yolo_v2_224/tiny_yolo_v2_224_int8.tflite) | COCO-Person | Int8 | 224x224x3 | STM32N6570-DK | NPU/MCU | 30.67 | 32.61 | 10.0.0 | 2.0.0 |
| [tiny_yolo_v2](https://github.com/STMicroelectronics/stm32ai-modelzoo/object_detection/tiny_yolo_v2/ST_pretrainedmodel_custom_dataset/st_person/tiny_yolo_v2_224/tiny_yolo_v2_224_int8.tflite) | ST-Person | Int8 | 224x224x3 | STM32N6570-DK | NPU/MCU | 30.67 | 32.61 | 10.0.0 | 2.0.0 |
| [tiny_yolo_v2](https://github.com/STMicroelectronics/stm32ai-modelzoo/object_detection/tiny_yolo_v2/ST_pretrainedmodel_public_dataset/coco_2017_person/tiny_yolo_v2_416/tiny_yolo_v2_416_int8.tflite) | COCO-Person | Int8 | 416x416x3 | STM32N6570-DK | NPU/MCU | 50.91 | 19.64 | 10.0.0 | 2.0.0 |

### Reference **MCU** memory footprint based on COCO Person dataset (see Accuracy for details on dataset)

| Model | Format | Resolution | Series | Activation RAM | Runtime RAM | Weights Flash | Code Flash | Total RAM | Total Flash | STM32Cube.AI version |
|-------|--------|------------|--------|----------------|-------------|---------------|------------|-----------|-------------|----------------------|
| [tiny_yolo_v2](https://github.com/STMicroelectronics/stm32ai-modelzoo/object_detection/tiny_yolo_v2/ST_pretrainedmodel_public_dataset/coco_2017_person/tiny_yolo_v2_192/tiny_yolo_v2_192_int8.tflite) | Int8 | 192x192x3 | STM32H7 | 220.6 KiB | 7.98 KiB | 10775.98 KiB | 55.85 KiB | 228.58 KiB | 10831.83 KiB | 10.0.0 |
| [tiny_yolo_v2](https://github.com/STMicroelectronics/stm32ai-modelzoo/object_detection/tiny_yolo_v2/ST_pretrainedmodel_public_dataset/coco_2017_person/tiny_yolo_v2_224/tiny_yolo_v2_224_int8.tflite) | Int8 | 224x224x3 | STM32H7 | 249.35 KiB | 7.98 KiB | 10775.98 KiB | 55.8 KiB | 257.33 KiB | 10831.78 KiB | 10.0.0 |
| [tiny_yolo_v2](https://github.com/STMicroelectronics/stm32ai-modelzoo/object_detection/tiny_yolo_v2/ST_pretrainedmodel_public_dataset/coco_2017_person/tiny_yolo_v2_416/tiny_yolo_v2_416_int8.tflite) | Int8 | 416x416x3 | STM32H7 | 1263.07 KiB | 8.03 KiB | 10775.98 KiB | 55.85 KiB | 1271.1 KiB | 10831.83 KiB | 10.0.0 |

### Reference **MCU** inference time based on COCO Person dataset (see Accuracy for details on dataset)

| Model | Format | Resolution | Board | Execution Engine | Frequency | Inference time (ms) | STM32Cube.AI version |
|-------|--------|------------|-------|------------------|-----------|---------------------|----------------------|
| [tiny_yolo_v2](https://github.com/STMicroelectronics/stm32ai-modelzoo/object_detection/tiny_yolo_v2/ST_pretrainedmodel_public_dataset/coco_2017_person/tiny_yolo_v2_192/tiny_yolo_v2_192_int8.tflite) | Int8 | 192x192x3 | STM32H747I-DISCO | 1 CPU | 400 MHz | 3006.3 | 10.0.0 |
| [tiny_yolo_v2](https://github.com/STMicroelectronics/stm32ai-modelzoo/object_detection/tiny_yolo_v2/ST_pretrainedmodel_public_dataset/coco_2017_person/tiny_yolo_v2_224/tiny_yolo_v2_224_int8.tflite) | Int8 | 224x224x3 | STM32H747I-DISCO | 1 CPU | 400 MHz | 2742.3 | 10.0.0 |
| [tiny_yolo_v2](https://github.com/STMicroelectronics/stm32ai-modelzoo/object_detection/tiny_yolo_v2/ST_pretrainedmodel_public_dataset/coco_2017_person/tiny_yolo_v2_416/tiny_yolo_v2_416_int8.tflite) | Int8 | 416x416x3 | STM32H747I-DISCO | 1 CPU | 400 MHz | 10468.2 | 10.0.0 |

### Reference **MPU** inference time based on COCO Person dataset (see Accuracy for details on dataset)

| Model | Format | Resolution | Quantization | Board | Execution Engine | Frequency | Inference time (ms) | %NPU | %GPU | %CPU | X-LINUX-AI version | Framework |
|-------|--------|------------|--------------|-------|------------------|-----------|---------------------|------|------|------|--------------------|-----------|
| [tiny_yolo_v2](https://github.com/STMicroelectronics/stm32ai-modelzoo/object_detection/tiny_yolo_v2/ST_pretrainedmodel_public_dataset/coco_2017_person/tiny_yolo_v2_224/tiny_yolo_v2_224_int8.tflite) | Int8 | 224x224x3 | per-channel** | STM32MP257F-DK2 | NPU/GPU | 800 MHz | 120.8 | 3.45 | 96.55 | 0 | v5.1.0 | OpenVX |
| [tiny_yolo_v2](https://github.com/STMicroelectronics/stm32ai-modelzoo/object_detection/tiny_yolo_v2/ST_pretrainedmodel_public_dataset/coco_2017_person/tiny_yolo_v2_416/tiny_yolo_v2_416_int8.tflite) | Int8 | 416x416x3 | per-channel** | STM32MP257F-DK2 | NPU/GPU | 800 MHz | 425.6 | 2.74 | 97.26 | 0 | v5.1.0 | OpenVX |
| [tiny_yolo_v2](https://github.com/STMicroelectronics/stm32ai-modelzoo/object_detection/tiny_yolo_v2/ST_pretrainedmodel_public_dataset/coco_2017_person/tiny_yolo_v2_224/tiny_yolo_v2_224_int8.tflite) | Int8 | 224x224x3 | per-channel | STM32MP157F-DK2 | 2 CPU | 800 MHz | 410.50 | NA | NA | 100 | v5.1.0 | TensorFlowLite 2.11.0 |
| [tiny_yolo_v2](https://github.com/STMicroelectronics/stm32ai-modelzoo/object_detection/tiny_yolo_v2/ST_pretrainedmodel_public_dataset/coco_2017_person/tiny_yolo_v2_416/tiny_yolo_v2_416_int8.tflite) | Int8 | 416x416x3 | per-channel | STM32MP157F-DK2 | 2 CPU | 800 MHz | 1347 | NA | NA | 100 | v5.1.0 | TensorFlowLite 2.11.0 |
| [tiny_yolo_v2](https://github.com/STMicroelectronics/stm32ai-modelzoo/object_detection/tiny_yolo_v2/ST_pretrainedmodel_public_dataset/coco_2017_person/tiny_yolo_v2_224/tiny_yolo_v2_224_int8.tflite) | Int8 | 224x224x3 | per-channel | STM32MP135F-DK2 | 1 CPU | 1000 MHz | 619.70 | NA | NA | 100 | v5.1.0 | TensorFlowLite 2.11.0 |
| [tiny_yolo_v2](https://github.com/STMicroelectronics/stm32ai-modelzoo/object_detection/tiny_yolo_v2/ST_pretrainedmodel_public_dataset/coco_2017_person/tiny_yolo_v2_416/tiny_yolo_v2_416_int8.tflite) | Int8 | 416x416x3 | per-channel | STM32MP135F-DK2 | 1 CPU | 1000 MHz | 2105 | NA | NA | 100 | v5.1.0 | TensorFlowLite 2.11.0 |

** **To get the most out of the STM32MP25 NPU hardware acceleration, please use per-tensor quantization.**

### AP on COCO Person dataset

Dataset details: [link](https://cocodataset.org/#download), License: [CC BY 4.0](https://creativecommons.org/licenses/by/4.0/legalcode), Quotation [[1]](#1), Number of classes: 80, Number of images: 118,287

| Model | Format | Resolution | AP |
|-------|--------|------------|----|
| [tiny_yolo_v2](https://github.com/STMicroelectronics/stm32ai-modelzoo/object_detection/tiny_yolo_v2/ST_pretrainedmodel_public_dataset/coco_2017_person/tiny_yolo_v2_192/tiny_yolo_v2_192_int8.tflite) | Int8 | 192x192x3 | 33.7 % |
| [tiny_yolo_v2](https://github.com/STMicroelectronics/stm32ai-modelzoo/object_detection/tiny_yolo_v2/ST_pretrainedmodel_public_dataset/coco_2017_person/tiny_yolo_v2_192/tiny_yolo_v2_192.h5) | Float | 192x192x3 | 34.5 % |
| [tiny_yolo_v2](https://github.com/STMicroelectronics/stm32ai-modelzoo/object_detection/tiny_yolo_v2/ST_pretrainedmodel_public_dataset/coco_2017_person/tiny_yolo_v2_224/tiny_yolo_v2_224_int8.tflite) | Int8 | 224x224x3 | 37.3 % |
| [tiny_yolo_v2](https://github.com/STMicroelectronics/stm32ai-modelzoo/object_detection/tiny_yolo_v2/ST_pretrainedmodel_public_dataset/coco_2017_person/tiny_yolo_v2_224/tiny_yolo_v2_224.h5) | Float | 224x224x3 | 38.4 % |
| [tiny_yolo_v2](https://github.com/STMicroelectronics/stm32ai-modelzoo/object_detection/tiny_yolo_v2/ST_pretrainedmodel_public_dataset/coco_2017_person/tiny_yolo_v2_416/tiny_yolo_v2_416_int8.tflite) | Int8 | 416x416x3 | 50.7 % |
| [tiny_yolo_v2](https://github.com/STMicroelectronics/stm32ai-modelzoo/object_detection/tiny_yolo_v2/ST_pretrainedmodel_public_dataset/coco_2017_person/tiny_yolo_v2_416/tiny_yolo_v2_416.h5) | Float | 416x416x3 | 51.5 % |

\* EVAL_IOU = 0.4, NMS_THRESH = 0.5, SCORE_THRESH = 0.001
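
The AP values above are computed on decoded boxes after confidence filtering and non-max suppression with the listed thresholds (EVAL_IOU is only used when matching detections to ground truth during AP computation). A minimal sketch of the filtering step is shown below; it assumes the raw grid output has already been decoded into corner-format boxes with per-box confidence scores, and the decoding itself (grid offsets, anchors, sigmoid) is not shown.

```python
# Minimal sketch: score filtering + NMS with the evaluation thresholds listed above.
# `boxes` (N, 4) in [y1, x1, y2, x2] and `scores` (N,) are assumed to come from a
# separate YOLO decoding step (grid offsets, anchors, activations) not shown here.
import tensorflow as tf

NMS_THRESH = 0.5
SCORE_THRESH = 0.001

def filter_detections(boxes, scores, max_detections=100):
    """Keep boxes that survive the score threshold and class-agnostic NMS."""
    selected = tf.image.non_max_suppression(
        boxes,
        scores,
        max_output_size=max_detections,
        iou_threshold=NMS_THRESH,
        score_threshold=SCORE_THRESH,
    )
    return tf.gather(boxes, selected), tf.gather(scores, selected)
```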

### AP on ST Person dataset

| Model | Format | Resolution | AP |
|-------|--------|------------|----|
| [tiny_yolo_v2](https://github.com/STMicroelectronics/stm32ai-modelzoo/object_detection/tiny_yolo_v2/ST_pretrainedmodel_custom_dataset/st_person/tiny_yolo_v2_224/tiny_yolo_v2_224_int8.tflite) | Int8 | 224x224x3 | 34.0 % |

\* EVAL_IOU = 0.4, NMS_THRESH = 0.5, SCORE_THRESH = 0.001

## Retraining and integration in a simple example

Please refer to the stm32ai-modelzoo-services GitHub repository [here](https://github.com/STMicroelectronics/stm32ai-modelzoo-services).

# References

<a id="1">[1]</a>
“Microsoft COCO: Common Objects in Context”. [Online]. Available: https://cocodataset.org/#download.

@article{DBLP:journals/corr/LinMBHPRDZ14,
  author    = {Tsung{-}Yi Lin and
               Michael Maire and
               Serge J. Belongie and
               Lubomir D. Bourdev and
               Ross B. Girshick and
               James Hays and
               Pietro Perona and
               Deva Ramanan and
               Piotr Doll{\'{a}}r and
               C. Lawrence Zitnick},
  title     = {Microsoft {COCO:} Common Objects in Context},
  journal   = {CoRR},
  volume    = {abs/1405.0312},
  year      = {2014},
  url       = {http://arxiv.org/abs/1405.0312},
  archivePrefix = {arXiv},
  eprint    = {1405.0312},
  timestamp = {Mon, 13 Aug 2018 16:48:13 +0200},
  biburl    = {https://dblp.org/rec/bib/journals/corr/LinMBHPRDZ14},
  bibsource = {dblp computer science bibliography, https://dblp.org}
}