Upload README.md with huggingface_hub
README.md
CHANGED
@@ -35,10 +35,13 @@ More details on model performance across various devices, can be found
 - Model size: 20.9 MB
 
 
+
+
 | Device | Chipset | Target Runtime | Inference Time (ms) | Peak Memory Range (MB) | Precision | Primary Compute Unit | Target Model
 | ---|---|---|---|---|---|---|---|
-| Samsung Galaxy S23 Ultra (Android 13) | Snapdragon® 8 Gen 2 | TFLite |
-| Samsung Galaxy S23 Ultra (Android 13) | Snapdragon® 8 Gen 2 | QNN Model Library | 1.
+| Samsung Galaxy S23 Ultra (Android 13) | Snapdragon® 8 Gen 2 | TFLite | 0.999 ms | 0 - 2 MB | FP16 | NPU | [MobileNet-v3-Large.tflite](https://huggingface.co/qualcomm/MobileNet-v3-Large/blob/main/MobileNet-v3-Large.tflite)
+| Samsung Galaxy S23 Ultra (Android 13) | Snapdragon® 8 Gen 2 | QNN Model Library | 1.048 ms | 1 - 46 MB | FP16 | NPU | [MobileNet-v3-Large.so](https://huggingface.co/qualcomm/MobileNet-v3-Large/blob/main/MobileNet-v3-Large.so)
+
 
 
 ## Installation

@@ -99,15 +102,17 @@ python -m qai_hub_models.models.mobilenet_v3_large.export
 Profile Job summary of MobileNet-v3-Large
 --------------------------------------------------
 Device: Snapdragon X Elite CRD (11)
-Estimated Inference Time: 1.
+Estimated Inference Time: 1.20 ms
 Estimated Peak Memory Range: 0.57-0.57 MB
 Compute Units: NPU (144) | Total (144)
 
 
 ```
+
+
 ## How does this work?
 
-This [export script](https://
+This [export script](https://aihub.qualcomm.com/models/mobilenet_v3_large/qai_hub_models/models/MobileNet-v3-Large/export.py)
 leverages [Qualcomm® AI Hub](https://aihub.qualcomm.com/) to optimize, validate, and deploy this model
 on-device. Lets go through each step below in detail:
 
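The hunk above fills in the truncated export link and the profile summary: `python -m qai_hub_models.models.mobilenet_v3_large.export` compiles the model through Qualcomm AI Hub and profiles it on a hosted device. For readers who prefer the programmatic route, here is a minimal sketch of that compile-and-profile flow using the `qai_hub` client. It is a sketch under assumptions, not the export script itself: it assumes an AI Hub API token is already configured, uses torchvision's MobileNetV3-Large as a stand-in for the `qai_hub_models` wrapper, and the device name and input name (`Samsung Galaxy S23 Ultra`, `image_tensor`) are illustrative placeholders rather than values taken from the README.

```python
# Hedged sketch of the compile-and-profile flow behind the export script.
# Assumptions: a qai_hub API token is configured (e.g. via `qai-hub configure`),
# the device name below exists on AI Hub, and torchvision's MobileNetV3-Large
# stands in for the qai_hub_models wrapper the real export script uses.
import torch
import torchvision
import qai_hub as hub

# Reference PyTorch model, traced to TorchScript for upload.
model = torchvision.models.mobilenet_v3_large(weights="IMAGENET1K_V1").eval()
example_input = torch.rand(1, 3, 224, 224)
traced = torch.jit.trace(model, example_input)

device = hub.Device("Samsung Galaxy S23 Ultra")

# Optimize: compile the traced model for the chosen hosted device.
compile_job = hub.submit_compile_job(
    model=traced,
    device=device,
    input_specs=dict(image_tensor=(1, 3, 224, 224)),
)

# Profile: measure inference time, peak memory, and compute-unit placement,
# the same quantities the "Profile Job summary" and the table above report.
profile_job = hub.submit_profile_job(
    model=compile_job.get_target_model(),
    device=device,
)
print(profile_job.download_profile())
```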
@@ -184,6 +189,7 @@ spot check the output with expected output.
 AI Hub. [Sign up for access](https://myaccount.qualcomm.com/signup).
 
 
+
 ## Run demo on a cloud-hosted device
 
 You can also run the demo on-device.

@@ -220,7 +226,7 @@ Explore all available models on [Qualcomm® AI Hub](https://aihub.qualcomm.com/)
 ## License
 - The license for the original implementation of MobileNet-v3-Large can be found
 [here](https://github.com/pytorch/vision/blob/main/LICENSE).
-- The license for the compiled assets for on-device deployment can be found [here](
+- The license for the compiled assets for on-device deployment can be found [here](https://qaihub-public-assets.s3.us-west-2.amazonaws.com/qai-hub-models/Qualcomm+AI+Hub+Proprietary+License.pdf)
 
 ## References
 * [Searching for MobileNetV3](https://arxiv.org/abs/1905.02244)