# MobileNet v2 1.0 224 Optimized for ARM Ethos-U55
This repository contains the MobileNet v2 1.0 224 model optimized for the ARM Ethos-U55 NPU using the Vela compiler.
## Model Overview
- Base Model: MobileNet v2 1.0 224 INT8 quantized
- Source: ARM ML-zoo repository
- Target Hardware: ARM Ethos-U55 NPU
- Optimization: Compiled with the Vela compiler for execution on the NPU
## Files Description
### Model Files
- `mobilenet_v2_1.0_224_INT8_vela.tflite`: Vela-optimized model for the Ethos-U55
- `mobilenet_v2_1.0_224_INT8.tflite`: Original INT8 quantized model
- `labelmappings.txt`: ImageNet class labels (1001 classes; see the sketch after this list)
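On the host, `labelmappings.txt` can be used to map an output index back to a class name. A minimal sketch, under the assumption that the file holds one label per line and line N names class index N:

```cpp
#include <fstream>
#include <iostream>
#include <string>
#include <vector>

// Load the label file: assumes one label per line, where line N names
// class index N of the model's 1001-way output.
std::vector<std::string> LoadLabels(const std::string& path) {
  std::vector<std::string> labels;
  std::ifstream file(path);
  for (std::string line; std::getline(file, line);) {
    labels.push_back(line);
  }
  return labels;
}

int main() {
  const std::vector<std::string> labels = LoadLabels("labelmappings.txt");
  std::cout << "Loaded " << labels.size() << " labels\n";  // expect 1001
  return 0;
}
```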
### Configuration Files
- `u55_eval_with_TA_config_400_and_200_MHz.ini`: Vela configuration for the U55 evaluation
### Performance Reports
- `mobilenet_v2_1.0_224_INT8_summary_Ethos_U55_400MHz_SRAM_3.2_GBs_Flash_0.05_GBs.csv`: Performance summary
- `mobilenet_v2_1.0_224_INT8_per-layer.csv`: Per-layer performance analysis
## Performance Summary
System Configuration: `Ethos_U55_400MHz_SRAM_3.2_GBs_Flash_0.05_GBs`
- Total SRAM used: 353.50 KiB
- Total On-chip Flash used: 3614.39 KiB
- NPU cycles: 6,019,265 cycles/batch
- Network throughput: 0.04 TOPS
- CPU operators: 0 (0.0%)
- NPU operators: 95 (100.0%)
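As a sanity check, these estimates are mutually consistent: at 400 MHz, 6,019,265 cycles/batch corresponds to roughly 6,019,265 / 400,000,000 ≈ 15.0 ms per inference, i.e. about 66 inferences/s; combined with 304,452,946 MACs/batch (two operations per MAC), that gives roughly 0.609 GOP × 66/s ≈ 0.04 TOPS, matching the reported throughput. Note that these are Vela's static cycle estimates, not measurements on silicon.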
## Model Details
- Input Shape: (224, 224, 3)
- Quantization: INT8 weights and activations (see the input-quantization sketch after this list)
- MAC Operations: 304,452,946 MACs/batch
- Operator support: all 95 operators map to the Ethos-U55 (no CPU fallback)
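Because activations are INT8, raw input pixels have to be quantized with the input tensor's scale and zero point before invoking the model. A minimal sketch of that step; the actual scale and zero-point values must be read from the model's quantization parameters, so the arguments here are placeholders:

```cpp
#include <algorithm>
#include <cmath>
#include <cstdint>

// Quantize one normalized float value into INT8 using the input
// tensor's quantization parameters. `scale` and `zero_point` must be
// taken from the model; they are passed in here as placeholders.
int8_t QuantizePixel(float value, float scale, int32_t zero_point) {
  int32_t q = static_cast<int32_t>(std::round(value / scale)) + zero_point;
  return static_cast<int8_t>(std::clamp(q, int32_t{-128}, int32_t{127}));
}

// Fill the (224, 224, 3) INT8 input buffer from float RGB data.
void QuantizeImage(const float* rgb, int8_t* out, float scale,
                   int32_t zero_point) {
  constexpr int kNumElements = 224 * 224 * 3;
  for (int i = 0; i < kNumElements; ++i) {
    out[i] = QuantizePixel(rgb[i], scale, zero_point);
  }
}
```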
## Usage
The Vela-optimized model (`mobilenet_v2_1.0_224_INT8_vela.tflite`) is ready for deployment on ARM Ethos-U55 systems.
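Inference is typically driven through TensorFlow Lite for Microcontrollers. The sketch below is a minimal illustration, not part of this repository: it assumes a tflite-micro build with the Ethos-U kernel and driver enabled, a model byte array named `g_model_data` (e.g. produced with `xxd -i`), and an arena size derived from the SRAM figure above.

```cpp
#include <cstdint>
#include <cstring>

#include "tensorflow/lite/micro/micro_interpreter.h"
#include "tensorflow/lite/micro/micro_mutable_op_resolver.h"
#include "tensorflow/lite/schema/schema_generated.h"

// Bytes of mobilenet_v2_1.0_224_INT8_vela.tflite, embedded at build time
// (the array name is an assumption of this sketch).
extern const unsigned char g_model_data[];

namespace {
// Arena sized from the Vela summary (353.50 KiB SRAM) plus headroom;
// tune for your platform.
constexpr size_t kArenaSize = 400 * 1024;
alignas(16) uint8_t g_arena[kArenaSize];
}  // namespace

// Runs one inference; input_image holds 224*224*3 INT8 pixels and
// scores_out receives the 1001 INT8 class scores.
int RunInference(const int8_t* input_image, int8_t* scores_out) {
  const tflite::Model* model = tflite::GetModel(g_model_data);

  // After Vela compilation the whole graph (100% NPU operators here)
  // executes inside the Ethos-U custom operator, so only that kernel
  // needs to be registered.
  tflite::MicroMutableOpResolver<1> resolver;
  resolver.AddEthosU();

  tflite::MicroInterpreter interpreter(model, resolver, g_arena, kArenaSize);
  if (interpreter.AllocateTensors() != kTfLiteOk) return -1;

  std::memcpy(interpreter.input(0)->data.int8, input_image,
              interpreter.input(0)->bytes);

  if (interpreter.Invoke() != kTfLiteOk) return -1;

  std::memcpy(scores_out, interpreter.output(0)->data.int8,
              interpreter.output(0)->bytes);
  return 0;
}
```

On a real target, the Ethos-U driver (including its interrupt handling) must be initialized by platform code before `Invoke()` is called.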
### Compilation Command
The model was compiled using the following Vela command:
```bash
vela --accelerator-config=ethos-u55-128 \
     --optimise Size \
     --config u55_eval_with_TA_config_400_and_200_MHz.ini \
     --memory-mode Sram_Only \
     --system-config Ethos_U55_400MHz_SRAM_3.2_GBs_Flash_0.05_GBs \
     --verbose-cycle-estimate \
     --verbose-performance \
     --output-dir vela_output \
     mobilenet_v2_1.0_224_INT8.tflite
```