# DeepLabv3Plus

This version of deeplabv3plus_mobilenet has been converted to run on the Axera NPU using w8a16 quantization.

Compatible with Pulsar2 version: 5.0-patch1

## Convert tools links

For those interested in model conversion, you can try to export the axmodel yourself with the Pulsar2 toolchain; a rough sketch of the build step follows.
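As an illustration only (not taken from this repository), a Pulsar2 build invocation generally looks like the sketch below. The ONNX and config file names are hypothetical placeholders, and the flags should be verified against the Pulsar2 documentation for version 5.0-patch1:

```bash
# Hypothetical sketch: file names are placeholders; verify flags against
# the Pulsar2 docs before use.
pulsar2 build \
  --input deeplabv3plus_mobilenet.onnx \
  --config build_config.json \
  --output_dir outputs \
  --output_name deeplabv3plus_mobilenet_u16.axmodel \
  --target_hardware AX650
```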

## Support Platform

| Chip  | Model                       | Inference Time |
|-------|-----------------------------|----------------|
| AX650 | deeplabv3plus_mobilenet_u16 | 13.4 ms        |
| AX637 | deeplabv3plus_mobilenet_u16 | 39.4 ms        |

## How to use

Download all files from this repository to the device.
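One way to fetch everything (assuming this repository is hosted on Hugging Face; `<repo-id>` is a placeholder for its actual id):

```bash
pip install -U huggingface_hub  # provides the huggingface-cli tool
# <repo-id> is a placeholder for this repository's id on Hugging Face
huggingface-cli download <repo-id> --local-dir ./deeplabv3plus_mobilenet
```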

### Python environment requirements

#### pyaxengine

https://github.com/AXERA-TECH/pyaxengine

```bash
wget https://github.com/AXERA-TECH/pyaxengine/releases/download/0.1.3.rc2/axengine-0.1.3-py3-none-any.whl
pip install axengine-0.1.3-py3-none-any.whl
```
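To confirm the wheel installed correctly, a quick import check (the `axengine` package name comes from the wheel above):

```bash
python3 -c "import axengine; print('pyaxengine import OK')"
```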

#### Others

No other packages should be required.

### Inference with AX650 host, such as M4N-Dock (AXera-Pi Pro)

Input image:

Run:

```bash
python3 infer.py --img samples/1_image.png --model models-ax637/deeplabv3plus_mobilenet_u16.axmodel
```

Output image:
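For reference, below is a minimal sketch of what such an inference script might do, assuming pyaxengine exposes an onnxruntime-style API. This is not the repository's infer.py: the input resolution, tensor layout, and output shape are assumptions and may differ from the actual model.

```python
# Minimal sketch, NOT the repo's infer.py. Assumes pyaxengine's
# onnxruntime-style API; input size, layout, and output shape are guesses.
import numpy as np
from PIL import Image
import axengine as axe

session = axe.InferenceSession(
    "models-ax637/deeplabv3plus_mobilenet_u16.axmodel")
inp = session.get_inputs()[0]

# Preprocess: resize to an assumed 512x512 NHWC uint8 input.
img = Image.open("samples/1_image.png").convert("RGB").resize((512, 512))
data = np.asarray(img, dtype=np.uint8)[None, ...]

# Run on the NPU; assume logits come back as (1, H, W, num_classes).
logits = session.run(None, {inp.name: data})[0]
mask = logits.argmax(axis=-1).squeeze().astype(np.uint8)

# Save the per-pixel class-index map as a grayscale image for inspection.
Image.fromarray(mask, mode="L").save("output_mask.png")
```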
