SuperResolution

This version of SuperResolution has been converted to run on the Axera NPU using w8a8 quantization.

Compatible with Pulsar2 version: 4.2

Conversion tools

For those interested in model conversion, you can export the axmodel yourself with the Pulsar2 toolchain (see the example command below).
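As an illustration, the ONNX models under model_convert/onnx can be compiled into axmodels with a pulsar2 build command along the following lines. This is a sketch only; the exact options depend on your Pulsar2 4.x installation, and the build configs shipped in model_convert/ hold the authoritative settings.

pulsar2 build --input model_convert/onnx/edsr_baseline_x2_1.onnx --config model_convert/build_config_edsr.json --output_dir model_convert/axmodel --output_name edsr_baseline_x2_1.axmodel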

Support Platform

Chip    Model   Inference time
AX650   EDSR    800 ms
AX650   ESPCN   22 ms

How to use

Download all files from this repository to the device
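For example, with huggingface-cli (a hypothetical invocation; replace <repo-id> with this repository's id):

huggingface-cli download <repo-id> --local-dir SuperResolution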


root@ax650:~/SuperResolution# tree
.
├── model_convert
│   ├── axmodel
│   │   ├── edsr_baseline_x2_1.axmodel
│   │   └── espcn_x2_T9.axmodel
│   ├── onnx
│   │   ├── edsr_baseline_x2_1.onnx
│   │   └── espcn_x2_T9.onnx
│   ├── build_config_edsr.json
│   └── build_config_espcn.json
├── python
│   ├── run_onnx.py
│   ├── run_axmodel.py
│   ├── common.py
│   └── imgproc.py
└── video
    ├── test_1920x1080.mp4
    ├── 1.png
    └── 2.png

Requirements

pip install -r python/requirements.txt

pyaxengine is the Python API for the NPU. For detailed installation instructions, see: https://github.com/AXERA-TECH/pyaxengine
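A minimal smoke test with pyaxengine might look like the sketch below. It assumes pyaxengine's onnxruntime-style API (InferenceSession, get_inputs, run) and a float32 input; the actual input layout and preprocessing used by this repo are in python/run_axmodel.py.

import numpy as np
import axengine as axe

# Load a compiled axmodel on the NPU (path taken from this repo's tree)
sess = axe.InferenceSession("model_convert/axmodel/espcn_x2_T9.axmodel")
inp = sess.get_inputs()[0]
print("input:", inp.name, inp.shape)

# Run a dummy tensor of the reported shape (assumption: float32 input, fixed dims)
dummy = np.zeros([d if isinstance(d, int) else 1 for d in inp.shape], dtype=np.float32)
out = sess.run(None, {inp.name: dummy})[0]
print("output shape:", out.shape)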

Inference

Input Data:

└── video
    └── test_1920x1080.mp4

Inference on an AX650 host, such as the M4N-Dock (AXera-Pi Pro)

root@ax650 ~/SuperResolution # python python/run_axmodel.py --model model_convert/axmodel/edsr_baseline_x2_1.axmodel --scale 2 --dir_demo video/test_1920x1080.mp4
[INFO] Available providers:  ['AxEngineExecutionProvider']
[INFO] Using provider: AxEngineExecutionProvider
[INFO] Chip type: ChipType.MC50
[INFO] VNPU type: VNPUType.DISABLED
[INFO] Engine version: 2.12.0s
[INFO] Model type: 2 (triple core)
[INFO] Compiler version: 4.2 6bff2f67
100%|████████████████████████████████████████| 267/267 [10:06<00:00,  2.27s/it]
Total time: 99.582 seconds for 267 frames
Average time: 0.373 seconds for each frame

The output file is saved to experiment/test_1920x1080_x2.avi

Output Data:

├── experiment
│   └── test_1920x1080_x2.avi
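For reference, the per-frame flow of the demo roughly follows the sketch below. It is illustrative only, not the repo's run_axmodel.py: the NCHW float32 input layout, 0-255 value range, and the XVID/30 fps writer settings are assumptions, and the real pre/post-processing lives in python/imgproc.py and python/common.py.

import cv2
import numpy as np
import axengine as axe

sess = axe.InferenceSession("model_convert/axmodel/edsr_baseline_x2_1.axmodel")
input_name = sess.get_inputs()[0].name

cap = cv2.VideoCapture("video/test_1920x1080.mp4")
writer = None
while True:
    ok, frame = cap.read()
    if not ok:
        break
    # HWC uint8 BGR frame -> NCHW float32 batch of 1 (assumed model input layout)
    x = frame.astype(np.float32).transpose(2, 0, 1)[None, ...]
    sr = sess.run(None, {input_name: x})[0]
    # NCHW float32 -> HWC uint8 for the video writer
    sr = np.clip(sr[0].transpose(1, 2, 0), 0, 255).astype(np.uint8)
    if writer is None:
        h, w = sr.shape[:2]
        writer = cv2.VideoWriter("experiment/test_1920x1080_x2.avi",
                                 cv2.VideoWriter_fourcc(*"XVID"), 30, (w, h))
    writer.write(sr)
cap.release()
writer.release()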

Example Image
