---
license: bsd-3-clause
language:
- en
base_model:
- depth-anything/Depth-Anything-V2-Small
pipeline_tag: depth-estimation
tags:
- Depth-Anything-V2
---

# Depth-Anything-V2

This version of Depth-Anything-V2 has been converted to run on the Axera NPU using **w8a16** quantization.

Compatible with Pulsar2 version: 3.4

## Convert tools links:

If you are interested in model conversion, you can try to export the axmodel yourself through:

- [The original model repository](https://depth-anything-v2.github.io/)
- [The AXera Platform repository](https://github.com/AXERA-TECH/DepthAnythingV2.axera), which contains a detailed conversion guide
- [Pulsar2 documentation: how to convert ONNX to axmodel](https://pulsar2-docs.readthedocs.io/en/latest/pulsar2/introduction.html)

## Support Platform

- AX650
  - [M4N-Dock(爱芯派Pro)](https://wiki.sipeed.com/hardware/zh/maixIV/m4ndock/m4ndock.html)
  - [M.2 Accelerator card](https://axcl-docs.readthedocs.io/zh-cn/latest/doc_guide_hardware.html)
- AX630C
  - [爱芯派2](https://axera-pi-2-docs-cn.readthedocs.io/zh-cn/latest/index.html)
  - [Module-LLM](https://docs.m5stack.com/zh_CN/module/Module-LLM)
  - [LLM630 Compute Kit](https://docs.m5stack.com/zh_CN/core/LLM630%20Compute%20Kit)

|Chip|Inference time|
|--|--|
|AX650|33 ms|
|AX630C|310 ms|

## How to use

Download all files from this repository to the device:

```
root@ax650:~/AXERA-TECH/Depth-Anything-V2# tree
.
|-- README.md
|-- calib-cocotest2017.tar
|-- config.json
|-- depth_anything_v2_vits.onnx
|-- depth_anything_v2_vits_ax620e.axmodel
|-- depth_anything_v2_vits_ax650.axmodel
|-- examples
|   |-- demo01.jpg
....
|   `-- demo20.jpg
|-- output-ax.png
`-- python
    |-- infer.py
    |-- infer_onnx.py
    |-- output.png
    `-- requirements.txt

2 directories, 31 files
root@ax650:~/AXERA-TECH/Depth-Anything-V2#
```

### Python environment requirements

#### pyaxengine

https://github.com/AXERA-TECH/pyaxengine

```
wget https://github.com/AXERA-TECH/pyaxengine/releases/download/0.1.3.rc1/axengine-0.1.3-py3-none-any.whl
pip install axengine-0.1.3-py3-none-any.whl
```

#### Others

No other dependencies should be needed.

### Inference on an AX650 host, such as M4N-Dock(爱芯派Pro)

Input image:

![](examples/demo01.jpg)

```
root@ax650:~/AXERA-TECH/Depth-Anything-V2# python3 python/infer.py --model depth_anything_v2_vits_ax650.axmodel --img examples/demo01.jpg
[INFO] Available providers: ['AxEngineExecutionProvider']
[INFO] Using provider: AxEngineExecutionProvider
[INFO] Chip type: ChipType.MC50
[INFO] VNPU type: VNPUType.DISABLED
[INFO] Engine version: 2.12.0s
[INFO] Model type: 2 (triple core)
[INFO] Compiler version: 3.3 ae03a08f
root@ax650:~/AXERA-TECH/Depth-Anything-V2# ls
```

Output image:

![](output-ax.png)
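For reference, the pre- and post-processing around the model call can be sketched as below. This is a minimal NumPy-only sketch, not the repository's `infer.py`: the 518×518 input size and the ImageNet normalization constants are assumptions based on the standard Depth-Anything-V2 ONNX export, so check `python/infer.py` for the exact values used by this axmodel.

```python
import numpy as np

# Assumed input side length for the ViT-S export (a multiple of the ViT patch size 14).
INPUT_SIZE = 518
# ImageNet mean/std, commonly used by Depth-Anything-V2 preprocessing (assumption).
MEAN = np.array([0.485, 0.456, 0.406], dtype=np.float32)
STD = np.array([0.229, 0.224, 0.225], dtype=np.float32)

def preprocess(img_rgb: np.ndarray) -> np.ndarray:
    """HWC uint8 RGB image (already resized to INPUT_SIZE) -> NCHW float32 tensor."""
    x = img_rgb.astype(np.float32) / 255.0
    x = (x - MEAN) / STD                 # per-channel normalization
    x = np.transpose(x, (2, 0, 1))       # HWC -> CHW
    return x[np.newaxis, ...]            # CHW -> NCHW

def postprocess(depth: np.ndarray) -> np.ndarray:
    """Raw relative-depth output -> uint8 grayscale image (0..255)."""
    d = depth.squeeze().astype(np.float32)
    d = (d - d.min()) / max(float(d.max() - d.min()), 1e-6)  # min-max normalize
    return (d * 255.0).astype(np.uint8)
```

pyaxengine exposes an onnxruntime-style session API, so the model call itself would sit between these two helpers (roughly `sess = axengine.InferenceSession("depth_anything_v2_vits_ax650.axmodel")` followed by `sess.run(...)`); again, `python/infer.py` in this repository is the authoritative version.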