# MoveNet Instructions

## Overview
This demo shows how to run the MoveNet pose-estimation model on the NPU using ONNX Runtime with AMD's VAIP execution provider. CPU execution is also supported.

Two models are provided: a float32 model and an int8 quantized model. The int8 model requires an xclbin file to be specified for the VAIP EP to run, and it needs special handling to ensure that the model is cached correctly.
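As a rough illustration of that special handling, the sketch below builds the provider options a VAIP EP session would need. The option keys (`cacheDir`, `cacheKey`, `xclbin`) follow AMD's Ryzen AI documentation, but the paths and the cache-key scheme here are assumptions, not the demo script's actual logic:

```python
import os

def vitisai_provider_options(model_path, cache_dir="./cache", xclbin=None):
    """Build provider options for the VitisAI execution provider.

    Caching is keyed on the model file name here so the fp32 and int8
    models do not collide in the same cache directory (assumed scheme).
    """
    opts = {
        # Directory where the EP stores and looks up compiled models.
        "cacheDir": cache_dir,
        # One cache entry per model file.
        "cacheKey": os.path.splitext(os.path.basename(model_path))[0],
    }
    if xclbin is not None:
        # Required for the int8 model: the NPU overlay binary to load.
        opts["xclbin"] = xclbin
    return opts

# Hypothetical usage; creating the session requires a Ryzen AI install:
# import onnxruntime as ort
# sess = ort.InferenceSession(
#     "movenet_int8.onnx",
#     providers=["VitisAIExecutionProvider"],
#     provider_options=[vitisai_provider_options(
#         "movenet_int8.onnx",
#         xclbin=r"xclbins\strix\AMD_AIE2P_4x4_Overlay.xclbin")],
# )
```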
## Requirements

- RAI 1.5 installation
## Installation

- Clone the repository:

  ```
  git clone https://huggingface.co/datasets/amd/movenet_demo_RAI_1.5/
  ```

- Unzip the two cache folders in the repo; these should be located in the same directory as `movenet_demo.py`.
- Copy the `xclbins` directory from `C:\Program Files\RyzenAI\1.5.0\voe-4.0-win_amd64\` to the same directory as `movenet_demo.py`.
  - Note: your RAI installation path may vary.
- Install any requirements using the `requirements.txt` file:

  ```
  pip install -r requirements.txt
  ```

- Activate the RAI 1.5 conda environment:

  ```
  conda activate ryzen-ai-1.5.0
  ```
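After these steps, the script's directory should contain the unzipped caches and the copied `xclbins` folder. A small sketch to sanity-check that layout (the required names are taken from the steps above; the cache folder names are not spelled out in this README, so they are omitted):

```python
import os

# Files/folders the installation steps should leave next to the script.
REQUIRED = ["movenet_demo.py", "requirements.txt", "xclbins"]

def missing_items(base_dir="."):
    """Return the required files/folders not found in base_dir."""
    return [name for name in REQUIRED
            if not os.path.exists(os.path.join(base_dir, name))]

if __name__ == "__main__":
    missing = missing_items()
    if missing:
        print("Missing:", ", ".join(missing))
    else:
        print("Setup looks complete.")
```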
## Usage

The `movenet_demo.py` script runs MoveNet pose estimation on a single image. It automatically detects whether you're using the FP32 or INT8 model and selects the appropriate pre-compiled cache.

Basic syntax:

```
python movenet_demo.py [options]
```
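The model-type auto-detection could be as simple as checking the model file name for a dtype tag, which matches the example outputs below ("Detected model type: fp32/int8"). This is an assumed reconstruction, not necessarily the script's actual logic:

```python
def detect_model_type(model_path: str) -> str:
    """Guess fp32 vs int8 from the model file name (assumed heuristic)."""
    name = model_path.lower()
    if "int8" in name:
        return "int8"
    if "fp32" in name or "float32" in name:
        return "fp32"
    raise ValueError(f"Cannot infer model type from: {model_path}")
```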
### Available Parameters

- `--image`: Input image path (default: `images/input/test_image.jpg`)
- `--model`: ONNX model path (default: `./movenet_fp32.onnx`)
- `--npu`: Use NPU acceleration (required for the INT8 model)
- `--output, -o`: Output image path (default: `images/output/npu_webcam_image.png`)
- `--loops`: Number of inference loops for benchmarking (default: 1)
- `--threshold`: Keypoint confidence threshold, 0.0-1.0 (default: 0.3)
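These flags map naturally onto an `argparse` interface. A sketch of how the CLI is presumably wired, with the defaults taken from the list above:

```python
import argparse

def build_parser() -> argparse.ArgumentParser:
    """CLI mirroring the documented movenet_demo.py options."""
    p = argparse.ArgumentParser(description="MoveNet pose estimation demo")
    p.add_argument("--image", default="images/input/test_image.jpg",
                   help="Input image path")
    p.add_argument("--model", default="./movenet_fp32.onnx",
                   help="ONNX model path")
    p.add_argument("--npu", action="store_true",
                   help="Use NPU acceleration (required for INT8 model)")
    p.add_argument("--output", "-o",
                   default="images/output/npu_webcam_image.png",
                   help="Output image path")
    p.add_argument("--loops", type=int, default=1,
                   help="Number of inference loops for benchmarking")
    p.add_argument("--threshold", type=float, default=0.3,
                   help="Keypoint confidence threshold (0.0-1.0)")
    return p
```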
### Basic Usage

Run with the FP32 model on CPU:

```
python movenet_demo.py --model ./movenet_fp32.onnx
```

Run with the INT8 model on NPU:

```
python movenet_demo.py --model ./movenet_int8.onnx --npu
```

Use a custom input image:

```
python movenet_demo.py --image path/to/your/image.jpg
```

Save output to a specific location:

```
python movenet_demo.py --output results/my_result.png
```
### Advanced Options

Performance benchmarking with multiple inference loops:

```
python movenet_demo.py --loops 100   # Run 100 inferences for timing
```

Adjust the keypoint confidence threshold:

```
python movenet_demo.py --threshold 0.5   # Only show keypoints with >50% confidence
python movenet_demo.py --threshold 0.1   # Show more keypoints (lower threshold)
```

Complete example with all options:

```
python movenet_demo.py \
    --image images/input/my_photo.jpg \
    --model ./movenet_int8.onnx \
    --npu \
    --output results/pose_output.png \
    --loops 50 \
    --threshold 0.4
```
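The threshold gates which of MoveNet's 17 keypoints are kept and drawn, which is where the "Keypoints detected: N/17" lines in the examples come from. A minimal sketch of that filtering step, assuming the usual MoveNet `(y, x, score)` keypoint format:

```python
def filter_keypoints(keypoints, threshold=0.3):
    """Keep only keypoints whose confidence score meets the threshold.

    `keypoints` is a list of (y, x, score) triples, one per MoveNet
    keypoint (17 total for single-pose models).
    """
    return [(y, x, s) for (y, x, s) in keypoints if s >= threshold]

# Example: 2 of 3 keypoints pass at the default threshold of 0.3.
pts = [(0.1, 0.2, 0.9), (0.4, 0.5, 0.31), (0.7, 0.8, 0.05)]
print(f"Keypoints detected: {len(filter_keypoints(pts))}/{len(pts)}")
```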
## Examples

### Example 1: Basic FP32 CPU Inference

```
python movenet_demo.py --model ./movenet_fp32.onnx --loops 1
```

Output:

```
model ./movenet_fp32.onnx
Detected model type: fp32
Running on CPU
Model: fp32 | Input: int32 | Outputs: 1
Running 1 inference loops...
Latency: 11.8ms avg (11.8-11.8ms) | 84.6 FPS
Using keypoint threshold: 0.3
Keypoints detected: 5/17
Saved: images/output/npu_webcam_image.png
```
### Example 2: Performance Benchmarking

```
python movenet_demo.py --model ./movenet_fp32.onnx --loops 10
```

Output:

```
model ./movenet_fp32.onnx
Detected model type: fp32
Running on CPU
Model: fp32 | Input: int32 | Outputs: 1
Running 10 inference loops...
Latency: 2.1ms avg (1.0-4.0ms) | 474.8 FPS
Using keypoint threshold: 0.3
Keypoints detected: 5/17
Saved: images/output/npu_webcam_image.png
```
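The latency line (average, min-max range, FPS) can be produced by wall-clock timing each inference loop. A sketch of that measurement, using a stand-in callable where the real demo would run the ONNX session:

```python
import time

def benchmark(run_once, loops=10):
    """Time `run_once` over `loops` iterations.

    Returns (avg_ms, min_ms, max_ms, fps), matching the demo's
    "Latency: Xms avg (min-max ms) | Y FPS" report format.
    """
    times_ms = []
    for _ in range(loops):
        start = time.perf_counter()
        run_once()
        times_ms.append((time.perf_counter() - start) * 1000.0)
    avg = sum(times_ms) / len(times_ms)
    return avg, min(times_ms), max(times_ms), 1000.0 / avg

if __name__ == "__main__":
    # Stand-in workload; the real script would time session.run() here.
    avg, lo, hi, fps = benchmark(lambda: time.sleep(0.002), loops=10)
    print(f"Latency: {avg:.1f}ms avg ({lo:.1f}-{hi:.1f}ms) | {fps:.1f} FPS")
```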
### Example 3: INT8 NPU Inference

```
python movenet_demo.py --model ./movenet_int8.onnx --npu --loops 1
```

Output:

```
model ./movenet_int8.onnx
Detected model type: int8
Running on NPU
Using xclbin: C:\...\xclbins\strix\AMD_AIE2P_4x4_Overlay.xclbin
[VitisAI EP initialization messages...]
Model: int8 | Input: float32 | Outputs: 4
Running 1 inference loops...
Latency: 4.0ms avg (4.0-4.0ms) | 251.1 FPS
Using keypoint threshold: 0.3
Keypoints detected: 0/17
Saved: images/output/npu_webcam_image.png
```
### Example 4: Lower Threshold for More Keypoints

```
python movenet_demo.py --model ./movenet_fp32.onnx --threshold 0.1
```

Output:

```
model ./movenet_fp32.onnx
Detected model type: fp32
Running on CPU
Model: fp32 | Input: int32 | Outputs: 1
Running 1 inference loops...
Latency: 2.0ms avg (2.0-2.0ms) | 498.9 FPS
Using keypoint threshold: 0.1
Keypoints detected: 8/17
Saved: images/output/npu_webcam_image.png
```
### Example 5: Custom Output Path

```
python movenet_demo.py --model ./movenet_fp32.onnx --output results/example_output.png
```

Output:

```
model ./movenet_fp32.onnx
Detected model type: fp32
Running on CPU
Model: fp32 | Input: int32 | Outputs: 1
Running 1 inference loops...
Latency: 2.5ms avg (2.5-2.5ms) | 398.7 FPS
Using keypoint threshold: 0.3
Keypoints detected: 5/17
Saved: results/example_output.png
```