The UltraSharp V2 FP16 ONNX model has an inference error.
#3 · opened by nukui
I am using ONNX Runtime 1.22.0. FP32 model inference performs well, and the UltraSharp Lite FP16 model runs successfully. However, the UltraSharp V2 FP16 model returns a tensor filled with NaN values.
Can you explain your approach to running inference on the FP16 ONNX model? I'd really appreciate it.
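For reference, here is a minimal sketch of the kind of FP16 inference check I mean, using the ONNX Runtime Python API. The model filename, input shape, and value range are placeholders, not taken from this repo:

```python
import numpy as np
import onnxruntime as ort

# Placeholder path; substitute the actual UltraSharp V2 FP16 ONNX file.
sess = ort.InferenceSession(
    "ultrasharp_v2_fp16.onnx",
    providers=["CPUExecutionProvider"],
)
inp = sess.get_inputs()[0]

# Dummy 1x3xHxW image tensor in [0, 1], cast to float16
# to match the model's expected input dtype.
x = np.random.rand(1, 3, 64, 64).astype(np.float16)

out = sess.run(None, {inp.name: x})[0]
print("output dtype:", out.dtype)
print("contains NaN:", np.isnan(out).any())  # True reproduces the issue
```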
Thanks for your reply; the other models work great!
nukui changed discussion status to closed