Spaces: Running on Zero
Show inference time for both models #2
by vikhyatk - opened
Added an extra output that shows inference time for both models. I removed the @GPU annotation on the `detect` function because it was causing a ~400 ms hit to the first model's inference time (presumably due to ZeroGPU initialization, which the second model is then able to take advantage of). With the annotation removed, both models take the performance hit, resulting in a more apples-to-apples comparison.
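A minimal sketch of timing two model calls the way the PR describes, using `time.perf_counter`. The `detect` helper and the model callables here are hypothetical placeholders, not the Space's actual code:

```python
import time

def detect(model, image):
    # stand-in for the Space's real detection call
    return model(image)

def timed_detect(model, image):
    # wrap a single inference call and report elapsed wall-clock time
    start = time.perf_counter()
    result = detect(model, image)
    elapsed = time.perf_counter() - start
    return result, elapsed

# dummy callables standing in for the two models being compared
model_a = lambda img: "boxes_a"
model_b = lambda img: "boxes_b"

out_a, t_a = timed_detect(model_a, None)
out_b, t_b = timed_detect(model_b, None)
print(f"Model A: {t_a * 1000:.1f} ms, Model B: {t_b * 1000:.1f} ms")
```

Because both calls go through the same un-annotated path, any one-time initialization cost falls on whichever model runs first, which is why removing the decorator from only one path skewed the comparison.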
Here's what it looks like:
Thanks! This is super useful!
sergiopaniego changed pull request status to merged