Different results between Jax Space and the HF Transformers Space #2
opened by Shalev
From https://huggingface.co/spaces/big-vision/paligemma - the Jax model works well.
But the https://huggingface.co/spaces/big-vision/paligemma-hf space just selects the entire image (on the same input). I'm trying to reproduce the (better) Jax behavior on HF transformers, but I can't figure out what's being done differently on the Jax side. Any tips would be appreciated!
Seeing similar issues, is there a difference in the HF version?
Hi, how can we decode the segmentation tokens into a binary mask for object segmentation?
@D-Anel you can check the code here - https://huggingface.co/spaces/big-vision/paligemma-hf/blob/main/app.py#L43
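For reference, here is a minimal sketch of the first step, parsing the model's detokenized output into boxes and segmentation-token indices. The token layout (four `<locXXXX>` tokens followed by sixteen `<segXXX>` tokens per object) follows the PaliGemma convention; the normalization by 1024 and the `parse_segmentation_output` helper are assumptions for illustration. Turning the 16 segmentation indices into an actual 64x64 binary mask requires the learned VQ-VAE mask decoder from big_vision, which is what the linked app.py loads and applies.

```python
import re
import numpy as np

# Hypothetical helper (not the Space's actual code): extract boxes and
# segmentation-token indices from PaliGemma output text such as
# "<loc0123><loc0456><loc0789><loc1000><seg012>...<seg045> cat".
_SEG_PATTERN = re.compile(
    r"(?P<locs>(?:<loc\d{4}>){4})(?P<segs>(?:<seg\d{3}>){16})"
)

def parse_segmentation_output(text, image_height, image_width):
    """Return a list of ((y0, x0, y1, x1), seg_indices) pairs."""
    results = []
    for match in _SEG_PATTERN.finditer(text):
        # <locXXXX> values are bin indices; assumed normalization by 1024
        # to recover y_min, x_min, y_max, x_max in image coordinates.
        loc_vals = [int(v) for v in re.findall(r"<loc(\d{4})>", match.group("locs"))]
        y0, x0, y1, x1 = [v / 1024.0 for v in loc_vals]
        box = (
            int(y0 * image_height), int(x0 * image_width),
            int(y1 * image_height), int(x1 * image_width),
        )
        # <segXXX> values are codebook indices that a separate mask decoder
        # (the big_vision VQ-VAE) turns into a 64x64 mask, which is then
        # resized to fit inside the box above.
        seg_indices = np.array(
            [int(v) for v in re.findall(r"<seg(\d{3})>", match.group("segs"))]
        )
        results.append((box, seg_indices))
    return results
```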