Spaces: Running on Zero
TypeError: argument of type 'bool' is not iterable
Hi @hysts, @JeffreyXiang 👋
I am having a problem with my Gradio application on Hugging Face Spaces, and I would greatly appreciate your help. I get the following error in the logs when trying to run it:
TypeError: argument of type 'bool' is not iterable
This error occurs specifically in the get_type function of gradio_client/utils.py, and seems to be related to the handling of a JSON schema:
File "/usr/local/lib/python3. 10/site-packages/gradio_client/utils.py", line 863, in get_type
if "const" in schema:
TypeError: argument of type 'bool' is not iterable
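If it helps, the same TypeError can be reproduced outside of gradio with a plain bool standing in for the schema. This is just a minimal sketch of the failure mode, assuming the schema value ends up being a boolean somewhere (for example an additionalProperties: true entry), not the actual gradio_client code:

```python
# Minimal reproduction of the failure mode, outside of gradio (illustrative only).
# In JSON Schema, a schema is allowed to be a plain boolean, and a membership
# test against a bool raises exactly this TypeError.
schema = True              # stand-in for a boolean sub-schema
if "const" in schema:      # TypeError: argument of type 'bool' is not iterable
    pass
```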
It seems that the error is triggered when I try to launch the app with demo.launch().
Do you have any idea what could be causing this or how I could fix it? Any guidance would be very welcome 🙏
Thanks in advance!
Hi @cavargas10
Looking at this GitHub issue, it seems that the error is due to the pydantic version. Not sure if your Space is compatible with the latest gradio, but could you try upgrading the gradio version in your Space? You might also try pinning the pydantic version, though I'm not sure which versions would work; a sketch of what that could look like is below.
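For the pinning approach, it would just be a line in your Space's requirements.txt. Something like this, though the exact version below is only a guess on my part:

```text
# requirements.txt -- the exact pydantic pin is a guess; you may need to experiment
pydantic==2.10.6
```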
BTW, for general gradio-related questions like this, you might want to ask in the gradio-questions channel on the HF Discord. The Gradio team members check it regularly, so even if it's something I can't answer, there's a good chance someone else will help out.
You can find the solution here: https://huggingface.co/spaces/agents-course/First_agent_template/discussions/251
You just need to update the README.md.
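Concretely, the gradio version of a Space is set by the sdk_version field in the YAML front matter at the top of README.md, so the change would look roughly like this (a sketch only; use whatever version the linked discussion recommends and keep the rest of your existing metadata):

```yaml
---
title: My Space        # placeholder: keep your existing title, emoji, etc.
sdk: gradio
sdk_version: 5.23.1    # example only; use the version from the linked discussion
app_file: app.py
---
```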
Hi @hysts,
Thank you for your response and suggestions. I was able to fix the configuration of the project by updating the README and making some adjustments. However, I’m now encountering this error when trying to run the project, and I’m not entirely sure what’s causing it.
Here’s the traceback I’m getting:
Traceback (most recent call last):
File "/usr/local/lib/python3.10/site-packages/spaces/zero/wrappers.py", line 140, in worker_init
torch.init(nvidia_uuid)
File "/usr/local/lib/python3.10/site-packages/spaces/zero/torch/patching.py", line 373, in init
torch.Tensor([0]).cuda()
File "/usr/local/lib/python3.10/site-packages/torch/cuda/__init__.py", line 314, in _lazy_init
torch._C._cuda_init()
RuntimeError: Unexpected error from cudaGetDeviceCount(). Did you run some cuda functions before calling NumCudaDevices() that might have already set an error? Error 304: OS call failed or operation not supported on this OS
I don't fully understand the issue. It seems related to CUDA, but I'm not sure how to proceed. My implementation is based on the project from this repository.
I apologize for bothering you with this, but could you take a look at it when you have some time and help me figure out if it can be fixed? I’d really appreciate your guidance.
Thanks so much for your help!
Hmm, I don't know what's causing it either. As the original code was working, the error probably comes from the changes you made. I don't have the bandwidth to look into it, but you might want to try asking in https://huggingface.co/spaces/zero-gpu-explorers/README/discussions. Someone else might be able to help.