cuda

#40 opened by LHJ0

Is there a way to resolve this error?
ValueError: Calling cuda() is not supported for 4-bit or 8-bit quantized models. Please use the model as it is, since the model has already been set to the correct devices and casted to the correct dtype.
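The error suggests the model was already placed on the right device during loading, so calling `.cuda()` (or `.to("cuda")`) afterwards is rejected. A minimal sketch of the usual workaround, assuming the model is loaded through Transformers with a bitsandbytes quantization config (`model_id` is a placeholder for the actual checkpoint): let `device_map` handle placement at load time and skip the manual `.cuda()` call entirely.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig

model_id = "your-model-id"  # hypothetical placeholder: substitute the real checkpoint

# Quantization settings; 4-bit here, but the same applies to load_in_8bit=True.
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_compute_dtype=torch.float16,
)

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    quantization_config=bnb_config,
    device_map="auto",  # weights are placed on the GPU at load time; no .cuda() needed
)

# Inputs still have to be moved to the model's device explicitly.
inputs = tokenizer("Hello", return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=20)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```

If some existing code path still calls `model.cuda()` or `model.to(device)` on the quantized model, removing that call should be enough, since the model is already on the correct device with the correct dtype.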
