AWQ quantized model incompatible with bfloat16

#1
by alfredplpl - opened

Hi,

I tried running this model with bfloat16, but encountered an error during inference. It seems that the AWQ quantized version does not support bfloat16 and only works properly with float16.

Could you confirm if this is expected behavior? If so, it might be helpful to document this limitation explicitly, as some users may assume bfloat16 is supported.
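For reference, here is a minimal sketch of the pattern that works for me, loading the model in float16 rather than bfloat16. The repo id below is just a placeholder for this model, not the actual identifier:

```python
# Minimal sketch (not from the model card): load the AWQ checkpoint in float16.
# "stockmark/awq-quantized-model" is a placeholder repo id.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "stockmark/awq-quantized-model"  # placeholder

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.float16,  # bfloat16 fails at inference; float16 works
    device_map="auto",
)

inputs = tokenizer("Hello!", return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=32)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```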

Thanks!

Stockmark Inc. org

Thank you for pointing this out. We've updated the model card to include a usage example and added a note to explicitly mention that float16 should be used when loading the model. Hopefully, this helps avoid confusion for future users.

Thanks!

Thanks. This will be helpful for future users.

alfredplpl changed discussion status to closed
