float32 please!!!!

#9
by ctranslate2-4you - opened

Hello,

The original Exaone models are uploaded in float32. Can you upload all of the "deep" models also in float32, either just float32 or in addition to bfloat16?

The reason I ask is that if a person's GPU doesn't support bfloat16 and they fall back to float16, there is a noticeable loss in accuracy.

Basically, converting from float32 to float16 preserves more precision than converting from bfloat16 to float16.
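For concreteness, here's a tiny PyTorch demonstration of the precision point (the specific value is chosen to be representable in float16 but not in bfloat16, since bfloat16 has only 7 mantissa bits versus float16's 10):

```python
import torch

# 1 + 2**-10 is exactly representable in FP16 (10 mantissa bits)
# but not in BF16 (7 mantissa bits), where it rounds down to 1.0.
x = torch.tensor([1.0 + 2**-10], dtype=torch.float32)

direct = x.to(torch.float16)                        # FP32 -> FP16: value kept exactly
via_bf16 = x.to(torch.bfloat16).to(torch.float16)   # FP32 -> BF16 rounds to 1.0 first

print(direct.item(), via_bf16.item())  # 1.0009765625 1.0
```

So any weight detail finer than bfloat16's mantissa is already gone before the float16 cast even happens, which is why starting from the float32 originals matters.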

If you can't host them here, could you provide another download link to float32 versions of these great "deep" models?

LG AI Research org

Hello @ctranslate2-4you ,

The EXAONE Deep models were trained with BFloat16 (BF16) precision weights. If your GPU doesn't support BF16, you can convert the model weights from BF16 to FP32, and then to FP16, on the CPU.

Please don't hesitate to reach out if you encounter any issues during the BF16 to FP32 conversion process on CPU.
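A minimal sketch of that round trip on the CPU, shown on a toy state dict rather than a real checkpoint (the commented `transformers` lines at the end are an assumption about how you would typically apply this to a published model, and the model id and output path are illustrative, not official):

```python
import torch

def bf16_to_fp16_via_fp32(state_dict):
    """Upcast each BF16 tensor to FP32 (lossless), then round to FP16."""
    out = {}
    for name, tensor in state_dict.items():
        if tensor.dtype == torch.bfloat16:
            tensor = tensor.to(torch.float32)   # exact: FP32 represents every BF16 value
            tensor = tensor.to(torch.float16)   # rounds; values outside FP16 range become inf
        out[name] = tensor
    return out

# Toy "checkpoint" standing in for real model weights.
ckpt = {"layer.weight": torch.randn(4, 4).to(torch.bfloat16)}
converted = bf16_to_fp16_via_fp32(ckpt)
print(converted["layer.weight"].dtype)  # torch.float16

# With transformers, the same idea in two lines (illustrative, not an official recipe):
#   from transformers import AutoModelForCausalLM
#   model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype=torch.float32)
#   model.to(torch.float16).save_pretrained("exaone-deep-fp16")
```

Note that because FP32 can represent every BF16 value exactly, the upcast step itself loses nothing; only the final FP16 rounding is lossy.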
