
Confusion about unlimited context length

#21 opened by zhm0

We found in the configuration_hhem_v2.py file that HHEM-2.1 is still based on flan-t5-base, and during inference we get the warning: "Token indices sequence length is longer than the specified maximum sequence length for this model (1706 > 512). Running this sequence through the model will result in indexing errors". Does the unlimited context length require setting a parameter?

Thanks for your interest in HHEM. This question has come up from other users as well; you can simply ignore the warning.
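For context on why the warning is safe to ignore: it is emitted by the tokenizer, which compares the encoded length against its `model_max_length` metadata (512, inherited from flan-t5-base) but does not truncate the input, and T5-style models use relative position embeddings rather than a fixed-size position table, so longer sequences still run. A minimal sketch of that check (hypothetical simplified logic, not the actual `transformers` source):

```python
# Hypothetical sketch of the tokenizer's length check: the warning is
# informational only, and the full token sequence is returned untruncated.
def encode(token_ids, model_max_length=512):
    if len(token_ids) > model_max_length:
        print(
            f"Token indices sequence length is longer than the specified "
            f"maximum sequence length for this model "
            f"({len(token_ids)} > {model_max_length})."
        )
    return token_ids  # no truncation is applied

ids = encode(list(range(1706)))
assert len(ids) == 1706  # the sequence passes through intact
```

Because nothing is truncated, the model itself decides whether the longer input is usable, which is why HHEM-2.1 can advertise an effectively unlimited context despite the 512 in the tokenizer metadata.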
