Add LGAI-EXAONE/EXAONE-3.5-32B-Instruct as another baseline?
#3 opened by yhg0112
Huge kudos to the OLMo team for making OLMo-2-32B-Instruct fully open source.
Any plan to add "LGAI-EXAONE/EXAONE-3.5-32B-Instruct" as another baseline?
We pre-trained "LGAI-EXAONE/EXAONE-3.5-32B-Instruct" on 6.5T tokens, which makes it a fair baseline for OLMo-2-32B-Instruct.
Here's our open-weight model:
https://huggingface.co/LGAI-EXAONE/EXAONE-3.5-32B-Instruct
Hey @yhg0112, thanks for sharing this. I will pass this information along to the core team :)