After some heated discussion 🔥, we clarify our intent regarding storage limits on the Hub.
TL;DR:
- Public storage is free and (barring blatant abuse) unlimited. We do ask that you consider upgrading to PRO and/or Enterprise Hub if possible.
- Private storage is paid above a significant free tier (1 TB if you have a paid account, 100 GB otherwise).
We continuously optimize our infrastructure to scale our storage for the coming years of growth in machine learning, to the benefit of the community 🔥
🔥 Today, Writer dropped Palmyra-Med-70B and Palmyra-Fin-70B, two new domain-specific models that are setting a new standard for medical and financial model performance.
TL;DR

Palmyra-Med-70B
- 8k and 32k versions available
- MMLU performance of ~86%, outperforming other top models
- Great for diagnosing, planning treatments, medical research, insurance coding and billing
- Open-model license for non-commercial use cases
- Available on Hugging Face 🤗: Writer/Palmyra-Med-70B
- Live on NVIDIA NIM: https://build.nvidia.com/writer/palmyra-med-70b
Palmyra-Fin-70B
- Passed the CFA Level III exam with a 73% score, the first model to do so
- Skilled at complex tasks like investment research, financial analysis, and sentiment analysis
- Outperformed other top models on a long-fin-eval test of real-world use cases
- Open-model license for non-commercial use cases
- Available on Hugging Face 🤗: https://huggingface.co/Writer/Palmyra-Fin-70B-32K
- Live on NVIDIA NIM: https://build.nvidia.com/writer/palmyra-fin-70b-32k
Current ranking of pre-trained (non-chat) open-access LLMs according to the leaderboard. Positions 1-4 are from China-based groups. Does training models with Chinese somehow lead to better metrics? 🤔 WDYT?