Very cool, thanks! I think OpenAI already hates open source :)))))
Products they try so hard to monetize get recreated in one day.
Kh
raidhon
AI & ML interests
Fine-tuning, Dataset creation, Time Series
Recent Activity
commented on an article 1 day ago: Open-source DeepResearch – Freeing our search agents
upvoted an article 1 day ago: Open-source DeepResearch – Freeing our search agents
liked a model 13 days ago: hexgrad/Kokoro-82M
Organizations
None yet
raidhon's activity
commented on "Open-source DeepResearch – Freeing our search agents" 1 day ago
upvoted an article 1 day ago: Open-source DeepResearch – Freeing our search agents • 636
Can't reproduce the evaluation result of GPQA dataset • 5 • #47 opened about 2 months ago by Rinn000
Yes, it's been tested, and the claim is false. It performs even worse than the regular Llama 3.1 70B. Comparing it to Claude is just funny.
https://www.reddit.com/r/LocalLLaMA/s/BH5A2ngyui
replied to hrishbhdalal's post 9 months ago
Yeah, I was thinking the same thing. A large vocabulary does improve the performance of smaller LLMs, and judging by GPT-4o, the same seems true for larger LLMs. Give it a try. I'm only doing this for small models, up to 3B parameters.
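For context, the trade-off behind this comment can be sketched with some back-of-the-envelope arithmetic. The vocabulary sizes, hidden dimension, and total parameter count below are illustrative assumptions, not measurements of any particular model; the point is just that the embedding matrix eats a much larger fraction of a small model's parameter budget:

```python
def embedding_share(vocab_size: int, d_model: int, total_params: int) -> float:
    """Fraction of total parameters taken by the token-embedding matrix
    (vocab_size x d_model). Output-head weights are often tied to it."""
    return (vocab_size * d_model) / total_params

# Hypothetical ~3B-parameter model with hidden size 3072 (assumed values).
for vocab in (32_000, 128_000):
    share = embedding_share(vocab, 3072, 3_000_000_000)
    print(f"vocab={vocab}: {share:.1%} of parameters in the embedding")
```

With these assumed numbers, growing the vocabulary from 32k to 128k roughly quadruples the embedding's share of the budget, which is why the decision matters most for sub-3B models.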