deundido
1 follower · 2 following
https://gw2.kilitary.ru?x=color
commandmenttwo
kilitary
AI & ML interests
Research interests
Recent Activity
published a Space 2 days ago: deundido/mutarinew
reacted to m-ric's post 11 months ago:
Google paper: Scaling up inference compute beats 14x larger models 🚀

Remember scaling laws? These are empirical laws saying "the bigger your model, the better it gets". More precisely, "as your training compute increases exponentially, loss decreases linearly", i.e. loss falls off roughly linearly in the log of compute. They have wild implications, suggesting that spending 100x more training compute would get you super-LLMs. That's why companies are racing to build the biggest AI superclusters ever, and Meta bought 350k H100 GPUs, which probably cost on the order of $1B.

But think of this: we're building huge reasoning machines, yet we only ask them to do one pass through the model per token of the final answer, i.e. we expend minimal effort on inference. That's like building a Caterpillar truck and making it run on a lawnmower's motor. Couldn't we optimize this? 🤔

💡 So instead of scaling up training by training even bigger models on many more trillions of tokens, Google researchers explored an under-explored avenue: scaling up inference compute. They combine two methods of spending more compute at inference time: either a reviser that iterates to adapt the model's output distribution, or generating N different completions (for instance through beam search) and selecting only the best one using an additional verifier model (a minimal sketch of this best-of-N idea follows after the activity list).

They use a PaLM 2 model (released in May '23) on the MATH dataset: PaLM 2 has the advantage of scoring low, but not zero, on MATH, so improvements are noticeable.

And the results show that for the same fixed amount of inference compute: 🔥 a smaller model with more effort spent on decoding beats a 14x bigger model using naive greedy sampling. That means you can divide your training costs by 14 and still get the same performance for the same inference cost! Take that, scaling laws. Mark Zuckerberg, you're welcome, hope I can get some of these H100s.

Read the paper here 👉 https://huggingface.co/papers/2408.03314
liked a model 11 months ago: deundido/xffgyrtxkt
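The best-of-N strategy described in the post above is simple enough to sketch in code. Below is a minimal, hypothetical Python sketch, not the paper's implementation: `generate` and `verify` are assumed stand-ins for a sampled base model and a separate verifier model, and the toy verifier in the demo exists only to make the example runnable.

```python
# Minimal sketch of best-of-N decoding with a verifier, one of the two
# inference-scaling strategies described in the post above. All names
# here are hypothetical stand-ins, not the paper's implementation.

def best_of_n(prompt, generate, verify, n=16):
    """Sample n completions and return the one the verifier scores highest.

    generate(prompt) -> str             draws one sampled completion
    verify(prompt, completion) -> float scores it with a separate verifier
    """
    candidates = [generate(prompt) for _ in range(n)]
    scores = [verify(prompt, c) for c in candidates]
    best = max(range(n), key=scores.__getitem__)
    return candidates[best], scores[best]


if __name__ == "__main__":
    # Example wiring with a Hugging Face text-generation pipeline.
    # The verifier below is a toy (it prefers shorter answers), purely
    # to make the sketch run end to end; the paper trains a real one.
    from transformers import pipeline

    generator = pipeline("text-generation", model="gpt2")

    def generate(prompt):
        out = generator(prompt, do_sample=True, max_new_tokens=64,
                        return_full_text=False)
        return out[0]["generated_text"]

    def verify(prompt, completion):
        return -len(completion)  # toy score, not a trained verifier

    answer, score = best_of_n("Q: What is 7 * 8?\nA:", generate, verify, n=8)
    print(score, answer)
```

The paper's point, in these terms: for a fixed inference budget, spending it on n sampled completions plus verification from a much smaller model can match or beat a single greedy pass through a roughly 14x larger one.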
Spaces: 3
Mutarinew (Sleeping): desc request = cs ed reply
AutoTrain Advanced (Running) · 1 like
Shiny for Python template (Runtime error)
Models: 1
deundido/xffgyrtxkt · Updated Apr 29, 2024 • 1
Datasets: 0
None public yet