Nathan Habib
SaylorTwift
219 followers · 286 following
nathanhabib1011
NathanHB
AI & ML interests
None yet
Recent Activity
- Liked a Space: bookbot/Image-Upscaling-Playground (about 3 hours ago)
- Liked a dataset: openai/BrowseCompLongContext (about 7 hours ago)
- Reacted to eliebak's post with 🔥 (about 7 hours ago):
Kimi K2 tech report is full of gems as always. Here are my notes on it:
> MuonClip: pretty crazy how after 70k steps the training stabilizes and the QK-clip is basically inactive. There is also no loss in perf with QK-clip, which is not trivial at all (shown at small scale, but with an aggressive threshold). Appendix E also has a cool explanation of why Muon makes the logits explode (tl;dr: Muon pushes the singular values of the update matrix higher). See the QK-clip sketch below.
> Sparsity scaling laws to justify their ratio: they have a very solid training infra that allows the model to be trained at this sparsity level. They could have increased sparsity even more, but as sparsity increases, training becomes less efficient.
> They reduce the number of attention heads to make the model more efficient at long context, since attention heads are a big bottleneck there. They also remove 2 of the 3 "first dense" layers of the DeepSeek-V3 arch. With the sparsity and the attention heads divided by 2, they achieve 83% increased FLOPs compared to the DeepSeek-V3 arch at 128k.
> Data: rephrasing is KEY. They do a lot more synthetic data generation and rephrase their corpus into different styles; for longer documents they rephrase chunk by chunk (see the rephrasing sketch below). I'm (half) surprised that ONLY 1 epoch over data rephrased 10 times (assuming the same number of training tokens, I think) gives better accuracy than 10 epochs over the same data rephrased once.
> They do rewriting for math and knowledge data; for math they apply the SwallowMath recipe and instruct the model to rephrase it in a "learning note" style.
> They talk about diversity and probably have some internal tooling/evals to test for it; as always, it's still a bit unclear to me how to measure that properly.
The infra is also very nice, quick summary (see the config sketch below):
> PP=16 (1F1B schedule, a bit customized), EP=16, ZeRO-1
> No FP8 computation, but FP8 storage for specific layers; selective recomputation for inexpensive blocks; activation offloading to CPU
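
The QK-clip mechanism from the first note can be pictured with a minimal sketch like the one below. This is an illustration only, assuming per-head weight matrices and a hypothetical threshold `tau`; the per-head tracking of the max logit and the exact interaction with the Muon update are not shown here.

```python
import torch

def qk_clip_(w_q: torch.Tensor, w_k: torch.Tensor,
             max_logit: float, tau: float = 100.0) -> None:
    """Sketch of QK-clip: after an optimizer step, if the largest
    pre-softmax attention logit observed for a head exceeds a threshold
    tau, rescale that head's query/key projection weights so the logit is
    pulled back to tau. (tau=100.0 is a placeholder, not the report's
    value; bookkeeping of max_logit happens elsewhere in the training loop.)
    """
    if max_logit > tau:
        # Split the correction evenly between Q and K so the product q·k
        # (and hence the logit) is scaled by exactly tau / max_logit.
        gamma = (tau / max_logit) ** 0.5
        w_q.mul_(gamma)
        w_k.mul_(gamma)
```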
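
For the chunk-wise rephrasing of long documents, a rough sketch of the pipeline shape might look like this. The `rephrase` callable stands in for an LLM call with a style prompt, and the chunking by character count and chunk size are assumptions for illustration, not the report's actual pipeline.

```python
from typing import Callable

def rephrase_long_document(text: str,
                           rephrase: Callable[[str], str],
                           chunk_chars: int = 4000) -> str:
    """Rephrase a long document chunk by chunk and stitch the results back
    together in order, so the corpus gains stylistic variety without the
    rephrasing model needing the whole document in context."""
    chunks = [text[i:i + chunk_chars] for i in range(0, len(text), chunk_chars)]
    return "\n".join(rephrase(chunk) for chunk in chunks)
```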
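
The infra summary can also be captured as a tiny config sketch. The key names are purely illustrative (not any framework's real flags), and the bf16 compute dtype and the specific layer lists are assumptions, since the post only says there is no FP8 computation but FP8 storage for specific layers.

```python
# Hedged sketch of the training layout summarized above.
parallel_config = {
    "pipeline_parallel_size": 16,             # PP=16, slightly customized 1F1B schedule
    "expert_parallel_size": 16,               # EP=16 for the MoE experts
    "zero_stage": 1,                          # ZeRO-1 optimizer-state sharding
    "compute_dtype": "bf16",                  # no FP8 computation (dtype assumed) ...
    "fp8_storage": ["expert_weights"],        # ... but FP8 storage for specific layers (assumed set)
    "selective_recompute": ["cheap_blocks"],  # recompute only inexpensive blocks
    "activation_offload": "cpu",              # offload activations to CPU memory
}
```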
SaylorTwift's models (2)
SaylorTwift/gpt2_test • Text Generation • 0.1B • Updated Sep 23, 2024 • 1.66k
SaylorTwift/xlm-roberta-base-finetuned-panx-fr • Updated Mar 13, 2023