Nathan Habib

SaylorTwift

AI & ML interests

None yet

Recent Activity

liked a Space about 3 hours ago
bookbot/Image-Upscaling-Playground
liked a dataset about 7 hours ago
openai/BrowseCompLongContext
reacted to eliebak's post with 🔥 about 7 hours ago
Kimi K2 tech report is full of gems as always. Here are my notes on it:

> MuonClip: pretty crazy how after 70k steps the training stabilizes and the QK-clip is basically inactive. There is also no loss in perf with QK-clip, which is not trivial at all (at small scale, but with an aggressive threshold). Appendix E also has a cool explanation of why Muon makes the logits explode (tl;dr: Muon pushes the singular values of the update matrix higher). A minimal sketch of the clipping idea is below this post.
> Sparsity: they use sparsity scaling laws to justify their ratio. Their very solid training infra lets the model be trained at this sparsity level; they could have increased it even more, but training becomes less efficient as sparsity grows.
> They halve the number of attention heads to make the model more efficient for long context, since attention heads are a big bottleneck there. They also remove 2 of the 3 "first dense" layers of the DeepSeek-V3 arch. With the sparsity and the halved attention heads, they achieve 83% increased FLOPs compared to the DeepSeek-V3 arch at 128k.
> Data: rephrasing is KEY. They do a lot more synthetic data generation and rephrase their corpus into different styles; longer documents are rephrased chunk by chunk (see the chunking sketch below). I'm (half) surprised that ONLY 1 epoch of data rephrased 10 times gets better accuracy than 10 epochs of the same data rephrased once (assuming the same total number of training tokens, I think) — both setups see the same token budget, only the surface diversity differs.
> They do rewriting for math and knowledge data; for math they apply the SwallowMath recipe and instruct the model to rephrase in a "learning note" style.
> They talk about diversity and probably have some internal stuff/evals to test it; as always, it's still a bit unclear to me how to measure that properly.

The infra is also very nice, quick summary (a toy config sketch is below):

> PP=16 (1F1B schedule, a bit custom), EP=16, ZeRO-1
> No FP8 computation, but FP8 storage for specific layers; selective recomputation for inexpensive blocks; activation offloading to CPU
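To make the MuonClip note concrete, here is a minimal PyTorch sketch of the QK-clip idea: when the largest attention logit observed in a step exceeds a threshold, the query/key projection weights are rescaled so future logits fall back under the cap. The function name, weight layout, and default threshold here are my assumptions, not the report's API; the report applies the rescaling per attention head.

```python
import torch

@torch.no_grad()
def qk_clip_(w_q: torch.Tensor, w_k: torch.Tensor,
             max_logit: float, tau: float = 100.0) -> None:
    # If the hottest attention logit this step exceeded the cap tau, shrink
    # the query/key projections in-place. Logits are bilinear in w_q and w_k,
    # so scaling each by sqrt(tau / max_logit) scales logits by tau / max_logit.
    if max_logit > tau:
        gamma = (tau / max_logit) ** 0.5  # gamma < 1 whenever we clip
        w_q.mul_(gamma)
        w_k.mul_(gamma)
```

This also matches the observation that QK-clip goes inactive once training stabilizes: after ~70k steps the observed max logit stays under the threshold, so the rescaling is simply never triggered.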
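The chunk-wise rephrasing for long documents could look like the sketch below: split the document, rewrite each chunk with an LLM call, and condition each rewrite on the previous one so the pieces stay coherent. `rephrase_fn`, `chunk_chars`, and the exact conditioning are hypothetical; the post only says long documents are rephrased by chunk.

```python
from typing import Callable

def rephrase_long_document(doc: str,
                           rephrase_fn: Callable[[str, str], str],
                           chunk_chars: int = 4000) -> str:
    # Split into fixed-size chunks (token-based splitting would be more
    # faithful; characters keep the sketch dependency-free).
    chunks = [doc[i:i + chunk_chars] for i in range(0, len(doc), chunk_chars)]
    rewritten, context = [], ""
    for chunk in chunks:
        out = rephrase_fn(chunk, context)  # LLM call, left abstract here
        rewritten.append(out)
        context = out  # condition the next rewrite on the latest one
    return "".join(rewritten)
```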
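And a toy rendering of the infra bullets as a single config, just to keep the knobs in one place; every field name here is mine, not Kimi's internal schema.

```python
from dataclasses import dataclass

@dataclass
class TrainingParallelismConfig:
    pipeline_parallel: int = 16       # PP=16, slightly customized 1F1B schedule
    expert_parallel: int = 16         # EP=16 across the MoE experts
    zero_stage: int = 1               # ZeRO-1: shard optimizer state only
    fp8_compute: bool = False         # no FP8 matmuls...
    fp8_storage: bool = True          # ...FP8 only for storing specific layers
    selective_recompute: bool = True  # recompute only the inexpensive blocks
    offload_activations: bool = True  # spill activations to CPU memory
```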

Organizations

Hugging Face, Evaluation datasets, Hugging Test Lab, HuggingFaceGECLM, BigCode, Hugging Face H4, BigCode Data, Hugging Face Smol Models Research, Hugging Face Smol Cluster, Open LLM Leaderboard, huggingPartyParis, Qwen, gg-hf, Nanotron Research, FineData, HF-contamination-detection, Top Contributors: Dataset Downloads, hsramall, La Leaderboard, gg-tt, HuggingFaceEval, Novel Challenge, LLHF, SLLHF, lbhf, Lighteval testing org, Hugging Face Science, Coordination Nationale pour l'IA, open-llm-leaderboard-react, Prompt Leaderboard, wut?, Your Bench, Open R1, gg-hf-g, OpenEvals, arc-agi-community, yofo, LightEval Internal Testing