Article • Efficient LLM Pretraining: Packed Sequences and Masked Attention • Oct 7, 2024 • 65
Space • The Ultra-Scale Playbook 🌌 • The ultimate guide to training LLMs on large GPU clusters • 3.68k
Collection • ModernBERT • Bringing BERT into modernity via both architecture changes and scaling • 3 items • Updated Dec 19, 2024 • 157
Space • FineWeb: decanting the web for the finest text data at scale 🍷 • Read about FineWeb, a large web-text dataset for LLMs • 1.29k