- The Ultra-Scale Playbook 🌌: The ultimate guide to training LLMs on large GPU clusters
- michaelbenayoun/llama-2-tiny-4kv-heads-4layers-random: Text Generation • Updated Oct 14, 2024
- Distributed Training (Collection): Papers and resources related to distributed training • 5 items • Updated Jun 3, 2024