SVD-Free Low-Rank Adaptive Gradient Optimization for Large Language Models (arXiv:2505.17967, May 2025)
Quartet: Native FP4 Training Can Be Optimal for Large Language Models (arXiv:2505.14669, May 2025)
HALO: Hadamard-Assisted Lossless Optimization for Efficient Low-Precision LLM Training and Fine-Tuning (arXiv:2501.02625, January 5, 2025)
QuEST: Stable Training of LLMs with 1-Bit Weights and Activations (arXiv:2502.05003, February 7, 2025)
MicroAdam: Accurate Adaptive Optimization with Low Space Overhead and Provable Convergence (arXiv:2405.15593, May 24, 2024)