- Sequence Parallelism: Long Sequence Training from System Perspective
  Paper • 2105.13120 • Published • 5
- Ring Attention with Blockwise Transformers for Near-Infinite Context
  Paper • 2310.01889 • Published • 11
- Striped Attention: Faster Ring Attention for Causal Transformers
  Paper • 2311.09431 • Published • 4
- DeepSpeed Ulysses: System Optimizations for Enabling Training of Extreme Long Sequence Transformer Models
  Paper • 2309.14509 • Published • 18