- Language Modeling Is Compression
  Paper • 2309.10668 • Published • 82
- Small-scale proxies for large-scale Transformer training instabilities
  Paper • 2309.14322 • Published • 19
- Evaluating Cognitive Maps and Planning in Large Language Models with CogEval
  Paper • 2309.15129 • Published • 6
- Vision Transformers Need Registers
  Paper • 2309.16588 • Published • 77
Collections
Collections including paper arxiv:2309.16588
- Language Modeling Is Compression
  Paper • 2309.10668 • Published • 82
- Sorted LLaMA: Unlocking the Potential of Intermediate Layers of Large Language Models for Dynamic Inference Using Sorted Fine-Tuning (SoFT)
  Paper • 2309.08968 • Published • 22
- Vision Transformers Need Registers
  Paper • 2309.16588 • Published • 77
- Localizing and Editing Knowledge in Text-to-Image Generative Models
  Paper • 2310.13730 • Published • 6
- Self-Alignment with Instruction Backtranslation
  Paper • 2308.06259 • Published • 40
- ReCLIP: Refine Contrastive Language Image Pre-Training with Source Free Domain Adaptation
  Paper • 2308.03793 • Published • 10
- From Sparse to Soft Mixtures of Experts
  Paper • 2308.00951 • Published • 20
- Revisiting DETR Pre-training for Object Detection
  Paper • 2308.01300 • Published • 9
- Mobile V-MoEs: Scaling Down Vision Transformers via Sparse Mixture-of-Experts
  Paper • 2309.04354 • Published • 13
- Vision Transformers Need Registers
  Paper • 2309.16588 • Published • 77
- AutoCLIP: Auto-tuning Zero-Shot Classifiers for Vision-Language Models
  Paper • 2309.16414 • Published • 19
- MotionLM: Multi-Agent Motion Forecasting as Language Modeling
  Paper • 2309.16534 • Published • 15