Listener-Rewarded Thinking in VLMs for Image Preferences Paper • 2506.22832 • Published Jun 2025 • 24
General Lipschitz: Certified Robustness Against Resolvable Semantic Transformations via Transformation-Dependent Randomized Smoothing Paper • 2309.16710 • Published Sep 2023
NAG-GS: Semi-Implicit, Accelerated and Robust Stochastic Optimizer Paper • 2209.14937 • Published Sep 29, 2022
Sparse and Transferable Universal Singular Vectors Attack Paper • 2401.14031 • Published Jan 25, 2024
SparseGrad: A Selective Method for Efficient Fine-tuning of MLP Layers Paper • 2410.07383 • Published Oct 9, 2024
Stable Low-rank Tensor Decomposition for Compression of Convolutional Neural Network Paper • 2008.05441 • Published Aug 12, 2020
Diagonal Batching Unlocks Parallelism in Recurrent Memory Transformers for Long Contexts Paper • 2506.05229 • Published Jun 5, 2025 • 37
Geopolitical biases in LLMs: what are the "good" and the "bad" countries according to contemporary language models Paper • 2506.06751 • Published Jun 7, 2025 • 72
Confidence Is All You Need: Few-Shot RL Fine-Tuning of Language Models Paper • 2506.06395 • Published Jun 5, 2025 • 126
Test-Time Reasoning Through Visual Human Preferences with VLMs and Soft Rewards Paper • 2503.19948 • Published Mar 25, 2025
cadrille: Multi-modal CAD Reconstruction with Online Reinforcement Learning Paper • 2505.22914 • Published May 28, 2025 • 35
Seedance 1.0: Exploring the Boundaries of Video Generation Models Paper • 2506.09113 • Published Jun 2025 • 95
Image Reconstruction as a Tool for Feature Analysis Paper • 2506.07803 • Published Jun 2025 • 29
Will It Still Be True Tomorrow? Multilingual Evergreen Question Classification to Improve Trustworthy QA Paper • 2505.21115 • Published May 27, 2025 • 135
Exploring the Latent Capacity of LLMs for One-Step Text Generation Paper • 2505.21189 • Published May 27, 2025 • 62
The Shape of Learning: Anisotropy and Intrinsic Dimensions in Transformer-Based Models Paper • 2311.05928 • Published Nov 10, 2023 • 1