Improved Baselines with Visual Instruction Tuning Paper • 2310.03744 • Published Oct 5, 2023 • 37
DeepSeek-VL: Towards Real-World Vision-Language Understanding Paper • 2403.05525 • Published Mar 8, 2024 • 39
Qwen-VL: A Frontier Large Vision-Language Model with Versatile Abilities Paper • 2308.12966 • Published Aug 24, 2023 • 7
LLaVA-Gemma: Accelerating Multimodal Foundation Models with a Compact Language Model Paper • 2404.01331 • Published Mar 29, 2024 • 25
MMMU: A Massive Multi-discipline Multimodal Understanding and Reasoning Benchmark for Expert AGI Paper • 2311.16502 • Published Nov 27, 2023 • 35
Mini-Gemini: Mining the Potential of Multi-modality Vision Language Models Paper • 2403.18814 • Published Mar 27, 2024 • 44
Kosmos-2: Grounding Multimodal Large Language Models to the World Paper • 2306.14824 • Published Jun 26, 2023 • 34
CogVLM: Visual Expert for Pretrained Language Models Paper • 2311.03079 • Published Nov 6, 2023 • 23
MiniGPT4-Video: Advancing Multimodal LLMs for Video Understanding with Interleaved Visual-Textual Tokens Paper • 2404.03413 • Published Apr 4, 2024 • 25
MoE-LLaVA: Mixture of Experts for Large Vision-Language Models Paper • 2401.15947 • Published Jan 29, 2024 • 49
Vision-Flan: Scaling Human-Labeled Tasks in Visual Instruction Tuning Paper • 2402.11690 • Published Feb 18, 2024 • 7
TextHawk: Exploring Efficient Fine-Grained Perception of Multimodal Large Language Models Paper • 2404.09204 • Published Apr 14, 2024 • 10
BLINK: Multimodal Large Language Models Can See but Not Perceive Paper • 2404.12390 • Published Apr 18, 2024 • 24
LLaVA-UHD: an LMM Perceiving Any Aspect Ratio and High-Resolution Images Paper • 2403.11703 • Published Mar 18, 2024 • 16
Groma: Localized Visual Tokenization for Grounding Multimodal Large Language Models Paper • 2404.13013 • Published Apr 19, 2024 • 30
TextSquare: Scaling up Text-Centric Visual Instruction Tuning Paper • 2404.12803 • Published Apr 19, 2024 • 29
HallusionBench: You See What You Think? Or You Think What You See? An Image-Context Reasoning Benchmark Challenging for GPT-4V(ision), LLaVA-1.5, and Other Multi-modality Models Paper • 2310.14566 • Published Oct 23, 2023 • 25
LLaVAR: Enhanced Visual Instruction Tuning for Text-Rich Image Understanding Paper • 2306.17107 • Published Jun 29, 2023 • 11
LLaVA-Interactive: An All-in-One Demo for Image Chat, Segmentation, Generation and Editing Paper • 2311.00571 • Published Nov 1, 2023 • 40
To See is to Believe: Prompting GPT-4V for Better Visual Instruction Tuning Paper • 2311.07574 • Published Nov 13, 2023 • 14
SILC: Improving Vision Language Pretraining with Self-Distillation Paper • 2310.13355 • Published Oct 20, 2023 • 6
Woodpecker: Hallucination Correction for Multimodal Large Language Models Paper • 2310.16045 • Published Oct 24, 2023 • 14
SEED-X: Multimodal Models with Unified Multi-granularity Comprehension and Generation Paper • 2404.14396 • Published Apr 22, 2024 • 18
SEED-Bench-2-Plus: Benchmarking Multimodal Large Language Models with Text-Rich Visual Comprehension Paper • 2404.16790 • Published Apr 25, 2024 • 7
PLLaVA: Parameter-free LLaVA Extension from Images to Videos for Video Dense Captioning Paper • 2404.16994 • Published Apr 25, 2024 • 35