---
title: README
emoji: 🏆
colorFrom: red
colorTo: indigo
sdk: static
pinned: false
---

# MAIR Lab — Mila, Quebec AI Institute

The **Multimodal Artificial Intelligence Research (MAIR) Lab** at [Mila](https://mila.quebec/en/) advances the science of **foundation models** that can see, interact, and act in the physical world.

Our research explores how these models **understand the visual world**, and how they can be adapted through **fine-tuning**, **parameter-efficient methods**, **reinforcement learning**, and other approaches to unlock new capabilities. We apply these techniques across a range of multimodal tasks — from **visual question answering** and **instruction-guided image editing** to **reasoning-intensive re-ranking** and **multimodal content generation**.

Beyond developing methods, we create **datasets and benchmarks** that challenge models to reason deeply, generalize across modalities, and operate with **cultural awareness** in diverse global contexts.

Our goal is to move beyond surface-level recognition toward **AI systems that truly understand, reason, and interact** — bridging vision, language, and human values.

**→ Explore our [models](https://huggingface.co/mair-lab) and [datasets](https://huggingface.co/mair-lab?sort=modified) to help shape the future of multimodal AI.**