sdk: static
pinned: false
---
# MAIR Lab – Mila, Quebec AI Institute

The **Multimodal Artificial Intelligence Research (MAIR) Lab** at [Mila](https://mila.quebec/en/) advances the science of **foundation models** that can see, interact, and act in the physical world.

Our research explores how these models **understand the visual world**, and how they can be adapted through **fine-tuning**, **parameter-efficient methods**, **reinforcement learning**, and other approaches to unlock new capabilities. We apply these techniques across a range of multimodal tasks, from **visual question answering** and **instruction-guided image editing** to **reasoning-intensive re-ranking** and **multimodal content generation**.

Beyond developing methods, we create **datasets and benchmarks** that challenge models to reason deeply, generalize across modalities, and operate with **cultural awareness** in diverse global contexts.

Our goal is to move beyond surface-level recognition toward **AI systems that truly understand, reason, and interact**, bridging vision, language, and human values.

**Explore our [models](https://huggingface.co/mair-lab) and [datasets](https://huggingface.co/mair-lab?sort=modified) to help shape the future of multimodal AI.**