Update README.md
README.md CHANGED
```diff
@@ -5,13 +5,15 @@ tags:
 - vision-language-model
 - contrastive learning
 - self-supervised learning
+pipeline_tag: image-text-to-text
+library_name: transformers
 ---
 
-**COSMOS Model**
+**[CVPR 2025] COSMOS Model**
 
 Authors: [Sanghwan Kim](https://kim-sanghwan.github.io/), [Rui Xiao](https://www.eml-munich.de/people/rui-xiao), [Mariana-Iuliana Georgescu](https://lilygeorgescu.github.io/), [Stephan Alaniz](https://www.eml-munich.de/people/stephan-alaniz), [Zeynep Akata](https://www.eml-munich.de/people/zeynep-akata)
 
-COSMOS is introduced in the paper [COSMOS: Cross-Modality Self-Distillation for Vision Language Pre-training](https://arxiv.org/abs/2412.01814). COSMOS is trained in self-supervised learning framework with multi-modal augmentation and cross-attention module. It outperforms CLIP-based models trained on larger datasets in visual perception and contextual understanding tasks. COSMOS also achieves strong performance in downstream tasks including zero-shot image-text retrieval, classification, and semantic segmentation
+COSMOS is introduced in the paper [COSMOS: Cross-Modality Self-Distillation for Vision Language Pre-training](https://arxiv.org/abs/2412.01814). COSMOS is trained in a self-supervised learning framework with multi-modal augmentation and a cross-attention module. It outperforms CLIP-based models trained on larger datasets on visual perception and contextual understanding tasks. COSMOS also achieves strong performance on downstream tasks including zero-shot image-text retrieval, classification, and semantic segmentation.
 
 **Usage**
 
```
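The added `pipeline_tag: image-text-to-text` and `library_name: transformers` metadata tell the Hub which library and task to associate with this checkpoint. The README's actual **Usage** section is not shown in this diff; the snippet below is only a minimal sketch of how a checkpoint with these metadata hints is typically loaded, assuming the repository exposes standard `AutoModel`/`AutoProcessor` classes via custom remote code. The repo ID and the output handling are hypothetical placeholders, not taken from the commit.

```python
# Minimal sketch only -- the model card's real Usage section is not visible in this diff.
import requests
from PIL import Image
from transformers import AutoModel, AutoProcessor

model_id = "your-org/cosmos-checkpoint"  # hypothetical repo ID, replace with the actual one

# library_name: transformers implies the checkpoint loads through the Auto classes;
# trust_remote_code=True would be needed if COSMOS ships custom modeling code.
processor = AutoProcessor.from_pretrained(model_id, trust_remote_code=True)
model = AutoModel.from_pretrained(model_id, trust_remote_code=True)

image = Image.open(requests.get("https://example.com/cat.jpg", stream=True).raw)
texts = ["a photo of a cat", "a photo of a dog"]

inputs = processor(text=texts, images=image, return_tensors="pt", padding=True)
outputs = model(**inputs)
# The structure of `outputs` (embeddings, similarity scores, etc.) depends on the
# model's own implementation; consult the repository's Usage section for the exact API.
```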