Update README.md
README.md CHANGED
@@ -23,12 +23,14 @@ metrics:
 - recall
 - MRR
 ---
-# Marqo
+# Marqo-FashionSigLIP Model Card
 Marqo-FashionSigLIP leverages Generalised Contrastive Learning ([GCL](https://www.marqo.ai/blog/generalized-contrastive-learning-for-multi-modal-retrieval-and-ranking)), which allows the model to be trained not just on text descriptions but also on categories, style, colors, materials, keywords and fine details, providing highly relevant search results on fashion products.
 The model was fine-tuned from ViT-B-16-SigLIP (webli).
 
 **GitHub Page**: [Marqo-FashionCLIP](https://github.com/marqo-ai/marqo-FashionCLIP)
 
+**Blog**: [Marqo Blog](https://www.marqo.ai/blog/search-model-for-fashion)
+
 
 ## Usage
 The model can be seamlessly used with [OpenCLIP](https://github.com/mlfoundations/open_clip) by
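A minimal usage sketch along those lines, assuming the checkpoint is published on the Hugging Face Hub under the id `Marqo/marqo-fashionSigLIP` and loaded through OpenCLIP's `hf-hub:` support (the image path and query texts are placeholders):

```python
# Sketch only: assumes the model id "Marqo/marqo-fashionSigLIP" on the Hugging Face Hub.
import torch
import open_clip
from PIL import Image

# Load the model, its inference preprocessing transform, and the matching tokenizer.
model, _, preprocess = open_clip.create_model_and_transforms("hf-hub:Marqo/marqo-fashionSigLIP")
tokenizer = open_clip.get_tokenizer("hf-hub:Marqo/marqo-fashionSigLIP")
model.eval()

# Placeholder inputs: one product image and a few candidate text descriptions.
image = preprocess(Image.open("example_shirt.jpg")).unsqueeze(0)
text = tokenizer(["a red striped shirt", "a leather handbag", "blue denim jeans"])

with torch.no_grad():
    image_features = model.encode_image(image)
    text_features = model.encode_text(text)
    # Normalise embeddings so the dot product is a cosine similarity.
    image_features = image_features / image_features.norm(dim=-1, keepdim=True)
    text_features = text_features / text_features.norm(dim=-1, keepdim=True)
    similarity = image_features @ text_features.T

print(similarity)  # higher score = better image-text match
```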