Update README.md
---
tags:
- feature-extraction
- sentence-similarity
- mteb
- clip
- vision
- transformers.js
language: en
inference: false
license: apache-2.0
library_name: transformers
---

> [!WARNING]
> This is a testing repository to experiment with new functionality. Refer to [jinaai/jina-clip-v1](https://huggingface.co/jinaai/jina-clip-v1) for the original model.

<br><br>

<p align="center">
<img src="https://aeiljuispo.cloudimg.io/v7/https://cdn-uploads.huggingface.co/production/uploads/603763514de52ff951d89793/AFoybzd5lpBQXEBrQHuTt.png?w=200&h=200&f=face" alt="Finetuner logo: Finetuner helps you to create experiments in order to improve embeddings on search tasks. It accompanies you to deliver the last mile of performance-tuning for neural search applications." width="150px">
</p>

<p align="center">
<b>The embedding set trained by <a href="https://jina.ai/"><b>Jina AI</b></a>.</b>
</p>

<p align="center">
<b>Jina CLIP: your CLIP model is also your text retriever!</b>
</p>

## Intended Usage & Model Info

`jina-clip-v1` is a state-of-the-art English **multimodal (text-image) embedding model**.

Traditional text embedding models, such as [jina-embeddings-v2-base-en](https://huggingface.co/jinaai/jina-embeddings-v2-base-en), excel in text-to-text retrieval but are incapable of cross-modal tasks. Models like [openai/clip-vit-base-patch32](https://huggingface.co/openai/clip-vit-base-patch32) effectively align image and text embeddings but are not optimized for text-to-text retrieval due to their training methodologies and context limitations.

`jina-clip-v1` bridges this gap by offering robust performance in both domains.
Its text component matches the retrieval efficiency of `jina-embeddings-v2-base-en`, while its overall architecture sets a new benchmark for cross-modal retrieval.
This dual capability makes it an excellent tool for multimodal retrieval-augmented generation (MuRAG) applications, enabling seamless text-to-text and text-to-image searches within a single model.

## Data & Parameters

[Check out our paper](https://arxiv.org/abs/2405.20204) for details on the training data and model parameters.

## Usage

1. The easiest way to start using `jina-clip-v1-en` is via Jina AI's [Embeddings API](https://jina.ai/embeddings/) (a request sketch follows the Python example below).
2. Alternatively, you can use Jina CLIP directly via the `transformers` package.

```python
!pip install transformers einops timm pillow
from transformers import AutoModel

# Initialize the model
model = AutoModel.from_pretrained('jinaai/jina-clip-v1', trust_remote_code=True)

# Example sentences
sentences = ['A blue cat', 'A red cat']

# Public image URLs
image_urls = [
    'https://i.pinimg.com/600x315/21/48/7e/21487e8e0970dd366dafaed6ab25d8d8.jpg',
    'https://i.pinimg.com/736x/c9/f2/3e/c9f23e212529f13f19bad5602d84b78b.jpg'
]

# Encode text and images
text_embeddings = model.encode_text(sentences)
image_embeddings = model.encode_image(image_urls)  # also accepts PIL.Image, local filenames, dataURI

# Compute similarities
print(text_embeddings[0] @ text_embeddings[1].T)   # text embedding similarity
print(text_embeddings[0] @ image_embeddings[0].T)  # text-image cross-modal similarity
print(text_embeddings[0] @ image_embeddings[1].T)  # text-image cross-modal similarity
print(text_embeddings[1] @ image_embeddings[0].T)  # text-image cross-modal similarity
print(text_embeddings[1] @ image_embeddings[1].T)  # text-image cross-modal similarity
```
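For option 1, a minimal request sketch using Python's `requests` library is shown below. It assumes the `https://api.jina.ai/v1/embeddings` endpoint, an API key exported as `JINA_API_KEY`, and an OpenAI-style response layout; consult the [Embeddings API](https://jina.ai/embeddings/) documentation for the authoritative request and response schema.

```python
# Sketch only: the payload shape and response layout are assumptions, not taken
# from this model card. Check the Embeddings API docs before relying on them.
import os
import requests

response = requests.post(
    'https://api.jina.ai/v1/embeddings',
    headers={
        'Authorization': f'Bearer {os.environ["JINA_API_KEY"]}',
        'Content-Type': 'application/json',
    },
    json={
        'model': 'jina-clip-v1',  # assumed API model name
        'input': [
            {'text': 'A blue cat'},
            {'image': 'https://i.pinimg.com/600x315/21/48/7e/21487e8e0970dd366dafaed6ab25d8d8.jpg'},
        ],
    },
)
response.raise_for_status()
embeddings = [item['embedding'] for item in response.json()['data']]
print(len(embeddings), len(embeddings[0]))  # number of embeddings, embedding dimensionality
```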

3. JavaScript developers can use Jina CLIP via the [Transformers.js](https://huggingface.co/docs/transformers.js) library. Note that to use this model, you need to install Transformers.js [v3](https://github.com/xenova/transformers.js/tree/v3) from source using `npm install xenova/transformers.js#v3`.

```js
import { AutoTokenizer, CLIPTextModelWithProjection, AutoProcessor, CLIPVisionModelWithProjection, RawImage, cos_sim } from '@xenova/transformers';

// Load tokenizer and text model
const tokenizer = await AutoTokenizer.from_pretrained('jinaai/jina-clip-v1');
const text_model = await CLIPTextModelWithProjection.from_pretrained('jinaai/jina-clip-v1');

// Load processor and vision model
const processor = await AutoProcessor.from_pretrained('Xenova/clip-vit-base-patch32');
const vision_model = await CLIPVisionModelWithProjection.from_pretrained('jinaai/jina-clip-v1');

// Run tokenization
const texts = ['A blue cat', 'A red cat'];
const text_inputs = tokenizer(texts, { padding: true, truncation: true });

// Compute text embeddings
const { text_embeds } = await text_model(text_inputs);

// Read images and run processor
const urls = [
    'https://i.pinimg.com/600x315/21/48/7e/21487e8e0970dd366dafaed6ab25d8d8.jpg',
    'https://i.pinimg.com/736x/c9/f2/3e/c9f23e212529f13f19bad5602d84b78b.jpg'
];
const images = await Promise.all(urls.map(url => RawImage.read(url)));
const image_inputs = await processor(images);

// Compute vision embeddings
const { image_embeds } = await vision_model(image_inputs);

// Compute similarities
console.log(cos_sim(text_embeds[0].data, text_embeds[1].data));   // text embedding similarity
console.log(cos_sim(text_embeds[0].data, image_embeds[0].data));  // text-image cross-modal similarity
console.log(cos_sim(text_embeds[0].data, image_embeds[1].data));  // text-image cross-modal similarity
console.log(cos_sim(text_embeds[1].data, image_embeds[0].data));  // text-image cross-modal similarity
console.log(cos_sim(text_embeds[1].data, image_embeds[1].data));  // text-image cross-modal similarity
```

## Performance

### Text-Image Retrieval

| Name      | Flickr Image Retr. R@1 | Flickr Image Retr. R@5 | Flickr Text Retr. R@1 | Flickr Text Retr. R@5 |
|-----------|------------------------|------------------------|-----------------------|-----------------------|
| ViT-B-32  | 0.597                  | 0.8398                 | 0.781                 | 0.938                 |
| ViT-B-16  | 0.6216                 | 0.8572                 | 0.822                 | 0.966                 |
| jina-clip | 0.6748                 | 0.8902                 | 0.811                 | 0.965                 |

| Name      | MSCOCO Image Retr. R@1 | MSCOCO Image Retr. R@5 | MSCOCO Text Retr. R@1 | MSCOCO Text Retr. R@5 |
|-----------|------------------------|------------------------|-----------------------|-----------------------|
| ViT-B-32  | 0.342                  | 0.6001                 | 0.5234                | 0.7634                |
| ViT-B-16  | 0.3309                 | 0.5842                 | 0.5242                | 0.767                 |
| jina-clip | 0.4111                 | 0.6644                 | 0.5544                | 0.7904                |

### Text-Text Retrieval

| Name               | STS12  | STS15  | STS17  | STS13  | STS14  | STS16  | STS22  | STSBenchmark | SummEval |
|--------------------|--------|--------|--------|--------|--------|--------|--------|--------------|----------|
| jina-embeddings-v2 | 0.7427 | 0.8755 | 0.8888 | 0.833  | 0.7917 | 0.836  | 0.6346 | 0.8404       | 0.3056   |
| jina-clip          | 0.7352 | 0.8746 | 0.8976 | 0.8323 | 0.7868 | 0.8377 | 0.6583 | 0.8493       | 0.3048   |

| Name               | ArguAna | FiQA2018 | NFCorpus | Quora  | SCIDOCS | SciFact | TRECCOVID |
|--------------------|---------|----------|----------|--------|---------|---------|-----------|
| jina-embeddings-v2 | 0.4418  | 0.4158   | 0.3245   | 0.882  | 0.1986  | 0.6668  | 0.6591    |
| jina-clip          | 0.4933  | 0.3827   | 0.3352   | 0.8789 | 0.2024  | 0.6734  | 0.7161    |

## Contact

Join our [Discord community](https://discord.jina.ai) and chat with other community members about ideas.

## Citation

If you find `jina-clip-v1` useful in your research, please cite the following paper:

```bibtex
@misc{2405.20204,
  Author = {Andreas Koukounas and Georgios Mastrapas and Michael Günther and Bo Wang and Scott Martens and Isabelle Mohr and Saba Sturua and Mohammad Kalim Akram and Joan Fontanals Martínez and Saahil Ognawala and Susana Guzman and Maximilian Werk and Nan Wang and Han Xiao},
  Title = {Jina CLIP: Your CLIP Model Is Also Your Text Retriever},
  Year = {2024},
  Eprint = {arXiv:2405.20204},
}
```

## FAQ

### I encounter this error, what should I do?

```
ValueError: The model class you are passing has a `config_class` attribute that is not consistent with the config class you passed (model has <class 'transformers_modules.jinaai.jina-clip-implementation.7f069e2d54d609ef1ad2eb578c7bf07b5a51de41.configuration_clip.JinaCLIPConfig'> and you passed <class 'transformers_modules.jinaai.jina-clip-implementation.7f069e2d54d609ef1ad2eb578c7bf07b5a51de41.configuration_cli.JinaCLIPConfig'>. Fix one of those so they match!
```

There was a bug in the Transformers library between versions 4.40.x and 4.41.1. You can upgrade `transformers` to a release newer than 4.41.1, or downgrade to `4.40.0` or below.

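A quick sanity check before loading the model is to confirm that the installed version falls outside the affected range; a minimal sketch, assuming `packaging` is available (it is installed as a dependency of `transformers`):

```python
# Sanity check: verify the installed transformers version avoids the affected range.
import transformers
from packaging import version

v = version.parse(transformers.__version__)
print(transformers.__version__)
assert v <= version.parse("4.40.0") or v > version.parse("4.41.1"), (
    "Affected transformers version installed; upgrade or downgrade as described above."
)
```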
### Given one query, how can I merge its text-text and text-image cosine similarity?

Our empirical study shows that text-text cosine similarity is normally larger than text-image cosine similarity!
If you want to merge the two scores, we recommend two ways (a combined, runnable sketch follows the snippets below):

1. Weighted average of text-text and text-image similarity:

```python
combined_scores = sim(text, text) + weight * sim(text, image)  # pseudo code; the optimal weight depends on your dataset, but weight=2 is generally a good choice
```

2. Apply z-score normalization before merging the scores:

```python
# pseudo code
query_document_mean = np.mean(cos_sim_query_documents)
query_document_std = np.std(cos_sim_query_documents)
text_image_mean = np.mean(cos_sim_text_images)
text_image_std = np.std(cos_sim_text_images)

query_document_sim_normalized = (cos_sim_query_documents - query_document_mean) / query_document_std
text_image_sim_normalized = (cos_sim_text_images - text_image_mean) / text_image_std
```
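Putting the two options together, a minimal runnable sketch; the similarity values and the `text_image_weight` setting are illustrative placeholders, not numbers from this card:

```python
import numpy as np

# Illustrative per-candidate similarity arrays for a single query (placeholders).
cos_sim_query_documents = np.array([0.81, 0.62, 0.55])  # text-text similarities
cos_sim_text_images = np.array([0.28, 0.31, 0.19])      # text-image similarities

# Option 1: weighted sum (text_image_weight plays the role of `weight` above)
text_image_weight = 2.0
combined_weighted = cos_sim_query_documents + text_image_weight * cos_sim_text_images

# Option 2: z-score normalize each score distribution, then add
def z_score(x):
    return (x - np.mean(x)) / np.std(x)

combined_zscore = z_score(cos_sim_query_documents) + z_score(cos_sim_text_images)

print(combined_weighted)
print(combined_zscore)
```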