chore: update readme (#27)
- chore: update readme (6ffe83197714733320728b0493c7e924b8015f24)

README.md (CHANGED)
## Usage

1. The easiest way to start using jina-clip-v1-en is to use Jina AI's [Embeddings API](https://jina.ai/embeddings/).
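The API route needs no local model download; a minimal sketch of a request, assuming the endpoint `https://api.jina.ai/v1/embeddings` and an OpenAI-style payload with mixed text/image inputs (both are assumptions here — check the Embeddings API page linked above for the current schema):

```python
import json
import urllib.request

JINA_API_KEY = "<your-api-key>"  # placeholder, not a real key

# Assumed request shape: each input item is either {"text": ...} or
# {"image": <public URL>}, so one call can embed both modalities.
payload = {
    "model": "jina-clip-v1",
    "input": [
        {"text": "A blue cat"},
        {"image": "https://i.pinimg.com/600x315/21/48/7e/21487e8e0970dd366dafaed6ab25d8d8.jpg"},
    ],
}

req = urllib.request.Request(
    "https://api.jina.ai/v1/embeddings",  # assumed endpoint
    data=json.dumps(payload).encode("utf-8"),
    headers={
        "Content-Type": "application/json",
        "Authorization": f"Bearer {JINA_API_KEY}",
    },
)
# With a valid key, send the request and parse the JSON response:
# response = json.load(urllib.request.urlopen(req))
```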
2. Alternatively, you can use Jina CLIP directly via the transformers or sentence-transformers package.

```python
!pip install transformers einops timm pillow

# ... (unchanged lines elided in this diff)

print(text_embeddings[1] @ image_embeddings[1].T)  # text-image cross-modal similarity
```

or with sentence-transformers:

```python
# !pip install -U sentence-transformers
from sentence_transformers import SentenceTransformer

# Initialize the model
model = SentenceTransformer('jinaai/jina-clip-v1', trust_remote_code=True)

# New meaningful sentences
sentences = ['A blue cat', 'A red cat']

# Public image URLs
image_urls = [
    'https://i.pinimg.com/600x315/21/48/7e/21487e8e0970dd366dafaed6ab25d8d8.jpg',
    'https://i.pinimg.com/736x/c9/f2/3e/c9f23e212529f13f19bad5602d84b78b.jpg'
]

text_embeddings = model.encode(sentences)
image_embeddings = model.encode(image_urls)
```
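Either route yields plain vectors that can be compared with a dot product, as in the transformers example. A self-contained sketch using hypothetical stand-in arrays (in real use these would be the `model.encode` outputs above; the 768-dimensional shape is an assumption about jina-clip-v1's output size):

```python
import numpy as np

# Hypothetical stand-ins for the text/image embeddings produced above.
rng = np.random.default_rng(0)
text_embeddings = rng.standard_normal((2, 768))
image_embeddings = rng.standard_normal((2, 768))

def cos_sim(a: np.ndarray, b: np.ndarray) -> float:
    """Cosine similarity between two 1-D vectors."""
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

print(cos_sim(text_embeddings[0], text_embeddings[1]))   # text-text similarity
print(cos_sim(text_embeddings[0], image_embeddings[0]))  # text-image cross-modal similarity
```

Normalizing to cosine similarity keeps scores comparable across modalities even if the raw embeddings differ in norm.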

3. JavaScript developers can use Jina CLIP via the [Transformers.js](https://huggingface.co/docs/transformers.js) library. Note that to use this model, you need to install Transformers.js [v3](https://github.com/xenova/transformers.js/tree/v3) from source using `npm install xenova/transformers.js#v3`.

```js