---
license: apache-2.0
language:
- fr
- en
base_model:
- jinaai/jina-clip-v1
pipeline_tag: sentence-similarity
tags:
- embedding
- image-text-embedding
---
# Fork of [jinaai/jina-clip-v1](https://huggingface.co/jinaai/jina-clip-v1) for a `multimodal-multilanguage-embedding` Inference endpoint.
This repository implements a `custom` task for `multimodal-multilanguage-embedding` for 🤗 Inference Endpoints. The code for the customized handler is in [handler.py](https://huggingface.co/Blueway/Inference-endpoint-for-jina-clip-v1/blob/main/handler.py).
To deploy this model as an Inference Endpoint, you have to select `Custom` as the task so that the `handler.py` file is used.
The repository also contains a `requirements.txt` that installs the einops, timm and pillow libraries.
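Since exact version pins are not specified here, a minimal `requirements.txt` along these lines should be enough for the handler:

```
einops
timm
pillow
```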
## Call to endpoint example
``` python
import base64
import json

import requests as r

ENDPOINT_URL = "endpoint_url"  # replace with your endpoint URL
HF_TOKEN = "token_key"  # replace with your Hugging Face token

def predict(path_to_image: str = None, text: str = None):
    # Encode the image as base64 so it can be sent inside a JSON payload
    with open(path_to_image, "rb") as i:
        b64 = base64.b64encode(i.read())
    payload = {
        "inputs": {
            "image": b64.decode("utf-8"),
            "text": text,
        }
    }
    response = r.post(
        ENDPOINT_URL, headers={"Authorization": f"Bearer {HF_TOKEN}"}, json=payload
    )
    return response.json()

prediction = predict(
    path_to_image="image/accidentdevoiture.webp",
    text="An image of a cat and a remote control",
)
print(json.dumps(prediction, indent=2))
```
## Expected result
``` json
{
  "text_embedding": [-0.009289545938372612,
    -0.03686045855283737,
    ...
    0.038627129048109055,
    -0.01346363127231597],
  "image_embedding": [-0.009289545938372612,
    -0.03686045855283737,
    ...
    0.038627129048109055,
    -0.01346363127231597]
}
```
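Since the model is tagged `sentence-similarity`, a typical next step is to score how well the text matches the image by comparing the two returned vectors with cosine similarity. The sketch below uses NumPy and short toy vectors in place of the real (much longer) embeddings:

``` python
import numpy as np

def cosine_similarity(a, b):
    # Cosine of the angle between two vectors: dot product over norms
    a, b = np.asarray(a), np.asarray(b)
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

# Toy stand-ins for prediction["text_embedding"] / prediction["image_embedding"]
text_embedding = [0.1, 0.25, -0.15]
image_embedding = [0.1, 0.3, -0.2]

score = cosine_similarity(text_embedding, image_embedding)
print(f"similarity: {score:.4f}")
```

A score close to 1.0 means the caption and the image are embedded near each other; scores near 0 indicate unrelated inputs.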