---
license: apache-2.0
language:
- fr
- en
base_model:
- jinaai/jina-clip-v1
pipeline_tag: sentence-similarity
tags:
- embedding
- image-text-embedding
---
# Fork of [jinaai/jina-clip-v1](https://huggingface.co/jinaai/jina-clip-v1) for a `multimodal-multilanguage-embedding` Inference Endpoint
This repository implements a `custom` task for `multimodal-multilanguage-embedding` on 🤗 Inference Endpoints. The code for the customized handler is in [handler.py](https://huggingface.co/Blueway/Inference-endpoint-for-jina-clip-v1/blob/main/handler.py).
To deploy this model as an Inference Endpoint, select `Custom` as the task so that the `handler.py` file is used.
The repository also contains a `requirements.txt` that installs the einops, timm and pillow libraries.
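For reference, such a `requirements.txt` only needs to list the extra dependencies (this is a minimal sketch; pinning exact versions is up to you):

``` text
einops
timm
pillow
```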
## Call to endpoint example
``` python
import base64
import json

import requests as r

# Replace with your endpoint URL and Hugging Face access token
ENDPOINT_URL = "endpoint_url"
HF_TOKEN = "token_key"


def predict(path_to_image: str = None, text: str = None):
    # Encode the image as base64 so it can be sent inside the JSON payload
    with open(path_to_image, "rb") as i:
        b64 = base64.b64encode(i.read())
    payload = {
        "inputs": {
            "image": b64.decode("utf-8"),
            "text": text,
        }
    }
    response = r.post(
        ENDPOINT_URL, headers={"Authorization": f"Bearer {HF_TOKEN}"}, json=payload
    )
    return response.json()


prediction = predict(
    path_to_image="image/accidentdevoiture.webp", text="An image of a cat and a remote control"
)
print(json.dumps(prediction, indent=2))
```
## Expected result
``` json
{
  "text_embedding": [
    -0.009289545938372612,
    -0.03686045855283737,
    ...
    0.038627129048109055,
    -0.01346363127231597
  ],
  "image_embedding": [
    -0.009289545938372612,
    -0.03686045855283737,
    ...
    0.038627129048109055,
    -0.01346363127231597
  ]
}
```
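Since the endpoint returns one embedding per modality, a typical follow-up is to score how well the text matches the image via cosine similarity. A minimal sketch (the `cosine_similarity` helper below is our own illustration, not part of the endpoint response):

``` python
import math


def cosine_similarity(a, b):
    """Cosine similarity between two equal-length embedding vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)


# With the `prediction` dict from the call example above:
# score = cosine_similarity(prediction["text_embedding"], prediction["image_embedding"])
```

Scores close to 1 indicate a close image–text match; values near 0 indicate unrelated inputs.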