---
tags:
- vision
- zero-shot-image-classification
library_name: generic
---
# Fork of [openai/clip-vit-base-patch32](https://huggingface.co/openai/clip-vit-base-patch32) for a `zero-shot-image-classification` Inference Endpoint
This repository implements a `custom` task for `zero-shot-image-classification` for 🤗 Inference Endpoints. The code for the customized pipeline is in [pipeline.py](https://huggingface.co/philschmid/clip-zero-shot-image-classification/blob/main/pipeline.py).
To deploy this model as an Inference Endpoint, you have to select `Custom` as the task so that the `pipeline.py` file is used. -> _double-check that it is selected_
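For reference, a zero-shot CLIP pipeline in the generic `pipeline.py` format looks roughly like the sketch below. The class name, interface, and the way the image bytes are decoded are illustrative assumptions; the authoritative code is the linked `pipeline.py`.
```python
# Minimal, illustrative sketch of a custom zero-shot-image-classification pipeline.
# See pipeline.py in this repository for the real implementation.
import base64
from io import BytesIO
from typing import Any, Dict, List

import torch
from PIL import Image
from transformers import CLIPModel, CLIPProcessor


class PreTrainedPipeline:
    def __init__(self, path: str = ""):
        # load the CLIP checkpoint shipped with this repository
        self.model = CLIPModel.from_pretrained(path)
        self.processor = CLIPProcessor.from_pretrained(path)

    def __call__(self, inputs: Dict[str, Any]) -> List[Dict[str, Any]]:
        # decode the base64-encoded image and read the candidate labels
        # (the "candiates" key matches the request payload documented below)
        image = Image.open(BytesIO(base64.b64decode(inputs["image"])))
        candidates = inputs["candiates"]

        # score every candidate label against the image with CLIP
        processed = self.processor(
            text=candidates, images=image, return_tensors="pt", padding=True
        )
        with torch.no_grad():
            logits = self.model(**processed).logits_per_image
        scores = logits.softmax(dim=1)[0].tolist()

        return [{"label": label, "score": score} for label, score in zip(candidates, scores)]
```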
### Expected request payload
```json
{
  "image": "/9j/4AAQSkZJRgABAQEBLAEsAAD/2wBDAAMCAgICAgMC....",  // base64-encoded image bytes
  "candiates": ["sea", "palace", "car", "ship"]
}
```
Below is an example of how to run a request using Python and `requests`.
## Run Request
1. Prepare an image.
```bash
wget https://huggingface.co/datasets/mishig/sample_images/resolve/main/palace.jpg
```
2. Run the request, as shown in the sketch below.
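A minimal client sketch, assuming `requests` is installed and the image from step 1 is saved as `palace.jpg`. `ENDPOINT_URL` and `HF_TOKEN` are placeholders you replace with your own endpoint URL and Hugging Face access token.
```python
import base64

import requests

ENDPOINT_URL = "https://<your-endpoint>.endpoints.huggingface.cloud"  # placeholder: your endpoint URL
HF_TOKEN = "hf_xxx"  # placeholder: your Hugging Face access token

# read the image from step 1 and encode it as base64
with open("palace.jpg", "rb") as f:
    image_b64 = base64.b64encode(f.read()).decode("utf-8")

# payload follows the schema shown above ("candiates" matches the pipeline's expected key)
payload = {"image": image_b64, "candiates": ["sea", "palace", "car", "ship"]}

response = requests.post(
    ENDPOINT_URL,
    headers={"Authorization": f"Bearer {HF_TOKEN}", "Content-Type": "application/json"},
    json=payload,
)
print(response.json())
```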