---
license: mit
datasets:
- tomg-group-umd/ContraStyles
library_name: transformers
---
An unofficial implementation of [CSD](https://github.com/learn2phoenix/CSD).

Inspired by [vvmatorin/CSD](https://huggingface.co/vvmatorin/CSD); the difference in this implementation is that the CLIP backbone is an instance of `CLIPVisionModel` from `transformers` rather than the OpenAI CLIP class.
Inference:
```python
from PIL import Image
from transformers import AutoProcessor, AutoModel

model = AutoModel.from_pretrained("NagaSaiAbhinay/CSD", trust_remote_code=True)
processor = AutoProcessor.from_pretrained("NagaSaiAbhinay/CSD")

# Preprocess the image and run a forward pass
img = Image.open('test_image.png')
pixel_values = processor(images=img, return_tensors="pt").pixel_values
outputs = model(pixel_values)
```
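
Since CSD produces style descriptors, a typical use is comparing two images by the cosine similarity of their style embeddings. Below is a minimal sketch, assuming the forward pass exposes the style projection as the last element of its output (mirroring the reference CSD model, which returns backbone features alongside content and style projections); check this repository's remote code for the exact output structure. The `embed` helper and the image filenames are hypothetical.

```python
import torch
import torch.nn.functional as F
from PIL import Image
from transformers import AutoProcessor, AutoModel

model = AutoModel.from_pretrained("NagaSaiAbhinay/CSD", trust_remote_code=True)
processor = AutoProcessor.from_pretrained("NagaSaiAbhinay/CSD")

def embed(path: str) -> torch.Tensor:
    # Hypothetical helper: preprocess one image and run the model.
    pixel_values = processor(images=Image.open(path), return_tensors="pt").pixel_values
    with torch.no_grad():
        outputs = model(pixel_values)
    # Assumption: the style projection is the last element of the output,
    # as in the reference implementation's (features, content, style) return.
    return outputs[-1]

a, b = embed("style_a.png"), embed("style_b.png")
print(f"style similarity: {F.cosine_similarity(a, b).item():.3f}")
```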