---
license: mit
datasets:
- tomg-group-umd/ContraStyles
library_name: transformers
---

An unofficial implementation of [CSD](https://github.com/learn2phoenix/CSD), a contrastive style descriptor model for measuring style similarity between images.

This implementation is inspired by [vvmatorin/CSD](https://huggingface.co/vvmatorin/CSD); the difference is that the CLIP backbone is an instance of `CLIPVisionModel` from `transformers` rather than the original OpenAI CLIP class.
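For reference, this is roughly what a `CLIPVisionModel` backbone looks like on its own in plain `transformers`. The checkpoint name below is illustrative only (an assumption, not necessarily this model's backbone weights):

```python
import torch
from PIL import Image
from transformers import AutoProcessor, CLIPVisionModel

# Illustrative CLIP vision checkpoint; the actual backbone weights used by
# this model are an assumption here.
backbone = CLIPVisionModel.from_pretrained("openai/clip-vit-large-patch14")
processor = AutoProcessor.from_pretrained("openai/clip-vit-large-patch14")

img = Image.new("RGB", (224, 224))  # dummy image standing in for real input
inputs = processor(images=img, return_tensors="pt")
with torch.no_grad():
    out = backbone(**inputs)
# Pooled [CLS] features that a style-descriptor head can project from.
print(out.pooler_output.shape)  # torch.Size([1, 1024]) for ViT-L/14
```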
Inference:

```python
from PIL import Image
from transformers import AutoProcessor, AutoModel

# trust_remote_code is required: the CSD model class is defined in this repo.
model = AutoModel.from_pretrained("NagaSaiAbhinay/CSD", trust_remote_code=True)
processor = AutoProcessor.from_pretrained("NagaSaiAbhinay/CSD")

img = Image.open("test_image.png")
pixel_values = processor(images=img, return_tensors="pt").pixel_values
outputs = model(pixel_values)
```
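A typical use of the resulting embeddings is style similarity scoring. A minimal sketch, assuming the forward pass returns a 2D tensor of per-image style embeddings; inspect the remote code to confirm the actual output structure before relying on this:

```python
import torch
import torch.nn.functional as F
from PIL import Image

# Assumption: model(pixel_values) returns a (batch, dim) embedding tensor.
def style_similarity(model, processor, image_a, image_b):
    pixel_values = processor(images=[image_a, image_b], return_tensors="pt").pixel_values
    with torch.no_grad():
        embeddings = model(pixel_values)
    embeddings = F.normalize(embeddings, dim=-1)
    # Cosine similarity between the two style embeddings, in [-1, 1].
    return (embeddings[0] @ embeddings[1]).item()

score = style_similarity(model, processor,
                         Image.open("test_image.png"),
                         Image.open("test_image.png"))
print(score)  # identical images should score ~1.0
```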