---
license: mit
datasets:
- ILSVRC/imagenet-1k
- mlfoundations/datacomp_small
base_model:
- openai/clip-vit-large-patch14
---

This model is initialized from `openai/clip-vit-large-patch14`. The image encoder is fine-tuned with FARE at $\epsilon = 2/255$, and the text encoder is fine-tuned with LEAF at $k = 2$, with $\rho = 5$ and semantic constraints.

To load this model, use:

```python
from transformers import CLIPProcessor, CLIPModel
# Weights come from the fine-tuned repo; the processor is taken from the
# base model's repo.
model_name = "LEAF-CLIP/CLIP-ViT-L-rho5-k2-FARE2"
processor_name = "openai/clip-vit-large-patch14"
model = CLIPModel.from_pretrained(model_name)
processor = CLIPProcessor.from_pretrained(processor_name)
```
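
Once loaded, the checkpoint can be used like any other `CLIPModel`. Below is a minimal zero-shot classification sketch; the image path and label prompts are placeholders for illustration, not part of this model card:

```python
import torch
from PIL import Image
from transformers import CLIPProcessor, CLIPModel

model = CLIPModel.from_pretrained("LEAF-CLIP/CLIP-ViT-L-rho5-k2-FARE2")
processor = CLIPProcessor.from_pretrained("openai/clip-vit-large-patch14")

# Placeholder inputs: swap in your own image and candidate labels.
image = Image.open("example.jpg")
labels = ["a photo of a cat", "a photo of a dog"]

inputs = processor(text=labels, images=image, return_tensors="pt", padding=True)
with torch.no_grad():
    outputs = model(**inputs)

# logits_per_image holds image-text similarity scores; softmax over labels
# turns them into zero-shot class probabilities.
probs = outputs.logits_per_image.softmax(dim=1)
print(dict(zip(labels, probs[0].tolist())))
```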