---
license: mit
datasets:
- ILSVRC/imagenet-1k
- mlfoundations/datacomp_small
base_model:
- laion/CLIP-ViT-g-14-laion2B-s12B-b42K
---
Model initialized from `laion/CLIP-ViT-g-14-laion2B-s12B-b42K`. The text encoder is fine-tuned with LEAF using $k=1$ and $\rho=50$.
To load this model, use:
```python
from transformers import CLIPProcessor, CLIPModel

# The fine-tuned weights come from this repository; the processor is
# taken from the base model, since preprocessing is unchanged.
model_name = "LEAF-CLIP/OpenCLIP-ViT-g-rho50-k1"
processor_name = "laion/CLIP-ViT-g-14-laion2B-s12B-b42K"

model = CLIPModel.from_pretrained(model_name)
processor = CLIPProcessor.from_pretrained(processor_name)
```
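
Once loaded, the model behaves like any other CLIP model in `transformers`. Below is a minimal zero-shot classification sketch; the image path and candidate labels are illustrative placeholders, not part of this model card:

```python
import torch
from PIL import Image

# Illustrative inputs: replace with your own image and label prompts.
image = Image.open("example.jpg")
labels = ["a photo of a cat", "a photo of a dog"]

inputs = processor(text=labels, images=image, return_tensors="pt", padding=True)

with torch.no_grad():
    outputs = model(**inputs)

# Image-text similarity scores, normalized into probabilities over the labels.
probs = outputs.logits_per_image.softmax(dim=1)
print(dict(zip(labels, probs[0].tolist())))
```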