---
license: mit
datasets:
- ILSVRC/imagenet-1k
- mlfoundations/datacomp_small
base_model:
- laion/CLIP-ViT-H-14-laion2B-s32B-b79K
---
Model initialized from `laion/CLIP-ViT-H-14-laion2B-s32B-b79K`. The image encoder is fine-tuned with FARE at $\epsilon = 2/255$.
To load this model, use:
```python
from transformers import CLIPProcessor, CLIPModel

model_name = "LEAF-CLIP/OpenCLIP-ViT-H-FARE2"
processor_name = "laion/CLIP-ViT-H-14-laion2B-s32B-b79K"

model = CLIPModel.from_pretrained(model_name)
processor = CLIPProcessor.from_pretrained(processor_name)
```
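
As a minimal usage sketch, the loaded `model` and `processor` from the snippet above can be used for zero-shot image-text matching; the example image URL and candidate captions below are illustrative assumptions, not part of the model card:

```python
import torch
import requests
from PIL import Image

# Illustrative example image and candidate captions (assumptions for demonstration only).
url = "http://images.cocodataset.org/val2017/000000039769.jpg"
image = Image.open(requests.get(url, stream=True).raw)
texts = ["a photo of a cat", "a photo of a dog"]

# Preprocess text and image, then run the CLIP model.
inputs = processor(text=texts, images=image, return_tensors="pt", padding=True)
with torch.no_grad():
    outputs = model(**inputs)

# Image-text similarity logits, converted to probabilities over the candidate captions.
probs = outputs.logits_per_image.softmax(dim=-1)
print(probs)
```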