Safetensors · clip

Commit 92abbd0 (verified) · megaelius committed · 1 Parent(s): 15e0558

Update README.md

Files changed (1): README.md (+16 −2)

README.md CHANGED
@@ -1,8 +1,22 @@
 ---
 license: mit
 datasets:
+- ILSVRC/imagenet-1k
 - mlfoundations/datacomp_small
 base_model:
-- laion/CLIP-ViT-g-14-laion2B-s12B-b42K
+- laion/CLIP-ViT-g-14-laion2B-s34B-b88K
 ---
-ViT-H OpenCLIP initialized from https://huggingface.co/laion/CLIP-ViT-g-14-laion2B-s12B-b42K. The text encoder is finetuned with the FARE loss using the Charmer attack with $\rho=50$ and $k=1$.
+
+Model initialized from `laion/CLIP-ViT-g-14-laion2B-s34B-b88K`. The text encoder is finetuned with LEAF at $k=1$ with $\rho=50$.
+
+To load this model, use:
+
+```python
+from transformers import CLIPProcessor, CLIPModel
+
+model_name = "LEAF-CLIP/OpenCLIP-ViT-g-rho50-k1"
+processor_name = "laion/CLIP-ViT-g-14-laion2B-s34B-b88K"
+
+model = CLIPModel.from_pretrained(model_name)
+processor = CLIPProcessor.from_pretrained(processor_name)
+```
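The README snippet only loads the model; CLIP scoring then works by cosine similarity between L2-normalized image and text embeddings (obtained from `model.get_image_features` / `model.get_text_features`). A minimal sketch of that scoring step, using random arrays in place of real CLIP outputs — the 1024-dimensional projection size is an assumption, not taken from the model card:

```python
import numpy as np

# Stand-in embeddings; real ones come from model.get_image_features /
# model.get_text_features. The 1024-dim projection size is assumed.
rng = np.random.default_rng(0)
image_embeds = rng.standard_normal((2, 1024))  # 2 images
text_embeds = rng.standard_normal((3, 1024))   # 3 candidate captions

# CLIP compares L2-normalized embeddings via cosine similarity.
image_embeds /= np.linalg.norm(image_embeds, axis=-1, keepdims=True)
text_embeds /= np.linalg.norm(text_embeds, axis=-1, keepdims=True)

logits_per_image = image_embeds @ text_embeds.T  # shape (2, 3)

# Softmax over captions gives a probability per caption for each image.
probs = np.exp(logits_per_image)
probs /= probs.sum(axis=-1, keepdims=True)
```

With the actual model, the same matrix is produced (scaled by the learned logit temperature) as `outputs.logits_per_image` when calling `model(**processor(...))`.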