OpenAI CLIP ViT-L/14 fine-tuned on VLM-1B (zhixiangwei/hqclip-openai-large-ft-vlm1b).
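
The snippet below is a minimal zero-shot classification sketch. It assumes the checkpoint follows the standard Hugging Face `CLIPModel`/`CLIPProcessor` layout for ViT-L/14; if the repository instead ships an OpenCLIP-style checkpoint, it would need to be loaded with `open_clip`. The image path and label prompts are placeholders.

```python
# Minimal zero-shot classification sketch (assumes a standard CLIPModel checkpoint).
import torch
from PIL import Image
from transformers import CLIPModel, CLIPProcessor

repo_id = "zhixiangwei/hqclip-openai-large-ft-vlm1b"
model = CLIPModel.from_pretrained(repo_id).eval()
processor = CLIPProcessor.from_pretrained(repo_id)

image = Image.open("example.jpg")  # placeholder: any local image
labels = ["a photo of a cat", "a photo of a dog"]

inputs = processor(text=labels, images=image, return_tensors="pt", padding=True)
with torch.no_grad():
    outputs = model(**inputs)

# logits_per_image holds image-text similarity scores; softmax turns them into probabilities
probs = outputs.logits_per_image.softmax(dim=-1)
print(dict(zip(labels, probs[0].tolist())))
```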

| Dataset             | Performance |
|---------------------|-------------|
| ImageNet 1k         | 0.76496     |
| ImageNet V2         | 0.7043      |
| ImageNet-A          | 0.704133    |
| ImageNet-O          | 0.364       |
| ImageNet-R          | 0.8834      |
| ImageNet Sketch     | 0.612176    |
| ObjectNet           | 0.681383    |
| IN-shifts           | 0.658232    |
| VTAB                | 0.60819     |
| MSCOCO              | 0.567709    |
| Flickr30k           | 0.8575      |
| WinoGAViL           | 0.564843    |
| Retrieval           | 0.663351    |
| Avg of 38 datasets  | 0.6369      |
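
For retrieval-style use (cf. the MSCOCO and Flickr30k rows above), images and captions can be embedded separately and ranked by cosine similarity. This is a sketch under the same assumption that the checkpoint is `CLIPModel`-compatible; the file paths and captions are placeholders.

```python
# Retrieval-style sketch: embed images and texts separately, rank by cosine similarity.
import torch
from PIL import Image
from transformers import CLIPModel, CLIPProcessor

repo_id = "zhixiangwei/hqclip-openai-large-ft-vlm1b"
model = CLIPModel.from_pretrained(repo_id).eval()
processor = CLIPProcessor.from_pretrained(repo_id)

images = [Image.open(p) for p in ["img0.jpg", "img1.jpg"]]  # placeholder paths
captions = ["a dog playing fetch", "a city skyline at night"]

with torch.no_grad():
    img_emb = model.get_image_features(**processor(images=images, return_tensors="pt"))
    txt_emb = model.get_text_features(**processor(text=captions, return_tensors="pt", padding=True))

# L2-normalise so the dot product equals cosine similarity
img_emb = img_emb / img_emb.norm(dim=-1, keepdim=True)
txt_emb = txt_emb / txt_emb.norm(dim=-1, keepdim=True)
similarity = txt_emb @ img_emb.T  # rows: captions, columns: images
print(similarity)
```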

License: Apache-2.0

Model size: 428M parameters (Safetensors, F32)
