siglip2-base-patch16-224-ko

google/siglip2-base-patch16-224 λͺ¨λΈμ„ Making Monolingual Sentence Embeddings Multilingual using Knowledge Distillation기반으둜 ν•™μŠ΅ν•΄μ„œ ν•œκ΅­μ–΄ 이해λŠ₯λ ₯을 κ°•ν™”ν•œ Siglip2 λͺ¨λΈμž…λ‹ˆλ‹€.

μ‚¬μš©λœ ν•™μŠ΅ 데이터 : aihub english-korean parallel dataset

μ‚¬μš©λœ 평가 데이터 : ms-koko caption english korean dataset

How to use

import requests
import torch
from PIL import Image
from transformers import AutoModel, AutoProcessor

repo = "hyunlord/siglip2-base-patch16-224-ko"
model = AutoModel.from_pretrained(repo)
processor = AutoProcessor.from_pretrained(repo)

url = "http://images.cocodataset.org/val2017/000000039769.jpg"
image = Image.open(requests.get(url, stream=True).raw)

texts = ["고양이 ν•œ 마리",  # one cat
         "고양이 두 마리",  # two cats
         "뢄홍색 μ†ŒνŒŒμ— λ“œλŸ¬λˆ„μš΄ 고양이 μΉœκ΅¬λ“€",  # cat friends lying on a pink sofa
         "리λͺ¨μ»¨κ³Ό 고양이 λ‘λ§ˆλ¦¬",  # a remote and two cats
         "리λͺ¨μ»¨ 두 κ°œμ™€ 고양이 λ‘λ§ˆλ¦¬",  # two remotes and two cats
         "뢄홍색 μ†ŒνŒŒ μœ„μ— 리λͺ¨μ»¨ 두 κ°œμ™€ λ“œλŸ¬λˆ„μš΄ 고양이 λ‘λ§ˆλ¦¬"]  # two remotes and two cats lying on a pink sofa
inputs = processor(text=texts,
                   images=image,
                   padding="max_length",
                   max_length=64,
                   return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs)
logits_per_image = outputs.logits_per_image
probs = torch.sigmoid(logits_per_image)  # SigLIP scores are independent sigmoids, not a softmax
print(probs)
# tensor([[0.0038, 0.0429, 0.8294, 0.9787, 0.9816, 0.9990]])

MS-COCO Caption Evaluation

| Model | Parameter Size | (En) I-T Recall@1 | (En) T-I Recall@1 | (Ko) I-T Recall@1 | (Ko) T-I Recall@1 |
|---|---|---|---|---|---|
| google/siglip2-base-patch16-224 | 375,187,970 | 65.20% | 48.29% | 45.68% | 25.44% |
| google/siglip2-so400m-patch14-384 | 1,136,008,498 | 67.74% | 52.04% | 52.36% | 31.59% |
| hyunlord/siglip2-base-patch16-224-ko | 375,187,970 | 65.54% | 47.99% | 57.24% | 36.55% |
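The I-T and T-I Recall@1 metrics above can be computed from an image-text similarity matrix: for each image, check whether its top-1 retrieved text is the matching caption, and vice versa. A minimal sketch with a toy 3x3 matrix (not the evaluation script used for this table):

```python
import torch

# Toy similarity matrix: rows = images, columns = texts; entry [i, j] is the
# similarity of image i and text j. The matching pair is (i, i).
sim = torch.tensor([[0.90, 0.10, 0.20],
                    [0.30, 0.80, 0.60],
                    [0.20, 0.85, 0.70]])

# I-T Recall@1: fraction of images whose highest-scoring text is the correct one.
it_r1 = (sim.argmax(dim=1) == torch.arange(sim.size(0))).float().mean().item()
# T-I Recall@1: fraction of texts whose highest-scoring image is the correct one.
ti_r1 = (sim.argmax(dim=0) == torch.arange(sim.size(1))).float().mean().item()
# Here image 2 retrieves text 1 and text 1 retrieves image 2 (both wrong),
# so it_r1 = ti_r1 = 2/3.
```

In the MS-COCO evaluation each image has several reference captions, so the real script additionally counts a hit when any matching caption ranks first.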