Bingsu committed
Commit f7631f1 · 1 Parent(s): 380ee39

Create README.md

Files changed (1):
  1. README.md +81 -0
README.md ADDED

---
widget:
- src: http://images.cocodataset.org/val2017/000000039769.jpg
  candidate_labels: 고양이, 강아지, 리모컨
  example_title: cat and remote
language: ko
license: mit
---

# clip-vit-base-patch32-ko

A Korean CLIP model trained with the approach from [Making Monolingual Sentence Embeddings Multilingual using Knowledge Distillation](https://arxiv.org/abs/2004.09813).

Training code: <https://github.com/Bing-su/KoCLIP_training_code>

Data used: all Korean-English parallel data available on AIHUB
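
At a high level, the paper distills a multilingual student text encoder from the original (English) CLIP text encoder using parallel sentence pairs. The snippet below is only a rough sketch of that objective, not the actual training script (see the training code linked above); the `teacher_en_emb` / `student_*` names are illustrative placeholders.

```python
# Minimal sketch of the knowledge-distillation objective (assumed names, not the
# actual training code): the frozen teacher is the original CLIP text encoder,
# the student is the Korean text encoder being trained.
import torch
import torch.nn.functional as F

def distillation_loss(
    teacher_en_emb: torch.Tensor,  # teacher embedding of the English sentence
    student_en_emb: torch.Tensor,  # student embedding of the same English sentence
    student_ko_emb: torch.Tensor,  # student embedding of the Korean translation
) -> torch.Tensor:
    # Pull both student embeddings toward the frozen teacher's English embedding,
    # so a Korean sentence lands near its English counterpart in embedding space.
    return F.mse_loss(student_en_emb, teacher_en_emb) + F.mse_loss(
        student_ko_emb, teacher_en_emb
    )
```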

## How to Use

#### 1. With `AutoModel` and `AutoProcessor`

```python
import requests
import torch
from PIL import Image
from transformers import AutoModel, AutoProcessor

repo = "Bingsu/clip-vit-base-patch32-ko"
model = AutoModel.from_pretrained(repo)
processor = AutoProcessor.from_pretrained(repo)

url = "http://images.cocodataset.org/val2017/000000039769.jpg"
image = Image.open(requests.get(url, stream=True).raw)
inputs = processor(
    text=["고양이 두 마리", "개 두 마리"], images=image, return_tensors="pt", padding=True
)
with torch.inference_mode():
    outputs = model(**inputs)
    logits_per_image = outputs.logits_per_image  # image-text similarity scores
    probs = logits_per_image.softmax(dim=1)      # probabilities over the candidate texts
```

```python
>>> probs
tensor([[0.9926, 0.0074]])
```
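
Not part of the original example, but for retrieval-style use the image and text embeddings can also be computed separately; a minimal sketch reusing `model`, `processor`, and `image` from the example above:

```python
# Sketch only: reuses `model`, `processor`, and `image` from the example above.
text_inputs = processor(text=["고양이 두 마리", "개 두 마리"], return_tensors="pt", padding=True)
image_inputs = processor(images=image, return_tensors="pt")

with torch.inference_mode():
    text_emb = model.get_text_features(**text_inputs)     # [num_texts, projection_dim]
    image_emb = model.get_image_features(**image_inputs)  # [num_images, projection_dim]

# Cosine similarity between the image and each candidate text.
text_emb = text_emb / text_emb.norm(dim=-1, keepdim=True)
image_emb = image_emb / image_emb.norm(dim=-1, keepdim=True)
similarity = image_emb @ text_emb.T
```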

#### 2. With the `zero-shot-image-classification` pipeline

```python
from transformers import pipeline

repo = "Bingsu/clip-vit-base-patch32-ko"
pipe = pipeline("zero-shot-image-classification", model=repo)

url = "http://images.cocodataset.org/val2017/000000039769.jpg"
# hypothesis_template="{}" uses the Korean labels as-is instead of the pipeline's
# default English template ("This is a photo of {}.").
result = pipe(
    images=url,
    candidate_labels=["고양이 한 마리", "고양이 두 마리", "분홍색 소파에 드러누운 고양이 친구들"],
    hypothesis_template="{}",
)
```

```python
>>> result
[{'score': 0.9456236958503723, 'label': '분홍색 소파에 드러누운 고양이 친구들'},
 {'score': 0.05315302312374115, 'label': '고양이 두 마리'},
 {'score': 0.0012233294546604156, 'label': '고양이 한 마리'}]
```

## Tokenizer

The tokenizer was trained from the original CLIP tokenizer with `.train_new_from_iterator`, on a corpus mixing Korean and English data at a 7:3 ratio.
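
As a rough illustration of that process (the corpus iterator and vocabulary size below are placeholders, not the exact values used for this model):

```python
# Illustrative sketch only: the real corpus and vocab size differ.
from transformers import AutoTokenizer

base_tokenizer = AutoTokenizer.from_pretrained("openai/clip-vit-base-patch32")

def corpus_iterator():
    # Yield batches of raw text, mixed roughly 7:3 Korean to English.
    yield ["고양이 두 마리가 소파 위에 앉아 있다.", "Two cats are sitting on a couch."]

new_tokenizer = base_tokenizer.train_new_from_iterator(corpus_iterator(), vocab_size=49408)
```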

https://github.com/huggingface/transformers/blob/bc21aaca789f1a366c05e8b5e111632944886393/src/transformers/models/clip/modeling_clip.py#L661-L666

```python
# text_embeds.shape = [batch_size, sequence_length, transformer.width]
# take features from the eot embedding (eot_token is the highest number in each sequence)
# casting to torch.int for onnx compatibility: argmax doesn't support int64 inputs with opset 14
pooled_output = last_hidden_state[
    torch.arange(last_hidden_state.shape[0]), input_ids.to(torch.int).argmax(dim=-1)
]
```

Because the CLIP model takes the token with the largest id when computing `pooled_output`, the eos token must be the last token in the vocabulary (i.e., the one with the highest id).
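
As a quick illustration (not from the original card): since the eos token has the highest id, `argmax` over the input ids lands on its position, which is exactly what the pooling code above relies on.

```python
# Sanity-check sketch: the position of the max token id should be the eos token.
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("Bingsu/clip-vit-base-patch32-ko")
ids = tokenizer("고양이 두 마리", return_tensors="pt").input_ids[0]
assert ids[ids.argmax()].item() == tokenizer.eos_token_id
```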