---
license: apache-2.0
---

**INFERENTIA ONLY**

Exported with:

```python
from optimum.neuron import NeuronModelForSentenceTransformers

# [Compile]
model_id = "sentence-transformers/clip-ViT-B-32"

# Static input shapes used to compile the model
input_shapes = {
    "num_channels": 3,
    "height": 224,
    "width": 224,
    "text_batch_size": 3,
    "image_batch_size": 1,
    "sequence_length": 64,
}

neuron_model = NeuronModelForSentenceTransformers.from_pretrained(
    model_id, subfolder="0_CLIPModel", export=True, dynamic_batch_size=False, **input_shapes
)

# Save locally
save_directory = "clip_emb_neuron/"
neuron_model.save_pretrained(save_directory)

# Or upload to the HuggingFace Hub
neuron_model.push_to_hub(
    save_directory, repository_id="optimum/clip_vit_emb_neuronx"  # Replace with your HF Hub repo id
)
```
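
Because the model was compiled with static shapes and `dynamic_batch_size=False`, each forward pass must match the traced shapes: 3 texts, 1 image, and sequences padded to length 64. Below is a minimal inference sketch, assuming the compiled repo also hosts the processor files and that the outputs expose `text_embeds`/`image_embeds` as in the optimum-neuron sentence-transformers examples; the image path and captions are placeholders:

```python
from PIL import Image
from sentence_transformers import util
from transformers import AutoProcessor

from optimum.neuron import NeuronModelForSentenceTransformers

# [Inference] Load the compiled model and its processor
# ("optimum/clip_vit_emb_neuronx" is the repo id from the export step above).
model = NeuronModelForSentenceTransformers.from_pretrained("optimum/clip_vit_emb_neuronx")
processor = AutoProcessor.from_pretrained("optimum/clip_vit_emb_neuronx")

# Compiled with dynamic_batch_size=False, so inputs must match the traced
# shapes: 3 texts, 1 image, sequences padded to the compiled length of 64.
inputs = processor(
    text=["Two dogs in the snow", "A cat on a table", "London at night"],
    images=Image.open("two_dogs_in_snow.jpg"),  # placeholder image path
    return_tensors="pt",
    padding="max_length",
    max_length=64,
    truncation=True,
)

outputs = model(**inputs)
# Cosine similarity between the image embedding and each text embedding
cos_scores = util.cos_sim(outputs.image_embeds, outputs.text_embeds)
print(cos_scores)
```

`cos_scores` has shape `(1, 3)`: one row scoring the single image against each of the three captions.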