alikia2x committed
Commit 7b6db41 · verified · 1 Parent(s): 6d105d4

Upload 9 files

.gitattributes CHANGED
@@ -34,3 +34,4 @@ saved_model/**/* filter=lfs diff=lfs merge=lfs -text
  *.zst filter=lfs diff=lfs merge=lfs -text
  *tfevents* filter=lfs diff=lfs merge=lfs -text
  0_StaticEmbedding/tokenizer.json filter=lfs diff=lfs merge=lfs -text
+ tokenizer.json filter=lfs diff=lfs merge=lfs -text
README.md CHANGED
@@ -1,142 +1,172 @@
  ---
  base_model: jinaai/jina-embeddings-v3
- library_name: sentence-transformers
- pipeline_tag: sentence-similarity
  tags:
  - sentence-transformers
- - sentence-similarity
- - feature-extraction
  ---

- # SentenceTransformer based on jinaai/jina-embeddings-v3
-
- This is a [sentence-transformers](https://www.SBERT.net) model finetuned from [jinaai/jina-embeddings-v3](https://huggingface.co/jinaai/jina-embeddings-v3). It maps sentences & paragraphs to a 1024-dimensional dense vector space and can be used for semantic textual similarity, semantic search, paraphrase mining, text classification, clustering, and more.
-
- ## Model Details
-
- ### Model Description
- - **Model Type:** Sentence Transformer
- - **Base model:** [jinaai/jina-embeddings-v3](https://huggingface.co/jinaai/jina-embeddings-v3) <!-- at revision 4be32c2f5d65b95e4bcce473545b7883ec8d2edd -->
- - **Maximum Sequence Length:** inf tokens
- - **Output Dimensionality:** 1024 tokens
- - **Similarity Function:** Cosine Similarity
- <!-- - **Training Dataset:** Unknown -->
- <!-- - **Language:** Unknown -->
- <!-- - **License:** Unknown -->
-
- ### Model Sources
-
- - **Documentation:** [Sentence Transformers Documentation](https://sbert.net)
- - **Repository:** [Sentence Transformers on GitHub](https://github.com/UKPLab/sentence-transformers)
- - **Hugging Face:** [Sentence Transformers on Hugging Face](https://huggingface.co/models?library=sentence-transformers)
-
- ### Full Model Architecture

  ```
- SentenceTransformer(
-   (0): StaticEmbedding(
-     (embedding): EmbeddingBag(250002, 1024, mode='mean')
-   )
- )
  ```

  ## Usage

- ### Direct Usage (Sentence Transformers)
-
- First install the Sentence Transformers library:

- ```bash
- pip install -U sentence-transformers
  ```

- Then you can load this model and run inference.
  ```python
- from sentence_transformers import SentenceTransformer
-
- # Download from the 🤗 Hub
- model = SentenceTransformer("Thaweewat/jina-embedding-v3-m2v-1024")
- # Run inference
- sentences = [
-     'The weather is lovely today.',
-     "It's so sunny outside!",
-     'He drove to the stadium.',
- ]
- embeddings = model.encode(sentences)
- print(embeddings.shape)
- # [3, 1024]
-
- # Get the similarity scores for the embeddings
- similarities = model.similarity(embeddings, embeddings)
- print(similarities.shape)
- # [3, 3]
- ```
-
- <!--
- ### Direct Usage (Transformers)
-
- <details><summary>Click to see the direct usage in Transformers</summary>
-
- </details>
- -->
-
- <!--
- ### Downstream Usage (Sentence Transformers)

- You can finetune this model on your own dataset.

- <details><summary>Click to expand</summary>

- </details>
- -->
-
- <!--
- ### Out-of-Scope Use

- *List how the model may foreseeably be misused and address what users ought not to do with the model.*
- -->

- <!--
- ## Bias, Risks and Limitations

- *What are the known or foreseeable issues stemming from this model? You could also flag here known failure cases or weaknesses of the model.*
- -->

- <!--
- ### Recommendations

- *What are recommendations with respect to the foreseeable issues? For example, filtering explicit content.*
- -->

- ## Training Details

- ### Framework Versions
- - Python: 3.10.12
- - Sentence Transformers: 3.2.0
- - Transformers: 4.44.2
- - PyTorch: 2.4.1+cu121
- - Accelerate: 0.34.2
- - Datasets:
- - Tokenizers: 0.19.1

  ## Citation

- ### BibTeX
-
- <!--
- ## Glossary
-
- *Clearly define terms in order to be accessible across audiences.*
- -->
-
- <!--
- ## Model Card Authors
-
- *Lists the people who create the model card, providing recognition and accountability for the detailed work that goes into its construction.*
- -->
-
- <!--
- ## Model Card Contact
-
- *Provides a way for people who have updates to the Model Card, suggestions, or questions, to contact the Model Card authors.*
- -->

  ---
  base_model: jinaai/jina-embeddings-v3
+ language:
+ - multilingual
+ - af
+ - am
+ - ar
+ - as
+ - az
+ - be
+ - bg
+ - bn
+ - br
+ - bs
+ - ca
+ - cs
+ - cy
+ - da
+ - de
+ - el
+ - en
+ - eo
+ - es
+ - et
+ - eu
+ - fa
+ - fi
+ - fr
+ - fy
+ - ga
+ - gd
+ - gl
+ - gu
+ - ha
+ - he
+ - hi
+ - hr
+ - hu
+ - hy
+ - id
+ - is
+ - it
+ - ja
+ - jv
+ - ka
+ - kk
+ - km
+ - kn
+ - ko
+ - ku
+ - ky
+ - la
+ - lo
+ - lt
+ - lv
+ - mg
+ - mk
+ - ml
+ - mn
+ - mr
+ - ms
+ - my
+ - ne
+ - nl
+ - 'no'
+ - om
+ - or
+ - pa
+ - pl
+ - ps
+ - pt
+ - ro
+ - ru
+ - sa
+ - sd
+ - si
+ - sk
+ - sl
+ - so
+ - sq
+ - sr
+ - su
+ - sv
+ - sw
+ - ta
+ - te
+ - th
+ - tl
+ - tr
+ - ug
+ - uk
+ - ur
+ - uz
+ - vi
+ - xh
+ - yi
+ - zh
+ library_name: model2vec
+ license: mit
+ model_name: onnx
  tags:
+ - embeddings
+ - static-embeddings
  - sentence-transformers
  ---

+ # onnx Model Card

+ This [Model2Vec](https://github.com/MinishLab/model2vec) model is a distilled version of the [jinaai/jina-embeddings-v3](https://huggingface.co/jinaai/jina-embeddings-v3) Sentence Transformer. It uses static embeddings, allowing text embeddings to be computed orders of magnitude faster on both GPU and CPU. It is designed for applications where computational resources are limited or where real-time performance is critical.

+ ## Installation

+ Install model2vec using pip:
  ```
+ pip install model2vec
  ```

  ## Usage
+ Load this model using the `from_pretrained` method:
+ ```python
+ from model2vec import StaticModel

+ # Load a pretrained Model2Vec model
+ model = StaticModel.from_pretrained("onnx")

+ # Compute text embeddings
+ embeddings = model.encode(["Example sentence"])
  ```

+ Alternatively, you can distill your own model using the `distill` method:
  ```python
+ from model2vec.distill import distill

+ # Choose a Sentence Transformer model
+ model_name = "BAAI/bge-base-en-v1.5"

+ # Distill the model
+ m2v_model = distill(model_name=model_name, pca_dims=256)

+ # Save the model
+ m2v_model.save_pretrained("m2v_model")
+ ```

+ ## How it works

+ Model2vec creates a small, fast, and powerful model that outperforms other static embedding models by a large margin on all tasks we could find, while being much faster to create than traditional static embedding models such as GloVe. Best of all, you don't need any data to distill a model using Model2Vec.

+ It works by passing a vocabulary through a sentence transformer model, then reducing the dimensionality of the resulting embeddings using PCA, and finally weighting the embeddings using zipf weighting. During inference, we simply take the mean of all token embeddings occurring in a sentence.

+ ## Additional Resources

+ - [All Model2Vec models on the hub](https://huggingface.co/models?library=model2vec)
+ - [Model2Vec Repo](https://github.com/MinishLab/model2vec)
+ - [Model2Vec Results](https://github.com/MinishLab/model2vec?tab=readme-ov-file#results)
+ - [Model2Vec Tutorials](https://github.com/MinishLab/model2vec/tree/main/tutorials)

+ ## Library Authors

+ Model2Vec was developed by the [Minish Lab](https://github.com/MinishLab) team consisting of [Stephan Tulkens](https://github.com/stephantul) and [Thomas van Dongen](https://github.com/Pringled).

  ## Citation

+ Please cite the [Model2Vec repository](https://github.com/MinishLab/model2vec) if you use this model in your work.
+ ```
+ @software{minishlab2024model2vec,
+   authors = {Stephan Tulkens, Thomas van Dongen},
+   title = {Model2Vec: Turn any Sentence Transformer into a Small Fast Model},
+   year = {2024},
+   url = {https://github.com/MinishLab/model2vec},
+ }
+ ```
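The new README's "How it works" paragraph can be made concrete with a small sketch. This is not part of the commit: the vocabulary and vector size below are made up, and the real model uses the jina-embeddings-v3 tokenizer with a (250002, 1024) embedding table, but the inference idea is the same: look up a static vector per token, take the mean, and L2-normalize (matching `"normalize": true` in the new config.json).

```python
# Toy illustration of static-embedding inference (simplified, not the model2vec code):
# per-token lookup -> mean over tokens -> L2 normalization.
import numpy as np

rng = np.random.default_rng(0)
vocab = {"the": 0, "weather": 1, "is": 2, "lovely": 3, "today": 4}  # made-up vocabulary
table = rng.standard_normal((len(vocab), 8))  # real model: shape (250002, 1024)

def encode(sentence: str) -> np.ndarray:
    ids = [vocab[t] for t in sentence.lower().split() if t in vocab]
    vec = table[ids].mean(axis=0)     # mean of the token embeddings in the sentence
    return vec / np.linalg.norm(vec)  # normalize, as the new config requests

print(encode("The weather is lovely today").shape)  # (8,)
```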
config.json CHANGED
@@ -1 +1,13 @@
- {"tokenizer_name": "jinaai/jina-embeddings-v3", "apply_pca": 256, "apply_zipf": true, "hidden_dim": 256, "seq_length": 1000000, "normalize": false}
+ {
+   "model_type": "model2vec",
+   "architectures": [
+     "StaticModel"
+   ],
+   "tokenizer_name": "jinaai/jina-embeddings-v3",
+   "apply_pca": 1024,
+   "apply_zipf": null,
+   "sif_coefficient": 0.0001,
+   "hidden_dim": 1024,
+   "seq_length": 1000000,
+   "normalize": true
+ }
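A quick way to see what changed in practice is to read the rewritten config from a local checkout (a minimal sketch, assuming `config.json` from this commit is in the working directory):

```python
import json

# Assumes a local checkout of this repository.
with open("config.json") as f:
    cfg = json.load(f)

print(cfg["model_type"], cfg["hidden_dim"])       # model2vec 1024 (hidden_dim was 256 before)
print(cfg["apply_zipf"], cfg["sif_coefficient"])  # None 0.0001
print(cfg["normalize"])                           # True: output vectors are L2-normalized
```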
model.safetensors CHANGED
@@ -1,3 +1,3 @@
  version https://git-lfs.github.com/spec/v1
- oid sha256:4ddaa9abbb557e9a66eac96b61f9d92455d737d4deac7ee663cf0ded24978e6d
+ oid sha256:96a4ad4fbb7fc15c91e9707f966d2563a10fdd12ca1f537acade2d75a707c519
  size 1024008288
modules.json CHANGED
@@ -1,8 +1,14 @@
  [
-   {
-     "idx": 0,
-     "name": "0",
-     "path": "0_StaticEmbedding",
-     "type": "sentence_transformers.models.StaticEmbedding"
-   }
+   {
+     "idx": 0,
+     "name": "0",
+     "path": ".",
+     "type": "sentence_transformers.models.StaticEmbedding"
+   },
+   {
+     "idx": 1,
+     "name": "1",
+     "path": "1_Normalize",
+     "type": "sentence_transformers.models.Normalize"
+   }
  ]
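modules.json now declares a two-module sentence-transformers pipeline: the StaticEmbedding module at the repository root followed by a Normalize module. A sketch of loading it, assuming a sentence-transformers version that supports StaticEmbedding (the old card listed 3.2.0), with "path/to/this/repo" standing in for a local clone or the Hub id of this model:

```python
from sentence_transformers import SentenceTransformer
import numpy as np

model = SentenceTransformer("path/to/this/repo")  # placeholder path or Hub id

emb = model.encode(["It's so sunny outside!", "The weather is lovely today."])
print(emb.shape)                    # (2, 1024) from the StaticEmbedding module
print(np.linalg.norm(emb, axis=1))  # ~1.0 per row, thanks to the Normalize module
```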
onnx/model.onnx ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:32ca8c3d06d5d5487c075c795be8cfda729be26cc0ba62578ad4ec05ff0e811b
+ size 1024012377
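The commit also adds an ONNX export. Its input/output signature isn't documented in this diff, so a cautious first step is simply to inspect the graph with onnxruntime (a sketch; assumes the LFS file has been pulled locally):

```python
import onnxruntime as ort

# List the exported graph's inputs and outputs without assuming their names.
sess = ort.InferenceSession("onnx/model.onnx")
for i in sess.get_inputs():
    print("input :", i.name, i.shape, i.type)
for o in sess.get_outputs():
    print("output:", o.name, o.shape, o.type)
```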
special_tokens_map.json ADDED
@@ -0,0 +1,9 @@
+ {
+   "bos_token": "<s>",
+   "cls_token": "<s>",
+   "eos_token": "</s>",
+   "mask_token": "<mask>",
+   "pad_token": "<pad>",
+   "sep_token": "</s>",
+   "unk_token": "<unk>"
+ }
tokenizer_config.json CHANGED
@@ -1,54 +1,55 @@
- {
-   "added_tokens_decoder": {
-     "0": {
-       "content": "<s>",
-       "lstrip": false,
-       "normalized": false,
-       "rstrip": false,
-       "single_word": false,
-       "special": true
-     },
-     "1": {
-       "content": "<pad>",
-       "lstrip": false,
-       "normalized": false,
-       "rstrip": false,
-       "single_word": false,
-       "special": true
-     },
-     "2": {
-       "content": "</s>",
-       "lstrip": false,
-       "normalized": false,
-       "rstrip": false,
-       "single_word": false,
-       "special": true
-     },
-     "3": {
-       "content": "<unk>",
-       "lstrip": false,
-       "normalized": false,
-       "rstrip": false,
-       "single_word": false,
-       "special": true
-     },
-     "250001": {
-       "content": "<mask>",
-       "lstrip": true,
-       "normalized": false,
-       "rstrip": false,
-       "single_word": false,
-       "special": true
-     }
-   },
-   "bos_token": "<s>",
-   "clean_up_tokenization_spaces": true,
-   "cls_token": "<s>",
-   "eos_token": "</s>",
-   "mask_token": "<mask>",
-   "model_max_length": 8194,
-   "pad_token": "<pad>",
-   "sep_token": "</s>",
-   "tokenizer_class": "XLMRobertaTokenizer",
-   "unk_token": "<unk>"
- }
+ {
+   "added_tokens_decoder": {
+     "0": {
+       "content": "<s>",
+       "lstrip": false,
+       "normalized": false,
+       "rstrip": false,
+       "single_word": false,
+       "special": true
+     },
+     "1": {
+       "content": "<pad>",
+       "lstrip": false,
+       "normalized": false,
+       "rstrip": false,
+       "single_word": false,
+       "special": true
+     },
+     "2": {
+       "content": "</s>",
+       "lstrip": false,
+       "normalized": false,
+       "rstrip": false,
+       "single_word": false,
+       "special": true
+     },
+     "250001": {
+       "content": "<mask>",
+       "lstrip": true,
+       "normalized": false,
+       "rstrip": false,
+       "single_word": false,
+       "special": true
+     },
+     "3": {
+       "content": "<unk>",
+       "lstrip": false,
+       "normalized": false,
+       "rstrip": false,
+       "single_word": false,
+       "special": true
+     }
+   },
+   "bos_token": "<s>",
+   "clean_up_tokenization_spaces": false,
+   "cls_token": "<s>",
+   "eos_token": "</s>",
+   "extra_special_tokens": {},
+   "mask_token": "<mask>",
+   "model_max_length": 1000000000000000019884624838656,
+   "pad_token": "<pad>",
+   "sep_token": "</s>",
+   "tokenizer_class": "XLMRobertaTokenizerFast",
+   "unk_token": "<unk>"
+ }
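The rewritten tokenizer_config.json switches the declared class to XLMRobertaTokenizerFast and replaces the old 8194 `model_max_length` with an effectively unlimited sentinel. A small sketch of loading it with transformers, assuming the tokenizer files from this commit are in the current directory:

```python
from transformers import AutoTokenizer

tok = AutoTokenizer.from_pretrained(".")  # "." is a placeholder for a local clone
print(type(tok).__name__)                 # XLMRobertaTokenizerFast
print(tok.tokenize("It's so sunny outside!"))
```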
vocab.txt ADDED
The diff for this file is too large to render. See raw diff