Fizzarolli committed on
Commit 0d75026 · verified · 1 Parent(s): 4ca5f19

slightly betterest model

Files changed (3)
  1. README.md +4 -4
  2. model.safetensors +2 -2
  3. tokenizer.json +0 -0
README.md CHANGED
@@ -1,14 +1,14 @@
 ---
 library_name: model2vec
 license: mit
-model_name: granite_125m_dim512_tl_distill-ckpt5
+model_name: granite_125m_dim512_tl_distill-ckpt6
 tags:
 - embeddings
 - static-embeddings
 - sentence-transformers
 ---

-# granite_125m_dim512_tl_distill-ckpt5 Model Card
+# granite_125m_dim512_tl_distill-ckpt6 Model Card

 This [Model2Vec](https://github.com/MinishLab/model2vec) model is a distilled version of a Sentence Transformer. It uses static embeddings, allowing text embeddings to be computed orders of magnitude faster on both GPU and CPU. It is designed for applications where computational resources are limited or where real-time performance is critical. Model2Vec models are the smallest, fastest, and most performant static embedders available. The distilled models are up to 50 times smaller and 500 times faster than traditional Sentence Transformers.

@@ -31,7 +31,7 @@ Load this model using the `from_pretrained` method:
 from model2vec import StaticModel

 # Load a pretrained Model2Vec model
-model = StaticModel.from_pretrained("granite_125m_dim512_tl_distill-ckpt5")
+model = StaticModel.from_pretrained("granite_125m_dim512_tl_distill-ckpt6")

 # Compute text embeddings
 embeddings = model.encode(["Example sentence"])
@@ -45,7 +45,7 @@ You can also use the [Sentence Transformers library](https://github.com/UKPLab/s
 from sentence_transformers import SentenceTransformer

 # Load a pretrained Sentence Transformer model
-model = SentenceTransformer("granite_125m_dim512_tl_distill-ckpt5")
+model = SentenceTransformer("granite_125m_dim512_tl_distill-ckpt6")

 # Compute text embeddings
 embeddings = model.encode(["Example sentence"])
model.safetensors CHANGED
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:1a17c4d4e572b9ac78accf30d474b70bd6561636f5f044c841159349286dfc90
-size 180093016
+oid sha256:de075e61ded0a0afb45042446b99bb0e6061c5b4795204487b4fa8e63f6843d7
+size 179963992
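What changed in model.safetensors is the Git LFS pointer (the `version`/`oid`/`size` lines), not the raw weights, which are stored out-of-band. As an illustration of what those fields mean, here is a hedged sketch that checks a downloaded file against such a pointer; the `verify_lfs_pointer` helper is hypothetical, not part of this repo or of git-lfs itself.

```python
import hashlib
import os


def verify_lfs_pointer(pointer_text: str, payload_path: str) -> bool:
    """Check a local file against the oid and size recorded in a Git LFS pointer.

    A pointer file holds key/value lines such as:
        version https://git-lfs.github.com/spec/v1
        oid sha256:<64 hex chars>
        size <bytes>
    """
    fields = dict(line.split(" ", 1) for line in pointer_text.strip().splitlines())
    expected_oid = fields["oid"].removeprefix("sha256:")
    expected_size = int(fields["size"])

    # Cheap size check first, then a streaming SHA-256 of the payload.
    if os.path.getsize(payload_path) != expected_size:
        return False
    digest = hashlib.sha256()
    with open(payload_path, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            digest.update(chunk)
    return digest.hexdigest() == expected_oid
```

The size drop recorded in this commit (180093016 → 179963992 bytes) is consistent with a retrained checkpoint whose serialized tensors differ slightly.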
tokenizer.json CHANGED
The diff for this file is too large to render. See raw diff