slightly betterest model
Changed files:
- README.md (+4 −4)
- model.safetensors (+2 −2)
- tokenizer.json (+0 −0)
README.md

```diff
@@ -1,14 +1,14 @@
 ---
 library_name: model2vec
 license: mit
-model_name: granite_125m_dim512_tl_distill-
+model_name: granite_125m_dim512_tl_distill-ckpt6
 tags:
 - embeddings
 - static-embeddings
 - sentence-transformers
 ---
 
-# granite_125m_dim512_tl_distill-
+# granite_125m_dim512_tl_distill-ckpt6 Model Card
 
 This [Model2Vec](https://github.com/MinishLab/model2vec) model is a distilled version of a Sentence Transformer. It uses static embeddings, allowing text embeddings to be computed orders of magnitude faster on both GPU and CPU. It is designed for applications where computational resources are limited or where real-time performance is critical. Model2Vec models are the smallest, fastest, and most performant static embedders available. The distilled models are up to 50 times smaller and 500 times faster than traditional Sentence Transformers.
 
@@ -31,7 +31,7 @@ Load this model using the `from_pretrained` method:
 from model2vec import StaticModel
 
 # Load a pretrained Model2Vec model
-model = StaticModel.from_pretrained("granite_125m_dim512_tl_distill-
+model = StaticModel.from_pretrained("granite_125m_dim512_tl_distill-ckpt6")
 
 # Compute text embeddings
 embeddings = model.encode(["Example sentence"])
@@ -45,7 +45,7 @@ You can also use the [Sentence Transformers library](https://github.com/UKPLab/s
 from sentence_transformers import SentenceTransformer
 
 # Load a pretrained Sentence Transformer model
-model = SentenceTransformer("granite_125m_dim512_tl_distill-
+model = SentenceTransformer("granite_125m_dim512_tl_distill-ckpt6")
 
 # Compute text embeddings
 embeddings = model.encode(["Example sentence"])
```
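For context on why the loaders in the README are so fast: a static model like Model2Vec reduces encoding to a table lookup plus an average, with no transformer forward pass at inference time. A minimal sketch of that idea, using a made-up toy vocabulary and vectors (not the real model2vec tokenizer or this model's weights):

```python
# Toy illustration of the static-embedding idea behind Model2Vec:
# each token maps to a fixed vector, and a sentence embedding is
# just the mean of its token vectors. The vocabulary and vectors
# below are invented for the example.
import numpy as np

vocab = {"example": 0, "sentence": 1, "another": 2}
vectors = np.array([
    [1.0, 0.0],
    [0.0, 1.0],
    [1.0, 1.0],
])

def encode(text: str) -> np.ndarray:
    """Look up each known token's vector and average them."""
    ids = [vocab[t] for t in text.lower().split() if t in vocab]
    return vectors[ids].mean(axis=0)

emb = encode("Example sentence")
print(emb)  # mean of [1, 0] and [0, 1] -> [0.5 0.5]
```

The real library adds a learned tokenizer and weighting, but the inference cost stays at lookup-and-average, which is what makes the speed claims in the card plausible.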
model.safetensors

```diff
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:
-size
+oid sha256:de075e61ded0a0afb45042446b99bb0e6061c5b4795204487b4fa8e63f6843d7
+size 179963992
```
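The `oid` in a Git LFS pointer is the SHA-256 digest of the actual file contents, so a downloaded weights file can be checked against the pointer. A small sketch, assuming the file was saved locally as `model.safetensors` (the expected digest is the one from this commit's pointer):

```python
# Verify a downloaded file against a Git LFS pointer's oid by
# hashing its contents in chunks (so large files don't need to
# fit in memory at once).
import hashlib

def lfs_oid(path: str, chunk_size: int = 1 << 20) -> str:
    """Return the sha256 hex digest of the file at `path`."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            h.update(chunk)
    return h.hexdigest()

expected = "de075e61ded0a0afb45042446b99bb0e6061c5b4795204487b4fa8e63f6843d7"
# Uncomment after downloading the file next to this script:
# assert lfs_oid("model.safetensors") == expected
```

A mismatch here means the download is corrupt or the pointer refers to a different revision of the file.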
tokenizer.json

The diff for this file is too large to render; see the raw diff.