Sentence Similarity
sentence-transformers
PyTorch
Transformers
English
t5
text-embedding
embeddings
information-retrieval
beir
text-classification
language-model
text-clustering
text-semantic-similarity
text-evaluation
prompt-retrieval
text-reranking
feature-extraction
natural_questions
ms_marco
fever
hotpot_qa
mteb
Eval Results
text-generation-inference
Update README.md
README.md CHANGED
````diff
@@ -2524,15 +2524,7 @@ model-index:
     value: 79.25143598295348
 ---
 
-#
-We introduce **Instructor**👨🏫, an instruction-finetuned text embedding model that can generate text embeddings tailored to any task (e.g., classification, retrieval, clustering, text evaluation, etc.) and domains (e.g., science, finance, etc.) ***by simply providing the task instruction, without any finetuning***. Instructor👨 achieves sota on 70 diverse embedding tasks ([MTEB leaderboard](https://huggingface.co/spaces/mteb/leaderboard))!
-The model is easy to use with **our customized** `sentence-transformer` library. For more details, check out [our paper](https://arxiv.org/abs/2212.09741) and [project page](https://instructor-embedding.github.io/)!
-
-**************************** **Updates** ****************************
-
-* 12/28: We released a new [checkpoint](https://huggingface.co/hkunlp/instructor-large) trained with hard negatives, which gives better performance.
-* 12/21: We released our [paper](https://arxiv.org/abs/2212.09741), [code](https://github.com/HKUNLP/instructor-embedding), [checkpoint](https://huggingface.co/hkunlp/instructor-large) and [project page](https://instructor-embedding.github.io/)! Check them out!
-
+# nascenia/instruct-embedding
 ## Quick start
 <hr />
 
@@ -2545,7 +2537,7 @@ pip install InstructorEmbedding
 Then you can use the model like this to calculate domain-specific and task-aware embeddings:
 ```python
 from InstructorEmbedding import INSTRUCTOR
-model = INSTRUCTOR('
+model = INSTRUCTOR('nascenia/instruct-embedding')
 sentence = "3D ActionSLAM: wearable person tracking in multi-floor environments"
 instruction = "Represent the Science title:"
 embeddings = model.encode([[instruction,sentence]])
```
````
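The `model.encode(...)` call in the snippet above returns embedding vectors; a common next step, not shown in this README, is scoring how similar two embeddings are. Below is a minimal sketch of that step using NumPy with stand-in vectors, so it runs without downloading the model. The `cosine_similarity` helper and the `query_emb`/`doc_emb` names are illustrative, not part of the InstructorEmbedding API.

```python
import numpy as np

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    """Cosine similarity between two 1-D embedding vectors."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

# Stand-in embeddings; real ones would come from
# model.encode([[instruction, sentence]]).
query_emb = np.array([1.0, 0.0, 1.0])
doc_emb = np.array([1.0, 1.0, 0.0])

print(cosine_similarity(query_emb, doc_emb))  # 0.5 for these vectors
```

In retrieval-style use, the same computation is applied between one query embedding and each document embedding, ranking documents by the resulting score.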