|
---
language:
- ru
pipeline_tag: sentence-similarity
tags:
- russian
- pretraining
- embeddings
- tiny
- feature-extraction
- sentence-similarity
- sentence-transformers
- transformers
license: mit
---
|
|
|
## Fast BERT for Semantic Textual Similarity (STS) on CPU
|
|
|
A fast BERT model for computing compact sentence embeddings in Russian. It is based on [cointegrated/rubert-tiny2](https://huggingface.co/cointegrated/rubert-tiny2) and shares its context size (2048 tokens), embedding dimension (312), and speed. It is the first and fastest model in the BERT-sts series.
|
|
|
On STS and related tasks (PI, NLI, SA, TI) for Russian it outperforms LaBSE in quality. It is well suited for RAG pipelines with LLMs when inference runs on CPU. Working with contexts longer than 512 tokens requires fine-tuning on the target domain.
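
Since the checkpoint inherits rubert-tiny2's 2048-token position embeddings, the full window can be requested explicitly at tokenization time. A minimal sketch (the input text is illustrative; note the caveat above about quality beyond 512 tokens):

```python
import torch
from transformers import AutoTokenizer, AutoModel

tokenizer = AutoTokenizer.from_pretrained("sergeyzh/rubert-tiny-sts")
model = AutoModel.from_pretrained("sergeyzh/rubert-tiny-sts")

long_text = "очень длинный документ о поставках оборудования ... " * 400  # illustrative
# Explicitly allow the full 2048-token context instead of the common 512 default
t = tokenizer(long_text, truncation=True, max_length=2048, return_tensors='pt')
with torch.no_grad():
    out = model(**t)
print(t['input_ids'].shape, out.last_hidden_state.shape)
```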
|
|
|
## Usage with the `transformers` library
|
|
|
```python
# pip install transformers sentencepiece
import torch
from transformers import AutoTokenizer, AutoModel

tokenizer = AutoTokenizer.from_pretrained("sergeyzh/rubert-tiny-sts")
model = AutoModel.from_pretrained("sergeyzh/rubert-tiny-sts")
# model.cuda()  # uncomment it if you have a GPU

def embed_bert_cls(text, model, tokenizer):
    t = tokenizer(text, padding=True, truncation=True, return_tensors='pt')
    with torch.no_grad():
        model_output = model(**{k: v.to(model.device) for k, v in t.items()})
    # Use the [CLS] token embedding, L2-normalized
    embeddings = model_output.last_hidden_state[:, 0, :]
    embeddings = torch.nn.functional.normalize(embeddings)
    return embeddings[0].cpu().numpy()

print(embed_bert_cls('привет мир', model, tokenizer).shape)
# (312,)
```
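
Because the embeddings are L2-normalized, the dot product of two vectors equals their cosine similarity. A quick follow-up sketch (the sentence pair is illustrative):

```python
import numpy as np

e1 = embed_bert_cls('кошка спит на диване', model, tokenizer)
e2 = embed_bert_cls('кот дремлет на софе', model, tokenizer)
print(float(np.dot(e1, e2)))  # cosine similarity, since both vectors are unit-length
```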
|
|
|
## Usage with `sentence_transformers`
|
```python
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer('sergeyzh/rubert-tiny-sts')

sentences = ["привет мир", "hello world", "здравствуй вселенная"]
embeddings = model.encode(sentences)
print(util.dot_score(embeddings, embeddings))  # 3x3 matrix of pairwise scores
```
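
For RAG-style retrieval on CPU, `util.semantic_search` ranks passages against a query. A minimal sketch (the query and passages are illustrative):

```python
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer('sergeyzh/rubert-tiny-sts', device='cpu')

passages = [
    "Москва - столица России.",
    "Париж - столица Франции.",
    "Киты - морские млекопитающие.",
]
query = "Какой город является столицей России?"

corpus_emb = model.encode(passages, convert_to_tensor=True)
query_emb = model.encode(query, convert_to_tensor=True)

# Top-2 passages by cosine similarity
for hit in util.semantic_search(query_emb, corpus_emb, top_k=2)[0]:
    print(passages[hit['corpus_id']], round(hit['score'], 3))
```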
|
|
|
## Metrics

Model scores on the [encodechka](https://github.com/avidale/encodechka) benchmark:
|
|
|
| Model                             | STS       | PI        | NLI       | SA        | TI        |
|:---------------------------------|:---------:|:---------:|:---------:|:---------:|:---------:|
| [intfloat/multilingual-e5-large](https://huggingface.co/intfloat/multilingual-e5-large) | 0.862 | 0.727 | 0.473 | 0.810 | 0.979 |
| [sergeyzh/LaBSE-ru-sts](https://huggingface.co/sergeyzh/LaBSE-ru-sts) | 0.845 | 0.737 | 0.481 | 0.805 | 0.957 |
| **sergeyzh/rubert-tiny-sts**      | **0.797** | **0.702** | **0.453** | **0.778** | **0.946** |
| [Tochka-AI/ruRoPEBert-e5-base-512](https://huggingface.co/Tochka-AI/ruRoPEBert-e5-base-512) | 0.793 | 0.704 | 0.457 | 0.803 | 0.970 |
| [cointegrated/LaBSE-en-ru](https://huggingface.co/cointegrated/LaBSE-en-ru) | 0.794 | 0.659 | 0.431 | 0.761 | 0.946 |
| [cointegrated/rubert-tiny2](https://huggingface.co/cointegrated/rubert-tiny2) | 0.750 | 0.651 | 0.417 | 0.737 | 0.937 |
|
|
|
**Tasks:**

- Semantic text similarity (**STS**)
- Paraphrase identification (**PI**)
- Natural language inference (**NLI**)
- Sentiment analysis (**SA**)
- Toxicity identification (**TI**)
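
Scores of this kind are typically computed as a correlation between model similarities and human ratings; for STS, Spearman rank correlation is the usual choice. A toy illustration of the idea (not the encodechka harness; the pairs and gold scores below are made up):

```python
from scipy.stats import spearmanr
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer('sergeyzh/rubert-tiny-sts')

pairs = [
    ("кошка спит на диване", "кот дремлет на софе"),
    ("кошка спит на диване", "завтра ожидается дождь"),
    ("я люблю программирование", "мне нравится писать код"),
]
gold = [4.5, 0.3, 4.0]  # hypothetical human ratings on a 0-5 scale

emb_a = model.encode([a for a, _ in pairs])
emb_b = model.encode([b for _, b in pairs])
sims = [util.cos_sim(a, b).item() for a, b in zip(emb_a, emb_b)]
print(spearmanr(sims, gold).correlation)
```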
|
|
|
## Speed and size

On the [encodechka](https://github.com/avidale/encodechka) benchmark:
|
|
|
| Model                             | CPU, ms   | GPU, ms   | Size, MB  | dim       | n_ctx     | n_vocab   |
|:---------------------------------|----------:|----------:|----------:|----------:|----------:|----------:|
| [intfloat/multilingual-e5-large](https://huggingface.co/intfloat/multilingual-e5-large) | 149.026 | 15.629 | 2136 | 1024 | 514 | 250002 |
| [sergeyzh/LaBSE-ru-sts](https://huggingface.co/sergeyzh/LaBSE-ru-sts) | 42.835 | 8.561 | 490 | 768 | 512 | 55083 |
| **sergeyzh/rubert-tiny-sts**      | **3.208** | **3.379** | **111**   | **312**   | **2048**  | **83828** |
| [Tochka-AI/ruRoPEBert-e5-base-512](https://huggingface.co/Tochka-AI/ruRoPEBert-e5-base-512) | 43.314 | 9.338 | 532 | 768 | 512 | 69382 |
| [cointegrated/LaBSE-en-ru](https://huggingface.co/cointegrated/LaBSE-en-ru) | 42.867 | 8.549 | 490 | 768 | 512 | 55083 |
| [cointegrated/rubert-tiny2](https://huggingface.co/cointegrated/rubert-tiny2) | 3.212 | 3.384 | 111 | 312 | 2048 | 83828 |
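
The dim / n_ctx / n_vocab columns can be checked directly against the model config; a quick sketch (the printed values should match the table above):

```python
from transformers import AutoConfig

cfg = AutoConfig.from_pretrained("sergeyzh/rubert-tiny-sts")
print(cfg.hidden_size, cfg.max_position_embeddings, cfg.vocab_size)
# expected: 312 2048 83828
```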
|
|
|
|
|
|
|
Throughput when encoding in batches with `sentence_transformers`:
|
|
|
```python
# Note: %timeit is an IPython/Jupyter magic; run this in an interactive session
from sentence_transformers import SentenceTransformer

model_name = 'sergeyzh/rubert-tiny-sts'
model = SentenceTransformer(model_name, device='cpu')
sentences = ["Тест быстродействия на CPU Ryzen 7 3800X: batch = 1000"] * 1000
%timeit -n 5 -r 3 model.encode(sentences)

# 840 ms ± 8.08 ms per loop (mean ± std. dev. of 3 runs, 5 loops each)
# 1000/0.840 = 1190 snt/s

model = SentenceTransformer(model_name, device='cuda')
sentences = ["Тест быстродействия на GPU RTX 3060: batch = 8000"] * 8000
%timeit -n 5 -r 3 model.encode(sentences)

# 922 ms ± 29.5 ms per loop (mean ± std. dev. of 3 runs, 5 loops each)
# 8000/0.922 = 8677 snt/s
```
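
Outside IPython, an equivalent plain-Python measurement of the CPU case might look like this (a sketch; absolute numbers depend on hardware):

```python
import time
from sentence_transformers import SentenceTransformer

model = SentenceTransformer('sergeyzh/rubert-tiny-sts', device='cpu')
sentences = ["Тест быстродействия на CPU: batch = 1000"] * 1000

model.encode(sentences)  # warm-up pass, excluded from timing
start = time.perf_counter()
model.encode(sentences)
elapsed = time.perf_counter() - start
print(f"{elapsed:.3f} s, {len(sentences) / elapsed:.0f} snt/s")
```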
|
|
|
|
|
## Related resources

Questions about using the model are discussed in the [Russian-language NLP chat](https://t.me/natural_language_processing).
|
|
|
|