Update README.md
README.md (CHANGED)
@@ -9,29 +9,36 @@ tags:
 - loss:SpladeLoss
 - loss:SparseMarginMSELoss
 - loss:FlopsLoss
 base_model:
+- prajjwal1/bert-mini
 widget:
+- text: >-
+    Caffeine is a central nervous system stimulant. It works by stimulating the
+    brain. Caffeine is found naturally in foods and beverages such as coffee,
+    tea, colas, energy and chocolate. Botanical sources of caffeine include kola
+    nuts, guarana, and yerba mate.
+- text: >-
+    Tim Hardaway, Jr. Compared To My 5ft 10in (177cm) Height. Tim Hardaway,
+    Jr.'s height is 6ft 6in or 198cm while I am 5ft 10in or 177cm. I am shorter
+    compared to him. To find out how much shorter I am, we would have to
+    subtract my height from Tim Hardaway, Jr.'s height. Therefore I am shorter
+    to him for about 21cm.
 - text: benefits of honey and lemon
+- text: >-
+    How To Cook Corn on the Cob in the Microwave What You Need. Ingredients 1 or
+    more ears fresh, un-shucked sweet corn Equipment Microwave Cooling rack or
+    cutting board Instructions. Place 1 to 4 ears of corn in the microwave:
+    Arrange 1 to 4 ears of corn, un-shucked, in the microwave. If you prefer,
+    you can set them on a microwaveable plate or tray. If you need to cook more
+    than 4 ears of corn, cook them in batches. Microwave for 3 to 5 minutes: For
+    just 1 or 2 ears of corn, microwave for 3 minutes. For 3 or 4 ears,
+    microwave for 4 minutes. If you like softer corn or if your ears are
+    particularly large, microwave for an additional minute.
+- text: >-
+    The law recognizes two basic kinds of warranties - implied warranties and
+    express warranties. Implied Warranties. Implied warranties are unspoken,
+    unwritten promises, created by state law, that go from you, as a seller or
+    merchant, to your customers.
 pipeline_tag: feature-extraction
 library_name: sentence-transformers
 metrics:

@@ -121,38 +128,34 @@ model-index:
     - type: corpus_sparsity_ratio
       value: 0.9942712788405814
       name: Corpus Sparsity Ratio
+license: mit
+datasets:
+- microsoft/ms_marco
+language:
+- en
 ---
 
-# SPLADE Sparse Encoder
-
-## Model Details
-
-- **Model Type:** SPLADE Sparse Encoder
-- **Base model:** [yosefw/SPLADE-BERT-Mini-BS256](https://huggingface.co/yosefw/SPLADE-BERT-Mini-BS256) <!-- at revision 986bc55b61d9f0559f86423fb5807b9f4a3b7094 -->
-- **Maximum Sequence Length:** 512 tokens
-- **Output Dimensionality:** 30522 dimensions
-- **Similarity Function:** Dot Product
-<!-- - **Training Dataset:** Unknown -->
-<!-- - **Language:** Unknown -->
-<!-- - **License:** Unknown -->
-- **Hugging Face:** [Sparse Encoders on Hugging Face](https://huggingface.co/models?library=sentence-transformers&other=sparse-encoder)
+# SPLADE-BERT-Mini-Distil
+
+This is a SPLADE sparse retrieval model based on BERT-Mini (11M parameters) that was trained by distilling a Cross-Encoder on the MSMARCO dataset. The cross-encoder used was [ms-marco-MiniLM-L6-v2](https://huggingface.co/cross-encoder/ms-marco-MiniLM-L6-v2).
+
+This tiny SPLADE model is `6x` smaller than Naver's official `splade-v3-distilbert` while retaining `85%` of its performance on the MSMARCO benchmark. The model is small enough to be used without a GPU on a dataset of a few thousand documents.
+
+- `Collection:` https://huggingface.co/collections/rasyosef/splade-tiny-msmarco-687c548c0691d95babf65b70
+- `Distillation Dataset:` https://huggingface.co/datasets/yosefw/msmarco-train-distil-v2
+- `Code:` https://github.com/rasyosef/splade-tiny-msmarco
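As orientation for the loss tags in the YAML header (`SpladeLoss`, `SparseMarginMSELoss`, `FlopsLoss`), here is a hypothetical sketch of how such a distillation objective is typically wired in sentence-transformers v5. The class names come from that library; the regularizer weights are illustrative, not the values used to train this model.

```python
# Hypothetical sketch of the distillation objective suggested by this card's
# loss tags; assumes sentence-transformers >= 5.0. Weights are illustrative.
from sentence_transformers import SparseEncoder
from sentence_transformers.sparse_encoder.losses import SpladeLoss, SparseMarginMSELoss

# The student starts from the base checkpoint named in this card
student = SparseEncoder("prajjwal1/bert-mini")

# SparseMarginMSELoss trains the student to match the cross-encoder's score
# margins between positive and negative passages; SpladeLoss wraps it and adds
# FLOPS regularization so query and document vectors stay sparse.
loss = SpladeLoss(
    model=student,
    loss=SparseMarginMSELoss(student),
    query_regularizer_weight=5e-5,
    document_regularizer_weight=3e-5,
)
```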
+
+## Performance
+
+The SPLADE models were evaluated on 55 thousand queries and 8.84 million documents from the [MSMARCO](https://huggingface.co/datasets/microsoft/ms_marco) dataset.
+
+| Model | Size (# Params) | MRR@10 (MS MARCO dev) |
+|:---|:---|:---|
+| `BM25` | - | 18.0 |
+| `rasyosef/splade-tiny` | 4.4M | 30.9 |
+| `rasyosef/splade-mini` | 11.2M | 34.1 |
+| `naver/splade-v3-distilbert` | 67.0M | 38.7 |
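For reference, MRR@10 averages, over all queries, the reciprocal rank of the first relevant passage within the top 10 results, counting zero when no relevant passage is retrieved. A toy computation, for illustration only:

```python
# Toy MRR@10: `ranks` holds the 1-based position of the first relevant
# document for each query, or None if it was not retrieved at all.
def mrr_at_10(ranks):
    return sum(1.0 / r for r in ranks if r is not None and r <= 10) / len(ranks)

print(mrr_at_10([1, 3, None, 2]))  # (1 + 1/3 + 0 + 1/2) / 4 = 0.458...
```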
 
 ## Usage
 
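The card's Usage section itself is unchanged by this commit and elided here. As a quick orientation, a minimal retrieval sketch follows; the `rasyosef/splade-mini` model id is an assumption based on the performance table above, and the API follows sentence-transformers v5.

```python
# Minimal sketch, assuming sentence-transformers >= 5.0 and the model id
# `rasyosef/splade-mini` (an assumption taken from the table above).
from sentence_transformers import SparseEncoder

model = SparseEncoder("rasyosef/splade-mini")

queries = ["benefits of honey and lemon"]
documents = [
    "Honey and lemon in warm water is a popular home remedy.",
    "Caffeine is a central nervous system stimulant.",
]

query_embeddings = model.encode_query(queries)          # sparse 30522-dim vectors
document_embeddings = model.encode_document(documents)

# Relevance is the dot product between the sparse vectors
scores = model.similarity(query_embeddings, document_embeddings)
print(scores)  # higher score = more relevant document
```

Because both sides are sparse vocabulary-weight vectors, the same embeddings can also be served from an inverted index rather than a dense vector store.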
@@ -214,6 +217,37 @@ You can finetune this model on your own dataset.
 *List how the model may foreseeably be misused and address what users ought not to do with the model.*
 -->
 
+## Model Details
+
+### Model Description
+- **Model Type:** SPLADE Sparse Encoder
+- **Base model:** [prajjwal1/bert-mini](https://huggingface.co/prajjwal1/bert-mini)
+- **Maximum Sequence Length:** 512 tokens
+- **Output Dimensionality:** 30522 dimensions
+- **Similarity Function:** Dot Product
+<!-- - **Training Dataset:** Unknown -->
+<!-- - **Language:** Unknown -->
+<!-- - **License:** Unknown -->
+
+### Model Sources
+
+- **Documentation:** [Sentence Transformers Documentation](https://sbert.net)
+- **Documentation:** [Sparse Encoder Documentation](https://www.sbert.net/docs/sparse_encoder/usage/usage.html)
+- **Repository:** [Sentence Transformers on GitHub](https://github.com/UKPLab/sentence-transformers)
+- **Hugging Face:** [Sparse Encoders on Hugging Face](https://huggingface.co/models?library=sentence-transformers&other=sparse-encoder)
+
+### Full Model Architecture
+
+```
+SparseEncoder(
+  (0): MLMTransformer({'max_seq_length': 512, 'do_lower_case': False, 'architecture': 'BertForMaskedLM'})
+  (1): SpladePooling({'pooling_strategy': 'max', 'activation_function': 'relu', 'word_embedding_dimension': 30522})
+)
+```
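A short sketch of what these two modules do at inference time: the MLMTransformer emits per-token logits over the 30522-token vocabulary, and SpladePooling max-pools them (after ReLU) into a single sparse vector. The `sparsity` and `decode` helpers below follow sentence-transformers v5, and the model id is an assumption, as noted above.

```python
# Sketch of inspecting the sparse output; assumes sentence-transformers >= 5.0
# and the model id `rasyosef/splade-mini` (an assumption, see above).
from sentence_transformers import SparseEncoder

model = SparseEncoder("rasyosef/splade-mini")
embeddings = model.encode(["benefits of honey and lemon"])

# How many of the 30522 dimensions are active (non-zero)?
print(model.sparsity(embeddings))  # e.g. {'active_dims': ..., 'sparsity_ratio': ...}

# Which vocabulary terms carry the most weight after SpladePooling?
print(model.decode(embeddings[0], top_k=10))
```

The reported `sparsity_ratio` is the same kind of statistic as the `corpus_sparsity_ratio` metric in the YAML header.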
+
+## More
+<details><summary>Click to expand</summary>
+
 ## Evaluation
 
 ### Metrics

@@ -506,4 +540,5 @@ You can finetune this model on your own dataset.
 ## Model Card Contact
 
 *Provides a way for people who have updates to the Model Card, suggestions, or questions, to contact the Model Card authors.*
 -->
+</details>