Update README.md
README.md CHANGED
@@ -151,33 +151,6 @@ datasets:
 # SPLADE-BERT-Tiny-Distil
 
 This is a [SPLADE Sparse Encoder](https://www.sbert.net/docs/sparse_encoder/usage/usage.html) model finetuned from [prajjwal1/bert-tiny](https://huggingface.co/prajjwal1/bert-tiny) using the [sentence-transformers](https://www.SBERT.net) library. It maps sentences & paragraphs to a 30522-dimensional sparse vector space and can be used for semantic search and sparse retrieval.
-## Model Details
-
-### Model Description
-- **Model Type:** SPLADE Sparse Encoder
-- **Base model:** [prajjwal1/bert-tiny](https://huggingface.co/prajjwal1/bert-tiny) <!-- at revision 6f75de8b60a9f8a2fdf7b69cbd86d9e64bcb3837 -->
-- **Maximum Sequence Length:** 512 tokens
-- **Output Dimensionality:** 30522 dimensions
-- **Similarity Function:** Dot Product
-<!-- - **Training Dataset:** Unknown -->
-- **Language:** en
-- **License:** mit
-
-### Model Sources
-
-- **Documentation:** [Sentence Transformers Documentation](https://sbert.net)
-- **Documentation:** [Sparse Encoder Documentation](https://www.sbert.net/docs/sparse_encoder/usage/usage.html)
-- **Repository:** [Sentence Transformers on GitHub](https://github.com/UKPLab/sentence-transformers)
-- **Hugging Face:** [Sparse Encoders on Hugging Face](https://huggingface.co/models?library=sentence-transformers&other=sparse-encoder)
-
-### Full Model Architecture
-
-```
-SparseEncoder(
-  (0): MLMTransformer({'max_seq_length': 512, 'do_lower_case': False, 'architecture': 'BertForMaskedLM'})
-  (1): SpladePooling({'pooling_strategy': 'max', 'activation_function': 'relu', 'word_embedding_dimension': 30522})
-)
-```
 
 ## Usage
 
@@ -238,6 +211,35 @@ You can finetune this model on your own dataset.
 
 *List how the model may foreseeably be misused and address what users ought not to do with the model.*
 -->
+
+## Model Details
+
+### Model Description
+- **Model Type:** SPLADE Sparse Encoder
+- **Base model:** [prajjwal1/bert-tiny](https://huggingface.co/prajjwal1/bert-tiny) <!-- at revision 6f75de8b60a9f8a2fdf7b69cbd86d9e64bcb3837 -->
+- **Maximum Sequence Length:** 512 tokens
+- **Output Dimensionality:** 30522 dimensions
+- **Similarity Function:** Dot Product
+<!-- - **Training Dataset:** Unknown -->
+- **Language:** en
+- **License:** mit
+
+### Model Sources
+
+- **Documentation:** [Sentence Transformers Documentation](https://sbert.net)
+- **Documentation:** [Sparse Encoder Documentation](https://www.sbert.net/docs/sparse_encoder/usage/usage.html)
+- **Repository:** [Sentence Transformers on GitHub](https://github.com/UKPLab/sentence-transformers)
+- **Hugging Face:** [Sparse Encoders on Hugging Face](https://huggingface.co/models?library=sentence-transformers&other=sparse-encoder)
+
+### Full Model Architecture
+
+```
+SparseEncoder(
+  (0): MLMTransformer({'max_seq_length': 512, 'do_lower_case': False, 'architecture': 'BertForMaskedLM'})
+  (1): SpladePooling({'pooling_strategy': 'max', 'activation_function': 'relu', 'word_embedding_dimension': 30522})
+)
+```
+
 ## More
 <details><summary>Click to expand</summary>
 
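The card's Usage section falls outside this diff, so as a minimal sketch: a SPLADE sparse encoder like the one described above is typically loaded and queried through the sentence-transformers `SparseEncoder` API. The repository id below is a placeholder, not taken from this commit.

```python
from sentence_transformers import SparseEncoder

# Placeholder repo id; substitute this model's actual Hugging Face path.
model = SparseEncoder("user/SPLADE-BERT-Tiny-Distil")

queries = ["how do sparse retrieval models work"]
documents = [
    "SPLADE expands queries and documents into sparse vocabulary-sized vectors.",
    "BERT-Tiny is a 2-layer, 128-dimensional BERT variant.",
]

# Each embedding is a sparse 30522-dimensional vector over the BERT vocabulary.
query_embeddings = model.encode_query(queries)
document_embeddings = model.encode_document(documents)

# Dot product is the card's stated similarity function.
print(model.similarity(query_embeddings, document_embeddings))
```

Because the vectors are sparse, only vocabulary terms with non-zero weight in both a query and a document contribute to the dot product, which is what makes inverted-index retrieval practical with this model.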
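For intuition about the `SpladePooling` line in the architecture string, here is a minimal sketch of the pooling step it names, assuming the log-saturated ReLU of the SPLADE paper followed by a max over token positions; the exact sentence-transformers internals are not shown in this diff and may differ.

```python
import torch

def splade_max_pool(mlm_logits: torch.Tensor, attention_mask: torch.Tensor) -> torch.Tensor:
    """Sketch of SPLADE max pooling over MLM-head logits.

    mlm_logits: (batch, seq_len, vocab_size=30522) from the BertForMaskedLM head.
    attention_mask: (batch, seq_len), 1 for real tokens, 0 for padding.
    """
    # Log-saturated ReLU from the SPLADE paper: log(1 + relu(x)).
    weights = torch.log1p(torch.relu(mlm_logits))
    # Zero out padding positions so they cannot win the max.
    weights = weights * attention_mask.unsqueeze(-1)
    # Max over the sequence dimension yields one sparse vector per input.
    return weights.max(dim=1).values  # (batch, 30522)
```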