rasyosef committed
Commit 28210d5 · verified · 1 Parent(s): 1e85aa3

Update Splade Index
.gitattributes CHANGED
@@ -33,3 +33,4 @@ saved_model/**/* filter=lfs diff=lfs merge=lfs -text
  *.zip filter=lfs diff=lfs merge=lfs -text
  *.zst filter=lfs diff=lfs merge=lfs -text
  *tfevents* filter=lfs diff=lfs merge=lfs -text
+ corpus.jsonl filter=lfs diff=lfs merge=lfs -text
README.md ADDED
@@ -0,0 +1,60 @@
+ ---
+ language: en
+ library_name: splade-index
+ tags:
+ - splade
+ - splade-index
+ - retrieval
+ - search
+ - sparse
+ ---
+
+ # Splade-Index
+
+ This is an index created with the [splade-index](https://github.com/rasyosef/splade-index) library (version `0.1.2`).
+
+ ## Installation
+
+ You can install the `splade-index` library with `pip`:
+
+ ```bash
+ pip install "splade-index==0.1.2"
+
+ # Include extra dependencies such as the stemmer
+ pip install "splade-index[full]==0.1.2"
+
+ # For Hugging Face Hub usage
+ pip install huggingface_hub
+ ```
+
+ ## Load this Index
+
+ You can use the following code to load this SPLADE index from the Hugging Face Hub:
+
+ ```python
+ import os
+ from sentence_transformers import SparseEncoder
+ from splade_index import SPLADE
+
+ # Download the SPLADE model that was used to create the index from the Hugging Face Hub
+ model_id = "the-splade-model-id"  # Enter the splade model id
+ model = SparseEncoder(model_id)
+
+ # Set your Hugging Face token if the repo is private
+ token = os.environ["HF_TOKEN"]
+ repo_id = "rasyosef/natural_questions_108k_splade_index"
+
+ # Load the SPLADE index from the Hugging Face Hub
+ retriever = SPLADE.load_from_hub(repo_id, model=model, token=token)
+ ```
+
+ ## Stats
+
+ This index was created from the following data:
+
+ | Statistic | Value |
+ | --- | --- |
+ | Number of documents | 108593 |
+ | Number of tokens | 20265694 |
+ | Average tokens per document | 186.62 |
+
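The last row of the stats table is simply the ratio of the two rows above it; a quick check:

```python
num_docs = 108593
num_tokens = 20265694

# Average tokens per document, rounded as in the table
avg = round(num_tokens / num_docs, 2)
print(avg)  # 186.62
```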
corpus.jsonl ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:8d1cf8ec2ec3f922161cd24ffac66d8c70dc8947006d3bdd80c1d52aeb1b21ed
+ size 49244396
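Because `corpus.jsonl` is tracked by Git LFS, the repository stores only the three-line pointer shown above (spec version, content hash, byte size) rather than the file itself. A minimal sketch of reading such a pointer (the `parse_lfs_pointer` helper is illustrative, not part of any library):

```python
def parse_lfs_pointer(text: str) -> dict:
    """Parse a Git LFS pointer file into a key/value dict."""
    fields = {}
    for line in text.strip().splitlines():
        key, _, value = line.partition(" ")
        fields[key] = value
    return fields

pointer = """\
version https://git-lfs.github.com/spec/v1
oid sha256:8d1cf8ec2ec3f922161cd24ffac66d8c70dc8947006d3bdd80c1d52aeb1b21ed
size 49244396
"""

info = parse_lfs_pointer(pointer)
print(info["size"])  # 49244396
```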
corpus.mmindex.json ADDED
The diff for this file is too large to render.
csc.index.npz ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:4a9f750d4328cce9750a3ed54e403b82ab33593249c02a78c66ce94b6da6c3bb
+ size 112269260
params.index.json ADDED
@@ -0,0 +1,7 @@
+ {
+   "dtype": "float32",
+   "int_dtype": "int32",
+   "num_docs": 108593,
+   "version": "0.1.2",
+   "backend": "numpy"
+ }
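Per `params.index.json`, the index uses a numpy backend, and `csc.index.npz` holds the document-term weights in compressed sparse column form (one column per vocabulary token). As an illustration only, not splade-index's actual code, scoring then amounts to accumulating each query token's weighted column into per-document scores; a toy sketch with hand-picked weights:

```python
# Toy CSC (compressed sparse column) index over 3 tokens and 3 documents.
# data[k] is a stored document weight, indices[k] the document id it belongs to,
# and indptr[t]:indptr[t+1] delimits the entries for token t.
data    = [0.8, 0.3, 0.5, 0.9, 0.2]
indices = [0,   2,   1,   0,   2]
indptr  = [0, 2, 3, 5]

def score(query_weights):
    """Accumulate document scores for a sparse query {token_id: weight}."""
    scores = [0.0, 0.0, 0.0]
    for tok, qw in query_weights.items():
        for k in range(indptr[tok], indptr[tok + 1]):
            scores[indices[k]] += qw * data[k]
    return scores

print(score({0: 1.0, 2: 0.5}))  # document scores for a query hitting tokens 0 and 2
```

The CSC layout makes this fast in practice: all weights for one token are contiguous, so a query only touches the columns of the tokens it actually contains.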
vocab.index.json ADDED
The diff for this file is too large to render.