The source of this curated data is [unicamp-dl/mmarco](https://huggingface.co/datasets/unicamp-dl/mmarco),
linked in the [ColBERT V2 repo](https://github.com/stanford-futuredata/ColBERT?tab=readme-ov-file#advanced-training-colbertv2-style).

Following an observation of Arabic tokenization issues (e.g. in BERT models) - see https://www.linkedin.com/posts/akhooli_arabic-bert-tokenizers-you-may-need-to-normalize-activity-7225747473523216384-D1oH - two new files were uploaded to this dataset (normalized queries and collection; visually they are the same). Models based on these files require normalizing the query first:

```python
from unicodedata import normalize

# apply NFKC normalization to the query text before encoding it
normalized_text = normalize('NFKC', text)
```
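As a quick illustration of why this matters (a minimal sketch using one Arabic presentation-form character, not text drawn from the dataset itself): NFKC folds visually identical glyph variants onto the base letters a tokenizer vocabulary actually contains, so the two forms compare equal only after normalization.

```python
from unicodedata import normalize

# U+FE91 is the initial-form presentation glyph of the Arabic letter BEH;
# U+0628 is the base letter that tokenizer vocabularies typically contain.
raw = "\uFE91"
base = "\u0628"

normalized = normalize('NFKC', raw)

print(raw == base)         # False - the raw strings differ
print(normalized == base)  # True  - NFKC maps the glyph to the base letter
```

Without this step, a query typed or copied in presentation form can tokenize differently from the normalized collection, hurting retrieval.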