---
license: mit

language:
- ar

configs:
- config_name: pipe
  data_files: "arabic-queries-no-latin.tsv"
  sep: "|"
---

# akhooli/ar-mmarco-sample
This repo contains samples from the Arabic (machine-translated) version of the mMARCO dataset, together with mined rankings (derived from the English data, but applicable here since the translations are aligned across languages).
The purpose is to train an Arabic ColBERT v2 model (using free compute, so not fully trained).
The original training set has a little over 800K queries. I filtered out queries containing English (Latin-script) words, leaving around 700K, then sampled 250K along with their ranking examples (for 250K queries, the examples file is a little over 8 GB).
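
As a rough illustration of the filtering step, the sketch below drops any query containing Latin characters and samples the remainder. The file names, regular expression, and random seed are assumptions for illustration, not the exact script used to build this dataset.

```python
import re
import random

# Hypothetical file names; the actual mMARCO Arabic queries file may differ.
INPUT_PATH = "arabic_queries.tsv"      # assumed format: query_id \t query_text
OUTPUT_PATH = "arabic-queries-no-latin.tsv"

LATIN_RE = re.compile(r"[A-Za-z]")     # any Latin letter marks a query for removal

kept = []
with open(INPUT_PATH, encoding="utf-8") as f:
    for line in f:
        qid, _, text = line.rstrip("\n").partition("\t")
        if not LATIN_RE.search(text):
            kept.append((qid, text))

# Sample 250K of the remaining queries (the card reports ~700K remain after filtering).
random.seed(42)
sample = random.sample(kept, k=min(250_000, len(kept)))

with open(OUTPUT_PATH, "w", encoding="utf-8") as out:
    for qid, text in sample:
        out.write(f"{qid}|{text}\n")   # pipe-separated, matching this repo's `sep: "|"` config
```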

The source of this curated data is [unicamp-dl/mmarco](https://huggingface.co/datasets/unicamp-dl/mmarco); the full examples JSON file (27 GB) is
linked in the [ColBERT v2 repo](https://github.com/stanford-futuredata/ColBERT?tab=readme-ov-file#advanced-training-colbertv2-style).

Following an observation of Arabic tokenization issues (e.g., in BERT models; see https://www.linkedin.com/posts/akhooli_arabic-bert-tokenizers-you-may-need-to-normalize-activity-7225747473523216384-D1oH),
two additional files were uploaded to this dataset: normalized versions of the queries and the collection (visually identical to the originals). Models trained on these files require normalizing the query text first:
```python
from unicodedata import normalize

# Apply Unicode NFKC normalization so the query matches the normalized queries/collection
normalized_text = normalize('NFKC', text)
```
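
For example, loading the pipe-separated queries config declared in the front matter and normalizing a query before retrieval might look like the sketch below; the column access is an assumption, since it depends on how the TSV header is interpreted.

```python
from datasets import load_dataset
from unicodedata import normalize

# "pipe" is the config name declared in this card's YAML front matter.
ds = load_dataset("akhooli/ar-mmarco-sample", "pipe", split="train")

# Take the text from the first column of the first row and normalize it.
query = ds[0][ds.column_names[0]]
normalized_query = normalize("NFKC", query)
```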

More: https://www.linkedin.com/posts/akhooli_arabic-mmarco-sample-dataset-and-colbert-activity-7225135682044743680-35nN