---
dataset_info:
  features:
  - name: context
    dtype: string
  - name: question
    dtype: string
  - name: answers
    struct:
    - name: answer_start
      sequence: int64
    - name: text
      sequence: string
  - name: id
    dtype: string
  - name: labels
    list:
    - name: end
      sequence: int64
    - name: start
      sequence: int64
  splits:
  - name: train
    num_bytes: 57635506.94441748
    num_examples: 18142
  - name: validation
    num_bytes: 3374870.9449192784
    num_examples: 1070
  download_size: 4666280
  dataset_size: 61010377.88933676
configs:
- config_name: default
  data_files:
  - split: train
    path: data/train-*
  - split: validation
    path: data/validation-*
---

## Dataset Card for "squad"

This truncated dataset is derived from the Stanford Question Answering Dataset (SQuAD), a reading-comprehension benchmark. It keeps only the instances from the original SQuAD dataset whose tokenized inputs fit within the maximum context length of the BERT, RoBERTa, OPT, and T5 models.
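
As a sketch of how the schema described in the card header maps onto the `datasets` library, the snippet below loads the dataset and inspects one example. The repository id `username/squad-truncated` is a placeholder; substitute the actual Hub path where this dataset is hosted.

```python
from datasets import load_dataset

# Placeholder repository id -- replace with the actual Hub path of this dataset.
ds = load_dataset("username/squad-truncated")

print(ds)                       # DatasetDict with "train" and "validation" splits
sample = ds["train"][0]
print(sample["question"])       # question string
print(sample["context"][:200])  # beginning of the context passage
print(sample["answers"])        # {"text": [...], "answer_start": [...]}
print(sample["labels"])         # list of {"start": [...], "end": [...]} spans
```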

### Preprocessing and Filtering

Preprocessing involves tokenization with the BertTokenizer (WordPiece), RobertaTokenizer (byte-level BPE), the OPT tokenizer (byte-pair encoding), and T5Tokenizer (SentencePiece). Each sample is then checked to ensure that the length of its tokenized input is within the specified model_max_length of every tokenizer, as sketched below.
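
A minimal sketch of this length filter, assuming SQuAD-style examples and standard Hugging Face tokenizers; the checkpoint names and the helper `fits_all_models` are illustrative choices, not taken from the original preprocessing code.

```python
from datasets import load_dataset
from transformers import AutoTokenizer

# Tokenizers for the four target model families (checkpoint names are assumptions).
tokenizers = [
    AutoTokenizer.from_pretrained(name)
    for name in ("bert-base-uncased", "roberta-base", "facebook/opt-125m", "t5-base")
]

def fits_all_models(example):
    """Keep an example only if its tokenized (question, context) pair
    fits within every tokenizer's model_max_length."""
    for tok in tokenizers:
        ids = tok(example["question"], example["context"], truncation=False)["input_ids"]
        if len(ids) > tok.model_max_length:
            return False
    return True

squad = load_dataset("squad")
filtered = squad.filter(fits_all_models)
print({split: len(filtered[split]) for split in filtered})
```

Filtering with `Dataset.filter` keeps the original SQuAD columns untouched, so any additional fields (such as the `labels` spans in this dataset) would be added in a separate mapping step.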