---
dataset_info:
  features:
  - name: sequence
    dtype: large_string
  splits:
  - name: train
    num_bytes: 45299669517.08662
    num_examples: 207228723
  - name: valid
    num_bytes: 2185974.456691827
    num_examples: 10000
  - name: test
    num_bytes: 2916145.0439189114
    num_examples: 13340
  download_size: 44647931388
  dataset_size: 45304771636.587234
configs:
- config_name: default
  data_files:
  - split: train
    path: data/train-*
  - split: valid
    path: data/valid-*
  - split: test
    path: data/test-*
---
## OMGProt50 with evaluation splits

Thanks to [Tatta Bio](https://huggingface.co/tattabio) for putting together such an amazing dataset!

To create this version, we removed IDs to save space and added evaluation sets.

See [here](https://huggingface.co/datasets/Synthyra/omg_prot50_packed) for a pretokenized version.

We add validation and test sets for evaluation purposes, including [ESM2 speed runs](https://github.com/Synthyra/SpeedRunningESM2).
OMG_prot50 was clustered at 50% sequence identity, so random splits are nonredundant with the training set by default.
Random splits of 10,000 sequences form the base of the validation and test sets.
To the test set, we also add all new UniProt entries created since OMG was assembled that have transcript-level evidence, after deduplication.
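Because the corpus is pre-clustered at 50% identity, carving out an evaluation split reduces to plain random sampling. The sketch below illustrates the idea with the Python standard library; the helper name, seed, and toy data are illustrative only (the actual script is linked at the bottom of this card).

```python
import random

def carve_eval_split(sequences, n_eval=10_000, seed=42):
    """Randomly hold out n_eval sequences from a clustered pool.

    Since the pool was clustered at 50% identity, any random holdout
    is nonredundant with the remaining training sequences by default.
    """
    rng = random.Random(seed)
    held_out = set(rng.sample(range(len(sequences)), n_eval))
    eval_split = [s for i, s in enumerate(sequences) if i in held_out]
    train_split = [s for i, s in enumerate(sequences) if i not in held_out]
    return train_split, eval_split

# Toy example: 100 placeholder "sequences", hold out 10 for evaluation
pool = [f"SEQ{i}" for i in range(100)]
train, valid = carve_eval_split(pool, n_eval=10)
```

The same sampling is done twice (with the remaining pool) to produce independent validation and test bases of 10,000 sequences each.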

[Code](https://github.com/Synthyra/SpeedRunningESM2/blob/master/data/create_omgprot50_splits.py)