viridono committed · commit cc3b57e · verified · 1 parent: 89078a9

Update README.md

Files changed (1):
  1. README.md +93 -0

README.md CHANGED
  path: proteins/train-*
  ---
# Quickstart Usage

This dataset can be loaded into Python using the Hugging Face [datasets](https://huggingface.co/docs/datasets/index) library. First, install the `datasets` library from the command line:

```
$ pip install datasets
```

With `datasets` installed, import it into your Python script or environment:

```
>>> import datasets
```

The user can then load the `CF-MS_Homo_sapiens_PPI` dataset using `datasets.load_dataset(...)`. There are two configurations, or 'views', of the dataset. The user can choose between them via the `name` parameter:

* `pairs` (default): pairwise protein elution profiles, with a binary label for whether the two proteins are known to interact

```
>>> view = "pairs"
>>> dataset = datasets.load_dataset(
...     path="viridono/CF-MS_Homo_sapiens_PPI",
...     name=view)
```

* `proteins`: individual protein elution profiles, without labels, for users who wish to assemble inputs in a non-pairwise fashion

```
>>> view = "proteins"
>>> dataset = datasets.load_dataset(
...     path="viridono/CF-MS_Homo_sapiens_PPI",
...     name=view)
```

The dataset is loaded as a `datasets.DatasetDict`. For `pairs`:

```
>>> dataset
DatasetDict({
    train: Dataset({
        features: ['experiment_id', 'uniprot_id1', 'uniprot_id2', 'elut_trace1', 'elut_trace2', 'label'],
        num_rows: 2496144
    })
    test: Dataset({
        features: ['experiment_id', 'uniprot_id1', 'uniprot_id2', 'elut_trace1', 'elut_trace2', 'label'],
        num_rows: 2769931
    })
})
```

and for `proteins`:

```
DatasetDict({
    train: Dataset({
        features: ['experiment_id', 'uniprot_id', 'fraction_names', 'trace'],
        num_rows: 20383
    })
})
```
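
The `proteins` view leaves pair assembly to the user. As one possible approach (not prescribed by the dataset), a minimal sketch of pairing proteins within the same experiment, using a toy DataFrame that mimics the schema above — the column names match the real split, but the values are invented:

```python
import itertools

import numpy as np
import pandas as pd

# Toy stand-in for dataset['train'].to_pandas() of the `proteins` view;
# real traces come from the actual dataset.
df_prot = pd.DataFrame({
    "experiment_id": ["exp1", "exp1", "exp1"],
    "uniprot_id": ["P12345", "Q67890", "O11111"],
    "trace": [np.array([0., 3., 7.]),
              np.array([1., 4., 2.]),
              np.array([5., 0., 0.])],
})

# Pair every protein with every other protein within the same experiment.
pairs = []
for _, grp in df_prot.groupby("experiment_id"):
    for i, j in itertools.combinations(grp.index, 2):
        pairs.append({
            "experiment_id": grp.loc[i, "experiment_id"],
            "uniprot_id1": grp.loc[i, "uniprot_id"],
            "uniprot_id2": grp.loc[j, "uniprot_id"],
            "elut_trace1": grp.loc[i, "trace"],
            "elut_trace2": grp.loc[j, "trace"],
        })
df_pairs = pd.DataFrame(pairs)  # 3 proteins -> 3 unordered pairs
```

Pairing within an experiment keeps the two traces on the same fraction axis; cross-experiment pairs would mix traces of differing lengths.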

This is a column-wise format. Elution traces are 1D vectors of protein abundances (PSMs), stored in the `elut_trace1`/`elut_trace2` columns (for pairs) or in the `trace` column (for individual proteins). Note that the traces have been uploaded in a lossless format, meaning they are not normalized across different experiments (`experiment_id`): traces from different experiments have differing lengths and differing peak heights.

The user may wish to normalize elution data when training. This is easily achievable after conversion to a `pandas.DataFrame`. Note that the `DatasetDict` must first be partitioned into its train and test splits, and that `to_pandas()` returns a new DataFrame rather than converting in place, so the result should be assigned:

```
>>> ds_train = dataset['train']
>>> ds_test = dataset['test']
>>> df_train = ds_train.to_pandas()
>>> df_test = ds_test.to_pandas()
```
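
Since traces are stored losslessly, it can be worth checking how much trace lengths vary before deciding on a padding strategy. A small sketch on a toy DataFrame with the real column names but invented values:

```python
import numpy as np
import pandas as pd

# Toy stand-in for df_train; note the traces have different lengths,
# as in the unnormalized data from different experiments.
df_train = pd.DataFrame({
    "experiment_id": ["exp1", "exp2"],
    "elut_trace1": [np.array([0., 5., 1.]),
                    np.array([2., 8., 3., 0., 1.])],
})

# Trace length per row; the spread shows how much padding will be needed.
lengths = df_train["elut_trace1"].apply(len)
print(lengths.min(), lengths.max())  # 3 5
```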

# Useful Pandas Normalizations / Transformations

As a `pandas.DataFrame`, the user can then apply any of various transformations, including padding the 1D vectors to make them of uniform length (this and the following snippets use `numpy`, imported as `np`):

```
import numpy as np

max_len = max(df_train['elut_trace1'].apply(len))
df_train['elut_trace1'] = df_train['elut_trace1'].apply(
    lambda x: np.pad(x, (0, max_len - len(x)), mode='constant'))
```
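
When both trace columns are padded, it is likely worth using a single shared target length so that paired traces end up directly comparable. A minimal sketch on toy rows with invented values:

```python
import numpy as np
import pandas as pd

# Toy pairs rows with invented values; real rows come from to_pandas().
df_train = pd.DataFrame({
    "elut_trace1": [np.array([1., 2.]), np.array([3., 4., 5.])],
    "elut_trace2": [np.array([6.]), np.array([7., 8., 9., 10.])],
})

# One shared target length across both trace columns.
trace_cols = ["elut_trace1", "elut_trace2"]
max_len = max(df_train[c].apply(len).max() for c in trace_cols)
for c in trace_cols:
    df_train[c] = df_train[c].apply(
        lambda x: np.pad(x, (0, max_len - len(x)), mode="constant"))
```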

or value-wise normalization, for example dividing each trace by its maximum value (row-max), leaving all-zero traces untouched:

```
df_train['elut_trace1'] = df_train['elut_trace1'].apply(
    lambda x: x / x.max() if x.max() != 0 else x)
```
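
Row-max is one choice; another common alternative (not prescribed by the dataset) is normalizing each trace to unit sum, so it reads as a per-fraction abundance distribution. A sketch on toy values:

```python
import numpy as np
import pandas as pd

# Toy traces with invented values.
df_train = pd.DataFrame({
    "elut_trace1": [np.array([0., 5., 5.]), np.array([0., 0., 0.])],
})

# Normalize each trace to sum to 1, leaving all-zero traces untouched.
df_train["elut_trace1"] = df_train["elut_trace1"].apply(
    lambda x: x / x.sum() if x.sum() != 0 else x)
```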

Also note that elution data can be rather sparse, so the user might want to keep only the rows whose elution vectors reach a certain minimum PSM threshold. **This should be done prior to value normalization**, since normalization rescales peak heights. Good values for the minimum peak height are 5 or 10:

```
df_filtered = df_train[
    df_train['elut_trace1'].apply(lambda x: np.any(x >= 10)) &
    df_train['elut_trace2'].apply(lambda x: np.any(x >= 10))]
```
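
Putting the steps together in the order recommended above — threshold filtering first, then padding, then value normalization — a minimal end-to-end sketch on toy rows with invented values (on the real data, `df_train` comes from `to_pandas()`):

```python
import numpy as np
import pandas as pd

# Toy pairs rows with invented values.
df_train = pd.DataFrame({
    "elut_trace1": [np.array([12., 3.]), np.array([1., 2., 1.])],
    "elut_trace2": [np.array([0., 15., 4.]), np.array([2., 2.])],
})

# 1) Keep rows where both traces reach the minimum PSM threshold.
keep = (df_train["elut_trace1"].apply(lambda x: np.any(x >= 10)) &
        df_train["elut_trace2"].apply(lambda x: np.any(x >= 10)))
df_f = df_train[keep].copy()

# 2) Pad to a shared length, then 3) row-max normalize.
cols = ["elut_trace1", "elut_trace2"]
max_len = max(df_f[c].apply(len).max() for c in cols)
for c in cols:
    df_f[c] = df_f[c].apply(
        lambda x: np.pad(x, (0, max_len - len(x)), mode="constant"))
    df_f[c] = df_f[c].apply(lambda x: x / x.max() if x.max() != 0 else x)
```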

# CF/MS Elution Profile PPI Dataset