## Prepare the dataset in HDF5
We found a single HDF5 file to be efficient for FL.
If you want to process the dataset for general usage in FL, we recommend using [this preprocessing script](https://github.com/apple/pfl-research/blob/develop/benchmarks/dataset/flair/download_preprocess.py) to construct an HDF5 file.

By default, the script groups the images and labels by train/val/test split and then by user ID, making it suitable for federated learning experiments.
With the flag `--not_group_data_by_user`, the script simply groups the images and labels by train/val/test split and ignores the user IDs, which is the typical setup for centralized training.

⚠️ Warning: the HDF5 file takes up to ~80GB of disk space after processing.
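To illustrate the split-then-user grouping described above, here is a minimal `h5py` sketch that writes and reads a file with that hierarchy. The group/dataset names (`train/<user_id>/images`, `labels`), array shapes, and the `flair_demo.hdf5` filename are assumptions for illustration; inspect the file produced by the preprocessing script to confirm its actual layout.

```python
import h5py
import numpy as np

# Hypothetical layout: <split>/<user_id>/{images, labels}.
# The real file produced by download_preprocess.py may use different names.
with h5py.File("flair_demo.hdf5", "w") as f:
    user = f.create_group("train/81594342@N00")  # intermediate groups are created automatically
    user.create_dataset("images", data=np.zeros((2, 256, 256, 3), dtype=np.uint8))
    user.create_dataset("labels", data=np.array([[1, 0], [0, 1]], dtype=np.int8))

with h5py.File("flair_demo.hdf5", "r") as f:
    users = list(f["train"].keys())          # all user ids in the train split
    images = f["train"][users[0]]["images"][...]  # load one user's images
print(users, images.shape)
```

Reading one user's group at a time is what makes this layout convenient for FL simulation: each "client" corresponds to a single subgroup.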

## Use dataset directly with HuggingFace
The dataset can also be used with the `datasets` package.
To group datapoints by user, simply construct a mapping and then query the dataset by index:
```python
from datasets import load_dataset
from collections import defaultdict

ds = load_dataset('apple/flair', split='val')

# Map each user id to the indices of that user's examples.
user_to_ix = defaultdict(list)
for i, record in enumerate(ds):
    user_to_ix[record['user_id']].append(i)

def load_user_data(user_id):
    return [ds[i] for i in user_to_ix[user_id]]

load_user_data('81594342@N00')
```
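The same user-to-indices mapping can drive per-round client sampling in a federated simulation. A minimal sketch, using a toy stand-in for the mapping (the user IDs, cohort size, and round count below are arbitrary):

```python
import random
from collections import defaultdict

# Toy stand-in for the user_to_ix mapping built from the real dataset.
user_to_ix = defaultdict(list)
for i, user in enumerate(["a", "a", "b", "c", "c", "c"]):
    user_to_ix[user].append(i)

rng = random.Random(0)  # seeded for reproducible cohorts
users = sorted(user_to_ix)
for rnd in range(3):
    cohort = rng.sample(users, 2)  # pick 2 users ("clients") this round
    print(rnd, cohort, [len(user_to_ix[u]) for u in cohort])
```

Each round then trains only on the sampled users' examples, which is the access pattern the per-user grouping is designed to support.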

## Disclaimer
The annotations and Apple’s other rights in the dataset are licensed under the CC-BY-NC 4.0 license.
The images are copyright of their respective owners; their license terms can be found via the links provided in ATTRIBUTIONS.TXT (by matching the Image ID).