Tobias Nauen committed · Commit 9cd2bbd · Parent(s): b291fb2 · "autoformat readme"

README.md CHANGED
---
license: mit
task_categories:
- image-classification
pretty_name: ForAug/ForNet
size_categories:
- 1M<n<10M
---

[arXiv](https://arxiv.org/abs/2503.09399) [GitHub](https://github.com/tobna/ForAug)

# ForAug/ForNet



This is the ForNet dataset from the paper [ForAug: Recombining Foregrounds and Backgrounds to Improve Vision Transformer Training with Bias Mitigation](https://www.arxiv.org/abs/2503.09399).
- [19.03.2025] We release the patch files of ForNet on Huggingface :hugs:
- [12.03.2025] We release the preprint of [ForAug on arXiv](https://www.arxiv.org/abs/2503.09399) :spiral_notepad:

## Using ForAug/ForNet

### Preliminaries

To be able to download ForNet, you will need the ImageNet dataset in the usual format at `<in_path>`:

```
<in_path>
|--- train
...
```
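Before downloading, it can help to sanity-check that the ImageNet copy has the expected layout. A minimal sketch (assuming the usual `train`/`val` split directories; the `check_imagenet_layout` helper is illustrative, not part of the ForAug repository):

```python
from pathlib import Path

def check_imagenet_layout(in_path: str) -> list:
    """Return the expected split directories that are missing under in_path."""
    root = Path(in_path)
    # The usual ImageNet folder layout has a train/ and a val/ split,
    # each containing one sub-folder per class (assumption: standard splits).
    expected = ["train", "val"]
    return [s for s in expected if not (root / s).is_dir()]

missing = check_imagenet_layout("<in_path>")
if missing:
    print(f"missing split directories: {missing}")
```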

### Downloading ForNet

To download and prepare the already-segmented ForNet dataset at `<data_path>`, follow these steps:

#### 1. Clone the git repository and install the requirements

```
git clone https://github.com/tobna/ForAug
cd ForAug
pip install -r prep-requirements.txt
```

#### 2. Download the diff files

```
./download_diff_files.sh <data_path>
```

This script will download all dataset files to `<data_path>`.

#### 3. Apply the diffs to ImageNet

```
python apply_patch.py -p <data_path> -in <in_path> -o <data_path>
```

This will apply the diffs to ImageNet and store the results in the `<data_path>` folder. It will also delete the already-processed patch files (the ones downloaded in step 2). To keep the patch files, add the `--keep` flag.

#### Optional: Zip the files without compression

When working on a large cluster where dataset files have to be sent over the network (i.e., the dataset is stored on a different server than the one used for processing), it is sometimes useful to avoid dealing with many small files and to have fewer large ones instead.
If you want this, you can zip up the files (without compression) by using

```
./zip_up.sh <data_path>
```
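Storing without compression is the key detail here: image files are already compressed, so a store-only archive costs no extra CPU and individual members remain cheap to read. As an illustrative sketch of the same idea in Python (not the actual implementation of `zip_up.sh`), using `ZIP_STORED`:

```python
import zipfile
from pathlib import Path

def zip_without_compression(src_dir: str, zip_path: str) -> None:
    """Pack every file under src_dir into zip_path without compressing it."""
    src = Path(src_dir)
    # ZIP_STORED archives the bytes as-is: no CPU is spent on (de)compression,
    # which suits already-compressed JPEGs.
    with zipfile.ZipFile(zip_path, "w", compression=zipfile.ZIP_STORED) as zf:
        for f in sorted(src.rglob("*")):
            if f.is_file():
                zf.write(f, f.relative_to(src))
```

Because members are stored verbatim, a data loader can later read single samples straight out of the archive without decompressing anything.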

### Creating ForNet from Scratch

Coming soon

### Using ForNet

To use ForAug/ForNet, you need to have it available in folder or zip form (see [Downloading ForNet](#downloading-fornet)) at `data_path`.
Additionally, you need to install the (standard) requirements from `requirements.txt`:

```
pip install -r requirements.txt
```

Then, just do

```python
from fornet import ForNet

data_path = ...

dataset = ForNet(
    data_path,
    train=True,
    transform=None,
    background_combination="all",
)
```
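The resulting object can then be consumed like a map-style dataset (`__len__`/`__getitem__`), e.g. by a standard PyTorch `DataLoader`. As a self-contained illustration of that protocol (a toy stand-in, not ForNet's actual code), assuming each sample is an (image, label) pair:

```python
class ToyMapStyleDataset:
    """Minimal stand-in showing the map-style dataset protocol."""

    def __init__(self, samples):
        self.samples = samples  # list of (image, label) pairs

    def __len__(self):
        return len(self.samples)

    def __getitem__(self, idx):
        return self.samples[idx]

toy_dataset = ToyMapStyleDataset([("img_0", 0), ("img_1", 1)])
for image, label in (toy_dataset[i] for i in range(len(toy_dataset))):
    print(image, label)  # prints "img_0 0" then "img_1 1"
```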

For information on all possible parameters, run

```python
from fornet import ForNet

help(ForNet.__init__)
```

## Citation

```bibtex
@misc{nauen2025foraug,
      title={ForAug: Recombining Foregrounds and Backgrounds to Improve Vision Transformer Training with Bias Mitigation},
      ...
}
```

### Dataset Sources

- **Repository:** [GitHub](https://github.com/tobna/ForAug)
- **Paper:** [arXiv](https://www.arxiv.org/abs/2503.09399)
- **Project Page:** coming soon

- [x] release code to download and create ForNet
- [x] release code to use ForNet for training and evaluation
- [ ] integrate ForNet into Huggingface Datasets