Update README.md
README.md (CHANGED)
```diff
@@ -128,8 +128,6 @@ torch.Size([1, 768, 16, 16])
 <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
 
 We used images from five public, deidentified chest X-ray datasets to train this checkpoint of RAD-DINO.
-Images in the validation and test sets used to train [MAIRA](https://arxiv.org/abs/2311.13668) were excluded from the training set of RAD-DINO.
-The list of image files used for training is available at [`./training_images.csv`](./training_images.csv).
 
 | Dataset | Num. images |
 | --------- | ----------: |
```
```diff
@@ -139,7 +137,12 @@ The list of image files used for training is available at [`./training_images.cs
 | [PadChest](https://www.sciencedirect.com/science/article/abs/pii/S1361841520301614) | 136 787 |
 | [BRAX](https://www.nature.com/articles/s41597-022-01608-8) | 41 260 |
 
-
+Images in the validation and test sets used to train [MAIRA](https://arxiv.org/abs/2311.13668) were excluded from the training set of RAD-DINO.
+The list of image files used for training is available at [`./training_images.csv`](./training_images.csv).
+
+Note this checkpoint is different from the one in the paper, where some private data was used (and fewer GPUs).
+The checkpoint shared here is trained for 35 000 iterations (the total number of iterations in the run was 100 000, but we selected this checkpoint using linear probing on the validation sets of the evaluation datasets described in the paper).
+We used 16 nodes with 4 A100 GPUs each, and a batch size of 40 images per GPU.
 
 ### Training procedure
 
```
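The training lines added in this hunk pin down the effective global batch size. Below is a quick sanity check of that arithmetic; all numbers come from the added README lines, and the variable names are ours, for illustration only:

```python
# Check of the training figures quoted in the hunk above
# (values from the added README lines; names are illustrative).
nodes = 16           # nodes in the cluster
gpus_per_node = 4    # A100 GPUs per node
batch_per_gpu = 40   # images per GPU
iterations = 35_000  # iteration at which the released checkpoint was taken

global_batch = nodes * gpus_per_node * batch_per_gpu
print(global_batch)               # 2560 images per iteration
print(global_batch * iterations)  # 89_600_000 images processed (with repetition)
```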
```diff
@@ -189,11 +192,16 @@ Our evaluation is best described in the [manuscript](https://arxiv.org/abs/2401.
 
 <!-- Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). -->
 
-
-
-
-
-
+<!-- Hardware type: A100 PCIe -->
+<!-- Hours: 1d 16h = 40h -->
+<!-- Cloud provider: Azure -->
+<!-- Region: Italy North -->
+
+- **Hardware type:** NVIDIA A100 GPUs
+- **Hours used:** 40 hours/GPU × 16 nodes × 4 GPUs/node = 2560 GPU-hours
+- **Cloud provider:** Azure
+- **Compute region:** West US 2
+- **Carbon emitted:** 222 kg CO₂ eq.
 
 ### Compute infrastructure
 
```
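The GPU-hour figure in the new bullets follows from the wall-clock time in the added HTML comments (1 d 16 h = 40 h) and the cluster size. A minimal check, with values copied from the diff and names of our choosing:

```python
# GPU-hours as reported in the carbon-emissions bullets.
wall_clock_hours = 40  # 1 d 16 h, per the added HTML comment
nodes = 16
gpus_per_node = 4

gpu_hours = wall_clock_hours * nodes * gpus_per_node
print(gpu_hours)  # 2560 GPU-hours, matching the bullet

# Implied emission rate, given the reported 222 kg CO2 eq.
print(222 / gpu_hours)  # ~0.087 kg CO2 eq. per GPU-hour
```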
```diff
@@ -201,7 +209,7 @@ RAD-DINO was trained on [Azure Machine Learning](https://azure.microsoft.com/en-
 
 #### Hardware
 
-We used
+We used 16 `Standard_NC96ads_A100_v4` nodes with four NVIDIA A100 (80 GB) GPUs each.
 
 #### Software
 
```
````diff
@@ -216,12 +224,12 @@ We used [SimpleITK](https://simpleitk.org/) and [Pydicom](https://pydicom.github
 
 ```bibtex
 @article{PerezGarcia2024RADDINOES,
-  title={
+  title={RAD-DINO: Exploring Scalable Medical Image Encoders Beyond Text Supervision},
   author={Fernando Pérez-García and Harshita Sharma and Sam Bond-Taylor and Kenza Bouzid and Valentina Salvatelli and Maximilian Ilse and Shruthi Bannur and Daniel C. Castro and Anton Schwaighofer and Matthew P. Lungren and Maria Teodora Wetscherek and Noel Codella and Stephanie L. Hyland and Javier Alvarez-Valle and Ozan Oktay},
   journal={ArXiv},
   year={2024},
   volume={abs/2401.10815},
-  url={https://
+  url={https://api.semanticscholar.org/CorpusID:267060839}
 }
 ```
 
````