onkarsus13 committed
Commit 8919649 · verified · 1 Parent(s): f212919

Add files using upload-large-folder tool
kits21/kits21/.gitignore ADDED
@@ -0,0 +1,19 @@
1
+ kits21/data/*/imaging.nii.gz
2
+ *__pycache__*
3
+
4
+ .idea
5
+
6
+ kits21/data/*/segmentation_samples*
7
+ regenerates.sh
8
+ kits21/data/*/*segmentation_samples*
9
+
10
+ *.egg-info
11
+ *.vscode*
12
+
13
+ *inter_rater_disagreement.json
14
+ *inter_rater_variability.json
15
+ *tolerances.json
16
+
17
+ *temp.tmp
18
+
19
+ examples/submission/dummy_submission/**.tar.gz
kits21/kits21/.pylintrc ADDED
File without changes
kits21/kits21/LICENSE ADDED
@@ -0,0 +1,21 @@
1
+ MIT License
2
+
3
+ Copyright (c) 2021 Nicholas Heller
4
+
5
+ Permission is hereby granted, free of charge, to any person obtaining a copy
6
+ of this software and associated documentation files (the "Software"), to deal
7
+ in the Software without restriction, including without limitation the rights
8
+ to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
9
+ copies of the Software, and to permit persons to whom the Software is
10
+ furnished to do so, subject to the following conditions:
11
+
12
+ The above copyright notice and this permission notice shall be included in all
13
+ copies or substantial portions of the Software.
14
+
15
+ THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
16
+ IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
17
+ FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
18
+ AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
19
+ LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
20
+ OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
21
+ SOFTWARE.
kits21/kits21/README.md ADDED
@@ -0,0 +1,131 @@
1
+ ## NEW: The KiTS23 Challenge is Underway!
2
+
3
+ See the [KiTS23 Homepage](https://kits-challenge.org/kits23/) for more details, including:
4
+
5
+ - A larger dataset
6
+ - Additional contrast phases
7
+
8
+ # KiTS21
9
+
10
+ The official repository of the 2021 Kidney and Kidney Tumor Segmentation Challenge
11
+
12
+ **Current dataset version: `2.2.3` -- Official Frozen Training Set** (see [changelog](changelog.md))
13
+
14
+ <img src="https://kits-challenge.org/public/site_media/figures/rendering_dimmed.png" width="400" />
15
+
16
+ [Challenge Homepage](https://kits-challenge.org/kits21/)
17
+
18
+ ## Timeline
19
+
20
+ - **Mar 1 - Jul 1**: Annotation, Release, and Refinement of Training Data (*now published!*)
21
+ - **July 15**: Further refinement of training set will be complete
22
+ - **Aug 23**: Deadline for Intention to Submit & Required Paper (formerly Aug 9)
23
+ - **Aug 30 - Sep 13**: Submissions Accepted (formerly Aug 16 - 30)
24
+ - **Sep 15**: Results Announced (formerly Sep 1)
25
+ - **Sep 27**: Satellite Event at MICCAI 2021
26
+
27
+ ## News
28
+
29
+ - **July 15, 2021**: The training set has been frozen!
30
+ - **July 1, 2021**: The training set has been released! We are also adding a two-week buffer for final edits to be made based on community feedback, and we are pushing the challenge timeline back by two weeks (see above).
31
+ - **June 17, 2021**: We've changed the set of classes for the challenge. See [this forum post](https://discourse.kits-challenge.org/t/kits21-challenge-update/354) for details
32
+ - **Apr 7, 2021**: We've started using tags and a changelog to keep track of the dataset version
33
+ - **Mar 23, 2021**: A draft of the postprocessing code and some preliminary data has been merged into the master branch.
34
+ - **Mar 9, 2021**: A preliminary challenge homepage has been published at [kits-challenge.org](https://kits-challenge.org). You can keep tabs on the data annotation process there.
35
+ - **Mar 29, 2020**: A second edition of KiTS was accepted to be held in conjunction with MICCAI 2021 in Strasbourg! More information will be posted here and on the [discussion forum](https://discourse.kits-challenge.org/) when it becomes available.
36
+
37
+ ## Usage
38
+
39
+ ### Installation
40
+
41
+ 1) Install dependency for surface dice:\
42
+ `pip install git+https://github.com/JoHof/surface-distance.git` (the original [DeepMind repository](https://github.com/deepmind/surface-distance) is currently not working due to a [missing line comment](https://github.com/deepmind/surface-distance/blob/4315531eb2d449310d47c0850f334cc6a6580543/surface_distance/metrics.py#L102))
43
+ 2) Clone this repository
44
+ 3) Install this repository by running `pip install -e .` in the folder where the setup.py file is located
45
+
46
+ ### Download
47
+
48
+ Start by cloning this repository, but note that **the imaging is not stored here**; it must be downloaded using one of the `get_imaging` scripts in the `starter_code` directory. Currently there are implementations in:
49
+
50
+ - **python3**: `python3 kits21/starter_code/get_imaging.py`
51
+ - **MATLAB**: `matlab kits21/starter_code/get_imaging.m`
52
+ - **bash**: `bash kits21/starter_code/get_imaging.sh`
53
+ - **julia**: `julia kits21/starter_code/get_imaging.jl`
54
+
55
+ If you would like to request another implementation of `get_imaging`, please [submit an issue](https://github.com/neheller/kits21/issues/new).
56
+
57
+ ## Folder Structure
58
+
59
+ ### `data/`
60
+
61
+ ```text
62
+ kits21
63
+ ├──data/
64
+ | ├── case_00000/
65
+ | | ├── raw/
66
+ | | ├── segmentations/
67
+ | | ├── imaging.nii.gz
68
+ | | ├── aggregated_OR_seg.nii.gz
69
+ | | ├── aggregated_AND_seg.nii.gz
70
+ | | └── aggregated_MAJ_seg.nii.gz
71
+ | ├── case_00001/
72
+ | | ├── raw/
73
+ | | ├── segmentations/
74
+ | | ├── imaging.nii.gz
75
+ | | ├── aggregated_OR_seg.nii.gz
76
+ | | ├── aggregated_AND_seg.nii.gz
77
+ | | └── aggregated_MAJ_seg.nii.gz
78
+ ...
79
+ | ├── case_00299/
80
+ | | ├── raw/
81
+ | | ├── segmentations/
82
+ | | ├── imaging.nii.gz
83
+ | | ├── aggregated_OR_seg.nii.gz
84
+ | | ├── aggregated_AND_seg.nii.gz
85
+ | | └── aggregated_MAJ_seg.nii.gz
86
+ └── ├── kits.json
87
+ ```
88
+
89
+ This is different from [KiTS19](https://github.com/neheller/kits19) because unlike 2019, we now have multiple annotations per "instance" and multiple instances per region.
90
+
91
+ Consider the "kidney" label in a scan: most patients have two kidneys (i.e., two "instances" of kidney), and each instance was annotated by three independent people. In that case's `segmentations/` folder we would thus have
92
+
93
+ - `kidney_instance-1_annotation-1.nii.gz`
94
+ - `kidney_instance-1_annotation-2.nii.gz`
95
+ - `kidney_instance-1_annotation-3.nii.gz`
96
+ - `kidney_instance-2_annotation-1.nii.gz`
97
+ - `kidney_instance-2_annotation-2.nii.gz`
98
+ - `kidney_instance-2_annotation-3.nii.gz`
99
+
100
+ along with similar collections for the `cyst` and `tumor` regions. Each `aggregated_<X>_seg.nii.gz` file is the result of aggregating all of these files with the method indicated by \<X\> (a short sketch of these operators follows the list below):
101
+
102
+ - **OR**: A voxel-wise "or" or "union" operator
103
+ - **AND**: A voxel-wise "and" or "intersection" operator
104
+ - **MAJ**: Voxel-wise majority voting
105
+
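+ As an illustration only, here is a minimal sketch (assuming `numpy` and `nibabel`, and a hypothetical case folder) of what these voxel-wise operators boil down to; it treats all annotation files of one region as binary masks and ignores the instance/annotation grouping and multi-class label handling of the official aggregation code:
+
+ ```python
+ from pathlib import Path
+ import numpy as np
+ import nibabel as nib
+
+ case_dir = Path("kits21/data/case_00000")  # hypothetical example case
+ files = sorted((case_dir / "segmentations").glob("kidney_instance-*_annotation-*.nii.gz"))
+ masks = np.stack([np.asanyarray(nib.load(str(f)).dataobj) > 0 for f in files])
+
+ agg_or = np.any(masks, axis=0)                   # OR:  union of all annotations
+ agg_and = np.all(masks, axis=0)                  # AND: intersection of all annotations
+ agg_maj = masks.sum(axis=0) > (len(files) / 2)   # MAJ: voxel-wise majority vote
+ ```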
106
+ ### `starter_code/`
107
+
108
+ This folder holds code snippets for viewing and manipulating the data. See [Usage](#Usage) for more information.
109
+
110
+ ### `annotation/`
111
+
112
+ This folder contains code used to process and import data from the annotation platform. As a participant, there's no reason you should need to run this code; it's only meant to serve as a reference.
113
+
114
+ ## Challenge Information
115
+
116
+ This challenge will feature significantly more data, several annotations per case, and a number of additional annotated regions. The accepted proposal can be found [on Zenodo](https://doi.org/10.5281/zenodo.3714971), but the most up-to-date information about the challenge can be found on [the KiTS21 homepage](https://kits-challenge.org/kits21/).
117
+
118
+ ## Previous KiTS Challenges
119
+
120
+ KiTS was first held in conjunction with MICCAI 2019 in Shenzhen. A paper describing that challenge was published in Medical Image Analysis \[[html](https://www.sciencedirect.com/science/article/abs/pii/S1361841520301857)\] \[[pdf](https://arxiv.org/pdf/1912.01054.pdf)\].
121
+
122
+ ```bibtex
123
+ @article{heller2020state,
124
+ title={The state of the art in kidney and kidney tumor segmentation in contrast-enhanced CT imaging: Results of the KiTS19 Challenge},
125
+ author={Heller, Nicholas and Isensee, Fabian and Maier-Hein, Klaus H and Hou, Xiaoshuai and Xie, Chunmei and Li, Fengyi and Nan, Yang and Mu, Guangrui and Lin, Zhiyong and Han, Miofei and others},
126
+ journal={Medical Image Analysis},
127
+ pages={101821},
128
+ year={2020},
129
+ publisher={Elsevier}
130
+ }
131
+ ```
kits21/kits21/examples/README.md ADDED
@@ -0,0 +1,11 @@
1
+ # Examples
2
+
3
+ This folder contains instructions for training a baseline nnUNet model, and examples for making official docker submissions.
4
+
5
+ - `nnUNet_baseline` contains instructions on how to train nnUNet. This is merely intended as inspiration.
6
+ **We by no means require participants to use nnU-Net!** However, we encourage participants to compare their model
7
+ performance to the performance achieved by the nnUNet baseline, and participants are welcome to extend this
8
+ implementation if they so desire.
9
+
10
+ - `submission` contains guidelines for successful submission of your trained model to the test set. We
11
+ have prepared examples for creating the needed docker containers.
kits21/kits21/examples/nnUNet_baseline/README.md ADDED
@@ -0,0 +1,175 @@
1
+ # nnUNet baseline model
2
+
3
+ We chose [nnUNet](https://www.nature.com/articles/s41592-020-01008-z) as a baseline model for the KiTS 2021 Challenge since
4
+ it is well known as a framework for fast and effective
5
+ development of segmentation methods. Users with various backgrounds and expertise can use nnUNet out-of-the-box for
6
+ their custom 3D segmentation problem without much need for manual intervention. It's publicly available and can be
7
+ accessed via [MIC-DKFZ/nnUNet](https://github.com/MIC-DKFZ/nnUNet).
8
+
9
+ We do not expect the participants to use nnUNet for model development but strongly encourage them to compare the performance of
10
+ their developed model to the nnUNet baseline.
11
+
12
+ Documentation on how to run nnUNet on a new dataset is
13
+ given [here](https://github.com/MIC-DKFZ/nnUNet#how-to-run-nnu-net-on-a-new-dataset). To simplify a number of the steps
14
+ for the participants of the KiTS 2021 Challenge, here we highlight the steps needed to train nnUNet on the KiTS 2021 dataset.
15
+
16
+ **IMPORTANT: nnU-Net only works on Linux-based operating systems!**
17
+
18
+ Note that our nnU-Net baseline uses the majority voted segmentations as ground truth for training and does not make
19
+ use of the sampled segmentations.
20
+
21
+ ### nnUNet setup
22
+
23
+ Please follow the installation instructions [here](https://github.com/MIC-DKFZ/nnUNet#installation). Please install
24
+ nnU-Net as an integrative framework (not via `pip install nnunet`).
25
+
26
+ Remember that all nnU-Net commands support the `-h` argument for displaying usage instructions!
27
+
28
+ ### Dataset preparation
29
+
30
+ This section requires you to have downloaded the KiTS2021 dataset.
31
+
32
+ As nnUNet expects datasets in a structured format, you need to convert the dataset to be compatible with nnUNet. We
33
+ provide a script to do this as part of the nnU-Net repository: [Task135_KiTS2021.py](https://github.com/MIC-DKFZ/nnUNet/blob/master/nnunet/dataset_conversion/Task135_KiTS2021.py)
34
+
35
+ Please adapt this script to your system and simply execute it with python. This will convert the KiTS dataset into
36
+ nnU-Net's data format.
37
+
38
+
39
+ ### Experiment planning and preprocessing
40
+ In order to train the nnU-Net models all you need to do is run the standard nnU-Net steps:
41
+
42
+ The following command will extract the dataset fingerprint and based on that configure nnU-Net.
43
+ ```console
44
+ nnUNet_plan_and_preprocess -t 135 -pl2d None -tl 4 -tf 2
45
+ ```
46
+
47
+ `-pl2d None` makes nnU-Net ignore the 2D configuration which is unlikely to perform well on the KiTS task. You can
48
+ remove this part if you would like to use the 2D model.
49
+
50
+ Setting `-tf 2` and `-tl 4` is necessary to keep RAM utilization low during preprocessing. The provided numbers work
51
+ well with 64GB RAM. If you find yourself running out of memory or if the preprocessing gets stuck, consider setting
52
+ these lower. If you have more RAM (and CPU cores), set them higher.
53
+
54
+ Running preprocessing will take a while - so sit back and relax!
55
+
56
+ ### Model training
57
+ Once preprocessing is completed you can run the nnU-net configurations you would like to use as baselines. Note that
58
+ we will be providing pretrained model weights shortly after the dataset freeze so that you don't have to train nnU-Net
59
+ yourself! (TODO)
60
+
61
+ In nnU-Net, the default is to train each configuration via cross-validation. This is the setting we recommend you use
62
+ as well, regardless of whether you use nnU-Net for your submission or not. Running cross-validation gives you the most
63
+ stable estimate of model performance on the training set. To run training with nnU-Net, use the following command:
64
+
65
+ ```console
66
+ nnUNet_train CONFIGURATION nnUNetTrainerV2 135 FOLD
67
+ ```
68
+
69
+ `CONFIGURATION` is the nnU-Net configuration you would like to use (`2d`, `3d_lowres`, `3d_fullres`,
70
+ `3d_cascade_fullres`; remember that we do not have preprocessed data for `2d` because we used `-pl2d None` in
71
+ `nnUNet_plan_and_preprocess`). Run this command 5 times for `FOLD` 0, 1, 2, 3 and 4. If you have multiple GPUs you can run
72
+ these simultaneously BUT you need to start one of the folds first and wait till it utilizes the GPU before starting
73
+ the others (this has to do with unpacking the data for training).
74
+
75
+ The trained models will be written to the `RESULTS_FOLDER/nnUNet` folder. Each training obtains an automatically generated
76
+ output folder name. Here we give an example of the output folder structure for `3d_fullres`:
77
+
78
+ RESULTS_FOLDER/nnUNet/
79
+ ├── 3d_cascade_fullres
80
+ ├── 3d_fullres
81
+ │   └── Task135_KiTS2021
82
+ │   └── nnUNetTrainerV2__nnUNetPlansv2.1
83
+ │   ├── fold_0
84
+ │   │   ├── debug.json
85
+ │   │   ├── model_best.model
86
+ │   │   ├── model_best.model.pkl
87
+ │   │   ├── model_final_checkpoint.model
88
+ │   │   ├── model_final_checkpoint.model.pkl
89
+ │   │   ├── progress.png
90
+ │   │   └── validation_raw
91
+ │   │   ├── case_00002.nii.gz
92
+ │   │   ├── case_00008.nii.gz
93
+ │   │   ├── case_00012.nii.gz
94
+ │   │   ├── case_00021.nii.gz
95
+ │   │   ├── case_00022.nii.gz
96
+ │   │   ├── case_00031.nii.gz
97
+ │   │   ├── case_00034.nii.gz
98
+ │   │   ├── case_00036.nii.gz
99
+ │   │   ├── summary.json
100
+ │   │   └── validation_args.json
101
+ │   ├── fold_1
102
+ │   ├── fold_2
103
+ │   ├── fold_3
104
+ │   └── fold_4
105
+ └── 3d_lowres
106
+
107
+ This exact structure of the three folders (3d_fullres, 3d_lowres and 3d_cascade_fullres) is required for running the
108
+ inference script presented in the example
109
+ of [nnUNet docker submission](../submission/nnUNet_submission).
110
+
111
+ ### Choosing the best configuration
112
+
113
+ Once the models are trained, you can either choose manually which one you would like to use, or use the
114
+ `nnUNet_find_best_configuration` command to automatically determine the best configuration. Since this command does not
115
+ understand the KiTS2021 HECs, we recommend evaluating the different configurations manually with the
116
+ evaluation scripts provided in the kits21 repository and selecting the best performing model based on that.
117
+
118
+ In order to evaluate a nnU-Net model with the kits21 repository you first need to gather the validation set
119
+ predictions from the five folds into a single folder. These are located here:
120
+ `${RESULTS_FOLDER}/nnUNet/CONFIGURATION/Task135_KiTS2021/TRAINERCLASS__PLANSIDENTIFIER/fold_X/validation_raw`.
121
+ Note that we are using the `validation_raw` and not the `validation_raw_postprocessed` folder. That is because
122
+ a) nnU-Net postprocessing needs to be executed for the entire cross-validation using `nnUNet_determine_postprocessing`
123
+ (`validation_raw_postprocessed` is for development purposes only) and b) the nnU-Net postprocessing is not useful for
124
+ KiTS2021 anyway, so it can safely be omitted.
125
+
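+ As a minimal sketch (assuming the default trainer/plans folder name used by this baseline; the target folder name is just a placeholder), the gathering step could look like this:
+
+ ```python
+ import os
+ import shutil
+ from pathlib import Path
+
+ # path of one nnU-Net run (3d_fullres shown here); adjust CONFIGURATION as needed
+ run_dir = Path(os.environ["RESULTS_FOLDER"]) / "nnUNet" / "3d_fullres" / "Task135_KiTS2021" / "nnUNetTrainerV2__nnUNetPlansv2.1"
+ target = Path("collected_val_predictions")  # placeholder folder to pass to evaluate_predictions.py
+ target.mkdir(exist_ok=True)
+
+ for fold in range(5):
+     for pred in (run_dir / f"fold_{fold}" / "validation_raw").glob("case_*.nii.gz"):
+         shutil.copy(pred, target / pred.name)
+
+ print(len(list(target.glob("case_*.nii.gz"))), "predictions collected (expected: 300)")
+ ```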
126
+ Once you have all validation set predictions of the desired nnU-Net run in one folder, double check that all 300 KiTS21
127
+ training cases are present. Then run
128
+
129
+ `python kits21/evaluation/evaluate_predictions.py FOLDER -num_processes XX`
130
+
131
+ (note that you need to have generated the sampled segmentations first, see [here](../../kits21/evaluation))
132
+
133
+ Once that is completed there will be a file in `FOLDER` with the kits metrics.
134
+
135
+ ### Inference
136
+
137
+ For running inference on all images in a specific folder you can either make use of the scripts
138
+ prepared for docker submission or run `nnUNet_predict` command:
139
+ ```console
140
+ nnUNet_predict -i INPUT_FOLDER -o OUTPUT_FOLDER -t 135 -m 3d_fullres
141
+ ```
142
+
143
+ IMPORTANT: When using `nnUNet_predict`, nnU-Net expects the filenames in the input folder to end with _XXXX.nii.gz
144
+ where _XXXX is a modality
145
+ identifier. For KiTS there is just one modality (CT) so the files need to end with _0000.nii.gz
146
+ (example: case_00036_0000.nii.gz). This is not needed when using the scripts in the nnU-Net docker examples!
147
+
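+ A minimal sketch (assuming a flat placeholder folder of `case_XXXXX.nii.gz` images) for adding the `_0000` modality suffix before calling `nnUNet_predict`:
+
+ ```python
+ import shutil
+ from pathlib import Path
+
+ src = Path("raw_images")      # placeholder: folder containing e.g. case_00036.nii.gz
+ dst = Path("nnunet_input")    # placeholder: folder to pass to nnUNet_predict as INPUT_FOLDER
+ dst.mkdir(exist_ok=True)
+
+ for image in src.glob("*.nii.gz"):
+     case_id = image.name[:-len(".nii.gz")]               # e.g. "case_00036"
+     shutil.copy(image, dst / f"{case_id}_0000.nii.gz")    # e.g. "case_00036_0000.nii.gz"
+ ```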
148
+ ## Updating the KiTS21 dataset within nnU-Net
149
+
150
+ The dataset will be finalized by July 15th, 2021. In order to update the dataset within nnU-Net you HAVE TO delete not
151
+ only the content of `${nnUNet_raw_data_base}/nnUNet_raw_data` but also `${nnUNet_raw_data_base}/nnUNet_cropped_data`
152
+ and `${nnUNet_preprocessed}/Task135_KiTS2021`. Then rerun the conversion script again, followed by
153
+ [experiment planning and preprocessing](#experiment-planning-and-preprocessing).
154
+
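+ A small sketch of this cleanup, assuming the standard `nnUNet_raw_data_base` and `nnUNet_preprocessed` environment variables are set (whether the empty base folders need to be recreated afterwards depends on your setup):
+
+ ```python
+ import os
+ import shutil
+ from pathlib import Path
+
+ raw_base = Path(os.environ["nnUNet_raw_data_base"])
+ preprocessed = Path(os.environ["nnUNet_preprocessed"])
+
+ for folder in (raw_base / "nnUNet_raw_data",
+                raw_base / "nnUNet_cropped_data",
+                preprocessed / "Task135_KiTS2021"):
+     if folder.exists():
+         shutil.rmtree(folder)   # remove stale data from the previous dataset version
+ ```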
155
+ # nnU-Net baseline results
156
+ Pretrained model weights and predicted segmentation masks from the training set are provided here: https://zenodo.org/record/5126443
157
+ If you would like to use the pretrained weights, download the [Task135_KiTS2021.zip](https://zenodo.org/record/5126443/files/Task135_KiTS2021.zip?download=1)
158
+ file and import it with `nnUNet_install_pretrained_model_from_zip Task135_KiTS2021.zip`.
159
+
160
+
161
+ Here are the results obtained with our nnU-Net baseline on the 300 training cases (5-fold cross-validation):
162
+
163
+ | | Dice_kidney | Dice_masses | Dice_tumor | Dice_average | | SurfDice_kidney | SurfDice_masses | SurfDice_tumor | SurfDice_average |
164
+ |--------------------|-------------|-------------|------------|--------------|---|-----------------|-----------------|----------------|------------------|
165
+ | 3d_fullres | 0.9666 | 0.8618 | 0.8493 | 0.8926 | | 0.9336 | 0.7532 | 0.7371 | 0.8080 |
166
+ | 3d_lowres | 0.9683 | 0.8702 | 0.8508 | 0.8964 | | 0.9272 | 0.7507 | 0.7347 | 0.8042 |
167
+ | 3d_cascade_fullres | 0.9747 | 0.8799 | 0.8491 | 0.9012 | | 0.9453 | 0.7714 | 0.7393 | 0.8187 |
168
+
169
+ As you can see, the `3d_cascade_fullres` configuration performed best, both in terms of average Dice score and average Surface Dice.
170
+
171
+ # Extending nnU-Net for KiTS2021
172
+
173
+ [Here](https://github.com/MIC-DKFZ/nnUNet/blob/master/documentation/extending_nnunet.md) are instructions on how to
174
+ change and adapt nnU-Net. In order to keep things fair between participants **WE WILL NOT PROVIDE SUPPORT FOR IMPROVING
175
+ nnU-Net FOR KITS2021**. You are on your own!
kits21/kits21/examples/submission/README.md ADDED
@@ -0,0 +1,152 @@
1
+ # Submission examples
2
+ Please direct any questions or concerns about these instructions or the submission process generally to [the KiTS21 Discourse Forum](https://discourse.kits-challenge.org/).
3
+
4
+ ## Submission guidelines
5
+
6
+ Instead of getting access to the test images and being requested to upload the segmentations (as is was the case in
7
+ KiTS2019), you will be asked to upload the inference portion of your algorithm in the form of a
8
+ [docker](https://www.docker.com/) container. The submission takes place by uploading a saved docker image
9
+ (single file) containing your inference code to [our grand-challenge.org site](https://kits21.grand-challenge.org/).
10
+ This image will be loaded on the evaluation system and executed on private servers to run inference on the test images.
11
+ Naturally, these docker images **will NOT have access to the internet**,
12
+ so please make sure everything you need is included in the image you upload.
13
+ The primary reason for that is to eliminate
14
+ any possibility of cheating e.g. designing the model specifically for test dataset or manually correcting test set
15
+ predictions.
16
+
17
+ On our servers, the containers will be mounted such that two specific folders are available, `/input/images/ct/` and `/output/images/kidney-tumor-and-cyst/` (see also [Step 4](#step-4-run-a-container-from-a-created-docker-image)).
18
+ The `/input/images/ct/` folder contains the test set. There are no subfolders -
19
+ merely a bunch of `*.mha` files containing the test images. Your docker is expected to produce equivalently
20
+ named segmentation files (also ending with .mha) in the /output/images/kidney-tumor-and-cyst/ folder. The structure of those folders is shown
21
+ below with the example of two cases:
22
+
23
+ ├── input
24
+ │   └── case00000.mha
25
+ │   └── case00001.mha
26
+ ├── output
27
+ │   └── case00000.mha
28
+ │   └── case00001.mha
29
+
30
+ In reality, the cases will not be named with this predictable numbering system. They can have arbitrary file names.
31
+
32
+ NOTE: The dataset was released in .nii.gz format but grand-challenge.org is only able to work with .mha files, so our .nii.gz collection has been converted to .mha files on the backend, and these are the files that your docker container must know how to read and write. Please see the dummy submission's updated "run_inference.py" for an example of how you can do this.
33
+
34
+ In order to run the inference, your trained model has to be part of the docker image and needs to have been added to
35
+ the docker at the stage of building the image. Transferring parameter files is simply done by copying them to a
36
+ specified folder within the container using the `ADD` command in the dockerfile.
37
+ For more information see the examples of the dockerfiles we prepared.
38
+
39
+ Your docker image needs to expose the inference functionality via an inference script which must be named
40
+ `run_inference.py` and take no additional arguments (must be executable with `python run_inference.py`).
41
+ This script needs to use the images
42
+ provided in `/input/images/ct/` and write your segmentation predictions into the `/output/images/kidney-tumor-and-cyst/` folder (using the same name as the
43
+ corresponding input file). **IMPORTANT: Following best practices, your predictions must have the same geometry
44
+ (same shape + same affine) as the corresponding raw image!**
45
+
46
+ ## Docker examples
47
+
48
+ This folder consists of 2 examples that can be used as a base for docker submission of the KiTS challenge 2021.
49
+
50
+ - The `dummy_submission` folder includes
51
+ a simple [dockerfile](dummy_submission/Dockerfile)
52
+ and simplistic inference
53
+ script [run_inference.py](dummy_submission/run_inference.py)
54
+ for computing dummy output segmentation (this just creates random noise as segmentation).
55
+
56
+ - The `nnUNet_submission` folder has
57
+ a [dockerfile](nnU-Net_baseline/Dockerfile) for
58
+ running nnUNet baseline model along with 2 options: single model
59
+ submission ([run_inference.py](nnUNet_submission/run_inference.py))
60
+ and ensemble of the
61
+ models ([run_inference_ensemble.py](nnUNet_submission/run_inference_ensembling.py))
62
+ . Please note here, that to run the ensemble script locally, you need to change the naming of the parameters folder as
63
+ well as the script to run (as outlines in the comments of
64
+ the [dockerfile](nnUNet_submission/Dockerfile)).
65
+ Your docker run command has to be adapted accordingly. For final submission, your inference script should be
66
+ always called `run_inference.py`.
67
+
68
+ ## Installation and running guidelines
69
+
70
+ We recognize that not all participants will have had experience with Docker, so we've prepared quick guidelines for
71
+ setting up a docker and using the submission examples. Here are the steps to follow to:
72
+
73
+ - Install docker
74
+ - Build a docker image
75
+ - Run a container
76
+ - Save and load a docker image created
77
+
78
+ ### Step 1. Install Docker
79
+
80
+ To install docker use following instructions [https://docs.docker.com/engine/install/](https://docs.docker.com/engine/install/) depending on your OS.
81
+
82
+ ### Step 2. Creating Dockerfile
83
+
84
+ A good practice when using docker is to create a dockerfile with all needed requirements and needed operations. You can
85
+ find a simple example of the dockerfile in
86
+ the [`dummy_submission/`](dummy_submission) folder.
87
+ More complicated example of a dockerfile can be found
88
+ in [`nnUNet_submission/`](nnUNet_submission) folder,
89
+ where we specified additional requirements needed for running the nnUNet baseline model. Please make sure that your
90
+ dockerfile is placed in the same folder as your python script to run inference on the test data
91
+ (*run_inference.py*) and directory that contains your training weights (`model/` folder for dummy example and `parameters/`
92
+ folder for nnUNet baseline example).
93
+
94
+ Please double check that the naming of your folder with a trained model is correctly specified in a dockerfile as well
95
+ as in the inference script.
96
+
97
+ ### Step 3. Build a docker image from a dockerfile
98
+
99
+ Navigate to the directory with the dockerfile and run following command:
100
+
101
+ ```console
102
+ docker build -t YOUR_DOCKER_IMAGE_NAME .
103
+ ```
104
+
105
+ Note that the nnU-Net docker requires the parameters to build. The pretrained parameters are not available yet, but will be provided soon :-)
106
+
107
+ ### Step 4. Run a container from a created docker image
108
+
109
+ To run a container the `docker run` command is used:
110
+
111
+ ```console
112
+ docker run --rm --runtime=nvidia --ipc=host -v LOCAL_PATH_INPUT:/input/images/ct/:ro -v LOCAL_PATH_OUTPUT:/output/images/kidney-tumor-and-cyst/ YOUR_DOCKER_IMAGE_NAME
113
+ ```
114
+
115
+ `-v` flag mounts the directories between your local host and the container. `:ro` specifies that the folder mounted
116
+ with `-v` has read-only permissions. Make sure that `LOCAL_PATH_INPUT` contains your test samples,
117
+ and `LOCAL_PATH_OUTPUT` is an output folder for saving the predictions. During test set submission this command will
118
+ be run on a private server managed by the organizers with mounting to the folders with final test data. Please test
119
+ the docker on your local computer using the command above before uploading!
120
+
121
+ <!---
122
+ ### (Optional) Step 5. Running script within the container
123
+ To run any additional scripts, you can execute the following line **within the container**:
124
+ ```console
125
+ python run_inference.py
126
+ ```
127
+ """
128
+ -->
129
+
130
+ ### Step 5. Save the docker image
131
+
132
+ To save your docker image to a file on your local machine, you can run the following command in a terminal:
133
+
134
+ ```console
135
+ docker save YOUR_DOCKER_IMAGE_NAME | gzip -c > test_docker.tar.gz
136
+ ```
137
+
138
+ This will create a file named `test_docker.tar.gz` containing your image.
139
+
140
+ ### Step 6. Load the image
141
+
142
+ To double check your saved image, you can load it with:
143
+
144
+ ```console
145
+ docker load -i test_docker.tar.gz
146
+ ```
147
+
148
+ and run the loaded docker as outlined above with the following command (see Step 4):
149
+
150
+ ```console
151
+ docker run --rm --runtime=nvidia --ipc=host -v LOCAL_PATH_INPUT:/input/images/ct/:ro -v LOCAL_PATH_OUTPUT:/output/images/kidney-tumor-and-cyst/ YOUR_DOCKER_IMAGE_NAME
152
+ ```
kits21/kits21/examples/submission/dummy_submission/Dockerfile ADDED
@@ -0,0 +1,26 @@
1
+ # Here is an example of a Dockerfile to use. Please make sure this file is placed in the same folder as the run_inference.py file and the model/ directory that contains your training weights.
2
+
3
+ FROM ubuntu:latest
4
+
5
+ # Install some basic utilities and python
6
+ RUN apt-get update \
7
+ && apt-get install -y python3-pip python3-dev \
8
+ && cd /usr/local/bin \
9
+ && ln -s /usr/bin/python3 python \
10
+ && pip3 install --upgrade pip
11
+
12
+ RUN pip3 install numpy simpleitk nibabel
13
+
14
+ # Copy the folder with your pretrained model here to /model folder within the container. This part is skipped here due to simplicity reasons
15
+ # ADD model /model/
16
+
17
+ ADD run_inference.py ./
18
+
19
+ RUN groupadd -r myuser -g 433 && \
20
+ useradd -u 431 -r -g myuser -s /sbin/nologin -c "Docker image user" myuser
21
+
22
+ RUN mkdir /input_nifti && mkdir /output_nifti && chown -R myuser /input_nifti && chown -R myuser /output_nifti
23
+
24
+ USER myuser
25
+
26
+ CMD python3 ./run_inference.py
kits21/kits21/examples/submission/dummy_submission/run_inference.py ADDED
@@ -0,0 +1,121 @@
1
+ """
2
+ This python script is a dummy example of an inference script that populates the output/ folder. For this example, the
3
+ loading of the model from the directory /model is not taking place and the output/ folder is populated with arrays
4
+ filled with random class labels of the same size as the images in the input/ folder.
5
+ """
6
+
7
+ import os
8
+ import numpy as np
9
+ from pathlib import Path
10
+
11
+ import SimpleITK as sitk
12
+ import nibabel as nib
13
+
14
+ INPUT_NIFTI = '/input_nifti'
15
+ OUTPUT_NIFTI = '/output_nifti'
16
+ if not os.path.exists(INPUT_NIFTI):
17
+ os.mkdir(INPUT_NIFTI)
18
+ if not os.path.exists(OUTPUT_NIFTI):
19
+ os.mkdir(OUTPUT_NIFTI)
20
+
21
+
22
+ def _load_mha_as_nifti(filepath):
23
+ reader = sitk.ImageFileReader()
24
+ reader.SetImageIO("MetaImageIO")
25
+ reader.SetFileName(str(filepath))
26
+ image = reader.Execute()
27
+ nda = np.moveaxis(sitk.GetArrayFromImage(image), -1, 0)
28
+ mha_meta = {
29
+ "origin": image.GetOrigin(),
30
+ "spacing": image.GetSpacing(),
31
+ "direction": image.GetDirection(),
32
+ "filename": filepath.name
33
+ }
34
+
35
+ affine = np.array(
36
+ [[0.0, 0.0, -1*mha_meta["spacing"][2], 0.0],
37
+ [0.0, -1*mha_meta["spacing"][1], 0.0, 0.0],
38
+ [-1*mha_meta["spacing"][0], 0.0, 0.0, 0.0],
39
+ [0.0, 0.0, 0.0, 1.0]]
40
+ )
41
+
42
+ return nib.Nifti1Image(nda, affine), mha_meta
43
+
44
+
45
+ def _save_mha(segmentation_nib, mha_meta):
46
+ output_mha = '/output/images/kidney-tumor-and-cyst/'
47
+ if not Path(output_mha).exists():
48
+ Path(output_mha).mkdir(parents=True)
49
+
50
+ channels_last = np.moveaxis(np.asanyarray(segmentation_nib.dataobj), 0, -1)
51
+
52
+ dummy_segmentation = sitk.GetImageFromArray(channels_last)
53
+ dummy_segmentation.SetOrigin(mha_meta["origin"])
54
+ dummy_segmentation.SetSpacing(mha_meta["spacing"])
55
+ dummy_segmentation.SetDirection(mha_meta["direction"])
56
+
57
+ writer = sitk.ImageFileWriter()
58
+ writer.SetFileName(os.path.join(output_mha, mha_meta["filename"]))
59
+ writer.Execute(dummy_segmentation)
60
+
61
+
62
+ def convert_imaging_to_nifti():
63
+ input_mha = Path('/input/images/ct/')
64
+ meta = {}
65
+ for x in input_mha.glob("*.mha"):
66
+ x_nii, x_meta = _load_mha_as_nifti(x)
67
+ meta[x.stem] = x_meta
68
+ nib.save(x_nii, str(Path(INPUT_NIFTI) / "{}.nii.gz".format(x.stem)))
69
+
70
+ return meta
71
+
72
+
73
+ def convert_predictions_to_mha(conversion_meta):
74
+ for uid in conversion_meta:
75
+ pred_nii_pth = Path(OUTPUT_NIFTI) / "{}.nii.gz".format(uid)
76
+ if not pred_nii_pth.exists():
77
+ raise ValueError("No prediction found for file {}.mha".format(uid))
78
+ pred_nii = nib.load(str(pred_nii_pth))
79
+ _save_mha(pred_nii, conversion_meta[uid])
80
+
81
+
82
+
83
+ # =========================================================================
84
+ # Replace this function with your inference code!
85
+ def predict(image_nib, model):
86
+ # As a dummy submission, just predict random voxels
87
+ width, height, queue = image_nib.shape
88
+ data = np.round(np.random.uniform(low=-0.49, high=2.49, size=(width, height, queue))).astype(np.uint8)
89
+
90
+ # Must return a Nifti1Image object
91
+ return nib.Nifti1Image(data, image_nib.affine)
92
+ # =========================================================================
93
+
94
+
95
+ def main():
96
+ # This converts the mha files to nifti just like the official GitHub
97
+ conversion_meta = convert_imaging_to_nifti()
98
+
99
+ # =========================================================================
100
+ # Load model from /model folder!
101
+ # This part is skipped for simplicity reasons
102
+ model = None
103
+ # =========================================================================
104
+
105
+ for filename in os.listdir(INPUT_NIFTI):
106
+ if filename.endswith(".nii.gz"):
107
+ # Load mha as nifti using provided function
108
+ image_nib = nib.load(os.path.join(INPUT_NIFTI, filename))
109
+
110
+ # Run your prediction function
111
+ segmentation_nib = predict(image_nib, model)
112
+
113
+ # Save nifti just as you otherwise would
114
+ nib.save(segmentation_nib, Path(OUTPUT_NIFTI) / "{}".format(filename))
115
+
116
+ # This converts the nifti predictions to the expected output
117
+ convert_predictions_to_mha(conversion_meta)
118
+
119
+
120
+ if __name__ == '__main__':
121
+ main()
kits21/kits21/examples/submission/nnUNet_submission/Dockerfile ADDED
@@ -0,0 +1,29 @@
1
+ FROM nvcr.io/nvidia/pytorch:20.08-py3
2
+
3
+ # Install some basic utilities and python
4
+ RUN apt-get update \
5
+ && apt-get install -y python3-pip python3-dev \
6
+ && cd /usr/local/bin \
7
+ && ln -s /usr/bin/python3 python \
8
+ && pip3 install --upgrade pip
9
+
10
+ # install nnunet
11
+ RUN pip install nnunet
12
+
13
+ # for single model inference
14
+ ADD parameters /parameters/
15
+ ADD run_inference.py ./
16
+
17
+ # for ensemble model inference
18
+ # ADD parameters_ensembling /parameters_ensembling/
19
+ # ADD run_inference_ensembling.py ./
20
+
21
+ RUN groupadd -r myuser -g 433 && \
22
+ useradd -u 431 -r -g myuser -s /sbin/nologin -c "Docker image user" myuser
23
+
24
+ RUN mkdir /input_nifti && mkdir /output_nifti && chown -R myuser /input_nifti && chown -R myuser /output_nifti
25
+
26
+ USER myuser
27
+
28
+ CMD python3 ./run_inference.py
29
+ # or CMD python3 ./run_inference_ensembling.py
kits21/kits21/examples/submission/nnUNet_submission/run_inference.py ADDED
@@ -0,0 +1,144 @@
1
+ from pathlib import Path
2
+ import os
3
+
4
+ import numpy as np
5
+ import nibabel as nib
6
+ import SimpleITK as sitk
7
+
8
+
9
+ INPUT_NIFTI = '/input_nifti'
10
+ OUTPUT_NIFTI = '/output_nifti'
11
+ if not os.path.exists(INPUT_NIFTI):
12
+ os.mkdir(INPUT_NIFTI)
13
+ if not os.path.exists(OUTPUT_NIFTI):
14
+ os.mkdir(OUTPUT_NIFTI)
15
+
16
+
17
+ def _load_mha_as_nifti(filepath):
18
+ reader = sitk.ImageFileReader()
19
+ reader.SetImageIO("MetaImageIO")
20
+ reader.SetFileName(str(filepath))
21
+ image = reader.Execute()
22
+ nda = np.moveaxis(sitk.GetArrayFromImage(image), -1, 0)
23
+ mha_meta = {
24
+ "origin": image.GetOrigin(),
25
+ "spacing": image.GetSpacing(),
26
+ "direction": image.GetDirection(),
27
+ "filename": filepath.name
28
+ }
29
+
30
+ affine = np.array(
31
+ [[0.0, 0.0, -1*mha_meta["spacing"][2], 0.0],
32
+ [0.0, -1*mha_meta["spacing"][1], 0.0, 0.0],
33
+ [-1*mha_meta["spacing"][0], 0.0, 0.0, 0.0],
34
+ [0.0, 0.0, 0.0, 1.0]]
35
+ )
36
+
37
+ return nib.Nifti1Image(nda, affine), mha_meta
38
+
39
+
40
+ def _save_mha(segmentation_nib, mha_meta):
41
+ output_mha = '/output/images/kidney-tumor-and-cyst/'
42
+ if not Path(output_mha).exists():
43
+ Path(output_mha).mkdir(parents=True)
44
+
45
+ channels_last = np.moveaxis(np.asanyarray(segmentation_nib.dataobj), 0, -1)
46
+
47
+ dummy_segmentation = sitk.GetImageFromArray(channels_last)
48
+ dummy_segmentation.SetOrigin(mha_meta["origin"])
49
+ dummy_segmentation.SetSpacing(mha_meta["spacing"])
50
+ dummy_segmentation.SetDirection(mha_meta["direction"])
51
+
52
+ writer = sitk.ImageFileWriter()
53
+ writer.SetFileName(os.path.join(output_mha, mha_meta["filename"]))
54
+ writer.Execute(dummy_segmentation)
55
+
56
+
57
+ def convert_imaging_to_nifti():
58
+ input_mha = Path('/input/images/ct/')
59
+ meta = {}
60
+ for x in input_mha.glob("*.mha"):
61
+ x_nii, x_meta = _load_mha_as_nifti(x)
62
+ meta[x.stem] = x_meta
63
+ nib.save(x_nii, str(Path(INPUT_NIFTI) / "{}.nii.gz".format(x.stem)))
64
+
65
+ return meta
66
+
67
+
68
+ def convert_predictions_to_mha(conversion_meta):
69
+ for uid in conversion_meta:
70
+ pred_nii_pth = Path(OUTPUT_NIFTI) / "{}.nii.gz".format(uid)
71
+ if not pred_nii_pth.exists():
72
+ raise ValueError("No prediction found for file {}.mha".format(uid))
73
+ pred_nii = nib.load(str(pred_nii_pth))
74
+ _save_mha(pred_nii, conversion_meta[uid])
75
+
76
+
77
+ if __name__ == '__main__':
78
+ """
79
+ This inference script is intended to be used within a Docker container as part of the KiTS Test set submission. It
80
+ expects to find input files (.nii.gz) in /input and will write the segmentation output to /output
81
+
82
+ For testing purposes we set the paths to something local, but once we pack it in a docker we need to adapt them of
83
+ course
84
+
85
+ IMPORTANT: This script performs inference using one nnU-net configuration (3d_lowres, 3d_fullres, 2d OR
86
+ 3d_cascade_fullres). Within the /parameter folder, nnU-Net expects to find fold_X subfolders where X is the fold ID
87
+ (typically [0-4]). These folds CANNOT originate from different configurations. There also needs to be the plans.pkl
88
+ file that you find along with these fold_X folders in the
89
+ corresponding nnunet training output directory.
90
+
91
+ /parameters/
92
+ ├── fold_0
93
+ │ ├── model_final_checkpoint.model
94
+ │ └── model_final_checkpoint.model.pkl
95
+ ├── fold_1
96
+ ├── ...
97
+ ├── plans.pkl
98
+
99
+ Note: nnU-Net will read the correct nnU-Net trainer class from the plans.pkl file. Thus there is no need to
100
+ specify it here.
101
+ """
102
+
103
+ # This converts the mha files to nifti just like the official GitHub
104
+ conversion_meta = convert_imaging_to_nifti()
105
+
106
+ # this will be changed to /input for the docker
107
+ input_folder = INPUT_NIFTI
108
+
109
+ # this will be changed to /output for the docker
110
+ output_folder = OUTPUT_NIFTI
111
+ outpth = Path(output_folder)
112
+ outpth.mkdir(parents=True, exist_ok=True)
113
+
114
+ # this will be changed to /parameters for the docker
115
+ parameter_folder = '/parameters'
116
+
117
+ from nnunet.inference.predict import predict_cases
118
+ from batchgenerators.utilities.file_and_folder_operations import subfiles, join
119
+
120
+ input_files = subfiles(input_folder, suffix='.mha', join=False)
121
+
122
+ output_files = [join(output_folder, i) for i in input_files]
123
+ input_files = [join(input_folder, i) for i in input_files]
124
+
125
+ # in the parameters folder are five models (fold_X) trained as a cross-validation. We use them as an ensemble for
126
+ # prediction
127
+ folds = (0, 1, 2, 3, 4)
128
+
129
+ # setting this to True will make nnU-Net use test time augmentation in the form of mirroring along all axes. This
130
+ # will increase inference time a lot at small gain, so you can turn that off
131
+ do_tta = True
132
+
133
+ # does inference with mixed precision. Same output, twice the speed on Turing and newer. It's free lunch!
134
+ mixed_precision = True
135
+
136
+ predict_cases(parameter_folder, [[i] for i in input_files], output_files, folds, save_npz=False,
137
+ num_threads_preprocessing=2, num_threads_nifti_save=2, segs_from_prev_stage=None, do_tta=do_tta,
138
+ mixed_precision=mixed_precision, overwrite_existing=True, all_in_gpu=False, step_size=0.5)
139
+
140
+ # This converts the nifti predictions to the expected output
141
+ convert_predictions_to_mha(conversion_meta)
142
+
143
+ # done!
144
+ # (ignore the postprocessing warning!)
kits21/kits21/examples/submission/nnUNet_submission/run_inference_ensembling.py ADDED
@@ -0,0 +1,181 @@
1
+ import shutil
2
+ from pathlib import Path
3
+ import os
4
+
5
+ import numpy as np
6
+ import nibabel as nib
7
+ import SimpleITK as sitk
8
+
9
+
10
+ INPUT_NIFTI = '/input_nifti'
11
+ OUTPUT_NIFTI = '/output_nifti'
12
+ if not os.path.exists(INPUT_NIFTI):
13
+ os.mkdir(INPUT_NIFTI)
14
+ if not os.path.exists(OUTPUT_NIFTI):
15
+ os.mkdir(OUTPUT_NIFTI)
16
+
17
+
18
+ def _load_mha_as_nifti(filepath):
19
+ reader = sitk.ImageFileReader()
20
+ reader.SetImageIO("MetaImageIO")
21
+ reader.SetFileName(str(filepath))
22
+ image = reader.Execute()
23
+ nda = np.moveaxis(sitk.GetArrayFromImage(image), -1, 0)
24
+ mha_meta = {
25
+ "origin": image.GetOrigin(),
26
+ "spacing": image.GetSpacing(),
27
+ "direction": image.GetDirection(),
28
+ "filename": filepath.name
29
+ }
30
+
31
+ affine = np.array(
32
+ [[0.0, 0.0, -1*mha_meta["spacing"][2], 0.0],
33
+ [0.0, -1*mha_meta["spacing"][1], 0.0, 0.0],
34
+ [-1*mha_meta["spacing"][0], 0.0, 0.0, 0.0],
35
+ [0.0, 0.0, 0.0, 1.0]]
36
+ )
37
+
38
+ return nib.Nifti1Image(nda, affine), mha_meta
39
+
40
+
41
+ def _save_mha(segmentation_nib, mha_meta):
42
+ output_mha = '/output/images/kidney-tumor-and-cyst/'
43
+ if not Path(output_mha).exists():
44
+ Path(output_mha).mkdir(parents=True)
45
+
46
+ channels_last = np.moveaxis(np.asanyarray(segmentation_nib.dataobj), 0, -1)
47
+
48
+ dummy_segmentation = sitk.GetImageFromArray(channels_last)
49
+ dummy_segmentation.SetOrigin(mha_meta["origin"])
50
+ dummy_segmentation.SetSpacing(mha_meta["spacing"])
51
+ dummy_segmentation.SetDirection(mha_meta["direction"])
52
+
53
+ writer = sitk.ImageFileWriter()
54
+ writer.SetFileName(os.path.join(output_mha, mha_meta["filename"]))
55
+ writer.Execute(dummy_segmentation)
56
+
57
+
58
+ def convert_imaging_to_nifti():
59
+ input_mha = Path('/input/images/ct/')
60
+ meta = {}
61
+ for x in input_mha.glob("*.mha"):
62
+ x_nii, x_meta = _load_mha_as_nifti(x)
63
+ meta[x.stem] = x_meta
64
+ nib.save(x_nii, str(Path(INPUT_NIFTI) / "{}.nii.gz".format(x.stem)))
65
+
66
+ return meta
67
+
68
+
69
+ def convert_predictions_to_mha(conversion_meta):
70
+ for uid in conversion_meta:
71
+ pred_nii_pth = Path(OUTPUT_NIFTI) / "{}.nii.gz".format(uid)
72
+ if not pred_nii_pth.exists():
73
+ raise ValueError("No prediction found for file {}.mha".format(uid))
74
+ pred_nii = nib.load(str(pred_nii_pth))
75
+ _save_mha(pred_nii, conversion_meta[uid])
76
+
77
+
78
+
79
+ if __name__ == '__main__':
80
+ """
81
+ This inference script is intended to be used within a Docker container as part of the KiTS Test set submission. It
82
+ expects to find input files (.nii.gz) in /input and will write the segmentation output to /output
83
+
84
+ For testing purposes we set the paths to something local, but once we pack it in a docker we need to adapt them of
85
+ course
86
+
87
+ IMPORTANT: This script performs inference using two nnU-net configurations, 3d_lowres and 3d_fullres. Within the
88
+ /parameter folder, this script expects to find a 3d_fullres and a 3d_lowres subfolder. Within each of these there
89
+ should be fold_X subfolders where X is the fold ID (typically [0-4]). These fold folder CANNOT originate from
90
+ different configurations (the fullres folds go into the 3d_fullres subfolder, the lowres folds go into the
91
+ 3d_lowres folder!). There also needs to be the plans.pkl file that you find along with these fold_X folders in the
92
+ corresponding nnunet training output directory.
93
+
94
+ /parameters/
95
+ 3d_fullres/
96
+ ├── fold_0
97
+ │ ├── model_final_checkpoint.model
98
+ │ └── model_final_checkpoint.model.pkl
99
+ ├── fold_1
100
+ ├── ...
101
+ └── plans.pkl
102
+ 3d_lowres/
103
+ ├── fold_0
104
+ ├── fold_1
105
+ ├── ...
106
+ └── plans.pkl
107
+
108
+ Note: nnU-Net will read the correct nnU-Net trainer class from the plans.pkl file. Thus there is no need to
109
+ specify it here.
110
+ """
111
+
112
+ # This converts the mha files to nifti just like the official GitHub
113
+ conversion_meta = convert_imaging_to_nifti()
114
+
115
+ # this will be changed to /input for the docker
116
+ input_folder = INPUT_NIFTI
117
+
118
+ # this will be changed to /output for the docker
119
+ output_folder = OUTPUT_NIFTI
120
+ outpth = Path(output_folder)
121
+ outpth.mkdir(parents=True, exist_ok=True)
122
+
123
+ # this will be changed to /parameters/X for the docker
124
+ parameter_folder_fullres = '/parameters_ensembling/3d_fullres'
125
+ parameter_folder_lowres = '/parameters_ensembling/3d_lowres'
126
+
127
+ from nnunet.inference.predict import predict_cases
128
+ from batchgenerators.utilities.file_and_folder_operations import subfiles, join, maybe_mkdir_p
129
+
130
+ input_files = subfiles(input_folder, suffix='.mha', join=False)
131
+
132
+ # in the parameters folder are five models (fold_X) trained as a cross-validation. We use them as an ensemble for
133
+ # prediction
134
+ folds_fullres = (0, 1, 2, 3, 4)
135
+ folds_lowres = (0, 1, 2, 3, 4)
136
+
137
+ # setting this to True will make nnU-Net use test time augmentation in the form of mirroring along all axes. This
138
+ # will increase inference time a lot at small gain, so we turn that off here (you do whatever you want)
139
+ do_tta = False
140
+
141
+ # does inference with mixed precision. Same output, twice the speed on Turing and newer. It's free lunch!
142
+ mixed_precision = True
143
+
144
+ # This will make nnU-Net save the softmax probabilities. We need them for ensembling the configurations. Note
145
+ # that ensembling the 5 folds of each configurationis done BEFORE saving the softmax probabilities
146
+ save_npz = True
147
+
148
+ # predict with 3d_lowres
149
+ output_folder_lowres = join(output_folder, '3d_lowres')
150
+ maybe_mkdir_p(output_folder_lowres)
151
+ output_files_lowres = [join(output_folder_lowres, i) for i in input_files]
152
+
153
+ predict_cases(parameter_folder_lowres, [[join(input_folder, i)] for i in input_files], output_files_lowres, folds_lowres,
154
+ save_npz=save_npz, num_threads_preprocessing=2, num_threads_nifti_save=2, segs_from_prev_stage=None,
155
+ do_tta=do_tta, mixed_precision=mixed_precision, overwrite_existing=True, all_in_gpu=False,
156
+ step_size=0.5)
157
+
158
+ # predict with 3d_fullres
159
+ output_folder_fullres = join(output_folder, '3d_fullres')
160
+ maybe_mkdir_p(output_folder_fullres)
161
+ output_files_fullres = [join(output_folder_fullres, i) for i in input_files]
162
+
163
+ predict_cases(parameter_folder_fullres, [[join(input_folder, i)] for i in input_files], output_files_fullres, folds_fullres,
164
+ save_npz=save_npz, num_threads_preprocessing=2, num_threads_nifti_save=2, segs_from_prev_stage=None,
165
+ do_tta=do_tta, mixed_precision=mixed_precision, overwrite_existing=True, all_in_gpu=False,
166
+ step_size=0.5)
167
+
168
+ # ensemble
169
+ from nnunet.inference.ensemble_predictions import merge
170
+ merge((output_folder_fullres, output_folder_lowres), output_folder, 4, override=True, postprocessing_file=None,
171
+ store_npz=False)
172
+
173
+ # cleanup
174
+ shutil.rmtree(output_folder_fullres)
175
+ shutil.rmtree(output_folder_lowres)
176
+
177
+ # This converts the nifti predictions to the expected output
178
+ convert_predictions_to_mha(conversion_meta)
179
+
180
+ # done!
181
+
kits21/kits21/examples/submission/sanity_check_guide.md ADDED
@@ -0,0 +1,107 @@
1
+ # Tutorial For Sanity-Checking Your Submission
2
+
3
+ This document is meant to walk you through the steps of creating an algorithm on grand-challenge.org and requesting that it be run on the three "sanity check" cases in order to make sure that everything is working properly for the final test set.
4
+
5
+ ## Make Sure Your Docker Is Expecting The **UPDATED** I/O Format
6
+
7
+ This had to be changed on August 30th due to a misunderstanding by the organizers. The important changes are:
8
+
9
+ - Grand Challenge can only support .mha files, not .nii.gz
10
+ - Rather than `/input`, the input files are mounted at `/input/images/ct`
11
+ - Rather than `/output`, the output files are expected at `/output/images/kidney-tumor-and-cyst`
12
+
13
+ The [example dummy submission](https://github.com/neheller/kits21/blob/master/examples/submission/dummy_submission/run_inference.py) has been updated to fit this format. You might find it helpful.
14
+
15
+ ## Make Sure Your Dockerfile Doesn't Run As Root
16
+
17
+ This can typically be resolved by simply adding the following to the end of your Dockerfile
18
+
19
+ ```Dockerfile
20
+ RUN groupadd -r myuser -g 433 && \
21
+ useradd -u 431 -r -g myuser -s /sbin/nologin -c "Docker image user" myuser
22
+
23
+ USER myuser
24
+
25
+ CMD python3 ./run_inference.py
26
+ ```
27
+
28
+ ## Make Sure to GZip Your Saved Container
29
+
30
+ You can do this with the following command (assuming you have built the container with the tag `-t my_submission`):
31
+
32
+ ```bash
33
+ docker save my_submission | gzip -c > test_docker.tar.gz
34
+ ```
35
+
36
+ ## "Create an Algorithm" On Grand Challenge
37
+
38
+ Log-in to grand-challenge.org and click "Algorithms" at the top of the page.
39
+
40
+ ![](figures/go_to_algorithms.png)
41
+
42
+ Then click "Add New Algorithm" in the middle of the page.
43
+
44
+ ![](figures/click_add_new_algorithm.png)
45
+
46
+ **If this option is not shown, make sure you are registered for the KiTS21 challenge [here](https://kits21.grand-challenge.org/participants/registration/create/).**
47
+
48
+ This button will take you to a form where you will need to provide some metadata about your approach. You can use any name and image that you like, but make sure to fill out the rest of the marked fields as shown below.
49
+
50
+ Your algorithm will be created only once, and you can update it with new containers later, so don't worry about describing different versions here.
51
+
52
+ ![](figures/fill_out_form0.png)
53
+
54
+ ![](figures/fill_out_form1.png)
55
+
56
+ **It's especially important to mimic the above for the "Inputs" and "Outputs" fields.**
57
+
58
+ ## Make `helle246` a User of Your Algorithm
59
+
60
+ In order for us to run the algorithm on your behalf on the sanity check cases, you must make `helle246` a user of your algorithm. Do this by first going to "Users" on the left hand side.
61
+
62
+ ![](figures/go_to_users.png)
63
+
64
+ Then clicking on "Add Users"
65
+
66
+ ![](figures/add_users.png)
67
+
68
+ Then typing in "helle246" and clicking "Save".
69
+
70
+ ![](figures/add_helle246.png)
71
+
72
+
73
+ ## Upload Your `.tar.gz` Docker Container
74
+
75
+ Now go back to your algorithm and click "Containers" on the left hand side.
76
+
77
+ ![](figures/go_to_containers.png)
78
+
79
+ Once there, click on "Upload a Container"
80
+
81
+ ![](figures/upload_a_container.png)
82
+
83
+ That will bring you to a form which will ask you how much RAM you need (max 24GB) and ask if GPU is needed, and then ask you to upload your `.tar.gz` file.
84
+
85
+ ![](figures/add_your_container.png)
86
+
87
+
88
+ After your docker container is uploaded, it will be screened for errors. Please address any issues that are raised and **only proceed to the next step after your algorithm shows "Ready: `True`"**
89
+
90
+ ![](figures/make_sure_is_ready.png)
91
+
92
+
93
+ ## Initiate Your Inference Job
94
+
95
+ Once your algorithm is created with the correct configuration and you have uploaded a container which was error checked without issue, you are ready to initiate your inference job. You can do this by filling out [this form](https://kits21.kits-challenge.org/inference-request) which asks for
96
+
97
+ - Your Grand Challenge Username
98
+ - Your Team's "Secret"
99
+ - You should have received this via email. Please contact Nicholas Heller at [email protected] if you have not received one
100
+ - Your Algorithm's URL
101
+ - e.g., `https://grand-challenge.org/algorithms/kits21-demo-algorithm/`
102
+
103
+ ![](figures/kits21_sc_request_form.png)
104
+
105
+ Once the form is submitted, the page will either show an error message or it will provide a link to a page back on grand-challenge.org where you can monitor the progress of your inference jobs.
106
+
107
+ Once they have finished, you will be able to download the predicted segmentations in order to check them against your expected output.
kits21/kits21/kits21.egg-info/PKG-INFO ADDED
@@ -0,0 +1,18 @@
1
+ Metadata-Version: 2.4
2
+ Name: kits21
3
+ Version: 2.2.3
4
+ License-File: LICENSE
5
+ Requires-Dist: batchgenerators
6
+ Requires-Dist: numpy
7
+ Requires-Dist: SimpleITK
8
+ Requires-Dist: medpy
9
+ Requires-Dist: nibabel
10
+ Requires-Dist: pillow
11
+ Requires-Dist: opencv-python
12
+ Requires-Dist: torch
13
+ Requires-Dist: scipy
14
+ Requires-Dist: scikit-image
15
+ Requires-Dist: requests
16
+ Requires-Dist: argparse
17
+ Dynamic: license-file
18
+ Dynamic: requires-dist
kits21/kits21/kits21.egg-info/SOURCES.txt ADDED
@@ -0,0 +1,27 @@
1
+ LICENSE
2
+ README.md
3
+ setup.py
4
+ examples/submission/dummy_submission/run_inference.py
5
+ examples/submission/nnUNet_submission/run_inference.py
6
+ examples/submission/nnUNet_submission/run_inference_ensembling.py
7
+ kits21.egg-info/PKG-INFO
8
+ kits21.egg-info/SOURCES.txt
9
+ kits21.egg-info/dependency_links.txt
10
+ kits21.egg-info/not-zip-safe
11
+ kits21.egg-info/requires.txt
12
+ kits21.egg-info/top_level.txt
13
+ kits21/annotation/__init__.py
14
+ kits21/annotation/import.py
15
+ kits21/annotation/postprocessing.py
16
+ kits21/annotation/sample_segmentations.py
17
+ kits21/configuration/__init__.py
18
+ kits21/configuration/labels.py
19
+ kits21/configuration/paths.py
20
+ kits21/evaluation/__init__.py
21
+ kits21/evaluation/compute_tolerances.py
22
+ kits21/evaluation/evaluate_predictions.py
23
+ kits21/evaluation/inter_rater_disagreement.py
24
+ kits21/evaluation/metrics.py
25
+ kits21/starter_code/__init__.py
26
+ kits21/starter_code/get_imaging.py
27
+ kits21/starter_code/get_imaging_v2.py
kits21/kits21/kits21.egg-info/dependency_links.txt ADDED
@@ -0,0 +1 @@
1
+
kits21/kits21/kits21.egg-info/not-zip-safe ADDED
@@ -0,0 +1 @@
1
+
kits21/kits21/kits21.egg-info/requires.txt ADDED
@@ -0,0 +1,12 @@
1
+ batchgenerators
2
+ numpy
3
+ SimpleITK
4
+ medpy
5
+ nibabel
6
+ pillow
7
+ opencv-python
8
+ torch
9
+ scipy
10
+ scikit-image
11
+ requests
12
+ argparse
kits21/kits21/pull_request_template.md ADDED
@@ -0,0 +1,16 @@
1
+ # <Title_Here>
2
+
3
+ ## Description
4
+
5
+ <description_here>
6
+
7
+ ## Checklist
8
+
9
+ - [ ] Merged latest `master`
10
+ - [ ] Updated version number in `README.md`
11
+ - [ ] Added changes to `changelog.md`
12
+ - [ ] Updated version number in `setup.py`
13
+ - [ ] (only when updating dataset) Ran `annotation.import` to completion
14
+ - [ ] (only when updating dataset) Updated Surface Dice tolerances in `labels.py` (execute `sample_segmentations.py`, and
15
+ after that `compute_tolerances.py`. Then put the new values in. Do not re-use sampled segmentations from prior dataset versions!)
16
+
kits21/kits21/setup.py ADDED
@@ -0,0 +1,21 @@
1
+ from setuptools import setup, find_namespace_packages
2
+
3
+ setup(name='kits21',
4
+ packages=find_namespace_packages(),
5
+ version='2.2.3',
6
+ description='',
7
+ zip_safe=False,
8
+ install_requires=[
9
+ 'batchgenerators',
10
+ 'numpy',
11
+ 'SimpleITK',
12
+ 'medpy',
13
+ 'nibabel',
14
+ 'pillow',
15
+ 'opencv-python',
16
+ 'torch',
17
+ 'scipy',
18
+ 'scikit-image',
19
+ 'requests',
20
+ 'argparse'
21
+ ])