---
tags:
  - medical
---

This is the official data repository for RadIR: A Scalable Framework for Multi-Grained Medical Image Retrieval via Radiology Report Mining.

We mine image-paired reports to extract findings on diverse anatomical structures, and quantify multi-grained image-image relevance via RaTEScore. Specifically, we have extended two public datasets for the multi-grained medical image retrieval task:

  • MIMIC-IR is extended from MIMIC-CXR, containing 377,110 images and x anatomical structures.
  • CTRATE-IR is extended from CTRATE, containing 25,692 images and 48 anatomical structures.

A simple demo to read the data from CTRATE-IR:

```python
import pandas as pd
import numpy as np

anatomy_condition = 'bone'
sample_A_idx = 10
sample_B_idx = 20

# Each anatomy-level CSV lists one sample per row:
# column 0 is the sample id, column 1 is the findings extracted for this anatomical structure.
df = pd.read_csv(f'CTRATE-IR/anatomy/train_entity/{anatomy_condition}.csv')
id_ls = df.iloc[:, 0].tolist()
findings_ls = df.iloc[:, 1].tolist()

# Pairwise relevance matrix (RaTEScore) between samples, in the same row order as the CSV.
simi_tab = np.load(f'CTRATE-IR/anatomy/train_ratescore/{anatomy_condition}.npy')

print(f'Sample {id_ls[sample_A_idx]} findings on {anatomy_condition}: {findings_ls[sample_A_idx]}')
print(f'Sample {id_ls[sample_B_idx]} findings on {anatomy_condition}: {findings_ls[sample_B_idx]}')
print(f'Relevance score: {simi_tab[sample_A_idx, sample_B_idx]}')
```
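
To see which anatomical structures are covered, you can simply enumerate the CSV files in the entity directory. This is a minimal sketch, assuming the dataset has been downloaded locally so that the `CTRATE-IR/anatomy/train_entity/` folder from the demo above exists:

```python
import os

# One CSV per anatomical structure; the file name (without extension) is the condition name.
entity_dir = 'CTRATE-IR/anatomy/train_entity'
conditions = sorted(os.path.splitext(f)[0] for f in os.listdir(entity_dir) if f.endswith('.csv'))
print(f'{len(conditions)} anatomical structures, e.g. {conditions[:5]}')
```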

Note that the scores have been normalized to 0~100 and stored as uint8. We also provide whole image-level relevance, quantified on the entire reports:

```python
import os
import json
import numpy as np

sample_A_idx = 10
sample_B_idx = 20

# Each line of the JSONL file describes one sample, including its image path and full report text.
with open('CTRATE-IR/train_filtered.jsonl', 'r') as f:
    data = [json.loads(l) for l in f]

# Whole-image pairwise relevance matrix, in the same order as the JSONL file.
simi_tab = np.load('CTRATE-IR/CT_train_ratescore.npy')

sample_A_id = os.path.basename(data[sample_A_idx]['img_path'])
sample_B_id = os.path.basename(data[sample_B_idx]['img_path'])

sample_A_report = data[sample_A_idx]['text']
sample_B_report = data[sample_B_idx]['text']

print(f'Sample {sample_A_id} report: {sample_A_report}\n')
print(f'Sample {sample_B_id} report: {sample_B_report}\n')
print(f'Whole image relevance score: {simi_tab[sample_A_idx, sample_B_idx]}')
```
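
Since both the anatomy-level and the whole-image relevance matrices are stored as uint8 values in the 0~100 range, you may want to map them back to float similarities in [0, 1] before using them (e.g. as training targets). A minimal sketch, reusing the file path from the demo above:

```python
import numpy as np

# Load one of the uint8 relevance matrices (values in 0~100)...
simi_tab = np.load('CTRATE-IR/CT_train_ratescore.npy')

# ...and rescale it to float similarities in [0, 1].
simi_float = simi_tab.astype(np.float32) / 100.0
print(simi_float.dtype, simi_float.min(), simi_float.max())
```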

For the raw image data, you can download it from CTRATE (or RadGenome-ChestCT) and MIMIC-CXR. We keep all sample IDs consistent, so you can easily match our annotations to the original images.
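
For example, once the raw volumes are downloaded you can locate the file for a given CTRATE-IR sample by matching file names. This is only a sketch: `CTRATE_ROOT` is a hypothetical local download path, and it assumes the `img_path` basenames in `train_filtered.jsonl` match the original file names, per the consistent-ID statement above.

```python
import os
import json

CTRATE_ROOT = '/path/to/CT-RATE'  # hypothetical local download location

with open('CTRATE-IR/train_filtered.jsonl', 'r') as f:
    data = [json.loads(l) for l in f]

target_name = os.path.basename(data[10]['img_path'])

# Walk the local download and find the file whose name matches the CTRATE-IR sample id.
match = None
for root, _, files in os.walk(CTRATE_ROOT):
    if target_name in files:
        match = os.path.join(root, target_name)
        break

print(f'Sample {target_name} -> {match}')
```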