ronniejiangC and nielsr (HF Staff) committed

Commit 7e5c7b3 (verified) · Parent(s): e64a040

Add comprehensive dataset card for MM-RIS (#2)


- Add comprehensive dataset card for MM-RIS (ef37cca438be6cdf678ea78d702c0aab3ef1cfe3)


Co-authored-by: Niels Rogge <[email protected]>

Files changed (1): README.md (+121, -0)

README.md ADDED

---
task_categories:
- image-segmentation
language:
- en
tags:
- multimodal
- referring-image-segmentation
- infrared
- visible
- image-fusion
size_categories:
- 10K<n<100K
---

# MM-RIS: Multimodal Referring Image Segmentation Dataset

The **MM-RIS** dataset was introduced in the paper [RIS-FUSION: Rethinking Text-Driven Infrared and Visible Image Fusion from the Perspective of Referring Image Segmentation](https://huggingface.co/papers/2509.12710).

This large-scale benchmark supports the multimodal referring image segmentation (RIS) task, providing a goal-aligned way to supervise and evaluate how effectively natural language guides infrared and visible image fusion.
21
+
22
+ ## Paper
23
+
24
+ [RIS-FUSION: Rethinking Text-Driven Infrared and Visible Image Fusion from the Perspective of Referring Image Segmentation](https://huggingface.co/papers/2509.12710)
25
+
26
+ ## Code
27
+
28
+ The official code repository for the associated RIS-FUSION project can be found on GitHub: [https://github.com/SijuMa2003/RIS-FUSION](https://github.com/SijuMa2003/RIS-FUSION)
29
+
30
+ ## Introduction
31
+
32
+ Text-driven infrared and visible image fusion has gained attention for enabling natural language to guide the fusion process. However, existing methods often lack a goal-aligned task to supervise and evaluate how effectively the input text contributes to the fusion outcome.
33
+
34
+ We observe that **referring image segmentation (RIS)** and text-driven fusion share a common objective: highlighting the object referred to by the text. Motivated by this, we propose **RIS-FUSION**, a cascaded framework that unifies fusion and RIS through joint optimization.
35
+
36
+ To support the multimodal referring image segmentation task, we introduce **MM-RIS**, a large-scale benchmark with **12.5k training** and **3.5k testing** triplets, each consisting of an infrared-visible image pair, a segmentation mask, and a referring expression.
37
+
38
+ ## Dataset Structure
39
+
40
+ The MM-RIS dataset is available in this Hugging Face repository and consists of the following Parquet files:
41
+
42
+ - `mm_ris_test.parquet`
43
+ - `mm_ris_val.parquet`
44
+ - `mm_ris_train_part1.parquet`
45
+ - `mm_ris_train_part2.parquet`
46
+
The two training partitions together hold the 12.5k training triplets and `mm_ris_test.parquet` holds the 3.5k testing triplets, while `mm_ris_val.parquet` provides a separate validation split. Each triplet includes an infrared image, a visible image, a segmentation mask, and a natural language referring expression.
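
For a quick look at how the triplets are stored, you can inspect any split directly with pandas. This is a minimal, unofficial sketch (it assumes `pandas` and `pyarrow` are installed and that the file has already been downloaded to `./data/`); the columns printed are simply whatever the Parquet schema defines.

```python
# Minimal inspection sketch (assumes pandas + pyarrow; the path is illustrative).
import pandas as pd

df = pd.read_parquet("./data/mm_ris_test.parquet")
print(df.shape)    # (number of triplets, number of columns)
print(df.dtypes)   # column names and storage types as defined by the file
print(df.iloc[0])  # one triplet: infrared/visible images, mask, referring expression
```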

## Sample Usage

To prepare the MM-RIS dataset for use with the RIS-FUSION code, you will need to download all the dataset files from this repository and merge the training partitions.

1. **Download the dataset files**:
   Download `mm_ris_test.parquet`, `mm_ris_val.parquet`, `mm_ris_train_part1.parquet`, and `mm_ris_train_part2.parquet` from this Hugging Face repository and place them under a `data/` directory in your project, ideally within a cloned RIS-FUSION GitHub repository.

2. **Merge partitioned parquet files**:
   The RIS-FUSION GitHub repository provides a script to merge the partitioned training data. Assuming you have cloned the repository and placed the parquet files in `./data/`:

   ```bash
   python ./data/merge_parquet.py
   ```
   This script will combine `mm_ris_train_part1.parquet` and `mm_ris_train_part2.parquet` into a single `mm_ris_train.parquet` file. A scripted alternative covering both steps is sketched after this list.
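
If you prefer to script both steps, the following is a minimal sketch using `huggingface_hub` and `pandas` instead of the official `merge_parquet.py`. The repo id below is a placeholder (this card does not state it), and the concatenation simply follows the part numbering; treat this as an unofficial convenience, not the project's canonical pipeline.

```python
# Hedged alternative to steps 1-2 above: download the four Parquet files and
# merge the two training partitions. REPO_ID is a placeholder -- substitute
# this dataset's actual Hugging Face repo id.
from huggingface_hub import hf_hub_download
import pandas as pd

REPO_ID = "<this-dataset-repo-id>"  # placeholder, not given on this card
FILES = [
    "mm_ris_test.parquet",
    "mm_ris_val.parquet",
    "mm_ris_train_part1.parquet",
    "mm_ris_train_part2.parquet",
]

# Step 1: download each file into ./data/
paths = {
    name: hf_hub_download(repo_id=REPO_ID, filename=name,
                          repo_type="dataset", local_dir="./data")
    for name in FILES
}

# Step 2: concatenate the training partitions into mm_ris_train.parquet,
# mirroring what ./data/merge_parquet.py is described to produce.
train = pd.concat(
    [pd.read_parquet(paths["mm_ris_train_part1.parquet"]),
     pd.read_parquet(paths["mm_ris_train_part2.parquet"])],
    ignore_index=True,
)
train.to_parquet("./data/mm_ris_train.parquet")
```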

Once the dataset is prepared, you can use it for training and testing models as shown in the examples below.

### Training Example

```bash
python train_with_lavt.py \
  --train_parquet ./data/mm_ris_train.parquet \
  --val_parquet ./data/mm_ris_val.parquet \
  --prefusion_model unet_fuser --prefusion_base_ch 32 \
  --epochs 10 -b 16 -j 16 \
  --img_size 480 \
  --swin_type base \
  --pretrained_swin_weights ./pretrained_weights/swin_base_patch4_window12_384_22k.pth \
  --bert_tokenizer ./bert/pretrained_weights/bert-base-uncased \
  --ck_bert ./bert/pretrained_weights/bert-base-uncased \
  --init_from_lavt_one ./pretrained_weights/lavt_one_8_cards_ImgNet22KPre_swin-base-window12_refcoco+_adamw_b32lr0.00005wd1e-2_E40.pth \
  --lr_seg 5e-5 --wd_seg 1e-2 --lr_pf 1e-4 --wd_pf 1e-2 \
  --lambda_prefusion 3.0 \
  --w_sobel_vis 0.0 \
  --w_sobel_ir 1.0 \
  --w_grad 1.0 \
  --w_ssim_vis 0.5 \
  --w_ssim_ir 0.0 \
  --w_mse_vis 0.5 \
  --w_mse_ir 2.0 \
  --eval_vis_dir ./eval_vis \
  --output-dir ./ckpts/risfusion
```

### Testing Example

```bash
python test.py \
  --ckpt ./ckpts/risfusion/model_best_lavt.pth \
  --test_parquet ./data/mm_ris_test.parquet \
  --out_dir ./your_output_dir \
  --bert_tokenizer ./bert/pretrained_weights/bert-base-uncased \
  --ck_bert ./bert/pretrained_weights/bert-base-uncased
```

## Citation

If you find this dataset or the associated paper useful, please consider citing:

```bibtex
@article{RIS-FUSION2025,
  title   = {RIS-FUSION: Rethinking Text-Driven Infrared and Visible Image Fusion from the Perspective of Referring Image Segmentation},
  author  = {Ma, Siju and Gong, Changsiyu and Fan, Xiaofeng and Ma, Yong and Jiang, Chengjie},
  journal = {...},
  year    = {2025}
}
```

## Acknowledgements

- [Swin Transformer](https://github.com/microsoft/Swin-Transformer)
- [LAVT](https://github.com/yz93/LAVT)
- [MMEngine](https://github.com/open-mmlab/mmengine)