MLLMSeg: Unlocking the Potential of MLLMs in Referring Expression Segmentation via a Light-weight Mask Decoder
This repository contains the MLLMSeg_InternVL2_5_8B_RES model presented in the paper Unlocking the Potential of MLLMs in Referring Expression Segmentation via a Light-weight Mask Decoder.
Referring Expression Segmentation (RES) aims to segment the image region specified by a referring expression. While Multimodal Large Language Models (MLLMs) excel at semantic understanding, their token-generation paradigm struggles with pixel-level dense prediction. MLLMSeg addresses this by fully exploiting the visual detail features already encoded by the MLLM's vision encoder, without introducing an extra visual encoder. It proposes a detail-enhanced and semantic-consistent feature fusion (DSFF) module and a light-weight mask decoder (only 34M parameters) that together leverage detailed spatial features and semantic features for precise mask prediction. Extensive experiments demonstrate that MLLMSeg generally surpasses both SAM-based and SAM-free competitors, striking a better balance between performance and cost.
Code: https://github.com/jcwang0602/MLLMSeg
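To make the idea concrete, below is a minimal, heavily simplified PyTorch sketch of the overall recipe: a semantic feature from the LLM is fused with detail features from the vision encoder, and a small convolutional head predicts the mask. This is an illustration only, not the released implementation; the class name, feature dimensions, and fusion rule are assumptions.

import torch
import torch.nn as nn

class LightMaskDecoderSketch(nn.Module):
    # Conceptual sketch only: fuse detail (patch) features from the MLLM vision encoder
    # with a semantic feature from the LLM, then predict a mask with a small conv head.
    # Dimensions, layer choices, and the fusion rule are illustrative assumptions.
    def __init__(self, vis_dim=1024, llm_dim=4096, hidden=256):
        super().__init__()
        self.vis_proj = nn.Linear(vis_dim, hidden)   # project patch (detail) features
        self.sem_proj = nn.Linear(llm_dim, hidden)   # project the LLM's semantic feature
        self.fuse = nn.Sequential(
            nn.Conv2d(hidden, hidden, 3, padding=1), nn.GELU(),
            nn.Conv2d(hidden, hidden, 3, padding=1), nn.GELU(),
        )
        self.mask_head = nn.Conv2d(hidden, 1, 1)     # light-weight mask prediction head

    def forward(self, vis_tokens, sem_token, grid_hw):
        # vis_tokens: (B, N, vis_dim) patch features; sem_token: (B, llm_dim); N == h * w
        b, n, _ = vis_tokens.shape
        h, w = grid_hw
        v = self.vis_proj(vis_tokens)                         # (B, N, hidden)
        s = self.sem_proj(sem_token).unsqueeze(1)             # (B, 1, hidden)
        fused = (v + s).transpose(1, 2).reshape(b, -1, h, w)  # semantic-conditioned detail map
        return self.mask_head(self.fuse(fused))               # (B, 1, h, w) mask logits

decoder = LightMaskDecoderSketch()
logits = decoder(torch.randn(1, 32 * 32, 1024), torch.randn(1, 4096), (32, 32))  # -> (1, 1, 32, 32)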
Usage
You can use this model with the transformers library. Below is an example demonstrating how to load the MLLMSeg_InternVL2_5_8B_RES model and run inference.
import torch
from PIL import Image
from transformers import AutoModel, AutoTokenizer
import torchvision.transforms as T
from torchvision.transforms.functional import InterpolationMode
import requests
from io import BytesIO
# Define image preprocessing utility functions
IMAGENET_MEAN = (0.485, 0.456, 0.406)
IMAGENET_STD = (0.229, 0.224, 0.225)
def build_transform(input_size):
    MEAN, STD = IMAGENET_MEAN, IMAGENET_STD
    transform = T.Compose([
        T.Lambda(lambda img: img.convert('RGB') if img.mode != 'RGB' else img),
        T.Resize((input_size, input_size), interpolation=InterpolationMode.BICUBIC),
        T.ToTensor(),
        T.Normalize(mean=MEAN, std=STD)
    ])
    return transform

def find_closest_aspect_ratio(aspect_ratio, target_ratios, width, height, image_size):
    best_ratio_diff = float('inf')
    best_ratio = (1, 1)
    area = width * height
    for ratio in target_ratios:
        target_aspect_ratio = ratio[0] / ratio[1]
        ratio_diff = abs(aspect_ratio - target_aspect_ratio)
        if ratio_diff < best_ratio_diff:
            best_ratio_diff = ratio_diff
            best_ratio = ratio
        elif ratio_diff == best_ratio_diff:
            if area > 0.5 * image_size * image_size * ratio[0] * ratio[1]:
                best_ratio = ratio
    return best_ratio

def dynamic_preprocess(image, min_num=1, max_num=12, image_size=448, use_thumbnail=False):
    orig_width, orig_height = image.size
    aspect_ratio = orig_width / orig_height
    target_ratios = set(
        (i, j) for n in range(min_num, max_num + 1) for i in range(1, n + 1) for j in range(1, n + 1) if
        i * j <= max_num and i * j >= min_num)
    target_ratios = sorted(target_ratios, key=lambda x: x[0] * x[1])
    target_aspect_ratio = find_closest_aspect_ratio(
        aspect_ratio, target_ratios, orig_width, orig_height, image_size)
    target_width = image_size * target_aspect_ratio[0]
    target_height = image_size * target_aspect_ratio[1]
    blocks = target_aspect_ratio[0] * target_aspect_ratio[1]
    resized_img = image.resize((target_width, target_height))
    processed_images = []
    for i in range(blocks):
        box = (
            (i % (target_width // image_size)) * image_size,
            (i // (target_width // image_size)) * image_size,
            ((i % (target_width // image_size)) + 1) * image_size,
            ((i // (target_width // image_size)) + 1) * image_size
        )
        split_img = resized_img.crop(box)
        processed_images.append(split_img)
    assert len(processed_images) == blocks
    if use_thumbnail and len(processed_images) != 1:
        thumbnail_img = image.resize((image_size, image_size))
        processed_images.append(thumbnail_img)
    return processed_images

def load_image(image_file, input_size=448, max_num=12):
    if image_file.startswith(('http://', 'https://')):
        response = requests.get(image_file)
        image = Image.open(BytesIO(response.content)).convert('RGB')
    else:
        image = Image.open(image_file).convert('RGB')
    transform = build_transform(input_size=input_size)
    images = dynamic_preprocess(image, image_size=input_size, use_thumbnail=True, max_num=max_num)
    pixel_values = [transform(image) for image in images]
    pixel_values = torch.stack(pixel_values)
    return pixel_values
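# Note: `load_image` returns a stacked tensor of tiles with shape
# (num_tiles, 3, input_size, input_size); with use_thumbnail=True an extra global
# thumbnail tile is appended whenever the image is split into multiple tiles.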
# Load model and tokenizer
model_path = "jcwang0602/MLLMSeg_InternVL2_5_8B_RES"
model = AutoModel.from_pretrained(
    model_path,
    torch_dtype=torch.bfloat16,
    low_cpu_mem_usage=True,
    trust_remote_code=True
).eval().cuda()
tokenizer = AutoTokenizer.from_pretrained(model_path, trust_remote_code=True, use_fast=False)
# Load an example image (replace with your image path or URL)
image_path = "https://github.com/jcwang0602/MLLMSeg/raw/main/assets/res_0.png" # Example image from the repo
pixel_values = load_image(image_path, max_num=6).to(torch.bfloat16).cuda()
# Define the referring expression
question = "Please segment the person in the screenshot."
# Set generation configuration
generation_config = dict(max_new_tokens=1024, do_sample=False, temperature=0.0)
# Generate response and segmentation mask
# The output_segmentation_mask=True parameter is crucial for getting the mask directly.
response, history, pred_mask = model.chat(
    tokenizer, pixel_values, question, generation_config,
    history=None, return_history=True, output_segmentation_mask=True
)
print(f'User: {question}\nAssistant: {response}')
# `pred_mask` will contain the predicted segmentation mask. It's a torch.Tensor.
# You can save or visualize it. For example, to save it as an image:
# from torchvision.utils import save_image
# save_image(pred_mask.float(), "segmentation_mask.png")
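To inspect the result visually, you can also overlay the predicted mask on the input image. The snippet below is a minimal sketch that assumes `pred_mask` can be squeezed to a 2-D map and binarized at 0.5; adjust it to the tensor shape and value range the model actually returns.

import numpy as np

image = Image.open(BytesIO(requests.get(image_path).content)).convert('RGB')
mask = pred_mask.squeeze().float().cpu().numpy()                                     # assumed 2-D after squeezing
mask_img = Image.fromarray((mask > 0.5).astype(np.uint8) * 255).resize(image.size)   # 0.5 threshold is an assumption
m = np.array(mask_img) > 127
overlay = np.array(image).astype(np.float32)
overlay[m] = 0.5 * overlay[m] + 0.5 * np.array([255.0, 0.0, 0.0])                    # tint masked pixels red
Image.fromarray(overlay.astype(np.uint8)).save("segmentation_overlay.png")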
Performance Metrics
Referring Expression Segmentation

Referring Expression Comprehension

Generalized Referring Expression Segmentation

Visualization
Referring Expression Segmentation

Referring Expression Comprehension

Generalized Referring Expression Segmentation

Citation
If our work is useful for your research, please consider citing:
@misc{wang2025unlockingpotentialmllmsreferring,
    title={Unlocking the Potential of MLLMs in Referring Expression Segmentation via a Light-weight Mask Decoder},
    author={Jingchao Wang and Zhijian Wu and Dingjiang Huang and Yefeng Zheng and Hong Wang},
    year={2025},
    eprint={2508.04107},
    archivePrefix={arXiv},
    primaryClass={cs.CV},
    url={https://arxiv.org/abs/2508.04107},
}