---
license: apache-2.0
datasets:
  - PolyU-ChenLab/ET-Instruct-164K
language:
  - en
metrics:
  - f1
base_model:
  - Qwen/Qwen2.5-VL-3B-Instruct
pipeline_tag: video-text-to-text
---

# MUSEG-3B

[Paper](https://arxiv.org/abs/2505.20715) | GitHub

We propose MUSEG 🌟, a novel RL-based method that enhances temporal understanding by introducing timestamp-aware multi-segment grounding. MUSEG enables MLLMs to align queries with multiple relevant video segments, promoting more comprehensive temporal reasoning ⏳. To facilitate effective learning, we design a customized RL training recipe with phased rewards that progressively guides the model toward temporally grounded reasoning. Extensive experiments on temporal grounding and time-sensitive video QA tasks demonstrate that MUSEG significantly outperforms existing methods and generalizes well across diverse temporal understanding scenarios 🚀.

## More Details

Please refer to our GitHub Repository for more details about this model.
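
Since MUSEG-3B is fine-tuned from Qwen/Qwen2.5-VL-3B-Instruct, it should work with the standard Qwen2.5-VL inference interface in 🤗 Transformers. The following is a minimal sketch, not our official inference code: it assumes the repository id `roufaen/MUSEG-3B`, a placeholder local video file and question, and the generic Qwen2.5-VL chat format. The exact prompt template expected for timestamped grounding may differ, so please refer to the GitHub repository for details.

```python
# Minimal inference sketch for MUSEG-3B, assuming the standard
# Qwen2.5-VL interface from transformers and qwen-vl-utils.
# The repo id, video path, and question below are placeholders.
from transformers import Qwen2_5_VLForConditionalGeneration, AutoProcessor
from qwen_vl_utils import process_vision_info

model = Qwen2_5_VLForConditionalGeneration.from_pretrained(
    "roufaen/MUSEG-3B", torch_dtype="auto", device_map="auto"
)
processor = AutoProcessor.from_pretrained("roufaen/MUSEG-3B")

messages = [{
    "role": "user",
    "content": [
        {"type": "video", "video": "path/to/video.mp4", "fps": 1.0},
        {"type": "text", "text": "During which seconds does the dog catch the frisbee?"},
    ],
}]

# Build the chat prompt and extract the sampled video frames.
text = processor.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
image_inputs, video_inputs = process_vision_info(messages)
inputs = processor(
    text=[text], images=image_inputs, videos=video_inputs,
    padding=True, return_tensors="pt",
).to(model.device)

# Generate and decode only the newly produced tokens.
output_ids = model.generate(**inputs, max_new_tokens=256)
trimmed = [out[len(inp):] for inp, out in zip(inputs.input_ids, output_ids)]
print(processor.batch_decode(trimmed, skip_special_tokens=True)[0])
```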

## Citation

If you find our work helpful for your research, please consider citing it:

```bibtex
@article{luo2025museg,
    title={MUSEG: Reinforcing Video Temporal Understanding via Timestamp-Aware Multi-Segment Grounding},
    author={Fuwen Luo and Shengfeng Lou and Chi Chen and Ziyue Wang and Chenliang Li and Weizhou Shen and Jiyue Guo and Peng Li and Ming Yan and Ji Zhang and Fei Huang and Yang Liu},
    journal={arXiv preprint arXiv:2505.20715},
    year={2025}
}
```