Video-Text-to-Text
Safetensors
English
qwen2_5_vl

MUSEG-3B

Paper | GitHub

We propose MUSEG 🌟, a novel RL-based method that enhances temporal understanding by introducing timestamp-aware multi-segment grounding. MUSEG enables MLLMs to align queries with multiple relevant video segments, promoting more comprehensive temporal reasoning ⏳. To facilitate effective learning, we design a customized RL training recipe with phased rewards that progressively guides the model toward temporally grounded reasoning. Extensive experiments on temporal grounding and time-sensitive video QA tasks demonstrate that MUSEG significantly outperforms existing methods and generalizes well across diverse temporal understanding scenarios 🚀.

More Details

Please refer to our GitHub Repository for more details about this model.
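Quick Start

The snippet below is a minimal inference sketch, not code from the MUSEG repository. It assumes the checkpoint loads through the standard Qwen2.5-VL interface in recent transformers (Qwen2_5_VLForConditionalGeneration with AutoProcessor) and that the qwen-vl-utils package is installed for video preprocessing; the video path, frame rate, and prompt are placeholders.

# Minimal sketch: video QA / temporal-grounding style query with MUSEG-3B,
# assuming the standard Qwen2.5-VL loading path in transformers.
import torch
from transformers import Qwen2_5_VLForConditionalGeneration, AutoProcessor
from qwen_vl_utils import process_vision_info

model_id = "Darwin-Project/MUSEG-3B"
model = Qwen2_5_VLForConditionalGeneration.from_pretrained(
    model_id, torch_dtype=torch.bfloat16, device_map="auto"
)
processor = AutoProcessor.from_pretrained(model_id)

# Placeholder video and prompt; frame sampling follows qwen_vl_utils defaults
# for the requested fps.
messages = [
    {
        "role": "user",
        "content": [
            {"type": "video", "video": "file:///path/to/video.mp4", "fps": 1.0},
            {"type": "text", "text": "During which seconds does the person open the door?"},
        ],
    }
]

text = processor.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
image_inputs, video_inputs = process_vision_info(messages)
inputs = processor(
    text=[text],
    images=image_inputs,
    videos=video_inputs,
    padding=True,
    return_tensors="pt",
).to(model.device)

output_ids = model.generate(**inputs, max_new_tokens=256)
trimmed = [out[len(inp):] for inp, out in zip(inputs.input_ids, output_ids)]
print(processor.batch_decode(trimmed, skip_special_tokens=True)[0])

For the exact prompts and inference settings used in the paper, please follow the GitHub repository.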

Citation

If you find our work helpful for your research, please consider citing it:

@article{luo2025museg,
    title={MUSEG: Reinforcing Video Temporal Understanding via Timestamp-Aware Multi-Segment Grounding}, 
    author={Fuwen Luo and Shengfeng Lou and Chi Chen and Ziyue Wang and Chenliang Li and Weizhou Shen and Jiyue Guo and Peng Li and Ming Yan and Ji Zhang and Fei Huang and Yang Liu},
    journal={arXiv preprint arXiv:2505.20715},
    year={2025}
}