arxiv:2508.09789

Describe What You See with Multimodal Large Language Models to Enhance Video Recommendations

Published on Aug 13 · Submitted by marcodena on Aug 20

AI-generated summary

A zero-finetuning framework uses Multimodal Large Language Models to inject high-level semantics into video recommendations, improving intent-awareness over traditional methods.

Abstract

Existing video recommender systems rely primarily on user-defined metadata or on low-level visual and acoustic signals extracted by specialised encoders. These low-level features describe what appears on the screen but miss deeper semantics such as intent, humour, and world knowledge that make clips resonate with viewers. For example, is a 30-second clip simply a singer on a rooftop, or an ironic parody filmed amid the fairy chimneys of Cappadocia, Turkey? Such distinctions are critical to personalised recommendations yet remain invisible to traditional encoding pipelines. In this paper, we introduce a simple, recommendation system-agnostic zero-finetuning framework that injects high-level semantics into the recommendation pipeline by prompting an off-the-shelf Multimodal Large Language Model (MLLM) to summarise each clip into a rich natural-language description (e.g. "a superhero parody with slapstick fights and orchestral stabs"), bridging the gap between raw content and user intent. We encode the MLLM output with a state-of-the-art text encoder and feed it into standard collaborative, content-based, and generative recommenders. On the MicroLens-100K dataset, which emulates user interactions with TikTok-style videos, our framework consistently surpasses conventional video, audio, and metadata features across five representative models. Our findings highlight the promise of leveraging MLLMs as on-the-fly knowledge extractors to build more intent-aware video recommenders.
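As a concrete picture of the description step, here is a minimal sketch: sample a handful of frames from a clip and prompt an off-the-shelf MLLM for an intent-level, culturally aware summary. GPT-4o as the MLLM, the frame count, and the prompt wording are illustrative assumptions, not the paper's exact setup.

```python
# Minimal sketch of the description step (illustrative, not the paper's exact setup):
# sample frames from a clip and prompt an off-the-shelf MLLM for an intent-level summary.
import base64

import cv2  # pip install opencv-python
from openai import OpenAI  # pip install openai; GPT-4o is a stand-in for any MLLM


def sample_frames(video_path: str, n_frames: int = 8) -> list[str]:
    """Uniformly sample frames from the clip and return them as base64-encoded JPEGs."""
    cap = cv2.VideoCapture(video_path)
    total = int(cap.get(cv2.CAP_PROP_FRAME_COUNT))
    frames = []
    for i in range(n_frames):
        cap.set(cv2.CAP_PROP_POS_FRAMES, int(i * total / n_frames))
        ok, frame = cap.read()
        if not ok:
            break
        ok, buf = cv2.imencode(".jpg", frame)
        if ok:
            frames.append(base64.b64encode(buf).decode("utf-8"))
    cap.release()
    return frames


def describe_clip(video_path: str, client: OpenAI) -> str:
    """Ask the MLLM for a description that goes beyond what is literally on screen."""
    content = [{
        "type": "text",
        "text": ("Describe this short video for a recommender system. Go beyond what is "
                 "visible: cover the creator's intent, humour, references, and cultural "
                 "context in two to three sentences."),
    }]
    content += [
        {"type": "image_url", "image_url": {"url": f"data:image/jpeg;base64,{f}"}}
        for f in sample_frames(video_path)
    ]
    resp = client.chat.completions.create(
        model="gpt-4o",  # assumption: any off-the-shelf MLLM could be swapped in here
        messages=[{"role": "user", "content": content}],
    )
    return resp.choices[0].message.content


# Example usage (hypothetical file name):
# print(describe_clip("clip_001.mp4", OpenAI()))
```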

Community

Paper author · Paper submitter

I'm excited to share our latest paper, accepted at RecSys 2025!

"Describe What You See with Multimodal Large Language Models to Enhance Video Recommendations”, which gave me the chance to connect my Computer Vision and Recommendation system experience.

📌 The challenge: Video recommendation is hard because we do not know what makes a video interesting to a user. Video encoders produce features that capture "someone is dancing on a rooftop", but they are blind to the cultural context that makes the clip resonate (e.g. the dance parodies a 1990s superhero trope).

🧠 Our solution: We use Multimodal Large Language Models to generate rich descriptions of each video (scenes, characters, intent, and so on), and then plug these descriptions into standard recommendation models through a lightweight text encoder.
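Here is a minimal sketch of that second step, assuming a sentence-transformers encoder and a simple cosine-similarity content-based ranker as the downstream model; the checkpoint, the toy descriptions, and the scoring rule are illustrative choices rather than the paper's exact pipeline.

```python
# Minimal sketch: embed MLLM-generated descriptions with a text encoder and use them
# as item features in a simple content-based ranker (illustrative, not the paper's setup).
import numpy as np
from sentence_transformers import SentenceTransformer  # pip install sentence-transformers

encoder = SentenceTransformer("all-MiniLM-L6-v2")  # assumption: any strong text encoder works

item_ids = ["v1", "v2", "v3"]
descriptions = [  # toy MLLM outputs, invented for illustration
    "A superhero parody with slapstick fights and orchestral stabs.",
    "A calm rooftop acoustic cover at sunset, filmed in one take.",
    "An ironic travel vlog poking fun at influencer clichés in Cappadocia.",
]
item_emb = encoder.encode(descriptions, normalize_embeddings=True)  # (n_items, dim)

# Represent a user by the mean embedding of the clips they watched...
watched = ["v1", "v3"]
watched_idx = [item_ids.index(v) for v in watched]
user_profile = item_emb[watched_idx].mean(axis=0)
user_profile /= np.linalg.norm(user_profile)

# ...and rank unseen clips by cosine similarity to that profile.
scores = item_emb @ user_profile
ranking = [item_ids[i] for i in np.argsort(-scores) if item_ids[i] not in watched]
print(ranking)
```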

✅ Key takeaway:

  • pixels show what happens on-screen
  • titles reflect what the uploader hopes will attract clicks
  • but MLLM-generated text captures why viewers might care
  • in our experiments, this yields up to 60% gains in recommendation performance.

Hi Marco,

May I know where I can access the prompts mentioned in the paper?


Ohh, I forgot to upload them! Lemme do it on Monday. Sorry!
