---
license: cc-by-nc-sa-4.0
task_categories:
- video-classification
- visual-question-answering
- question-answering
language:
- en
size_categories:
- n<1K
---
# MORSE-500 Benchmark
## 🔥 News

- **May 15, 2025**: We release **MORSE-500**, 500 programmatically generated videos across six reasoning categories (abstract, mathematical, physical, planning, spatial, and temporal) designed to stress-test multimodal reasoning. Frontier models, including OpenAI o3 and Gemini 2.5 Pro, score below 25% accuracy (see the 🏆 Leaderboard).
- Visit 🤗 Data: morse-500 for the latest updates.
## 📦 Resources

- 🌐 Website: morse-500
- 🤗 Data: morse-500
- 🤗 Video Viewer: morse-500-view
- 💻 Code: morse-500
- 📄 Paper: arXiv:2506.05523
## ✨ Key Features

| Aspect | Details |
|---|---|
| Fresh & Portable | 500 newly cooked video clips + CSV metadata that runs fast |
| Scalable Difficulty | Videos are generated programmatically, so we can dial up complexity and release harder versions as models improve |
| Diverse Categories | Spanning Abstract, Mathematical, Physical, Planning, Spatial, Temporal (+ Causal), a vibrant mix of the reasoning types that matter |
| Pure Visual Reasoning | Questions are baked right into the videos. No text crutches, no shortcuts: if you can't see it, you can't solve it |
| Developer-Friendly | A "-view" subset streams directly on Hugging Face, making browsing and debugging smoother than a sunny afternoon |
## 📁 Dataset Structure
- `test.csv`: contains the dataset metadata, including video file name, query, ground_truth, question_text, and category
- `test.zip`: contains all MP4 video files at their original size
- `test_sz512.zip`: contains MP4 video files resized so that the long side is 512 px while keeping the original aspect ratio
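If you just want to browse the metadata before downloading or running anything, `test.csv` can be inspected directly. The snippet below is a minimal sketch, assuming `pandas` is installed and `test.csv` sits in the cloned repository root (see Quick Start below):

```python
import pandas as pd

# one row per video clip; columns include video, query, question_text, ground_truth, category
df = pd.read_csv("test.csv")

print(len(df), "examples")
print(df["category"].value_counts())  # number of clips per reasoning category
```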
## ⚡ Quick Start
```bash
### In bash ###
# download the videos
git clone https://huggingface.co/datasets/video-reasoning/morse-500
# unzip the videos
cd morse-500
# unzip test.zip -d test             # original size
unzip test_sz512.zip -d test_sz512   # long side resized to 512
```
```python
### In python ###
# load dataset metadata ("idx", "video", "query", "question_text", "ground_truth", "category")
from datasets import load_dataset

dataset = load_dataset("video-reasoning/morse-500")
dataset = dataset["test"]

video_root = "test_sz512"  # use the resized videos

# run your model on the benchmark
for i, example in enumerate(dataset):
    video_path = f"{video_root}/" + example["video"]
    print(f"Processing {i} {video_path}")
    query = "Answer the question in this video."
    gt = example["ground_truth"]
    # if your model has video support
    answer = query_video(model_name, video_path, query)
    # otherwise query with image frames, default 2 fps capped at 32 total frames
    # answer = query_video_frames(model_name, video_path, query, fps=2, max_num_frames=32)
    print(f"Answer: {answer}")
    print(f"GT: {gt}")
```
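The loop above only prints each answer next to its ground truth. For a quick local sanity check you can score answers with a lenient string match, as in the sketch below; this matching rule is an assumption made here for illustration, and the official leaderboard scoring may differ.

```python
def _norm(s: str) -> str:
    # case-insensitive, ignore surrounding whitespace and a trailing period
    return str(s).strip().rstrip(".").lower()

def is_correct(answer: str, ground_truth: str) -> bool:
    """Lenient answer/ground-truth comparison for quick sanity checks."""
    return _norm(answer) == _norm(ground_truth)

# inside the loop above, accumulate results:
#   results.append(is_correct(answer, gt))
# accuracy = sum(results) / len(results)
```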
### Example `query_video` function
```python
import base64
from openai import OpenAI

model_name = "xxx"
openai_api_key = "xxx"
openai_api_base = "xxx"

client = OpenAI(
    api_key=openai_api_key,
    base_url=openai_api_base,
)

def encode_b64(file_path):
    # read a file and return its base64-encoded contents as a string
    with open(file_path, "rb") as file:
        return base64.b64encode(file.read()).decode("utf-8")

def query_video(model_name, video_path, query):
    # send the full video as a base64 data URL alongside the text query
    base64_video = encode_b64(video_path)
    video_url = f"data:video/mp4;base64,{base64_video}"
    response = client.chat.completions.create(
        model=model_name,
        messages=[
            {
                "role": "user",
                "content": [
                    {
                        "type": "text",
                        "text": query,
                    },
                    {
                        "type": "video_url",
                        "video_url": {"url": video_url},
                    },
                ],
            }
        ],
    )
    return response.choices[0].message.content

# usage (video_path and query come from the loop above)
result = query_video(model_name, video_path, query)
print(result)
```
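The quick-start loop also references a `query_video_frames` helper for models without native video support. Its implementation is not included on this card; the following is a minimal sketch of one possible version, assuming OpenCV (`cv2`) for frame extraction, the same `client` defined above, and an OpenAI-compatible endpoint that accepts base64 `image_url` content. The defaults (2 fps, at most 32 frames) mirror the quick-start comment.

```python
import base64
import cv2  # assumption: OpenCV is used to sample frames from the MP4

def query_video_frames(model_name, video_path, query, fps=2, max_num_frames=32):
    # sample frames at roughly `fps` frames per second
    cap = cv2.VideoCapture(video_path)
    native_fps = cap.get(cv2.CAP_PROP_FPS) or 30
    step = max(int(round(native_fps / fps)), 1)
    frames = []
    idx = 0
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        if idx % step == 0:
            ok, buf = cv2.imencode(".jpg", frame)
            if ok:
                frames.append(base64.b64encode(buf.tobytes()).decode("utf-8"))
        idx += 1
    cap.release()

    # cap the total number of frames, keeping an evenly spaced subset
    if len(frames) > max_num_frames:
        frames = [frames[int(i * len(frames) / max_num_frames)] for i in range(max_num_frames)]

    # build one user message with the text query followed by the sampled frames
    content = [{"type": "text", "text": query}]
    for b64 in frames:
        content.append({
            "type": "image_url",
            "image_url": {"url": f"data:image/jpeg;base64,{b64}"},
        })

    response = client.chat.completions.create(
        model=model_name,
        messages=[{"role": "user", "content": content}],
    )
    return response.choices[0].message.content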