MORSE-500 Benchmark

Website · Data Viewer · Code · arXiv

🔥 News

  • May 15, 2025: We release MORSE-500: 500 programmatically generated videos spanning six reasoning categories (abstract, mathematical, physical, planning, spatial, and temporal), designed to stress-test multimodal reasoning. Frontier models, including OpenAI o3 and Gemini 2.5 Pro, score below 25% accuracy (see the 🏆 Leaderboard).
  • Visit 🤗 Data: morse-500 for the latest updates

📦 Resources

✨ Key Features

Aspect | Details
Fresh & Portable | 500 newly generated video clips plus CSV metadata that is quick to download and run
Scalable Difficulty | Videos are generated programmatically, so complexity can be dialed up and harder versions released as models improve
Diverse Categories | Abstract, Mathematical, Physical, Planning, Spatial, Temporal (+ Causal), covering the reasoning types that matter
Pure Visual Reasoning | Questions are embedded directly in the videos: no text crutches, no shortcuts. If you can't see it, you can't solve it
Developer-Friendly | A "-view" subset streams directly on Hugging Face for easy browsing and debugging (see the snippet below)
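
For quick browsing, the streaming subset can be loaded directly with the datasets library. This is only a minimal sketch; it assumes the subset is published under a repo id like video-reasoning/morse-500-view, which is not stated explicitly above.

# minimal sketch; the "-view" repo id below is an assumption
from datasets import load_dataset

view = load_dataset("video-reasoning/morse-500-view", split="test", streaming=True)
sample = next(iter(view))
print(sample["query"], sample["category"])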

📂 Dataset Structure

  • test.csv: the dataset metadata, including the video file name, query, ground_truth, question_text, and category for each clip (see the snippet after this list)
  • test.zip: all MP4 video files at their original resolution
  • test_sz512.zip: the same MP4 files resized so the long side is 512 px, preserving the original aspect ratio
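
A quick way to sanity-check the metadata, assuming test.csv has been downloaded to the working directory (a sketch, not part of the official tooling):

import pandas as pd

meta = pd.read_csv("test.csv")
print(meta.columns.tolist())            # video, query, ground_truth, question_text, category
print(meta["category"].value_counts())  # number of clips per reasoning category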

⚡ Quick Start

### In bash ###
# download the videos (requires git-lfs so the zip archives are fetched rather than LFS pointers)
git clone https://huggingface.co/datasets/video-reasoning/morse-500

# unzip the videos
cd morse-500
# unzip test.zip -d test                # original size
unzip test_sz512.zip -d test_sz512    # long side resized to 512


### In python ###
# load dataset metadata ("idx", "video", "query", "question_text", "ground_truth", "category")
from datasets import load_dataset
dataset = load_dataset('video-reasoning/morse-500')
dataset = dataset['test']
video_root = 'test_sz512' # use the resized videos (switch to 'test' for the original size)

# run your model on the benchmark
for i, example in enumerate(dataset):
  video_path = f"{video_root}/" + example["video"]
  print(f"Processing {i} {video_path}")
  query = "Answer the question in this video."
  gt = example['ground_truth']

  # if your model has video support
  answer = query_video(model_name, video_path, query)
  # otherwise query with image frames, default 2 fps capped at 32 total frames
  # answer = query_video_frames(model_name, video_path, query, fps=2, max_num_frames=32)
  
  print(f"Answer: {answer}")
  print(f"GT: {gt}")

Example query_video function

model_name = "xxx"
openai_api_key = "xxx"
openai_api_base = "xxx"
client = OpenAI(
  api_key=openai_api_key,
  base_url=openai_api_base,
)


def encode_b64(file_path):
  with open(file_path, "rb") as file:
      return base64.b64encode(file.read()).decode("utf-8")

base64_video = encode_b64(video_path)
video_url = f"data:video/mp4;base64,{base64_video}"

response = client.chat.completions.create(
    model=model_name,
    messages=[
        {
            "role": "user",
            "content": [
                {
                    "type": "text",
                    "text": query
                },
                {
                    "type": "video_url",
                    "video_url": {"url": video_url},
                },
            ],
        }
    ],
)

result = response.choices[0].message.content
print(result)
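
For models without native video input, the commented-out query_video_frames call in the Quick Start samples image frames instead. Below is a minimal sketch of such a helper, assuming OpenCV (cv2) is installed and the endpoint accepts standard base64 image_url content; it reuses the client defined above, and the real implementation lives in the GitHub repo linked below.

import base64
import cv2


def query_video_frames(model_name, video_path, query, fps=2, max_num_frames=32):
    # sample frames at `fps`, capped at `max_num_frames`, and send them as JPEG images
    cap = cv2.VideoCapture(video_path)
    video_fps = cap.get(cv2.CAP_PROP_FPS) or 30  # fall back if FPS metadata is missing
    step = max(int(round(video_fps / fps)), 1)
    frames, idx = [], 0
    while len(frames) < max_num_frames:
        ok, frame = cap.read()
        if not ok:
            break
        if idx % step == 0:
            ok_enc, buf = cv2.imencode(".jpg", frame)
            if ok_enc:
                frames.append(base64.b64encode(buf.tobytes()).decode("utf-8"))
        idx += 1
    cap.release()

    content = [{"type": "text", "text": query}] + [
        {"type": "image_url", "image_url": {"url": f"data:image/jpeg;base64,{b64}"}}
        for b64 in frames
    ]
    response = client.chat.completions.create(
        model=model_name,
        messages=[{"role": "user", "content": content}],
    )
    return response.choices[0].message.content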

More scripts can be found on GitHub: https://github.com/morse-benchmark/morse-500
