arXiv:2506.03107

ByteMorph: Benchmarking Instruction-Guided Image Editing with Non-Rigid Motions

Published on Jun 3 · Submitted by Boese0601 on Jun 4

Abstract

AI-generated summary: ByteMorph, a framework using the Diffusion Transformer, addresses non-rigid motion in image editing with a large-scale dataset and comprehensive evaluation.

Editing images with instructions to reflect non-rigid motions such as camera viewpoint shifts, object deformations, human articulations, and complex interactions poses a challenging yet underexplored problem in computer vision. Existing approaches and datasets predominantly focus on static scenes or rigid transformations, limiting their capacity to handle expressive edits involving dynamic motion. To address this gap, we introduce ByteMorph, a comprehensive framework for instruction-based image editing with an emphasis on non-rigid motions. ByteMorph comprises a large-scale dataset, ByteMorph-6M, and a strong baseline model built upon the Diffusion Transformer (DiT), named ByteMorpher. ByteMorph-6M includes over 6 million high-resolution image editing pairs for training, along with a carefully curated evaluation benchmark, ByteMorph-Bench. Both capture a wide variety of non-rigid motion types across diverse environments, human figures, and object categories. The dataset is constructed using motion-guided data generation, layered compositing techniques, and automated captioning to ensure diversity, realism, and semantic coherence. We further conduct a comprehensive evaluation of recent instruction-based image editing methods from both academic and commercial domains.

Community

Paper author and submitter Boese0601:

We introduce ByteMorph, a comprehensive framework for instruction-based image editing with an emphasis on non-rigid motions. ByteMorph comprises a large-scale dataset, ByteMorph-6M, and a baseline model named ByteMorpher. ByteMorph-6M includes over 6 million high-resolution image editing pairs for training, along with a carefully curated evaluation benchmark ByteMorph-Bench. Both capture a wide variety of non-rigid motion types across diverse environments, human figures, and object categories.

Project Page: https://boese0601.github.io/bytemorph
Online Demo: https://huggingface.co/spaces/Boese0601/ByteMorph-Demo
Benchmark: https://huggingface.co/datasets/ByteDance-Seed/BM-Bench
Dataset: https://huggingface.co/datasets/ByteDance-Seed/BM-6M
Code: https://github.com/ByteDance-Seed/BM-code
Model: https://huggingface.co/ByteDance-Seed/BM-Model
Data-Example: https://huggingface.co/datasets/ByteDance-Seed/BM-6M-Demo
Leaderboard: https://boese0601.github.io/bytemorph/#leaderboard
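For reference, here is a minimal sketch of loading the evaluation benchmark with the Hugging Face datasets library. The split name and the schema inspection below are assumptions for illustration; the actual configuration is documented on the dataset card, not here.

```python
from datasets import load_dataset

# Load ByteMorph-Bench from the Hub. The split name "test" is an
# assumption; check the dataset card for the available splits.
bench = load_dataset("ByteDance-Seed/BM-Bench", split="test")

# Inspect the first example to discover the actual field names
# (e.g., source image, edit instruction, target image) before use.
example = bench[0]
print(example.keys())
```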

