ByteMorph: Benchmarking Instruction-Guided Image Editing with Non-Rigid Motions
Abstract
ByteMorph addresses non-rigid motion in instruction-guided image editing with a large-scale dataset (ByteMorph-6M), a Diffusion Transformer baseline (ByteMorpher), and a comprehensive evaluation benchmark (ByteMorph-Bench).
Editing images according to instructions that call for non-rigid motion, such as camera viewpoint shifts, object deformations, human articulation, and complex interactions, poses a challenging yet underexplored problem in computer vision. Existing approaches and datasets focus predominantly on static scenes or rigid transformations, limiting their capacity to handle expressive edits involving dynamic motion. To address this gap, we introduce ByteMorph, a comprehensive framework for instruction-based image editing with an emphasis on non-rigid motions. ByteMorph comprises a large-scale dataset, ByteMorph-6M, and a strong baseline model named ByteMorpher, built upon the Diffusion Transformer (DiT). ByteMorph-6M provides over 6 million high-resolution image editing pairs for training and is accompanied by ByteMorph-Bench, a carefully curated evaluation benchmark. Both capture a wide variety of non-rigid motion types across diverse environments, human figures, and object categories. The dataset is constructed using motion-guided data generation, layered compositing, and automated captioning to ensure diversity, realism, and semantic coherence. We further conduct a comprehensive evaluation of recent instruction-based image editing methods from both academic and commercial domains.
Community
We introduce ByteMorph, a comprehensive framework for instruction-based image editing with an emphasis on non-rigid motions, comprising the ByteMorph-6M dataset, the ByteMorpher baseline model, and the ByteMorph-Bench evaluation benchmark. All resources are linked below; a minimal loading sketch follows the list.
Project Page: https://boese0601.github.io/bytemorph
Online Demo: https://huggingface.co/spaces/Boese0601/ByteMorph-Demo
Benchmark: https://huggingface.co/datasets/ByteDance-Seed/BM-Bench
Dataset: https://huggingface.co/datasets/ByteDance-Seed/BM-6M
Code: https://github.com/ByteDance-Seed/BM-code
Model: https://huggingface.co/ByteDance-Seed/BM-Model
Data-Example: https://huggingface.co/datasets/ByteDance-Seed/BM-6M-Demo
Leaderboard: https://boese0601.github.io/bytemorph/#leaderboard
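The benchmark and dataset above are hosted on the Hugging Face Hub, so they can presumably be loaded with the `datasets` library. The sketch below is illustrative only: the split names, the streaming choice, and the assumption that the repos are standard Hub datasets are not confirmed by this page.

```python
# Minimal sketch for loading the ByteMorph resources from the Hugging Face Hub.
# Assumptions: the repos are standard Hub datasets and the split names below exist.
from datasets import load_dataset

# Evaluation benchmark (small enough to download fully); split name assumed.
bench = load_dataset("ByteDance-Seed/BM-Bench", split="test")

# The training set holds ~6M editing pairs, so stream it rather than downloading it all.
train = load_dataset("ByteDance-Seed/BM-6M", split="train", streaming=True)

# Inspect one training example; column names (source/edited image, edit instruction)
# are not documented on this page, so just print whatever keys the record exposes.
example = next(iter(train))
print(example.keys())
```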
Librarian Bot (automated message): the following papers, similar to this one, were recommended by the Semantic Scholar API.
- Beyond Editing Pairs: Fine-Grained Instructional Image Editing via Multi-Scale Learnable Regions (2025)
- CompBench: Benchmarking Complex Instruction-guided Image Editing (2025)
- Step1X-Edit: A Practical Framework for General Image Editing (2025)
- In-Context Edit: Enabling Instructional Image Editing with In-Context Generation in Large Scale Diffusion Transformer (2025)
- Insert Anything: Image Insertion via In-Context Editing in DiT (2025)
- 3D-Fixup: Advancing Photo Editing with 3D Priors (2025)
- SmartFreeEdit: Mask-Free Spatial-Aware Image Editing with Complex Instruction Understanding (2025)