DYMO-Hair: Generalizable Volumetric Dynamics Modeling for Robot Hair Manipulation
Abstract
DYMO-Hair, a model-based robot hair care system, uses a novel dynamics learning paradigm and a 3D latent space to perform visual goal-conditioned hair styling with high accuracy and generalizability.
Hair care is an essential daily activity, yet it remains inaccessible to individuals with limited mobility and challenging for autonomous robot systems due to the fine-grained physical structure and complex dynamics of hair. In this work, we present DYMO-Hair, a model-based robot hair care system. We introduce a novel dynamics learning paradigm suited to volumetric quantities such as hair, built on an action-conditioned latent state editing mechanism and coupled with a compact 3D latent space of diverse hairstyles to improve generalizability. This latent space is pre-trained at scale using a novel hair physics simulator, enabling generalization across previously unseen hairstyles. Paired with a Model Predictive Path Integral (MPPI) planner, the dynamics model allows DYMO-Hair to perform visual goal-conditioned hair styling. Experiments in simulation demonstrate that DYMO-Hair's dynamics model outperforms baselines at capturing local deformation across diverse, unseen hairstyles. DYMO-Hair further outperforms baselines on closed-loop hair styling tasks with unseen hairstyles, achieving on average 22% lower final geometric error and a 42% higher success rate than the state-of-the-art system. Real-world experiments demonstrate zero-shot transfer of our system to wigs, achieving consistent success on challenging unseen hairstyles where the state-of-the-art system fails. Together, these results establish a foundation for model-based robot hair care, advancing toward more generalizable, flexible, and accessible robot hair styling in unconstrained physical environments. More details are available on our project page: https://chengyzhao.github.io/DYMOHair-web/.
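To make the planning loop concrete, below is a minimal sketch of MPPI over an action-conditioned latent dynamics model, in the spirit of what the abstract describes. The interfaces `edit_latent` (one latent editing step) and `cost` (geometric error to the goal in latent space) are hypothetical placeholders, not DYMO-Hair's actual API, and all hyperparameters are illustrative.

```python
# Hypothetical sketch: MPPI planning over a learned latent dynamics model.
# `edit_latent` and `cost` are assumed placeholder callables, not the paper's
# actual components.
import numpy as np

def mppi_plan(z0, z_goal, edit_latent, cost, horizon=5, n_samples=256,
              action_dim=6, sigma=0.1, temperature=1.0, n_iters=3):
    """Return an action sequence minimizing expected cost in latent space.

    z0, z_goal  -- current and goal latent hair states
    edit_latent -- action-conditioned latent editing step: (z, a) -> z'
    cost        -- geometric error between a latent state and the goal
    """
    mean = np.zeros((horizon, action_dim))  # nominal action sequence
    for _ in range(n_iters):
        # Sample perturbed action sequences around the current mean.
        noise = sigma * np.random.randn(n_samples, horizon, action_dim)
        actions = mean[None] + noise

        # Roll each sequence out through the latent dynamics model.
        costs = np.zeros(n_samples)
        for k in range(n_samples):
            z = z0
            for t in range(horizon):
                z = edit_latent(z, actions[k, t])
            costs[k] = cost(z, z_goal)

        # Path-integral update: exponentially weight low-cost samples.
        weights = np.exp(-(costs - costs.min()) / temperature)
        weights /= weights.sum()
        mean = np.einsum('k,kha->ha', weights, actions)
    return mean

# In a closed-loop system, only the first action of the returned
# sequence is executed before re-observing the hair and replanning.
```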
Community
This is an automated message from the Librarian Bot. I found the following papers similar to this paper.
The following papers were recommended by the Semantic Scholar API
- GWM: Towards Scalable Gaussian World Models for Robotic Manipulation (2025)
- Seeing the Bigger Picture: 3D Latent Mapping for Mobile Manipulation Policy Learning (2025)
- ManiVID-3D: Generalizable View-Invariant Reinforcement Learning for Robotic Manipulation via Disentangled 3D Representations (2025)
- TopoCut: Learning Multi-Step Cutting with Spectral Rewards and Discrete Diffusion Policies (2025)
- Generative Visual Foresight Meets Task-Agnostic Pose Estimation in Robotic Table-Top Manipulation (2025)
- Improving Robotic Manipulation with Efficient Geometry-Aware Vision Encoder (2025)
- 3D Flow Diffusion Policy: Visuomotor Policy Learning via Generating Flow in 3D Space (2025)
Please give a thumbs up to this comment if you found it helpful!
If you want recommendations for any paper on Hugging Face, check out this Space.
You can directly ask Librarian Bot for paper recommendations by tagging it in a comment: @librarian-bot recommend