arXiv:2506.00070

Robot-R1: Reinforcement Learning for Enhanced Embodied Reasoning in Robotics

Published on May 29 · Submitted by vangard703 on Jun 4
AI-generated summary

Robot-R1, a reinforcement learning framework, enhances embodied reasoning for robotics by predicting keypoint states, outperforming supervised fine-tuning methods and even surpassing GPT-4o in low-level action control tasks.

Abstract

Large Vision-Language Models (LVLMs) have recently shown great promise in advancing robotics by combining embodied reasoning with robot control. A common approach involves training on embodied reasoning tasks related to robot control using Supervised Fine-Tuning (SFT). However, SFT datasets are often heuristically constructed and not explicitly optimized for improving robot control. Furthermore, SFT often leads to issues such as catastrophic forgetting and reduced generalization performance. To address these limitations, we introduce Robot-R1, a novel framework that leverages reinforcement learning to enhance embodied reasoning specifically for robot control. Robot-R1 learns to predict the next keypoint state required for task completion, conditioned on the current scene image and environment metadata derived from expert demonstrations. Inspired by the DeepSeek-R1 learning approach, Robot-R1 samples reasoning-based responses and reinforces those that lead to more accurate predictions. Our experiments show that models trained with Robot-R1 outperform SFT methods on embodied reasoning tasks. Despite having only 7B parameters, Robot-R1 even surpasses GPT-4o on reasoning tasks related to low-level action control, such as spatial and primitive movement reasoning.
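
The abstract describes the learning signal only at a high level. Below is a minimal sketch of how a DeepSeek-R1-style reinforcement step could look under two assumptions not stated in the abstract: a GRPO-style group-normalized advantage, and a distance-based reward on the predicted keypoint state. All names (keypoint_reward, group_advantages, the Gaussian-shaped reward) are illustrative stand-ins, not the paper's actual implementation.

```python
# Sketch, assuming: (1) several reasoning-based responses are sampled per
# prompt (scene image + environment metadata), (2) each response ends in a
# keypoint-state prediction, (3) rewards are normalized within the group,
# as in GRPO-style training. The reward shape is a hypothetical choice.
import numpy as np

def keypoint_reward(predicted: np.ndarray, expert: np.ndarray, scale: float = 1.0) -> float:
    """Higher reward the closer the predicted keypoint state is to the expert's."""
    return float(np.exp(-np.linalg.norm(predicted - expert) / scale))

def group_advantages(rewards: list[float]) -> np.ndarray:
    """Group-normalized advantages: reward minus group mean, scaled by group std."""
    r = np.asarray(rewards, dtype=np.float64)
    return (r - r.mean()) / (r.std() + 1e-8)

# Toy example: 4 sampled responses, each ending in a 3-D keypoint prediction.
expert_keypoint = np.array([0.4, 0.1, 0.7])
sampled_keypoints = [
    np.array([0.4, 0.1, 0.7]),   # exact match -> highest reward
    np.array([0.5, 0.0, 0.6]),
    np.array([0.9, 0.9, 0.9]),   # far off -> lowest reward
    np.array([0.3, 0.2, 0.7]),
]
rewards = [keypoint_reward(k, expert_keypoint) for k in sampled_keypoints]
advantages = group_advantages(rewards)
print(rewards, advantages)
# Responses with positive advantage would be up-weighted in the policy
# gradient; responses with negative advantage would be down-weighted.
```

In this scheme, the normalization within each sampled group means the model is rewarded for producing reasoning that beats its own other samples, which matches the abstract's description of sampling reasoning-based responses and reinforcing those that lead to more accurate predictions.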

Community

tl;dr: The paper introduces Robot-R1, a reinforcement learning framework that teaches LVLMs to reason for robot control. The model predicts the next keypoint state needed to complete a task, and responses whose reasoning leads to more accurate predictions are reinforced. Experiments show Robot-R1 beats SFT baselines and even outperforms GPT-4o on low-level action reasoning tasks, despite having only 7B parameters.

