Dataset Viewer

Columns: `id` (string, length 11–14) · `problem` (string, length 542–634) · `answer` (string, length 288–292) · `split` (1 class: "train") · `images` (list, always 1 image)

id | problem | answer | split | images
---|---|---|---|---|
30904_frame_0 | <image>
Based on the observation image, the task instruction "place_the_mushroom_into_the_metal_pot.", and the current 2D position of the robot gripper at (0.689, 0.187), please first predict the start and end positions of the target object to be manipulated, denoted as (x_start, y_start) and (x_end, y_end). Then, generate a sequence of 16 future keypoints representing the gripper's trajectory to complete the task. The output should be in the format: (x_start, y_start), (x_end, y_end), [(x1, y1), (x2, y2), ...], with all coordinates normalized between 0 and 1. | (0.668, 0.662), (0.732, 0.305), [(0.676, 0.249), (0.665, 0.312), (0.646, 0.370), (0.647, 0.434), (0.657, 0.497), (0.659, 0.532), (0.644, 0.471), (0.640, 0.408), (0.638, 0.344), (0.644, 0.280), (0.650, 0.217), (0.658, 0.153), (0.690, 0.133), (0.708, 0.191), (0.694, 0.149), (0.676, 0.092)] | train | |
30904_frame_11 | <image>
Based on the observation image, the task instruction "place_the_mushroom_into_the_metal_pot.", and the current 2D position of the robot gripper at (0.665, 0.539), please first predict the start and end positions of the target object to be manipulated, denoted as (x_start, y_start) and (x_end, y_end). Then, generate a sequence of 16 future keypoints representing the gripper's trajectory to complete the task. The output should be in the format: (x_start, y_start), (x_end, y_end), [(x1, y1), (x2, y2), ...], with all coordinates normalized between 0 and 1. | (0.721, 0.625), (0.732, 0.305), [(0.650, 0.503), (0.643, 0.463), (0.640, 0.423), (0.639, 0.383), (0.638, 0.342), (0.641, 0.302), (0.645, 0.262), (0.649, 0.221), (0.654, 0.181), (0.660, 0.141), (0.684, 0.124), (0.706, 0.158), (0.710, 0.197), (0.697, 0.165), (0.689, 0.126), (0.676, 0.092)] | train | |
30904_frame_22 | <image>
Based on the observation image, the task instruction "place_the_mushroom_into_the_metal_pot.", and the current 2D position of the robot gripper at (0.710, 0.198), please first predict the start and end positions of the target object to be manipulated, denoted as (x_start, y_start) and (x_end, y_end). Then, generate a sequence of 16 future keypoints representing the gripper's trajectory to complete the task. The output should be in the format: (x_start, y_start), (x_end, y_end), [(x1, y1), (x2, y2), ...], with all coordinates normalized between 0 and 1. | (0.764, 0.338), (0.732, 0.305), [(0.705, 0.196), (0.701, 0.190), (0.700, 0.182), (0.699, 0.175), (0.698, 0.167), (0.696, 0.160), (0.695, 0.152), (0.694, 0.145), (0.692, 0.138), (0.691, 0.130), (0.689, 0.123), (0.687, 0.116), (0.684, 0.109), (0.682, 0.101), (0.679, 0.095), (0.676, 0.092)] | train | |
27230_frame_0 | <image>
Based on the observation image, the task instruction "put_the_yellow_knife_on_top_of_the_red_cloth", and the current 2D position of the robot gripper at (0.540, 0.271), please first predict the start and end positions of the target object to be manipulated, denoted as (x_start, y_start) and (x_end, y_end). Then, generate a sequence of 16 future keypoints representing the gripper's trajectory to complete the task. The output should be in the format: (x_start, y_start), (x_end, y_end), [(x1, y1), (x2, y2), ...], with all coordinates normalized between 0 and 1. | (0.418, 0.562), (0.250, 0.496), [(0.524, 0.326), (0.484, 0.365), (0.446, 0.409), (0.423, 0.461), (0.412, 0.518), (0.406, 0.504), (0.393, 0.450), (0.345, 0.425), (0.287, 0.421), (0.229, 0.426), (0.194, 0.446), (0.199, 0.390), (0.227, 0.340), (0.275, 0.306), (0.329, 0.295), (0.387, 0.298)] | train | |
27230_frame_9 | <image>
Based on the observation image, the task instruction "put_the_yellow_knife_on_top_of_the_red_cloth", and the current 2D position of the robot gripper at (0.410, 0.535), please first predict the start and end positions of the target object to be manipulated, denoted as (x_start, y_start) and (x_end, y_end). Then, generate a sequence of 16 future keypoints representing the gripper's trajectory to complete the task. The output should be in the format: (x_start, y_start), (x_end, y_end), [(x1, y1), (x2, y2), ...], with all coordinates normalized between 0 and 1. | (0.414, 0.564), (0.250, 0.496), [(0.406, 0.506), (0.405, 0.467), (0.382, 0.437), (0.347, 0.425), (0.308, 0.421), (0.269, 0.421), (0.230, 0.425), (0.196, 0.441), (0.192, 0.429), (0.198, 0.391), (0.217, 0.357), (0.242, 0.328), (0.274, 0.306), (0.309, 0.294), (0.348, 0.295), (0.387, 0.298)] | train | |
27230_frame_19 | <image>
Based on the observation image, the task instruction "put_the_yellow_knife_on_top_of_the_red_cloth", and the current 2D position of the robot gripper at (0.196, 0.440), please first predict the start and end positions of the target object to be manipulated, denoted as (x_start, y_start) and (x_end, y_end). Then, generate a sequence of 16 future keypoints representing the gripper's trajectory to complete the task. The output should be in the format: (x_start, y_start), (x_end, y_end), [(x1, y1), (x2, y2), ...], with all coordinates normalized between 0 and 1. | (0.229, 0.461), (0.250, 0.496), [(0.196, 0.446), (0.192, 0.430), (0.192, 0.410), (0.198, 0.392), (0.207, 0.375), (0.217, 0.358), (0.227, 0.341), (0.242, 0.329), (0.258, 0.317), (0.274, 0.306), (0.290, 0.295), (0.309, 0.294), (0.329, 0.295), (0.348, 0.295), (0.368, 0.297), (0.387, 0.298)] | train | |
17129_frame_0 | <image>
Based on the observation image, the task instruction "put_can_in_pot", and the current 2D position of the robot gripper at (0.236, 0.068), please first predict the start and end positions of the target object to be manipulated, denoted as (x_start, y_start) and (x_end, y_end). Then, generate a sequence of 16 future keypoints representing the gripper's trajectory to complete the task. The output should be in the format: (x_start, y_start), (x_end, y_end), [(x1, y1), (x2, y2), ...], with all coordinates normalized between 0 and 1. | (0.547, 0.518), (0.346, 0.395), [(0.262, 0.108), (0.325, 0.101), (0.371, 0.064), (0.394, 0.062), (0.432, 0.106), (0.445, 0.167), (0.468, 0.225), (0.486, 0.285), (0.518, 0.340), (0.526, 0.401), (0.495, 0.408), (0.459, 0.355), (0.433, 0.297), (0.408, 0.238), (0.362, 0.194), (0.346, 0.234)] | train | |
17129_frame_8 | <image>
Based on the observation image, the task instruction "put_can_in_pot", and the current 2D position of the robot gripper at (0.467, 0.204), please first predict the start and end positions of the target object to be manipulated, denoted as (x_start, y_start) and (x_end, y_end). Then, generate a sequence of 16 future keypoints representing the gripper's trajectory to complete the task. The output should be in the format: (x_start, y_start), (x_end, y_end), [(x1, y1), (x2, y2), ...], with all coordinates normalized between 0 and 1. | (0.549, 0.518), (0.346, 0.395), [(0.468, 0.242), (0.482, 0.276), (0.501, 0.308), (0.518, 0.341), (0.530, 0.376), (0.523, 0.412), (0.503, 0.421), (0.483, 0.390), (0.462, 0.359), (0.445, 0.326), (0.431, 0.292), (0.416, 0.257), (0.398, 0.226), (0.370, 0.201), (0.345, 0.199), (0.346, 0.234)] | train | |
17129_frame_17 | <image>
Based on the observation image, the task instruction "put_can_in_pot", and the current 2D position of the robot gripper at (0.515, 0.430), please first predict the start and end positions of the target object to be manipulated, denoted as (x_start, y_start) and (x_end, y_end). Then, generate a sequence of 16 future keypoints representing the gripper's trajectory to complete the task. The output should be in the format: (x_start, y_start), (x_end, y_end), [(x1, y1), (x2, y2), ...], with all coordinates normalized between 0 and 1. | (0.531, 0.535), (0.346, 0.395), [(0.501, 0.417), (0.489, 0.399), (0.477, 0.381), (0.464, 0.363), (0.453, 0.344), (0.444, 0.323), (0.436, 0.303), (0.427, 0.283), (0.418, 0.262), (0.410, 0.242), (0.397, 0.225), (0.381, 0.210), (0.364, 0.196), (0.346, 0.192), (0.343, 0.214), (0.346, 0.234)] | train | |
32274_frame_0 | <image>
Based on the observation image, the task instruction "put_detergent_in_sink", and the current 2D position of the robot gripper at (0.110, 0.141), please first predict the start and end positions of the target object to be manipulated, denoted as (x_start, y_start) and (x_end, y_end). Then, generate a sequence of 16 future keypoints representing the gripper's trajectory to complete the task. The output should be in the format: (x_start, y_start), (x_end, y_end), [(x1, y1), (x2, y2), ...], with all coordinates normalized between 0 and 1. | (0.572, 0.465), (0.785, 0.637), [(0.164, 0.142), (0.224, 0.112), (0.285, 0.092), (0.350, 0.110), (0.415, 0.128), (0.481, 0.142), (0.529, 0.184), (0.563, 0.242), (0.583, 0.306), (0.575, 0.372), (0.582, 0.401), (0.641, 0.370), (0.709, 0.372), (0.773, 0.391), (0.837, 0.412), (0.891, 0.451)] | train | |
32274_frame_7 | <image>
Based on the observation image, the task instruction "put_detergent_in_sink", and the current 2D position of the robot gripper at (0.526, 0.174), please first predict the start and end positions of the target object to be manipulated, denoted as (x_start, y_start) and (x_end, y_end). Then, generate a sequence of 16 future keypoints representing the gripper's trajectory to complete the task. The output should be in the format: (x_start, y_start), (x_end, y_end), [(x1, y1), (x2, y2), ...], with all coordinates normalized between 0 and 1. | (0.568, 0.473), (0.785, 0.637), [(0.539, 0.210), (0.562, 0.241), (0.576, 0.277), (0.584, 0.314), (0.579, 0.352), (0.571, 0.390), (0.573, 0.406), (0.607, 0.388), (0.641, 0.370), (0.679, 0.371), (0.718, 0.374), (0.754, 0.385), (0.791, 0.397), (0.828, 0.409), (0.860, 0.429), (0.891, 0.451)] | train | |
32274_frame_14 | <image>
Based on the observation image, the task instruction "put_detergent_in_sink", and the current 2D position of the robot gripper at (0.563, 0.413), please first predict the start and end positions of the target object to be manipulated, denoted as (x_start, y_start) and (x_end, y_end). Then, generate a sequence of 16 future keypoints representing the gripper's trajectory to complete the task. The output should be in the format: (x_start, y_start), (x_end, y_end), [(x1, y1), (x2, y2), ...], with all coordinates normalized between 0 and 1. | (0.561, 0.504), (0.785, 0.637), [(0.582, 0.401), (0.602, 0.390), (0.622, 0.380), (0.642, 0.370), (0.664, 0.371), (0.686, 0.371), (0.709, 0.372), (0.730, 0.378), (0.752, 0.385), (0.773, 0.391), (0.795, 0.398), (0.816, 0.405), (0.837, 0.412), (0.855, 0.425), (0.872, 0.440), (0.891, 0.451)] | train | |
12187_frame_0 | <image>
Based on the observation image, the task instruction "move_the_pot_to_the_bottom_middle_of_stove", and the current 2D position of the robot gripper at (0.571, 0.172), please first predict the start and end positions of the target object to be manipulated, denoted as (x_start, y_start) and (x_end, y_end). Then, generate a sequence of 16 future keypoints representing the gripper's trajectory to complete the task. The output should be in the format: (x_start, y_start), (x_end, y_end), [(x1, y1), (x2, y2), ...], with all coordinates normalized between 0 and 1. | (0.441, 0.506), (0.379, 0.271), [(0.555, 0.242), (0.537, 0.259), (0.536, 0.330), (0.548, 0.400), (0.567, 0.468), (0.581, 0.506), (0.560, 0.492), (0.546, 0.465), (0.545, 0.394), (0.531, 0.325), (0.510, 0.257), (0.483, 0.191), (0.440, 0.137), (0.439, 0.184), (0.441, 0.182), (0.423, 0.113)] | train | |
12187_frame_11 | <image>
Based on the observation image, the task instruction "move_the_pot_to_the_bottom_middle_of_stove", and the current 2D position of the robot gripper at (0.580, 0.513), please first predict the start and end positions of the target object to be manipulated, denoted as (x_start, y_start) and (x_end, y_end). Then, generate a sequence of 16 future keypoints representing the gripper's trajectory to complete the task. The output should be in the format: (x_start, y_start), (x_end, y_end), [(x1, y1), (x2, y2), ...], with all coordinates normalized between 0 and 1. | (0.496, 0.506), (0.379, 0.271), [(0.577, 0.471), (0.557, 0.502), (0.549, 0.481), (0.545, 0.436), (0.545, 0.391), (0.538, 0.347), (0.525, 0.304), (0.511, 0.260), (0.495, 0.218), (0.477, 0.177), (0.450, 0.143), (0.429, 0.147), (0.440, 0.191), (0.445, 0.200), (0.435, 0.156), (0.423, 0.113)] | train | |
12187_frame_23 | <image>
Based on the observation image, the task instruction "move_the_pot_to_the_bottom_middle_of_stove", and the current 2D position of the robot gripper at (0.544, 0.367), please first predict the start and end positions of the target object to be manipulated, denoted as (x_start, y_start) and (x_end, y_end). Then, generate a sequence of 16 future keypoints representing the gripper's trajectory to complete the task. The output should be in the format: (x_start, y_start), (x_end, y_end), [(x1, y1), (x2, y2), ...], with all coordinates normalized between 0 and 1. | (0.434, 0.426), (0.379, 0.271), [(0.536, 0.339), (0.527, 0.311), (0.518, 0.282), (0.509, 0.254), (0.499, 0.226), (0.487, 0.200), (0.475, 0.173), (0.459, 0.148), (0.434, 0.133), (0.430, 0.150), (0.438, 0.179), (0.443, 0.208), (0.445, 0.199), (0.438, 0.170), (0.431, 0.141), (0.423, 0.113)] | train | |
15792_frame_0 | <image>
Based on the observation image, the task instruction "place_the_pot_above_the_napkin_to_the_right_of_the_knife.", and the current 2D position of the robot gripper at (0.452, 0.037), please first predict the start and end positions of the target object to be manipulated, denoted as (x_start, y_start) and (x_end, y_end). Then, generate a sequence of 16 future keypoints representing the gripper's trajectory to complete the task. The output should be in the format: (x_start, y_start), (x_end, y_end), [(x1, y1), (x2, y2), ...], with all coordinates normalized between 0 and 1. | (0.621, 0.648), (0.707, 0.424), [(0.466, 0.137), (0.514, 0.224), (0.549, 0.319), (0.563, 0.419), (0.577, 0.519), (0.590, 0.619), (0.603, 0.636), (0.590, 0.537), (0.605, 0.437), (0.609, 0.336), (0.632, 0.347), (0.631, 0.380), (0.591, 0.300), (0.569, 0.202), (0.537, 0.106), (0.488, 0.032)] | train | |
15792_frame_12 | <image>
Based on the observation image, the task instruction "place_the_pot_above_the_napkin_to_the_right_of_the_knife.", and the current 2D position of the robot gripper at (0.610, 0.674), please first predict the start and end positions of the target object to be manipulated, denoted as (x_start, y_start) and (x_end, y_end). Then, generate a sequence of 16 future keypoints representing the gripper's trajectory to complete the task. The output should be in the format: (x_start, y_start), (x_end, y_end), [(x1, y1), (x2, y2), ...], with all coordinates normalized between 0 and 1. | (0.656, 0.672), (0.707, 0.424), [(0.604, 0.616), (0.594, 0.558), (0.594, 0.500), (0.605, 0.442), (0.607, 0.382), (0.609, 0.323), (0.631, 0.320), (0.627, 0.378), (0.630, 0.391), (0.606, 0.350), (0.589, 0.294), (0.575, 0.236), (0.561, 0.179), (0.543, 0.122), (0.526, 0.065), (0.488, 0.032)] | train | |
15792_frame_25 | <image>
Based on the observation image, the task instruction "place_the_pot_above_the_napkin_to_the_right_of_the_knife.", and the current 2D position of the robot gripper at (0.623, 0.404), please first predict the start and end positions of the target object to be manipulated, denoted as (x_start, y_start) and (x_end, y_end). Then, generate a sequence of 16 future keypoints representing the gripper's trajectory to complete the task. The output should be in the format: (x_start, y_start), (x_end, y_end), [(x1, y1), (x2, y2), ...], with all coordinates normalized between 0 and 1. | (0.701, 0.471), (0.707, 0.424), [(0.629, 0.396), (0.629, 0.370), (0.609, 0.358), (0.600, 0.332), (0.592, 0.305), (0.584, 0.278), (0.578, 0.251), (0.573, 0.223), (0.567, 0.196), (0.558, 0.169), (0.549, 0.143), (0.540, 0.116), (0.532, 0.090), (0.525, 0.062), (0.514, 0.039), (0.488, 0.032)] | train | |
21171_frame_0 | <image>
Based on the observation image, the task instruction "pick_the_orange_towel_and_place_it_on_the_middle_if_the_table", and the current 2D position of the robot gripper at (0.397, 0.291), please first predict the start and end positions of the target object to be manipulated, denoted as (x_start, y_start) and (x_end, y_end). Then, generate a sequence of 16 future keypoints representing the gripper's trajectory to complete the task. The output should be in the format: (x_start, y_start), (x_end, y_end), [(x1, y1), (x2, y2), ...], with all coordinates normalized between 0 and 1. | (0.645, 0.695), (0.314, 0.557), [(0.426, 0.341), (0.472, 0.376), (0.522, 0.407), (0.566, 0.445), (0.608, 0.486), (0.651, 0.526), (0.669, 0.580), (0.617, 0.567), (0.566, 0.538), (0.515, 0.511), (0.462, 0.484), (0.409, 0.459), (0.354, 0.439), (0.297, 0.429), (0.283, 0.410), (0.301, 0.366)] | train | |
21171_frame_7 | <image>
Based on the observation image, the task instruction "pick_the_orange_towel_and_place_it_on_the_middle_if_the_table", and the current 2D position of the robot gripper at (0.667, 0.578), please first predict the start and end positions of the target object to be manipulated, denoted as (x_start, y_start) and (x_end, y_end). Then, generate a sequence of 16 future keypoints representing the gripper's trajectory to complete the task. The output should be in the format: (x_start, y_start), (x_end, y_end), [(x1, y1), (x2, y2), ...], with all coordinates normalized between 0 and 1. | (0.645, 0.730), (0.314, 0.557), [(0.643, 0.580), (0.614, 0.564), (0.585, 0.548), (0.556, 0.532), (0.526, 0.517), (0.497, 0.502), (0.467, 0.487), (0.438, 0.472), (0.407, 0.459), (0.376, 0.446), (0.345, 0.437), (0.312, 0.430), (0.284, 0.438), (0.283, 0.417), (0.285, 0.384), (0.301, 0.366)] | train | |
21171_frame_15 | <image>
Based on the observation image, the task instruction "pick_the_orange_towel_and_place_it_on_the_middle_if_the_table", and the current 2D position of the robot gripper at (0.312, 0.430), please first predict the start and end positions of the target object to be manipulated, denoted as (x_start, y_start) and (x_end, y_end). Then, generate a sequence of 16 future keypoints representing the gripper's trajectory to complete the task. The output should be in the format: (x_start, y_start), (x_end, y_end), [(x1, y1), (x2, y2), ...], with all coordinates normalized between 0 and 1. | (0.320, 0.572), (0.314, 0.557), [(0.304, 0.429), (0.296, 0.429), (0.289, 0.431), (0.284, 0.438), (0.281, 0.442), (0.281, 0.434), (0.282, 0.425), (0.283, 0.417), (0.283, 0.409), (0.284, 0.401), (0.285, 0.393), (0.285, 0.384), (0.284, 0.376), (0.285, 0.369), (0.293, 0.367), (0.301, 0.366)] | train | |
36866_frame_0 | <image>
Based on the observation image, the task instruction "put_lemon_on_plate", and the current 2D position of the robot gripper at (0.268, 0.185), please first predict the start and end positions of the target object to be manipulated, denoted as (x_start, y_start) and (x_end, y_end). Then, generate a sequence of 16 future keypoints representing the gripper's trajectory to complete the task. The output should be in the format: (x_start, y_start), (x_end, y_end), [(x1, y1), (x2, y2), ...], with all coordinates normalized between 0 and 1. | (0.496, 0.547), (0.359, 0.234), [(0.305, 0.230), (0.357, 0.265), (0.410, 0.300), (0.462, 0.336), (0.499, 0.388), (0.533, 0.442), (0.545, 0.495), (0.523, 0.513), (0.528, 0.455), (0.510, 0.395), (0.480, 0.339), (0.451, 0.282), (0.422, 0.226), (0.394, 0.169), (0.366, 0.112), (0.339, 0.146)] | train | |
36866_frame_8 | <image>
Based on the observation image, the task instruction "put_lemon_on_plate", and the current 2D position of the robot gripper at (0.537, 0.494), please first predict the start and end positions of the target object to be manipulated, denoted as (x_start, y_start) and (x_end, y_end). Then, generate a sequence of 16 future keypoints representing the gripper's trajectory to complete the task. The output should be in the format: (x_start, y_start), (x_end, y_end), [(x1, y1), (x2, y2), ...], with all coordinates normalized between 0 and 1. | (0.490, 0.545), (0.359, 0.234), [(0.512, 0.500), (0.530, 0.502), (0.530, 0.468), (0.525, 0.433), (0.514, 0.400), (0.496, 0.370), (0.480, 0.339), (0.463, 0.307), (0.447, 0.276), (0.432, 0.244), (0.416, 0.213), (0.400, 0.181), (0.385, 0.149), (0.369, 0.118), (0.346, 0.116), (0.339, 0.146)] | train | |
36866_frame_17 | <image>
Based on the observation image, the task instruction "put_lemon_on_plate", and the current 2D position of the robot gripper at (0.522, 0.415), please first predict the start and end positions of the target object to be manipulated, denoted as (x_start, y_start) and (x_end, y_end). Then, generate a sequence of 16 future keypoints representing the gripper's trajectory to complete the task. The output should be in the format: (x_start, y_start), (x_end, y_end), [(x1, y1), (x2, y2), ...], with all coordinates normalized between 0 and 1. | (0.492, 0.479), (0.359, 0.234), [(0.510, 0.393), (0.497, 0.371), (0.485, 0.349), (0.473, 0.326), (0.462, 0.304), (0.450, 0.281), (0.439, 0.259), (0.428, 0.236), (0.416, 0.213), (0.405, 0.191), (0.394, 0.168), (0.383, 0.145), (0.371, 0.123), (0.358, 0.107), (0.341, 0.124), (0.339, 0.146)] | train | |
15249_frame_0 | <image>
Based on the observation image, the task instruction "take_broccoli_out_of_pan", and the current 2D position of the robot gripper at (0.521, 0.306), please first predict the start and end positions of the target object to be manipulated, denoted as (x_start, y_start) and (x_end, y_end). Then, generate a sequence of 16 future keypoints representing the gripper's trajectory to complete the task. The output should be in the format: (x_start, y_start), (x_end, y_end), [(x1, y1), (x2, y2), ...], with all coordinates normalized between 0 and 1. | (0.311, 0.357), (0.656, 0.590), [(0.517, 0.232), (0.487, 0.151), (0.418, 0.160), (0.344, 0.213), (0.348, 0.284), (0.327, 0.241), (0.328, 0.150), (0.349, 0.062), (0.429, 0.030), (0.513, 0.057), (0.580, 0.119), (0.635, 0.192), (0.682, 0.271), (0.708, 0.357), (0.709, 0.448), (0.675, 0.516)] | train | |
15249_frame_12 | <image>
Based on the observation image, the task instruction "take_broccoli_out_of_pan", and the current 2D position of the robot gripper at (0.320, 0.229), please first predict the start and end positions of the target object to be manipulated, denoted as (x_start, y_start) and (x_end, y_end). Then, generate a sequence of 16 future keypoints representing the gripper's trajectory to complete the task. The output should be in the format: (x_start, y_start), (x_end, y_end), [(x1, y1), (x2, y2), ...], with all coordinates normalized between 0 and 1. | (0.312, 0.357), (0.656, 0.590), [(0.348, 0.288), (0.328, 0.262), (0.326, 0.195), (0.328, 0.128), (0.348, 0.064), (0.402, 0.033), (0.468, 0.034), (0.525, 0.066), (0.573, 0.112), (0.617, 0.162), (0.652, 0.219), (0.685, 0.277), (0.707, 0.340), (0.715, 0.406), (0.693, 0.464), (0.675, 0.516)] | train | |
15249_frame_24 | <image>
Based on the observation image, the task instruction "take_broccoli_out_of_pan", and the current 2D position of the robot gripper at (0.355, 0.048), please first predict the start and end positions of the target object to be manipulated, denoted as (x_start, y_start) and (x_end, y_end). Then, generate a sequence of 16 future keypoints representing the gripper's trajectory to complete the task. The output should be in the format: (x_start, y_start), (x_end, y_end), [(x1, y1), (x2, y2), ...], with all coordinates normalized between 0 and 1. | (0.316, 0.105), (0.656, 0.590), [(0.398, 0.035), (0.441, 0.031), (0.486, 0.037), (0.522, 0.064), (0.556, 0.092), (0.586, 0.126), (0.615, 0.160), (0.639, 0.198), (0.663, 0.236), (0.684, 0.276), (0.704, 0.316), (0.708, 0.361), (0.715, 0.405), (0.708, 0.449), (0.675, 0.477), (0.675, 0.516)] | train | |
16252_frame_0 | <image>
Based on the observation image, the task instruction "move_the_yellow_cloth_behind_the_pot", and the current 2D position of the robot gripper at (0.713, 0.186), please first predict the start and end positions of the target object to be manipulated, denoted as (x_start, y_start) and (x_end, y_end). Then, generate a sequence of 16 future keypoints representing the gripper's trajectory to complete the task. The output should be in the format: (x_start, y_start), (x_end, y_end), [(x1, y1), (x2, y2), ...], with all coordinates normalized between 0 and 1. | (0.238, 0.479), (0.564, 0.443), [(0.653, 0.187), (0.579, 0.137), (0.495, 0.103), (0.408, 0.096), (0.330, 0.138), (0.271, 0.204), (0.265, 0.293), (0.272, 0.345), (0.345, 0.296), (0.434, 0.283), (0.524, 0.290), (0.586, 0.300), (0.601, 0.213), (0.590, 0.124), (0.579, 0.035), (0.583, -0.040)] | train | |
16252_frame_15 | <image>
Based on the observation image, the task instruction "move_the_yellow_cloth_behind_the_pot", and the current 2D position of the robot gripper at (0.261, 0.360), please first predict the start and end positions of the target object to be manipulated, denoted as (x_start, y_start) and (x_end, y_end). Then, generate a sequence of 16 future keypoints representing the gripper's trajectory to complete the task. The output should be in the format: (x_start, y_start), (x_end, y_end), [(x1, y1), (x2, y2), ...], with all coordinates normalized between 0 and 1. | (0.242, 0.492), (0.564, 0.443), [(0.290, 0.325), (0.329, 0.300), (0.374, 0.289), (0.420, 0.283), (0.466, 0.283), (0.512, 0.287), (0.554, 0.305), (0.582, 0.309), (0.599, 0.267), (0.600, 0.220), (0.601, 0.174), (0.590, 0.129), (0.587, 0.083), (0.580, 0.037), (0.575, -0.009), (0.583, -0.040)] | train | |
16252_frame_30 | <image>
Based on the observation image, the task instruction "move_the_yellow_cloth_behind_the_pot", and the current 2D position of the robot gripper at (0.573, 0.325), please first predict the start and end positions of the target object to be manipulated, denoted as (x_start, y_start) and (x_end, y_end). Then, generate a sequence of 16 future keypoints representing the gripper's trajectory to complete the task. The output should be in the format: (x_start, y_start), (x_end, y_end), [(x1, y1), (x2, y2), ...], with all coordinates normalized between 0 and 1. | (0.543, 0.434), (0.564, 0.443), [(0.584, 0.304), (0.596, 0.282), (0.599, 0.259), (0.600, 0.234), (0.601, 0.210), (0.601, 0.186), (0.598, 0.162), (0.591, 0.138), (0.589, 0.114), (0.588, 0.090), (0.585, 0.066), (0.581, 0.042), (0.576, 0.018), (0.575, -0.006), (0.577, -0.030), (0.583, -0.040)] | train | |
32090_frame_0 | <image>
Based on the observation image, the task instruction "put_the_green_block_in_the_middle_of_the_table", and the current 2D position of the robot gripper at (0.129, 0.061), please first predict the start and end positions of the target object to be manipulated, denoted as (x_start, y_start) and (x_end, y_end). Then, generate a sequence of 16 future keypoints representing the gripper's trajectory to complete the task. The output should be in the format: (x_start, y_start), (x_end, y_end), [(x1, y1), (x2, y2), ...], with all coordinates normalized between 0 and 1. | (0.182, 0.400), (0.314, 0.441), [(0.139, 0.127), (0.139, 0.195), (0.115, 0.257), (0.127, 0.275), (0.149, 0.335), (0.160, 0.321), (0.201, 0.348), (0.267, 0.342), (0.333, 0.360), (0.396, 0.383), (0.457, 0.413), (0.504, 0.427), (0.532, 0.367), (0.528, 0.320), (0.461, 0.320), (0.413, 0.280)] | train | |
32090_frame_14 | <image>
Based on the observation image, the task instruction "put_the_green_block_in_the_middle_of_the_table", and the current 2D position of the robot gripper at (0.127, 0.271), please first predict the start and end positions of the target object to be manipulated, denoted as (x_start, y_start) and (x_end, y_end). Then, generate a sequence of 16 future keypoints representing the gripper's trajectory to complete the task. The output should be in the format: (x_start, y_start), (x_end, y_end), [(x1, y1), (x2, y2), ...], with all coordinates normalized between 0 and 1. | (0.182, 0.400), (0.314, 0.441), [(0.147, 0.314), (0.155, 0.355), (0.167, 0.328), (0.198, 0.349), (0.248, 0.341), (0.298, 0.350), (0.347, 0.364), (0.394, 0.383), (0.441, 0.403), (0.482, 0.433), (0.509, 0.413), (0.532, 0.368), (0.544, 0.325), (0.495, 0.318), (0.447, 0.312), (0.413, 0.280)] | train | |
32090_frame_28 | <image>
Based on the observation image, the task instruction "put_the_green_block_in_the_middle_of_the_table", and the current 2D position of the robot gripper at (0.183, 0.355), please first predict the start and end positions of the target object to be manipulated, denoted as (x_start, y_start) and (x_end, y_end). Then, generate a sequence of 16 future keypoints representing the gripper's trajectory to complete the task. The output should be in the format: (x_start, y_start), (x_end, y_end), [(x1, y1), (x2, y2), ...], with all coordinates normalized between 0 and 1. | (0.201, 0.439), (0.314, 0.441), [(0.220, 0.342), (0.259, 0.341), (0.297, 0.350), (0.335, 0.361), (0.372, 0.374), (0.408, 0.388), (0.444, 0.404), (0.475, 0.428), (0.502, 0.432), (0.516, 0.395), (0.537, 0.362), (0.545, 0.329), (0.511, 0.317), (0.471, 0.320), (0.439, 0.304), (0.413, 0.280)] | train | |
20501_frame_0 | <image>
Based on the observation image, the task instruction "move_silver_pot_to_left_burner_of_stove", and the current 2D position of the robot gripper at (0.531, 0.317), please first predict the start and end positions of the target object to be manipulated, denoted as (x_start, y_start) and (x_end, y_end). Then, generate a sequence of 16 future keypoints representing the gripper's trajectory to complete the task. The output should be in the format: (x_start, y_start), (x_end, y_end), [(x1, y1), (x2, y2), ...], with all coordinates normalized between 0 and 1. | (0.672, 0.414), (0.416, 0.299), [(0.570, 0.325), (0.618, 0.298), (0.674, 0.299), (0.729, 0.304), (0.774, 0.331), (0.774, 0.364), (0.766, 0.309), (0.742, 0.259), (0.707, 0.219), (0.654, 0.201), (0.598, 0.203), (0.550, 0.224), (0.502, 0.249), (0.491, 0.272), (0.483, 0.217), (0.473, 0.162)] | train | |
20501_frame_10 | <image>
Based on the observation image, the task instruction "move_silver_pot_to_left_burner_of_stove", and the current 2D position of the robot gripper at (0.776, 0.374), please first predict the start and end positions of the target object to be manipulated, denoted as (x_start, y_start) and (x_end, y_end). Then, generate a sequence of 16 future keypoints representing the gripper's trajectory to complete the task. The output should be in the format: (x_start, y_start), (x_end, y_end), [(x1, y1), (x2, y2), ...], with all coordinates normalized between 0 and 1. | (0.650, 0.408), (0.416, 0.299), [(0.771, 0.341), (0.765, 0.306), (0.751, 0.273), (0.732, 0.243), (0.708, 0.219), (0.674, 0.208), (0.640, 0.201), (0.604, 0.202), (0.569, 0.205), (0.543, 0.230), (0.512, 0.246), (0.483, 0.260), (0.491, 0.267), (0.485, 0.232), (0.481, 0.197), (0.473, 0.162)] | train | |
20501_frame_20 | <image>
Based on the observation image, the task instruction "move_silver_pot_to_left_burner_of_stove", and the current 2D position of the robot gripper at (0.481, 0.259), please first predict the start and end positions of the target object to be manipulated, denoted as (x_start, y_start) and (x_end, y_end). Then, generate a sequence of 16 future keypoints representing the gripper's trajectory to complete the task. The output should be in the format: (x_start, y_start), (x_end, y_end), [(x1, y1), (x2, y2), ...], with all coordinates normalized between 0 and 1. | (0.389, 0.297), (0.416, 0.299), [(0.486, 0.265), (0.486, 0.274), (0.492, 0.278), (0.491, 0.269), (0.490, 0.260), (0.488, 0.251), (0.487, 0.242), (0.485, 0.233), (0.484, 0.224), (0.483, 0.215), (0.482, 0.206), (0.481, 0.197), (0.480, 0.188), (0.480, 0.179), (0.476, 0.171), (0.473, 0.162)] | train | |
22394_frame_0 | <image>
Based on the observation image, the task instruction "pick_the_spoon_up_off_the_blue_cloth_and_place_it_to_the_right_of_the_metal_pot,_on_the_table.", and the current 2D position of the robot gripper at (0.348, 0.390), please first predict the start and end positions of the target object to be manipulated, denoted as (x_start, y_start) and (x_end, y_end). Then, generate a sequence of 16 future keypoints representing the gripper's trajectory to complete the task. The output should be in the format: (x_start, y_start), (x_end, y_end), [(x1, y1), (x2, y2), ...], with all coordinates normalized between 0 and 1. | (0.594, 0.436), (0.314, 0.420), [(0.390, 0.413), (0.439, 0.393), (0.488, 0.372), (0.536, 0.348), (0.581, 0.342), (0.607, 0.384), (0.599, 0.415), (0.589, 0.363), (0.556, 0.322), (0.517, 0.289), (0.464, 0.279), (0.411, 0.271), (0.359, 0.270), (0.310, 0.290), (0.313, 0.302), (0.309, 0.249)] | train | |
22394_frame_8 | <image>
Based on the observation image, the task instruction "pick_the_spoon_up_off_the_blue_cloth_and_place_it_to_the_right_of_the_metal_pot,_on_the_table.", and the current 2D position of the robot gripper at (0.602, 0.366), please first predict the start and end positions of the target object to be manipulated, denoted as (x_start, y_start) and (x_end, y_end). Then, generate a sequence of 16 future keypoints representing the gripper's trajectory to complete the task. The output should be in the format: (x_start, y_start), (x_end, y_end), [(x1, y1), (x2, y2), ...], with all coordinates normalized between 0 and 1. | (0.576, 0.486), (0.314, 0.420), [(0.604, 0.398), (0.599, 0.418), (0.593, 0.384), (0.585, 0.351), (0.561, 0.327), (0.537, 0.302), (0.507, 0.287), (0.473, 0.280), (0.439, 0.274), (0.405, 0.271), (0.371, 0.267), (0.338, 0.276), (0.308, 0.292), (0.318, 0.316), (0.311, 0.283), (0.309, 0.249)] | train | |
22394_frame_17 | <image>
Based on the observation image, the task instruction "pick_the_spoon_up_off_the_blue_cloth_and_place_it_to_the_right_of_the_metal_pot,_on_the_table.", and the current 2D position of the robot gripper at (0.526, 0.291), please first predict the start and end positions of the target object to be manipulated, denoted as (x_start, y_start) and (x_end, y_end). Then, generate a sequence of 16 future keypoints representing the gripper's trajectory to complete the task. The output should be in the format: (x_start, y_start), (x_end, y_end), [(x1, y1), (x2, y2), ...], with all coordinates normalized between 0 and 1. | (0.541, 0.338), (0.314, 0.420), [(0.505, 0.287), (0.485, 0.283), (0.465, 0.279), (0.445, 0.275), (0.424, 0.273), (0.404, 0.270), (0.384, 0.268), (0.363, 0.268), (0.344, 0.274), (0.324, 0.280), (0.307, 0.292), (0.307, 0.309), (0.316, 0.310), (0.312, 0.290), (0.310, 0.270), (0.309, 0.249)] | train | |
25215_frame_0 | <image>
Based on the observation image, the task instruction "put_carrot_on_plate", and the current 2D position of the robot gripper at (0.831, 0.326), please first predict the start and end positions of the target object to be manipulated, denoted as (x_start, y_start) and (x_end, y_end). Then, generate a sequence of 16 future keypoints representing the gripper's trajectory to complete the task. The output should be in the format: (x_start, y_start), (x_end, y_end), [(x1, y1), (x2, y2), ...], with all coordinates normalized between 0 and 1. | (0.434, 0.557), (0.746, 0.650), [(0.783, 0.376), (0.711, 0.343), (0.644, 0.301), (0.563, 0.300), (0.483, 0.320), (0.430, 0.376), (0.419, 0.456), (0.435, 0.455), (0.470, 0.381), (0.525, 0.319), (0.606, 0.302), (0.687, 0.311), (0.762, 0.337), (0.814, 0.402), (0.829, 0.480), (0.801, 0.557)] | train | |
25215_frame_7 | <image>
Based on the observation image, the task instruction "put_carrot_on_plate", and the current 2D position of the robot gripper at (0.433, 0.369), please first predict the start and end positions of the target object to be manipulated, denoted as (x_start, y_start) and (x_end, y_end). Then, generate a sequence of 16 future keypoints representing the gripper's trajectory to complete the task. The output should be in the format: (x_start, y_start), (x_end, y_end), [(x1, y1), (x2, y2), ...], with all coordinates normalized between 0 and 1. | (0.434, 0.559), (0.746, 0.650), [(0.418, 0.418), (0.421, 0.470), (0.429, 0.470), (0.447, 0.421), (0.473, 0.377), (0.508, 0.338), (0.551, 0.313), (0.602, 0.303), (0.653, 0.304), (0.704, 0.314), (0.754, 0.328), (0.787, 0.368), (0.820, 0.409), (0.829, 0.458), (0.821, 0.509), (0.801, 0.557)] | train | |
25215_frame_15 | <image>
Based on the observation image, the task instruction "put_carrot_on_plate", and the current 2D position of the robot gripper at (0.458, 0.394), please first predict the start and end positions of the target object to be manipulated, denoted as (x_start, y_start) and (x_end, y_end). Then, generate a sequence of 16 future keypoints representing the gripper's trajectory to complete the task. The output should be in the format: (x_start, y_start), (x_end, y_end), [(x1, y1), (x2, y2), ...], with all coordinates normalized between 0 and 1. | (0.445, 0.479), (0.746, 0.650), [(0.483, 0.366), (0.507, 0.338), (0.536, 0.316), (0.572, 0.309), (0.609, 0.302), (0.645, 0.302), (0.682, 0.310), (0.719, 0.317), (0.753, 0.327), (0.777, 0.356), (0.801, 0.385), (0.824, 0.414), (0.828, 0.451), (0.827, 0.488), (0.816, 0.523), (0.801, 0.557)] | train | |
18325_frame_0 | <image>
Based on the observation image, the task instruction "take_the_toy_mouse_out_of_the_drawer_and_put_it_on_the_right_side_of_the_table", and the current 2D position of the robot gripper at (0.633, 0.169), please first predict the start and end positions of the target object to be manipulated, denoted as (x_start, y_start) and (x_end, y_end). Then, generate a sequence of 16 future keypoints representing the gripper's trajectory to complete the task. The output should be in the format: (x_start, y_start), (x_end, y_end), [(x1, y1), (x2, y2), ...], with all coordinates normalized between 0 and 1. | (0.203, 0.484), (0.203, 0.484), [(0.639, 0.164), (0.603, 0.156), (0.622, 0.165), (0.621, 0.232), (0.618, 0.156), (0.616, 0.080), (0.669, 0.080), (0.727, 0.131), (0.782, 0.184), (0.834, 0.239), (0.869, 0.308), (0.915, 0.365), (0.916, 0.426), (0.912, 0.353), (0.900, 0.278), (0.882, 0.203)] | train | |
18325_frame_15 | <image>
Based on the observation image, the task instruction "take_the_toy_mouse_out_of_the_drawer_and_put_it_on_the_right_side_of_the_table", and the current 2D position of the robot gripper at (0.605, 0.191), please first predict the start and end positions of the target object to be manipulated, denoted as (x_start, y_start) and (x_end, y_end). Then, generate a sequence of 16 future keypoints representing the gripper's trajectory to complete the task. The output should be in the format: (x_start, y_start), (x_end, y_end), [(x1, y1), (x2, y2), ...], with all coordinates normalized between 0 and 1. | (0.203, 0.484), (0.203, 0.484), [(0.629, 0.185), (0.618, 0.220), (0.618, 0.155), (0.616, 0.091), (0.648, 0.072), (0.701, 0.108), (0.749, 0.151), (0.794, 0.197), (0.837, 0.245), (0.866, 0.303), (0.909, 0.349), (0.929, 0.407), (0.909, 0.393), (0.912, 0.329), (0.897, 0.266), (0.882, 0.203)] | train | |
18325_frame_30 | <image>
Based on the observation image, the task instruction "take_the_toy_mouse_out_of_the_drawer_and_put_it_on_the_right_side_of_the_table", and the current 2D position of the robot gripper at (0.616, 0.082), please first predict the start and end positions of the target object to be manipulated, denoted as (x_start, y_start) and (x_end, y_end). Then, generate a sequence of 16 future keypoints representing the gripper's trajectory to complete the task. The output should be in the format: (x_start, y_start), (x_end, y_end), [(x1, y1), (x2, y2), ...], with all coordinates normalized between 0 and 1. | (0.203, 0.484), (0.203, 0.484), [(0.641, 0.069), (0.683, 0.092), (0.719, 0.124), (0.754, 0.156), (0.788, 0.190), (0.821, 0.225), (0.847, 0.264), (0.868, 0.307), (0.903, 0.339), (0.923, 0.382), (0.924, 0.425), (0.909, 0.391), (0.913, 0.343), (0.904, 0.297), (0.893, 0.250), (0.882, 0.203)] | train | |
34393_frame_0 | <image>
Based on the observation image, the task instruction "take_the_purple_cloth_and_leave_it_on_the_top_right_of_the_stove", and the current 2D position of the robot gripper at (0.335, 0.229), please first predict the start and end positions of the target object to be manipulated, denoted as (x_start, y_start) and (x_end, y_end). Then, generate a sequence of 16 future keypoints representing the gripper's trajectory to complete the task. The output should be in the format: (x_start, y_start), (x_end, y_end), [(x1, y1), (x2, y2), ...], with all coordinates normalized between 0 and 1. | (0.467, 0.646), (0.684, 0.625), [(0.328, 0.283), (0.355, 0.309), (0.393, 0.324), (0.427, 0.365), (0.462, 0.407), (0.483, 0.457), (0.490, 0.508), (0.487, 0.559), (0.538, 0.543), (0.592, 0.533), (0.645, 0.526), (0.698, 0.525), (0.710, 0.481), (0.705, 0.427), (0.701, 0.373), (0.691, 0.320)] | train | |
34393_frame_10 | <image>
Based on the observation image, the task instruction "take_the_purple_cloth_and_leave_it_on_the_top_right_of_the_stove", and the current 2D position of the robot gripper at (0.488, 0.524), please first predict the start and end positions of the target object to be manipulated, denoted as (x_start, y_start) and (x_end, y_end). Then, generate a sequence of 16 future keypoints representing the gripper's trajectory to complete the task. The output should be in the format: (x_start, y_start), (x_end, y_end), [(x1, y1), (x2, y2), ...], with all coordinates normalized between 0 and 1. | (0.467, 0.660), (0.684, 0.625), [(0.484, 0.553), (0.507, 0.555), (0.534, 0.544), (0.563, 0.538), (0.592, 0.533), (0.621, 0.528), (0.650, 0.526), (0.679, 0.522), (0.703, 0.524), (0.708, 0.495), (0.710, 0.466), (0.706, 0.436), (0.704, 0.407), (0.701, 0.377), (0.697, 0.348), (0.691, 0.320)] | train | |
34393_frame_21 | <image>
Based on the observation image, the task instruction "take_the_purple_cloth_and_leave_it_on_the_top_right_of_the_stove", and the current 2D position of the robot gripper at (0.670, 0.523), please first predict the start and end positions of the target object to be manipulated, denoted as (x_start, y_start) and (x_end, y_end). Then, generate a sequence of 16 future keypoints representing the gripper's trajectory to complete the task. The output should be in the format: (x_start, y_start), (x_end, y_end), [(x1, y1), (x2, y2), ...], with all coordinates normalized between 0 and 1. | (0.643, 0.615), (0.684, 0.625), [(0.685, 0.521), (0.700, 0.526), (0.705, 0.517), (0.708, 0.502), (0.709, 0.487), (0.710, 0.471), (0.708, 0.456), (0.707, 0.441), (0.705, 0.426), (0.704, 0.410), (0.703, 0.395), (0.702, 0.380), (0.700, 0.364), (0.697, 0.349), (0.694, 0.334), (0.691, 0.320)] | train | |
1107_frame_0 | <image>
Based on the observation image, the task instruction "put_the_brush_on_the_cloth.", and the current 2D position of the robot gripper at (0.539, 0.226), please first predict the start and end positions of the target object to be manipulated, denoted as (x_start, y_start) and (x_end, y_end). Then, generate a sequence of 16 future keypoints representing the gripper's trajectory to complete the task. The output should be in the format: (x_start, y_start), (x_end, y_end), [(x1, y1), (x2, y2), ...], with all coordinates normalized between 0 and 1. | (0.236, 0.453), (0.551, 0.621), [(0.492, 0.250), (0.430, 0.236), (0.367, 0.238), (0.306, 0.252), (0.278, 0.299), (0.261, 0.331), (0.253, 0.272), (0.302, 0.255), (0.361, 0.268), (0.415, 0.301), (0.466, 0.337), (0.511, 0.382), (0.538, 0.399), (0.502, 0.349), (0.480, 0.290), (0.467, 0.236)] | train | |
1107_frame_10 | <image>
Based on the observation image, the task instruction "put_the_brush_on_the_cloth.", and the current 2D position of the robot gripper at (0.280, 0.297), please first predict the start and end positions of the target object to be manipulated, denoted as (x_start, y_start) and (x_end, y_end). Then, generate a sequence of 16 future keypoints representing the gripper's trajectory to complete the task. The output should be in the format: (x_start, y_start), (x_end, y_end), [(x1, y1), (x2, y2), ...], with all coordinates normalized between 0 and 1. | (0.227, 0.449), (0.551, 0.621), [(0.258, 0.329), (0.251, 0.314), (0.253, 0.270), (0.284, 0.256), (0.327, 0.254), (0.366, 0.271), (0.404, 0.293), (0.440, 0.318), (0.475, 0.344), (0.505, 0.376), (0.536, 0.405), (0.527, 0.389), (0.504, 0.353), (0.489, 0.312), (0.472, 0.272), (0.467, 0.236)] | train | |
1107_frame_20 | <image>
Based on the observation image, the task instruction "put_the_brush_on_the_cloth.", and the current 2D position of the robot gripper at (0.407, 0.295), please first predict the start and end positions of the target object to be manipulated, denoted as (x_start, y_start) and (x_end, y_end). Then, generate a sequence of 16 future keypoints representing the gripper's trajectory to complete the task. The output should be in the format: (x_start, y_start), (x_end, y_end), [(x1, y1), (x2, y2), ...], with all coordinates normalized between 0 and 1. | (0.379, 0.455), (0.551, 0.621), [(0.427, 0.309), (0.447, 0.323), (0.467, 0.337), (0.484, 0.354), (0.501, 0.372), (0.518, 0.388), (0.536, 0.405), (0.540, 0.403), (0.524, 0.386), (0.509, 0.367), (0.501, 0.344), (0.492, 0.322), (0.484, 0.299), (0.474, 0.277), (0.465, 0.254), (0.467, 0.236)] | train | |
10805_frame_0 | <image>
Based on the observation image, the task instruction "put_the_chicken_on_the_cloth", and the current 2D position of the robot gripper at (0.384, 0.160), please first predict the start and end positions of the target object to be manipulated, denoted as (x_start, y_start) and (x_end, y_end). Then, generate a sequence of 16 future keypoints representing the gripper's trajectory to complete the task. The output should be in the format: (x_start, y_start), (x_end, y_end), [(x1, y1), (x2, y2), ...], with all coordinates normalized between 0 and 1. | (0.668, 0.711), (0.592, 0.398), [(0.396, 0.232), (0.444, 0.283), (0.499, 0.332), (0.548, 0.387), (0.595, 0.442), (0.611, 0.513), (0.610, 0.587), (0.612, 0.632), (0.655, 0.603), (0.653, 0.530), (0.645, 0.457), (0.621, 0.389), (0.555, 0.357), (0.548, 0.289), (0.543, 0.216), (0.490, 0.234)] | train | |
10805_frame_10 | <image>
Based on the observation image, the task instruction "put_the_chicken_on_the_cloth", and the current 2D position of the robot gripper at (0.605, 0.633), please first predict the start and end positions of the target object to be manipulated, denoted as (x_start, y_start) and (x_end, y_end). Then, generate a sequence of 16 future keypoints representing the gripper's trajectory to complete the task. The output should be in the format: (x_start, y_start), (x_end, y_end), [(x1, y1), (x2, y2), ...], with all coordinates normalized between 0 and 1. | (0.664, 0.705), (0.592, 0.398), [(0.614, 0.621), (0.641, 0.615), (0.654, 0.588), (0.653, 0.550), (0.652, 0.511), (0.650, 0.473), (0.639, 0.436), (0.628, 0.399), (0.599, 0.377), (0.564, 0.361), (0.547, 0.331), (0.549, 0.296), (0.545, 0.257), (0.543, 0.219), (0.520, 0.210), (0.490, 0.234)] | train | |
10805_frame_20 | <image>
Based on the observation image, the task instruction "put_the_chicken_on_the_cloth", and the current 2D position of the robot gripper at (0.566, 0.362), please first predict the start and end positions of the target object to be manipulated, denoted as (x_start, y_start) and (x_end, y_end). Then, generate a sequence of 16 future keypoints representing the gripper's trajectory to complete the task. The output should be in the format: (x_start, y_start), (x_end, y_end), [(x1, y1), (x2, y2), ...], with all coordinates normalized between 0 and 1. | (0.600, 0.400), (0.592, 0.398), [(0.553, 0.355), (0.549, 0.341), (0.546, 0.327), (0.548, 0.313), (0.550, 0.301), (0.548, 0.287), (0.546, 0.272), (0.545, 0.258), (0.545, 0.243), (0.544, 0.229), (0.543, 0.214), (0.535, 0.204), (0.524, 0.207), (0.513, 0.216), (0.501, 0.225), (0.490, 0.234)] | train | |
5832_frame_0 | <image>
Based on the observation image, the task instruction "move_the_yellow_cloth_to_the_top_center_of_the_table_to_the_left_of_the_knife.", and the current 2D position of the robot gripper at (0.576, 0.167), please first predict the start and end positions of the target object to be manipulated, denoted as (x_start, y_start) and (x_end, y_end). Then, generate a sequence of 16 future keypoints representing the gripper's trajectory to complete the task. The output should be in the format: (x_start, y_start), (x_end, y_end), [(x1, y1), (x2, y2), ...], with all coordinates normalized between 0 and 1. | (0.496, 0.617), (0.576, 0.402), [(0.548, 0.234), (0.512, 0.298), (0.467, 0.357), (0.425, 0.417), (0.412, 0.487), (0.474, 0.510), (0.520, 0.561), (0.558, 0.499), (0.580, 0.429), (0.596, 0.356), (0.599, 0.314), (0.601, 0.261), (0.615, 0.188), (0.631, 0.116), (0.642, 0.043), (0.637, -0.026)] | train | |
5832_frame_10 | <image>
Based on the observation image, the task instruction "move_the_yellow_cloth_to_the_top_center_of_the_table_to_the_left_of_the_knife.", and the current 2D position of the robot gripper at (0.517, 0.556), please first predict the start and end positions of the target object to be manipulated, denoted as (x_start, y_start) and (x_end, y_end). Then, generate a sequence of 16 future keypoints representing the gripper's trajectory to complete the task. The output should be in the format: (x_start, y_start), (x_end, y_end), [(x1, y1), (x2, y2), ...], with all coordinates normalized between 0 and 1. | (0.494, 0.619), (0.576, 0.402), [(0.539, 0.532), (0.559, 0.495), (0.573, 0.456), (0.583, 0.415), (0.592, 0.374), (0.600, 0.333), (0.603, 0.306), (0.596, 0.301), (0.602, 0.259), (0.609, 0.218), (0.617, 0.176), (0.626, 0.136), (0.635, 0.095), (0.641, 0.053), (0.643, 0.011), (0.637, -0.026)] | train | |
5832_frame_20 | <image>
Based on the observation image, the task instruction "move_the_yellow_cloth_to_the_top_center_of_the_table_to_the_left_of_the_knife.", and the current 2D position of the robot gripper at (0.607, 0.299), please first predict the start and end positions of the target object to be manipulated, denoted as (x_start, y_start) and (x_end, y_end). Then, generate a sequence of 16 future keypoints representing the gripper's trajectory to complete the task. The output should be in the format: (x_start, y_start), (x_end, y_end), [(x1, y1), (x2, y2), ...], with all coordinates normalized between 0 and 1. | (0.588, 0.363), (0.576, 0.402), [(0.595, 0.321), (0.596, 0.302), (0.598, 0.278), (0.603, 0.255), (0.607, 0.231), (0.611, 0.207), (0.616, 0.184), (0.621, 0.160), (0.626, 0.136), (0.631, 0.113), (0.636, 0.089), (0.639, 0.065), (0.642, 0.041), (0.643, 0.017), (0.643, -0.006), (0.637, -0.026)] | train | |
12313_frame_0 | <image>
Based on the observation image, the task instruction "move_the_silver_cover_to_the_left_upper_burner", and the current 2D position of the robot gripper at (0.661, 0.221), please first predict the start and end positions of the target object to be manipulated, denoted as (x_start, y_start) and (x_end, y_end). Then, generate a sequence of 16 future keypoints representing the gripper's trajectory to complete the task. The output should be in the format: (x_start, y_start), (x_end, y_end), [(x1, y1), (x2, y2), ...], with all coordinates normalized between 0 and 1. | (0.840, 0.725), (0.840, 0.725), [(0.646, 0.266), (0.622, 0.297), (0.574, 0.290), (0.527, 0.283), (0.479, 0.278), (0.433, 0.264), (0.394, 0.283), (0.365, 0.290), (0.399, 0.257), (0.434, 0.224), (0.472, 0.196), (0.481, 0.165), (0.469, 0.181), (0.492, 0.139), (0.517, 0.165), (0.525, 0.212)] | train | |
12313_frame_9 | <image>
Based on the observation image, the task instruction "move_the_silver_cover_to_the_left_upper_burner", and the current 2D position of the robot gripper at (0.373, 0.293), please first predict the start and end positions of the target object to be manipulated, denoted as (x_start, y_start) and (x_end, y_end). Then, generate a sequence of 16 future keypoints representing the gripper's trajectory to complete the task. The output should be in the format: (x_start, y_start), (x_end, y_end), [(x1, y1), (x2, y2), ...], with all coordinates normalized between 0 and 1. | (0.840, 0.725), (0.840, 0.725), [(0.366, 0.289), (0.384, 0.272), (0.403, 0.254), (0.421, 0.236), (0.440, 0.219), (0.460, 0.204), (0.481, 0.190), (0.491, 0.170), (0.469, 0.166), (0.469, 0.189), (0.477, 0.166), (0.489, 0.144), (0.502, 0.141), (0.515, 0.163), (0.523, 0.186), (0.525, 0.212)] | train | |
12313_frame_18 | <image>
Based on the observation image, the task instruction "move_the_silver_cover_to_the_left_upper_burner", and the current 2D position of the robot gripper at (0.488, 0.167), please first predict the start and end positions of the target object to be manipulated, denoted as (x_start, y_start) and (x_end, y_end). Then, generate a sequence of 16 future keypoints representing the gripper's trajectory to complete the task. The output should be in the format: (x_start, y_start), (x_end, y_end), [(x1, y1), (x2, y2), ...], with all coordinates normalized between 0 and 1. | (0.840, 0.725), (0.840, 0.725), [(0.477, 0.163), (0.467, 0.168), (0.470, 0.180), (0.468, 0.186), (0.472, 0.175), (0.478, 0.164), (0.484, 0.153), (0.490, 0.142), (0.496, 0.132), (0.503, 0.143), (0.509, 0.153), (0.516, 0.164), (0.522, 0.175), (0.523, 0.187), (0.524, 0.199), (0.525, 0.212)] | train | |
25242_frame_0 | <image>
Based on the observation image, the task instruction "put_the_orange_towel_in_the_white_basket", and the current 2D position of the robot gripper at (0.533, 0.506), please first predict the start and end positions of the target object to be manipulated, denoted as (x_start, y_start) and (x_end, y_end). Then, generate a sequence of 16 future keypoints representing the gripper's trajectory to complete the task. The output should be in the format: (x_start, y_start), (x_end, y_end), [(x1, y1), (x2, y2), ...], with all coordinates normalized between 0 and 1. | (0.424, 0.807), (0.818, 0.850), [(0.453, 0.590), (0.441, 0.687), (0.456, 0.595), (0.520, 0.494), (0.460, 0.529), (0.395, 0.630), (0.427, 0.732), (0.475, 0.669), (0.566, 0.590), (0.658, 0.511), (0.775, 0.483), (0.893, 0.485), (0.887, 0.573), (0.823, 0.582), (0.705, 0.565), (0.587, 0.573)] | train | |
25242_frame_15 | <image>
Based on the observation image, the task instruction "put_the_orange_towel_in_the_white_basket", and the current 2D position of the robot gripper at (0.395, 0.631), please first predict the start and end positions of the target object to be manipulated, denoted as (x_start, y_start) and (x_end, y_end). Then, generate a sequence of 16 future keypoints representing the gripper's trajectory to complete the task. The output should be in the format: (x_start, y_start), (x_end, y_end), [(x1, y1), (x2, y2), ...], with all coordinates normalized between 0 and 1. | (0.389, 0.812), (0.818, 0.850), [(0.401, 0.698), (0.450, 0.740), (0.465, 0.678), (0.521, 0.628), (0.579, 0.579), (0.636, 0.530), (0.702, 0.498), (0.776, 0.482), (0.849, 0.477), (0.917, 0.499), (0.892, 0.560), (0.876, 0.610), (0.809, 0.574), (0.735, 0.567), (0.660, 0.566), (0.587, 0.573)] | train | |
25242_frame_30 | <image>
Based on the observation image, the task instruction "put_the_orange_towel_in_the_white_basket", and the current 2D position of the robot gripper at (0.824, 0.472), please first predict the start and end positions of the target object to be manipulated, denoted as (x_start, y_start) and (x_end, y_end). Then, generate a sequence of 16 future keypoints representing the gripper's trajectory to complete the task. The output should be in the format: (x_start, y_start), (x_end, y_end), [(x1, y1), (x2, y2), ...], with all coordinates normalized between 0 and 1. | (0.729, 0.654), (0.818, 0.850), [(0.858, 0.478), (0.892, 0.485), (0.918, 0.501), (0.917, 0.531), (0.895, 0.557), (0.884, 0.589), (0.885, 0.613), (0.854, 0.598), (0.823, 0.582), (0.791, 0.571), (0.757, 0.568), (0.722, 0.566), (0.688, 0.564), (0.653, 0.567), (0.619, 0.569), (0.587, 0.573)] | train | |
8277_frame_0 | <image>
Based on the observation image, the task instruction "put_the_red_object_on_the_top_left_side_of_the_cloth", and the current 2D position of the robot gripper at (0.113, 0.101), please first predict the start and end positions of the target object to be manipulated, denoted as (x_start, y_start) and (x_end, y_end). Then, generate a sequence of 16 future keypoints representing the gripper's trajectory to complete the task. The output should be in the format: (x_start, y_start), (x_end, y_end), [(x1, y1), (x2, y2), ...], with all coordinates normalized between 0 and 1. | (0.264, 0.320), (0.410, 0.439), [(0.123, 0.131), (0.143, 0.152), (0.160, 0.160), (0.171, 0.163), (0.202, 0.159), (0.220, 0.175), (0.236, 0.194), (0.247, 0.213), (0.252, 0.239), (0.269, 0.253), (0.292, 0.234), (0.320, 0.236), (0.345, 0.256), (0.367, 0.279), (0.375, 0.308), (0.394, 0.332)] | train | |
8277_frame_15 | <image>
Based on the observation image, the task instruction "put_the_red_object_on_the_top_left_side_of_the_cloth", and the current 2D position of the robot gripper at (0.213, 0.161), please first predict the start and end positions of the target object to be manipulated, denoted as (x_start, y_start) and (x_end, y_end). Then, generate a sequence of 16 future keypoints representing the gripper's trajectory to complete the task. The output should be in the format: (x_start, y_start), (x_end, y_end), [(x1, y1), (x2, y2), ...], with all coordinates normalized between 0 and 1. | (0.264, 0.320), (0.410, 0.439), [(0.221, 0.175), (0.231, 0.188), (0.239, 0.197), (0.248, 0.213), (0.253, 0.230), (0.255, 0.242), (0.269, 0.253), (0.285, 0.242), (0.300, 0.230), (0.320, 0.236), (0.337, 0.249), (0.352, 0.264), (0.367, 0.279), (0.370, 0.300), (0.383, 0.315), (0.394, 0.332)] | train | |
8277_frame_30 | <image>
Based on the observation image, the task instruction "put_the_red_object_on_the_top_left_side_of_the_cloth", and the current 2D position of the robot gripper at (0.252, 0.238), please first predict the start and end positions of the target object to be manipulated, denoted as (x_start, y_start) and (x_end, y_end). Then, generate a sequence of 16 future keypoints representing the gripper's trajectory to complete the task. The output should be in the format: (x_start, y_start), (x_end, y_end), [(x1, y1), (x2, y2), ...], with all coordinates normalized between 0 and 1. | (0.260, 0.328), (0.410, 0.439), [(0.256, 0.242), (0.263, 0.253), (0.278, 0.252), (0.286, 0.241), (0.295, 0.231), (0.308, 0.232), (0.321, 0.237), (0.332, 0.245), (0.343, 0.254), (0.353, 0.264), (0.362, 0.274), (0.369, 0.286), (0.370, 0.300), (0.378, 0.311), (0.387, 0.321), (0.394, 0.332)] | train | |
12882_frame_0 | <image>
Based on the observation image, the task instruction "placed_the_silver_pot_on_the_yellow_cloth", and the current 2D position of the robot gripper at (0.660, 0.280), please first predict the start and end positions of the target object to be manipulated, denoted as (x_start, y_start) and (x_end, y_end). Then, generate a sequence of 16 future keypoints representing the gripper's trajectory to complete the task. The output should be in the format: (x_start, y_start), (x_end, y_end), [(x1, y1), (x2, y2), ...], with all coordinates normalized between 0 and 1. | (0.352, 0.627), (0.529, 0.414), [(0.620, 0.342), (0.541, 0.358), (0.471, 0.394), (0.446, 0.465), (0.452, 0.539), (0.433, 0.477), (0.436, 0.403), (0.480, 0.338), (0.539, 0.288), (0.616, 0.264), (0.695, 0.251), (0.680, 0.327), (0.692, 0.338), (0.634, 0.366), (0.619, 0.293), (0.605, 0.288)] | train | |
12882_frame_12 | <image>
Based on the observation image, the task instruction "placed_the_silver_pot_on_the_yellow_cloth", and the current 2D position of the robot gripper at (0.463, 0.518), please first predict the start and end positions of the target object to be manipulated, denoted as (x_start, y_start) and (x_end, y_end). Then, generate a sequence of 16 future keypoints representing the gripper's trajectory to complete the task. The output should be in the format: (x_start, y_start), (x_end, y_end), [(x1, y1), (x2, y2), ...], with all coordinates normalized between 0 and 1. | (0.355, 0.570), (0.529, 0.414), [(0.433, 0.476), (0.428, 0.427), (0.445, 0.376), (0.481, 0.336), (0.516, 0.296), (0.566, 0.280), (0.617, 0.264), (0.669, 0.254), (0.698, 0.278), (0.680, 0.328), (0.674, 0.338), (0.668, 0.329), (0.633, 0.366), (0.621, 0.320), (0.617, 0.267), (0.605, 0.288)] | train | |
12882_frame_25 | <image>
Based on the observation image, the task instruction "placed_the_silver_pot_on_the_yellow_cloth", and the current 2D position of the robot gripper at (0.695, 0.335), please first predict the start and end positions of the target object to be manipulated, denoted as (x_start, y_start) and (x_end, y_end). Then, generate a sequence of 16 future keypoints representing the gripper's trajectory to complete the task. The output should be in the format: (x_start, y_start), (x_end, y_end), [(x1, y1), (x2, y2), ...], with all coordinates normalized between 0 and 1. | (0.562, 0.383), (0.529, 0.414), [(0.682, 0.334), (0.668, 0.328), (0.655, 0.336), (0.646, 0.348), (0.640, 0.361), (0.628, 0.366), (0.625, 0.351), (0.622, 0.335), (0.621, 0.320), (0.620, 0.305), (0.619, 0.290), (0.618, 0.274), (0.617, 0.259), (0.610, 0.259), (0.607, 0.273), (0.605, 0.288)] | train | |
17830_frame_0 | <image>
Based on the observation image, the task instruction "place_the_banana_on_top_of_the_green_towel", and the current 2D position of the robot gripper at (0.310, 0.097), please first predict the start and end positions of the target object to be manipulated, denoted as (x_start, y_start) and (x_end, y_end). Then, generate a sequence of 16 future keypoints representing the gripper's trajectory to complete the task. The output should be in the format: (x_start, y_start), (x_end, y_end), [(x1, y1), (x2, y2), ...], with all coordinates normalized between 0 and 1. | (0.389, 0.367), (0.678, 0.510), [(0.287, 0.104), (0.304, 0.180), (0.342, 0.246), (0.378, 0.315), (0.372, 0.282), (0.367, 0.204), (0.397, 0.147), (0.467, 0.171), (0.525, 0.223), (0.577, 0.282), (0.627, 0.341), (0.669, 0.360), (0.667, 0.340), (0.657, 0.263), (0.642, 0.187), (0.617, 0.113)] | train | |
17830_frame_9 | <image>
Based on the observation image, the task instruction "place_the_banana_on_top_of_the_green_towel", and the current 2D position of the robot gripper at (0.364, 0.295), please first predict the start and end positions of the target object to be manipulated, denoted as (x_start, y_start) and (x_end, y_end). Then, generate a sequence of 16 future keypoints representing the gripper's trajectory to complete the task. The output should be in the format: (x_start, y_start), (x_end, y_end), [(x1, y1), (x2, y2), ...], with all coordinates normalized between 0 and 1. | (0.393, 0.369), (0.678, 0.510), [(0.376, 0.323), (0.370, 0.264), (0.367, 0.204), (0.379, 0.146), (0.439, 0.150), (0.485, 0.187), (0.530, 0.227), (0.569, 0.273), (0.608, 0.318), (0.643, 0.366), (0.667, 0.349), (0.669, 0.337), (0.659, 0.286), (0.655, 0.227), (0.636, 0.170), (0.617, 0.113)] | train | |
17830_frame_19 | <image>
Based on the observation image, the task instruction "place_the_banana_on_top_of_the_green_towel", and the current 2D position of the robot gripper at (0.652, 0.381), please first predict the start and end positions of the target object to be manipulated, denoted as (x_start, y_start) and (x_end, y_end). Then, generate a sequence of 16 future keypoints representing the gripper's trajectory to complete the task. The output should be in the format: (x_start, y_start), (x_end, y_end), [(x1, y1), (x2, y2), ...], with all coordinates normalized between 0 and 1. | (0.609, 0.488), (0.678, 0.510), [(0.670, 0.369), (0.667, 0.348), (0.664, 0.328), (0.672, 0.320), (0.668, 0.341), (0.663, 0.320), (0.660, 0.299), (0.658, 0.278), (0.657, 0.256), (0.655, 0.235), (0.651, 0.214), (0.644, 0.194), (0.637, 0.174), (0.631, 0.153), (0.624, 0.133), (0.617, 0.113)] | train | |
28464_frame_0 | <image>
Based on the observation image, the task instruction "take_the_blue_cloth_from_the_basket_and_put_it_in_the_washing_machine", and the current 2D position of the robot gripper at (0.811, 0.641), please first predict the start and end positions of the target object to be manipulated, denoted as (x_start, y_start) and (x_end, y_end). Then, generate a sequence of 16 future keypoints representing the gripper's trajectory to complete the task. The output should be in the format: (x_start, y_start), (x_end, y_end), [(x1, y1), (x2, y2), ...], with all coordinates normalized between 0 and 1. | (0.705, 0.855), (0.672, 0.246), [(0.779, 0.741), (0.775, 0.740), (0.795, 0.638), (0.812, 0.536), (0.807, 0.432), (0.792, 0.330), (0.752, 0.233), (0.711, 0.138), (0.650, 0.054), (0.589, 0.080), (0.588, 0.184), (0.588, 0.217), (0.633, 0.123), (0.638, 0.156), (0.647, 0.186), (0.664, 0.288)] | train | |
28464_frame_10 | <image>
Based on the observation image, the task instruction "take_the_blue_cloth_from_the_basket_and_put_it_in_the_washing_machine", and the current 2D position of the robot gripper at (0.815, 0.519), please first predict the start and end positions of the target object to be manipulated, denoted as (x_start, y_start) and (x_end, y_end). Then, generate a sequence of 16 future keypoints representing the gripper's trajectory to complete the task. The output should be in the format: (x_start, y_start), (x_end, y_end), [(x1, y1), (x2, y2), ...], with all coordinates normalized between 0 and 1. | (0.756, 0.729), (0.672, 0.246), [(0.808, 0.442), (0.801, 0.366), (0.777, 0.293), (0.748, 0.222), (0.718, 0.151), (0.674, 0.088), (0.625, 0.036), (0.589, 0.089), (0.588, 0.166), (0.577, 0.241), (0.603, 0.190), (0.633, 0.124), (0.639, 0.179), (0.639, 0.136), (0.651, 0.212), (0.664, 0.288)] | train | |
28464_frame_20 | <image>
Based on the observation image, the task instruction "take_the_blue_cloth_from_the_basket_and_put_it_in_the_washing_machine", and the current 2D position of the robot gripper at (0.618, 0.154), please first predict the start and end positions of the target object to be manipulated, denoted as (x_start, y_start) and (x_end, y_end). Then, generate a sequence of 16 future keypoints representing the gripper's trajectory to complete the task. The output should be in the format: (x_start, y_start), (x_end, y_end), [(x1, y1), (x2, y2), ...], with all coordinates normalized between 0 and 1. | (0.625, 0.266), (0.672, 0.246), [(0.627, 0.135), (0.634, 0.129), (0.634, 0.151), (0.636, 0.172), (0.639, 0.186), (0.638, 0.164), (0.637, 0.143), (0.636, 0.121), (0.639, 0.139), (0.643, 0.160), (0.646, 0.182), (0.650, 0.203), (0.653, 0.224), (0.656, 0.246), (0.660, 0.267), (0.664, 0.288)] | train | |
3904_frame_0 | <image>
Based on the observation image, the task instruction "move_blue_napkin_to_back_right_corner_of_stove_top", and the current 2D position of the robot gripper at (0.375, 0.250), please first predict the start and end positions of the target object to be manipulated, denoted as (x_start, y_start) and (x_end, y_end). Then, generate a sequence of 16 future keypoints representing the gripper's trajectory to complete the task. The output should be in the format: (x_start, y_start), (x_end, y_end), [(x1, y1), (x2, y2), ...], with all coordinates normalized between 0 and 1. | (0.391, 0.562), (0.693, 0.455), [(0.376, 0.294), (0.365, 0.336), (0.364, 0.379), (0.368, 0.422), (0.383, 0.459), (0.422, 0.440), (0.456, 0.413), (0.498, 0.402), (0.540, 0.392), (0.582, 0.380), (0.623, 0.367), (0.665, 0.355), (0.699, 0.337), (0.702, 0.297), (0.708, 0.254), (0.709, 0.210)] | train | |
3904_frame_8 | <image>
Based on the observation image, the task instruction "move_blue_napkin_to_back_right_corner_of_stove_top", and the current 2D position of the robot gripper at (0.384, 0.459), please first predict the start and end positions of the target object to be manipulated, denoted as (x_start, y_start) and (x_end, y_end). Then, generate a sequence of 16 future keypoints representing the gripper's trajectory to complete the task. The output should be in the format: (x_start, y_start), (x_end, y_end), [(x1, y1), (x2, y2), ...], with all coordinates normalized between 0 and 1. | (0.393, 0.592), (0.693, 0.455), [(0.411, 0.446), (0.435, 0.430), (0.459, 0.412), (0.488, 0.405), (0.517, 0.398), (0.546, 0.391), (0.574, 0.382), (0.603, 0.373), (0.632, 0.365), (0.660, 0.357), (0.687, 0.344), (0.695, 0.329), (0.702, 0.299), (0.707, 0.270), (0.708, 0.240), (0.709, 0.210)] | train | |
3904_frame_16 | <image>
Based on the observation image, the task instruction "move_blue_napkin_to_back_right_corner_of_stove_top", and the current 2D position of the robot gripper at (0.675, 0.351), please first predict the start and end positions of the target object to be manipulated, denoted as (x_start, y_start) and (x_end, y_end). Then, generate a sequence of 16 future keypoints representing the gripper's trajectory to complete the task. The output should be in the format: (x_start, y_start), (x_end, y_end), [(x1, y1), (x2, y2), ...], with all coordinates normalized between 0 and 1. | (0.672, 0.484), (0.693, 0.455), [(0.684, 0.346), (0.692, 0.340), (0.700, 0.335), (0.695, 0.331), (0.697, 0.321), (0.699, 0.311), (0.701, 0.301), (0.703, 0.291), (0.706, 0.282), (0.707, 0.272), (0.708, 0.261), (0.708, 0.251), (0.708, 0.241), (0.708, 0.231), (0.709, 0.221), (0.709, 0.210)] | train | |
662_frame_0 | <image>
Based on the observation image, the task instruction "move_cloth_to_the_top_left", and the current 2D position of the robot gripper at (0.445, 0.173), please first predict the start and end positions of the target object to be manipulated, denoted as (x_start, y_start) and (x_end, y_end). Then, generate a sequence of 16 future keypoints representing the gripper's trajectory to complete the task. The output should be in the format: (x_start, y_start), (x_end, y_end), [(x1, y1), (x2, y2), ...], with all coordinates normalized between 0 and 1. | (0.391, 0.545), (0.295, 0.301), [(0.474, 0.267), (0.501, 0.362), (0.490, 0.461), (0.460, 0.556), (0.402, 0.632), (0.447, 0.572), (0.470, 0.475), (0.471, 0.375), (0.461, 0.276), (0.441, 0.178), (0.414, 0.082), (0.349, 0.012), (0.274, 0.060), (0.225, 0.138), (0.223, 0.134), (0.211, 0.035)] | train | |
662_frame_13 | <image>
Based on the observation image, the task instruction "move_cloth_to_the_top_left", and the current 2D position of the robot gripper at (0.415, 0.641), please first predict the start and end positions of the target object to be manipulated, denoted as (x_start, y_start) and (x_end, y_end). Then, generate a sequence of 16 future keypoints representing the gripper's trajectory to complete the task. The output should be in the format: (x_start, y_start), (x_end, y_end), [(x1, y1), (x2, y2), ...], with all coordinates normalized between 0 and 1. | (0.393, 0.576), (0.295, 0.301), [(0.444, 0.582), (0.461, 0.516), (0.475, 0.451), (0.472, 0.384), (0.467, 0.316), (0.455, 0.250), (0.442, 0.184), (0.425, 0.119), (0.401, 0.057), (0.353, 0.013), (0.292, 0.028), (0.254, 0.083), (0.226, 0.136), (0.220, 0.168), (0.221, 0.101), (0.211, 0.035)] | train | |
662_frame_27 | <image>
Based on the observation image, the task instruction "move_cloth_to_the_top_left", and the current 2D position of the robot gripper at (0.229, 0.123), please first predict the start and end positions of the target object to be manipulated, denoted as (x_start, y_start) and (x_end, y_end). Then, generate a sequence of 16 future keypoints representing the gripper's trajectory to complete the task. The output should be in the format: (x_start, y_start), (x_end, y_end), [(x1, y1), (x2, y2), ...], with all coordinates normalized between 0 and 1. | (0.299, 0.307), (0.295, 0.301), [(0.226, 0.136), (0.223, 0.150), (0.221, 0.162), (0.220, 0.170), (0.216, 0.181), (0.220, 0.168), (0.222, 0.155), (0.223, 0.141), (0.224, 0.128), (0.223, 0.115), (0.221, 0.101), (0.220, 0.088), (0.218, 0.075), (0.216, 0.061), (0.214, 0.048), (0.211, 0.035)] | train | |
17963_frame_0 | <image>
Based on the observation image, the task instruction "put_the_red_block_on_top_of_the_yellow_block", and the current 2D position of the robot gripper at (0.359, 0.287), please first predict the start and end positions of the target object to be manipulated, denoted as (x_start, y_start) and (x_end, y_end). Then, generate a sequence of 16 future keypoints representing the gripper's trajectory to complete the task. The output should be in the format: (x_start, y_start), (x_end, y_end), [(x1, y1), (x2, y2), ...], with all coordinates normalized between 0 and 1. | (0.352, 0.514), (0.816, 0.486), [(0.321, 0.333), (0.346, 0.393), (0.372, 0.314), (0.422, 0.225), (0.493, 0.155), (0.589, 0.127), (0.683, 0.133), (0.760, 0.193), (0.832, 0.249), (0.894, 0.320), (0.853, 0.340), (0.846, 0.239), (0.802, 0.148), (0.744, 0.068), (0.653, 0.056), (0.558, 0.091)] | train | |
17963_frame_15 | <image>
Based on the observation image, the task instruction "put_the_red_block_on_top_of_the_yellow_block", and the current 2D position of the robot gripper at (0.350, 0.402), please first predict the start and end positions of the target object to be manipulated, denoted as (x_start, y_start) and (x_end, y_end). Then, generate a sequence of 16 future keypoints representing the gripper's trajectory to complete the task. The output should be in the format: (x_start, y_start), (x_end, y_end), [(x1, y1), (x2, y2), ...], with all coordinates normalized between 0 and 1. | (0.363, 0.516), (0.816, 0.486), [(0.371, 0.316), (0.414, 0.239), (0.470, 0.173), (0.546, 0.134), (0.634, 0.128), (0.709, 0.150), (0.771, 0.208), (0.836, 0.253), (0.896, 0.313), (0.852, 0.362), (0.849, 0.274), (0.824, 0.191), (0.784, 0.112), (0.721, 0.053), (0.641, 0.061), (0.558, 0.091)] | train | |
17963_frame_30 | <image>
Based on the observation image, the task instruction "put_the_red_block_on_top_of_the_yellow_block", and the current 2D position of the robot gripper at (0.870, 0.277), please first predict the start and end positions of the target object to be manipulated, denoted as (x_start, y_start) and (x_end, y_end). Then, generate a sequence of 16 future keypoints representing the gripper's trajectory to complete the task. The output should be in the format: (x_start, y_start), (x_end, y_end), [(x1, y1), (x2, y2), ...], with all coordinates normalized between 0 and 1. | (0.812, 0.422), (0.816, 0.486), [(0.895, 0.307), (0.888, 0.348), (0.855, 0.358), (0.853, 0.331), (0.850, 0.289), (0.847, 0.248), (0.833, 0.209), (0.814, 0.172), (0.795, 0.135), (0.776, 0.098), (0.748, 0.070), (0.713, 0.048), (0.675, 0.048), (0.636, 0.063), (0.597, 0.077), (0.558, 0.091)] | train | |
14097_frame_0 | <image>
Based on the observation image, the task instruction "move_spoon_to_purple_cloth,_laying_diagonal.", and the current 2D position of the robot gripper at (0.344, 0.121), please first predict the start and end positions of the target object to be manipulated, denoted as (x_start, y_start) and (x_end, y_end). Then, generate a sequence of 16 future keypoints representing the gripper's trajectory to complete the task. The output should be in the format: (x_start, y_start), (x_end, y_end), [(x1, y1), (x2, y2), ...], with all coordinates normalized between 0 and 1. | (0.482, 0.359), (0.396, 0.584), [(0.375, 0.157), (0.413, 0.186), (0.429, 0.246), (0.394, 0.233), (0.341, 0.201), (0.313, 0.250), (0.301, 0.312), (0.307, 0.374), (0.325, 0.434), (0.341, 0.494), (0.332, 0.446), (0.336, 0.384), (0.346, 0.322), (0.303, 0.277), (0.247, 0.254), (0.185, 0.250)] | train | |
14097_frame_13 | <image>
Based on the observation image, the task instruction "move_spoon_to_purple_cloth,_laying_diagonal.", and the current 2D position of the robot gripper at (0.432, 0.254), please first predict the start and end positions of the target object to be manipulated, denoted as (x_start, y_start) and (x_end, y_end). Then, generate a sequence of 16 future keypoints representing the gripper's trajectory to complete the task. The output should be in the format: (x_start, y_start), (x_end, y_end), [(x1, y1), (x2, y2), ...], with all coordinates normalized between 0 and 1. | (0.482, 0.359), (0.396, 0.584), [(0.397, 0.236), (0.356, 0.206), (0.322, 0.223), (0.307, 0.271), (0.302, 0.321), (0.307, 0.371), (0.320, 0.420), (0.333, 0.468), (0.335, 0.485), (0.332, 0.435), (0.336, 0.385), (0.344, 0.335), (0.322, 0.294), (0.284, 0.262), (0.235, 0.253), (0.185, 0.250)] | train | |
14097_frame_27 | <image>
Based on the observation image, the task instruction "move_spoon_to_purple_cloth,_laying_diagonal.", and the current 2D position of the robot gripper at (0.332, 0.455), please first predict the start and end positions of the target object to be manipulated, denoted as (x_start, y_start) and (x_end, y_end). Then, generate a sequence of 16 future keypoints representing the gripper's trajectory to complete the task. The output should be in the format: (x_start, y_start), (x_end, y_end), [(x1, y1), (x2, y2), ...], with all coordinates normalized between 0 and 1. | (0.391, 0.553), (0.396, 0.584), [(0.335, 0.481), (0.336, 0.496), (0.333, 0.471), (0.332, 0.445), (0.332, 0.419), (0.334, 0.393), (0.339, 0.367), (0.343, 0.342), (0.344, 0.317), (0.326, 0.298), (0.307, 0.280), (0.286, 0.264), (0.262, 0.256), (0.237, 0.253), (0.211, 0.251), (0.185, 0.250)] | train | |
36467_frame_0 | <image>
Based on the observation image, the task instruction "pick_up_the_silver_bowl_and_place_it_on_the_blue_cloth.", and the current 2D position of the robot gripper at (0.574, 0.402), please first predict the start and end positions of the target object to be manipulated, denoted as (x_start, y_start) and (x_end, y_end). Then, generate a sequence of 16 future keypoints representing the gripper's trajectory to complete the task. The output should be in the format: (x_start, y_start), (x_end, y_end), [(x1, y1), (x2, y2), ...], with all coordinates normalized between 0 and 1. | (0.322, 0.270), (0.580, 0.668), [(0.511, 0.377), (0.459, 0.310), (0.430, 0.229), (0.414, 0.148), (0.402, 0.213), (0.408, 0.180), (0.472, 0.234), (0.511, 0.308), (0.543, 0.386), (0.598, 0.452), (0.654, 0.517), (0.712, 0.581), (0.756, 0.648), (0.704, 0.607), (0.708, 0.521), (0.686, 0.439)] | train | |
36467_frame_10 | <image>
Based on the observation image, the task instruction "pick_up_the_silver_bowl_and_place_it_on_the_blue_cloth.", and the current 2D position of the robot gripper at (0.398, 0.180), please first predict the start and end positions of the target object to be manipulated, denoted as (x_start, y_start) and (x_end, y_end). Then, generate a sequence of 16 future keypoints representing the gripper's trajectory to complete the task. The output should be in the format: (x_start, y_start), (x_end, y_end), [(x1, y1), (x2, y2), ...], with all coordinates normalized between 0 and 1. | (0.318, 0.275), (0.580, 0.668), [(0.384, 0.226), (0.406, 0.181), (0.452, 0.216), (0.496, 0.259), (0.514, 0.318), (0.534, 0.375), (0.573, 0.423), (0.613, 0.470), (0.653, 0.517), (0.694, 0.562), (0.736, 0.608), (0.752, 0.653), (0.704, 0.620), (0.706, 0.558), (0.706, 0.497), (0.686, 0.439)] | train | |
36467_frame_20 | <image>
Based on the observation image, the task instruction "pick_up_the_silver_bowl_and_place_it_on_the_blue_cloth.", and the current 2D position of the robot gripper at (0.639, 0.501), please first predict the start and end positions of the target object to be manipulated, denoted as (x_start, y_start) and (x_end, y_end). Then, generate a sequence of 16 future keypoints representing the gripper's trajectory to complete the task. The output should be in the format: (x_start, y_start), (x_end, y_end), [(x1, y1), (x2, y2), ...], with all coordinates normalized between 0 and 1. | (0.471, 0.523), (0.580, 0.668), [(0.658, 0.522), (0.677, 0.543), (0.696, 0.564), (0.715, 0.585), (0.734, 0.606), (0.753, 0.627), (0.755, 0.652), (0.736, 0.639), (0.710, 0.627), (0.705, 0.604), (0.706, 0.576), (0.707, 0.548), (0.708, 0.520), (0.705, 0.492), (0.696, 0.465), (0.686, 0.439)] | train | |
5026_frame_0 | <image>
Based on the observation image, the task instruction "moove_the_silver_pot_between_two_burners", and the current 2D position of the robot gripper at (0.194, 0.225), please first predict the start and end positions of the target object to be manipulated, denoted as (x_start, y_start) and (x_end, y_end). Then, generate a sequence of 16 future keypoints representing the gripper's trajectory to complete the task. The output should be in the format: (x_start, y_start), (x_end, y_end), [(x1, y1), (x2, y2), ...], with all coordinates normalized between 0 and 1. | (0.641, 0.348), (0.426, 0.326), [(0.258, 0.241), (0.321, 0.197), (0.374, 0.137), (0.428, 0.078), (0.504, 0.082), (0.539, 0.153), (0.551, 0.231), (0.560, 0.259), (0.542, 0.181), (0.496, 0.117), (0.442, 0.062), (0.370, 0.080), (0.339, 0.152), (0.338, 0.213), (0.324, 0.135), (0.317, 0.055)] | train | |
5026_frame_8 | <image>
Based on the observation image, the task instruction "moove_the_silver_pot_between_two_burners", and the current 2D position of the robot gripper at (0.552, 0.202), please first predict the start and end positions of the target object to be manipulated, denoted as (x_start, y_start) and (x_end, y_end). Then, generate a sequence of 16 future keypoints representing the gripper's trajectory to complete the task. The output should be in the format: (x_start, y_start), (x_end, y_end), [(x1, y1), (x2, y2), ...], with all coordinates normalized between 0 and 1. | (0.641, 0.352), (0.426, 0.326), [(0.551, 0.249), (0.555, 0.274), (0.555, 0.229), (0.543, 0.183), (0.519, 0.144), (0.488, 0.108), (0.458, 0.073), (0.417, 0.063), (0.374, 0.073), (0.350, 0.113), (0.339, 0.157), (0.338, 0.204), (0.334, 0.194), (0.325, 0.148), (0.322, 0.101), (0.317, 0.055)] | train | |
5026_frame_16 | <image>
Based on the observation image, the task instruction "moove_the_silver_pot_between_two_burners", and the current 2D position of the robot gripper at (0.380, 0.065), please first predict the start and end positions of the target object to be manipulated, denoted as (x_start, y_start) and (x_end, y_end). Then, generate a sequence of 16 future keypoints representing the gripper's trajectory to complete the task. The output should be in the format: (x_start, y_start), (x_end, y_end), [(x1, y1), (x2, y2), ...], with all coordinates normalized between 0 and 1. | (0.494, 0.180), (0.426, 0.326), [(0.369, 0.083), (0.357, 0.101), (0.346, 0.119), (0.340, 0.138), (0.339, 0.159), (0.338, 0.180), (0.338, 0.201), (0.340, 0.222), (0.336, 0.201), (0.332, 0.181), (0.328, 0.160), (0.324, 0.139), (0.323, 0.118), (0.321, 0.097), (0.320, 0.076), (0.317, 0.055)] | train | |
29935_frame_0 | <image>
Based on the observation image, the task instruction "put_carrot_on_plate", and the current 2D position of the robot gripper at (0.595, 0.353), please first predict the start and end positions of the target object to be manipulated, denoted as (x_start, y_start) and (x_end, y_end). Then, generate a sequence of 16 future keypoints representing the gripper's trajectory to complete the task. The output should be in the format: (x_start, y_start), (x_end, y_end), [(x1, y1), (x2, y2), ...], with all coordinates normalized between 0 and 1. | (0.279, 0.383), (0.656, 0.484), [(0.535, 0.302), (0.487, 0.223), (0.418, 0.162), (0.329, 0.139), (0.283, 0.193), (0.245, 0.271), (0.230, 0.361), (0.211, 0.327), (0.225, 0.244), (0.294, 0.190), (0.383, 0.178), (0.468, 0.213), (0.547, 0.262), (0.614, 0.325), (0.646, 0.409), (0.656, 0.430)] | train | |
29935_frame_12 | <image>
Based on the observation image, the task instruction "put_carrot_on_plate", and the current 2D position of the robot gripper at (0.256, 0.260), please first predict the start and end positions of the target object to be manipulated, denoted as (x_start, y_start) and (x_end, y_end). Then, generate a sequence of 16 future keypoints representing the gripper's trajectory to complete the task. The output should be in the format: (x_start, y_start), (x_end, y_end), [(x1, y1), (x2, y2), ...], with all coordinates normalized between 0 and 1. | (0.258, 0.398), (0.656, 0.484), [(0.238, 0.312), (0.227, 0.369), (0.219, 0.351), (0.202, 0.294), (0.226, 0.244), (0.262, 0.199), (0.319, 0.184), (0.375, 0.176), (0.432, 0.191), (0.482, 0.222), (0.532, 0.253), (0.578, 0.290), (0.618, 0.332), (0.646, 0.384), (0.645, 0.443), (0.656, 0.430)] | train | |
29935_frame_24 | <image>
Based on the observation image, the task instruction "put_carrot_on_plate", and the current 2D position of the robot gripper at (0.227, 0.374), please first predict the start and end positions of the target object to be manipulated, denoted as (x_start, y_start) and (x_end, y_end). Then, generate a sequence of 16 future keypoints representing the gripper's trajectory to complete the task. The output should be in the format: (x_start, y_start), (x_end, y_end), [(x1, y1), (x2, y2), ...], with all coordinates normalized between 0 and 1. | (0.311, 0.350), (0.656, 0.484), [(0.211, 0.328), (0.201, 0.280), (0.229, 0.240), (0.258, 0.201), (0.306, 0.187), (0.353, 0.174), (0.401, 0.183), (0.447, 0.199), (0.489, 0.226), (0.531, 0.252), (0.569, 0.282), (0.605, 0.316), (0.633, 0.357), (0.646, 0.403), (0.645, 0.452), (0.656, 0.430)] | train | |
5406_frame_0 | <image>
Based on the observation image, the task instruction "place_the_pan_in_the_front_right_edge_of_the_stove.", and the current 2D position of the robot gripper at (0.470, 0.233), please first predict the start and end positions of the target object to be manipulated, denoted as (x_start, y_start) and (x_end, y_end). Then, generate a sequence of 16 future keypoints representing the gripper's trajectory to complete the task. The output should be in the format: (x_start, y_start), (x_end, y_end), [(x1, y1), (x2, y2), ...], with all coordinates normalized between 0 and 1. | (0.645, 0.439), (0.514, 0.598), [(0.530, 0.272), (0.608, 0.276), (0.684, 0.294), (0.743, 0.343), (0.754, 0.413), (0.751, 0.363), (0.770, 0.287), (0.740, 0.236), (0.718, 0.311), (0.698, 0.387), (0.697, 0.466), (0.699, 0.544), (0.643, 0.543), (0.643, 0.467), (0.636, 0.389), (0.612, 0.352)] | train | |
5406_frame_10 | <image>
Based on the observation image, the task instruction "place_the_pan_in_the_front_right_edge_of_the_stove.", and the current 2D position of the robot gripper at (0.743, 0.420), please first predict the start and end positions of the target object to be manipulated, denoted as (x_start, y_start) and (x_end, y_end). Then, generate a sequence of 16 future keypoints representing the gripper's trajectory to complete the task. The output should be in the format: (x_start, y_start), (x_end, y_end), [(x1, y1), (x2, y2), ...], with all coordinates normalized between 0 and 1. | (0.623, 0.438), (0.514, 0.598), [(0.749, 0.368), (0.764, 0.318), (0.773, 0.266), (0.742, 0.231), (0.726, 0.282), (0.712, 0.333), (0.698, 0.384), (0.697, 0.437), (0.697, 0.489), (0.698, 0.542), (0.660, 0.565), (0.638, 0.521), (0.643, 0.468), (0.640, 0.416), (0.630, 0.364), (0.612, 0.352)] | train | |
5406_frame_20 | <image>
Based on the observation image, the task instruction "place_the_pan_in_the_front_right_edge_of_the_stove.", and the current 2D position of the robot gripper at (0.656, 0.563), please first predict the start and end positions of the target object to be manipulated, denoted as (x_start, y_start) and (x_end, y_end). Then, generate a sequence of 16 future keypoints representing the gripper's trajectory to complete the task. The output should be in the format: (x_start, y_start), (x_end, y_end), [(x1, y1), (x2, y2), ...], with all coordinates normalized between 0 and 1. | (0.512, 0.623), (0.514, 0.598), [(0.647, 0.549), (0.638, 0.536), (0.638, 0.520), (0.640, 0.504), (0.642, 0.488), (0.642, 0.472), (0.643, 0.456), (0.644, 0.440), (0.641, 0.424), (0.639, 0.408), (0.636, 0.391), (0.633, 0.376), (0.629, 0.360), (0.624, 0.344), (0.619, 0.337), (0.612, 0.352)] | train | |
17135_frame_0 | <image>
Based on the observation image, the task instruction "move_the_orange_cloth_to_the_right", and the current 2D position of the robot gripper at (0.555, 0.548), please first predict the start and end positions of the target object to be manipulated, denoted as (x_start, y_start) and (x_end, y_end). Then, generate a sequence of 16 future keypoints representing the gripper's trajectory to complete the task. The output should be in the format: (x_start, y_start), (x_end, y_end), [(x1, y1), (x2, y2), ...], with all coordinates normalized between 0 and 1. | (0.332, 0.691), (0.549, 0.762), [(0.513, 0.565), (0.463, 0.564), (0.413, 0.559), (0.365, 0.572), (0.330, 0.607), (0.311, 0.651), (0.349, 0.681), (0.398, 0.693), (0.448, 0.704), (0.498, 0.708), (0.546, 0.723), (0.588, 0.751), (0.625, 0.724), (0.632, 0.676), (0.628, 0.625), (0.614, 0.578)] | train |