---
license: mit
datasets:
  - allenai/PRISM
language:
  - en
base_model:
  - allenai/Molmo-7B-D-0924
pipeline_tag: robotics
---

# GraspMolmo

GraspMolmo is a generalizable, open-vocabulary, task-oriented grasping (TOG) model for robotic manipulation. Given an image and a task to complete (e.g., "Pour me some tea"), GraspMolmo points to the most appropriate grasp location, which can then be matched to the closest stable grasp.

## Code Sample

Coming soon!
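
In the meantime, here is a minimal sketch of how a Molmo-family checkpoint is typically loaded and prompted with 🤗 Transformers. The repository id, prompt wording, and point-parsing step below are assumptions for illustration, not the released GraspMolmo interface.

```python
# Sketch only: assumes GraspMolmo exposes the same Molmo-style interface as its
# base model (allenai/Molmo-7B-D-0924). Repo id, prompt, and output parsing are
# assumptions, not the official API.
import re

from PIL import Image
from transformers import AutoModelForCausalLM, AutoProcessor, GenerationConfig

MODEL_ID = "allenai/GraspMolmo"  # hypothetical id; replace with the released checkpoint

processor = AutoProcessor.from_pretrained(MODEL_ID, trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained(
    MODEL_ID, trust_remote_code=True, torch_dtype="auto", device_map="auto"
)

# An RGB observation of the scene plus a natural-language task.
image = Image.open("scene.jpg")
task = "Pour me some tea."

inputs = processor.process(images=[image], text=task)
inputs = {k: v.to(model.device).unsqueeze(0) for k, v in inputs.items()}

output = model.generate_from_batch(
    inputs,
    GenerationConfig(max_new_tokens=256, stop_strings="<|endoftext|>"),
    tokenizer=processor.tokenizer,
)

# Decode only the newly generated tokens.
generated = output[0, inputs["input_ids"].size(1):]
text = processor.tokenizer.decode(generated, skip_special_tokens=True)
print(text)

# Molmo-family models typically emit 2D points such as <point x="61.5" y="24.1" ...>,
# with coordinates given as percentages of image width/height. The regex below is an
# illustrative way to extract a predicted grasp point for matching to a stable grasp.
points = re.findall(r'x="([\d.]+)"\s+y="([\d.]+)"', text)
print([(float(x), float(y)) for x, y in points])
```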