Shape Completion and Grasp Prediction for Fast and Versatile Grasping with a Multi-Fingered Hand

  • Published 25 Jul 2023
  • Grasping objects with limited or no prior knowledge about them is a highly relevant skill in assistive robotics, yet in this general setting it has remained an open problem. Strong challenges arise from the diversity of object shapes combined with only partial visibility. To address these challenges, we present a deep learning pipeline consisting of a shape completion module based on a single depth image, followed by a grasp predictor based on the predicted object shape. The shape completion network builds on VQDIF and predicts spatial occupancy values at arbitrary query points. As grasp predictor, we use our two-stage architecture that first generates hand poses using an autoregressive model and then regresses finger joint configurations per pose. To take this approach to the real world, we introduce adapted procedures for training data generation and for the training itself. Critical factors turn out to be sufficient data realism and augmentation, as well as special attention to difficult cases during training. We further show how to make the grasp predictions more robust against uncertainties in the relative pose between hand and object, and propose a new way to handle ambiguities in the grasp training dataset by adapting the network architecture. Experiments on a physical robot platform demonstrate successful grasping of a wide range of household objects based on a depth image from a single viewpoint. (A structural sketch of this pipeline is given below.)
  • Science & Technology
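
The abstract describes the pipeline only at a high level. The sketch below illustrates that structure in PyTorch under loudly stated assumptions: all module names, layer sizes, the joint count, and the noise-conditioned MLP standing in for the autoregressive pose model are invented for illustration; they are not the published VQDIF-based network or grasp predictor.

```python
# Hypothetical sketch of the described two-stage pipeline: a shape completion
# network maps one depth image to occupancy values at arbitrary query points,
# and a grasp predictor then samples hand poses and regresses finger joints
# per pose. All names and sizes are illustrative assumptions.
import torch
import torch.nn as nn


class ShapeCompletion(nn.Module):
    """Encode a single depth image; predict occupancy at 3D query points."""

    def __init__(self, latent_dim: int = 256):
        super().__init__()
        # Stand-in for the VQDIF-based encoder described in the abstract.
        self.encoder = nn.Sequential(
            nn.Conv2d(1, 32, 5, stride=2), nn.ReLU(),
            nn.Conv2d(32, 64, 5, stride=2), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(64, latent_dim),
        )
        # Occupancy head conditioned on the shape latent and a 3D query point.
        self.occupancy_head = nn.Sequential(
            nn.Linear(latent_dim + 3, 128), nn.ReLU(),
            nn.Linear(128, 1),
        )

    def forward(self, depth: torch.Tensor, queries: torch.Tensor):
        # depth: (B, 1, H, W); queries: (B, N, 3) points in the object frame.
        latent = self.encoder(depth)                                 # (B, D)
        z = latent.unsqueeze(1).expand(-1, queries.shape[1], -1)     # (B, N, D)
        occ = self.occupancy_head(torch.cat([z, queries], dim=-1))   # (B, N, 1)
        return torch.sigmoid(occ).squeeze(-1), latent


class GraspPredictor(nn.Module):
    """Stage 1: sample hand poses; stage 2: regress joint angles per pose."""

    def __init__(self, latent_dim: int = 256, noise_dim: int = 16,
                 n_joints: int = 12):
        super().__init__()
        self.noise_dim = noise_dim
        # Stage 1 stand-in: the paper uses an autoregressive pose model; here a
        # plain conditional MLP maps shape latent + noise to a 6-DoF hand pose.
        self.pose_sampler = nn.Sequential(
            nn.Linear(latent_dim + noise_dim, 128), nn.ReLU(),
            nn.Linear(128, 6),
        )
        # Stage 2: regress a finger joint configuration for each sampled pose.
        self.joint_regressor = nn.Sequential(
            nn.Linear(latent_dim + 6, 128), nn.ReLU(),
            nn.Linear(128, n_joints),
        )

    def forward(self, shape_latent: torch.Tensor, n_samples: int = 8):
        z = shape_latent.unsqueeze(1).expand(-1, n_samples, -1)       # (B, S, D)
        noise = torch.randn(*z.shape[:2], self.noise_dim)             # (B, S, 16)
        poses = self.pose_sampler(torch.cat([z, noise], dim=-1))      # (B, S, 6)
        joints = self.joint_regressor(torch.cat([z, poses], dim=-1))  # (B, S, J)
        return poses, joints


# Example: one depth image in, occupancy values and candidate grasps out.
shape_net, grasp_net = ShapeCompletion(), GraspPredictor()
depth = torch.rand(1, 1, 64, 64)               # dummy single-view depth image
queries = torch.rand(1, 512, 3) * 2.0 - 1.0    # query points in [-1, 1]^3
occupancy, latent = shape_net(depth, queries)  # (1, 512), (1, 256)
poses, joints = grasp_net(latent)              # (1, 8, 6), (1, 8, 12)
```

The structural point this sketch tries to capture is that the grasp predictor never sees the raw depth image: it is conditioned on the completed shape representation, matching the abstract's statement that the grasp predictor is based on the predicted object shape.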
