Variation-robust Few-shot 3D Affordance Segmentation for Robotic Manipulation
Under review, 2024
Recommended citation: Dingchang Hu, Tianyu Sun, Pengwei Xie, Siang Chen, Yixiang Dai, Huazhong Yang, Guijin Wang. (2024). Variation-robust Few-shot 3D Affordance Segmentation for Robotic Manipulation.
Abstract
Traditional affordance segmentation on 3D point cloud objects requires massive amounts of annotated training data and can only make predictions within predefined object classes and affordance tasks. To overcome these limitations, we propose a variation-robust few-shot 3D affordance segmentation network (VRNet) for robotic manipulation, which requires only a few affordance annotations for novel object classes and manipulation tasks. In particular, we design an orientation-tolerant feature extractor to address pose variation between support and query point cloud objects, and present a multiscale label propagation algorithm to handle variation in completeness. Extensive experiments on affordance datasets show that VRNet achieves the best segmentation performance compared with previous methods. Moreover, experiments in real robotic scenarios demonstrate the generalization ability of our method.
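For readers unfamiliar with the few-shot setup the abstract refers to, the sketch below illustrates the episode structure: a handful of annotated support point clouds are used to predict per-point affordance labels on an unlabeled query point cloud. This is a minimal, hypothetical prototype-matching baseline on raw coordinates, not VRNet itself; the function name and the use of xyz coordinates as stand-in features are illustrative assumptions. Notably, raw coordinates are not pose-invariant, which is precisely the support-query pose variation that VRNet's orientation-tolerant feature extractor is designed to address.

```python
import numpy as np

def fewshot_affordance_episode(support_pts, support_masks, query_pts):
    """Toy prototype-based few-shot per-point segmentation (not VRNet).

    support_pts:   list of (N_i, 3) arrays, K annotated support point clouds
    support_masks: list of (N_i,) binary arrays, 1 = affordance region
    query_pts:     (M, 3) array, unlabeled query point cloud
    Returns a (M,) binary affordance prediction for the query.
    """
    feats = np.concatenate(support_pts)              # stand-in "features": raw xyz
    masks = np.concatenate(support_masks).astype(bool)
    # Class prototypes: mean feature of affordance vs. background points
    proto_fg = feats[masks].mean(axis=0)
    proto_bg = feats[~masks].mean(axis=0)
    # Label each query point by its nearer prototype
    d_fg = np.linalg.norm(query_pts - proto_fg, axis=1)
    d_bg = np.linalg.norm(query_pts - proto_bg, axis=1)
    return (d_fg < d_bg).astype(np.int64)

# Example: one support shape whose region x > 0.5 is the annotated affordance
rng = np.random.default_rng(0)
sup = rng.uniform(0.0, 1.0, size=(1024, 3))
mask = (sup[:, 0] > 0.5).astype(np.int64)
qry = rng.uniform(0.0, 1.0, size=(512, 3))
pred = fewshot_affordance_episode([sup], [mask], qry)
print(pred.mean())  # fraction of query points predicted as affordance
```

In the paper's actual setting, the mean-prototype step would be replaced by learned orientation-tolerant features and the nearest-prototype assignment by multiscale label propagation from support to query points.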