

Poster

Shape-guided Configuration-aware Learning for Endoscopic-image-based Pose Estimation of Flexible Robotic Instruments

Yiyao Ma · Kai Chen · Hon-Sing Tong · Ruofeng Wei · Yui-Lun Ng · Ka-Wai Kwok · Qi Dou

Strong Double Blind: this paper was not made available on public preprint services during the review process.
Tue 1 Oct 1:30 a.m. PDT — 3:30 a.m. PDT

Abstract:

Accurate estimation of both the external orientation and the internal bending angle is crucial for understanding a soft robot's state within its environment. However, existing sensor-based methods face limitations in cost, environmental constraints, and integration issues, while conventional image-based methods struggle with the shape complexity of soft robots. In this paper, we propose a novel shape-guided configuration-aware learning framework for image-based soft robot pose estimation. Inspired by recent advances in 2D-3D joint representation learning, we leverage the 3D shape prior of the soft robot to enhance its image-based shape representation. Concretely, we first extract a part-level geometry representation of the 3D shape prior and then adapt this representation to the image by querying the image features corresponding to different robot parts. Furthermore, we present an effective mechanism to dynamically deform the shape prior, which mitigates the shape difference between the adopted shape prior and the soft robot depicted in the image. This more expressive shape guidance further boosts the image-based robot representation and can be effectively used for soft robot pose refinement. Extensive experiments on surgical soft robots demonstrate the advantages of our method compared with a series of keypoint-based, skeleton-based, and direct regression-based methods.
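To make the 2D-3D feature-querying idea in the abstract concrete, the sketch below projects part-grouped points of a 3D shape prior into the image with a coarse pose and camera intrinsics, bilinearly samples a CNN feature map at the projected locations, and pools one descriptor per robot part. This is a minimal illustration, not the authors' implementation: the function names (project_points, query_part_features), the pinhole projection, the tensor shapes, and the mean pooling are all assumptions made for the example.

```python
import torch
import torch.nn.functional as F

def project_points(points_3d, K, pose):
    """Pinhole projection of part-grouped 3D prior points into the image plane.

    points_3d: (P, N, 3) points for P robot parts, N points each, in the robot frame.
    K:         (3, 3) camera intrinsics.
    pose:      (4, 4) robot-to-camera rigid transform (a coarse pose estimate).
    Returns pixel coordinates of shape (P, N, 2).
    """
    P, N, _ = points_3d.shape
    pts = points_3d.reshape(-1, 3)
    pts_h = torch.cat([pts, torch.ones(pts.shape[0], 1)], dim=1)   # homogeneous (P*N, 4)
    cam = (pose @ pts_h.T).T[:, :3]                                # points in camera frame
    uvw = (K @ cam.T).T                                            # (P*N, 3)
    uv = uvw[:, :2] / uvw[:, 2:3].clamp(min=1e-6)                  # perspective divide
    return uv.reshape(P, N, 2)

def query_part_features(feat_map, points_3d, K, pose, image_size):
    """Sample image features at the projected prior points and average per part.

    feat_map:   (1, C, Hf, Wf) CNN feature map of the endoscopic image.
    image_size: (H, W) of the original image, used to normalize pixel coordinates.
    Returns part descriptors of shape (P, C).
    """
    H, W = image_size
    uv = project_points(points_3d, K, pose)                        # (P, N, 2)
    # grid_sample expects coordinates in [-1, 1], x along width and y along height.
    grid = torch.stack([uv[..., 0] / (W - 1) * 2 - 1,
                        uv[..., 1] / (H - 1) * 2 - 1], dim=-1)     # (P, N, 2)
    sampled = F.grid_sample(feat_map, grid.unsqueeze(0),
                            align_corners=True)                    # (1, C, P, N)
    return sampled.mean(dim=-1).squeeze(0).T                       # (P, C)

if __name__ == "__main__":
    # Toy example: 4 robot parts, 64 prior points each, a 256-channel feature map.
    parts = torch.randn(4, 64, 3) * 0.01
    K = torch.tensor([[500., 0., 320.], [0., 500., 240.], [0., 0., 1.]])
    pose = torch.eye(4)
    pose[2, 3] = 0.1                                               # prior placed 10 cm in front of the camera
    feat = torch.randn(1, 256, 60, 80)
    desc = query_part_features(feat, parts, K, pose, image_size=(480, 640))
    print(desc.shape)  # torch.Size([4, 256])
```

In the framework described above, such per-part descriptors would feed the image-based robot representation and the subsequent shape-prior deformation and pose-refinement stages; the toy main block here only checks tensor shapes.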
