Image-goal navigation in complex environments via modular learning

Qiaoyun Wu, Jun Wang, Jing Liang, Xiaoxi Gong, Dinesh Manocha
Abstract
We present a novel approach for image-goal navigation, where an agent navigates with a goal image rather than accurate target information, which makes the task more challenging. Our goal is to decouple the learning of navigation goal planning, collision avoidance, and navigation ending prediction, which enables more concentrated learning of each part. This is realized by four modules. The first maintains an obstacle map during robot navigation. The second periodically predicts a long-term goal on the real-time map, thereby converting the image-goal navigation task into a sequence of point-goal navigation tasks. To accomplish these point-goal tasks, the third module plans collision-free command sets for navigating to the long-term goals. The final module stops the robot properly near the goal image. The four modules are designed or maintained separately, which helps cut down the search time during navigation and improves generalization to previously unseen real scenes. We evaluate the method both in a simulator and in the real world with a mobile robot. Results in real complex environments show that our method attains at least a 17% increase in navigation success rate and a 23% decrease in navigation collision rate over state-of-the-art models.
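
The following is a minimal Python sketch of how the four-module decomposition described above could be wired together at inference time. All class names, method signatures, the grid-map representation, the `env` interface, and the replanning interval are illustrative assumptions for exposition, not the paper's actual implementation.

```python
import numpy as np

# Illustrative sketch of the four-module pipeline (not the authors' code).
# Assumes a hypothetical `env` exposing reset() -> (observation, pose)
# and step(action) -> (observation, pose).

class ObstacleMapper:
    """Module 1: maintains an obstacle map during navigation."""
    def __init__(self, size=240):
        self.grid = np.zeros((size, size), dtype=np.float32)

    def update(self, observation, pose):
        # A real system would fuse sensor observations into the map here.
        return self.grid


class LongTermGoalPredictor:
    """Module 2: periodically predicts a long-term point goal on the map,
    turning the image-goal task into a sequence of point-goal tasks."""
    def predict(self, obstacle_map, goal_image, pose):
        # Placeholder: a learned policy would output map coordinates here.
        return (120, 120)


class LocalPlanner:
    """Module 3: plans collision-free commands toward the current point goal."""
    def plan(self, obstacle_map, pose, point_goal):
        # Placeholder: e.g., a path planner followed by a low-level controller.
        return ["move_forward"]


class StopPredictor:
    """Module 4: decides when the robot is close enough to the goal image."""
    def should_stop(self, observation, goal_image):
        return False


def navigate(env, goal_image, max_steps=500, replan_every=25):
    mapper, goal_predictor = ObstacleMapper(), LongTermGoalPredictor()
    planner, stopper = LocalPlanner(), StopPredictor()

    observation, pose = env.reset()
    point_goal = None
    for step in range(max_steps):
        obstacle_map = mapper.update(observation, pose)
        if step % replan_every == 0:  # periodic long-term goal update
            point_goal = goal_predictor.predict(obstacle_map, goal_image, pose)
        if stopper.should_stop(observation, goal_image):
            env.step("stop")
            return True
        for action in planner.plan(obstacle_map, pose, point_goal):
            observation, pose = env.step(action)
    return False
```

Because each stage is a separate component behind a narrow interface, a module can in principle be trained or replaced independently, which is the decoupling the abstract credits for shorter search time and better sim-to-real generalization.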
Type
Publication
2022 IEEE Robotics and Automation Letters