Place recognition using 3D lidar is an active research topic in robotics. Feature-representation descriptors, such as the scan-context descriptor, a 2D descriptor projected from a 3D point cloud, form one research direction of lasting value. This paper proposes a new method for improving scan-context-based place recognition. Our approach combines a custom loss function with a Heun optimizer to improve the accuracy and robustness of the recognition system: the loss function encourages the network to learn more discriminative features, while the Heun optimizer enables faster and more stable convergence during training.
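As background on the optimizer, Heun's method is the two-stage (predictor-corrector) Runge-Kutta scheme; applied to the gradient-flow ODE of a loss, it averages the gradient at the current point with the gradient at an Euler-predicted point. The sketch below is a minimal illustration of this update rule on a toy quadratic loss, not the paper's actual training setup; the function names and learning rate are assumptions for illustration.

```python
import numpy as np

def heun_step(params, grad_fn, lr=0.1):
    """One Heun (improved Euler) update on the gradient flow
    d(theta)/dt = -grad L(theta): take an Euler predictor step,
    then average the gradients at the start and predicted points."""
    g1 = grad_fn(params)                  # gradient at current point
    predictor = params - lr * g1          # Euler predictor step
    g2 = grad_fn(predictor)               # gradient at predicted point
    return params - 0.5 * lr * (g1 + g2)  # corrector: averaged update

# Toy quadratic loss L(theta) = ||theta||^2 / 2, so grad L(theta) = theta.
theta = np.array([4.0, -2.0])
for _ in range(50):
    theta = heun_step(theta, lambda p: p, lr=0.1)
# theta has contracted toward the minimum at the origin.
```

The corrector stage gives second-order accuracy in the step size, which is one intuition for the smoother convergence behavior claimed above.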
To evaluate the effectiveness of our method, we conduct experiments on the NCLT benchmark dataset. The results show that our method outperforms several existing methods in both recognition accuracy and computational efficiency.
Our proposed method offers a promising route to improved robot place recognition, with implications for a wide range of robotic applications, including autonomous navigation, mapping, and localization.