Oracle bone inscription image retrieval aims to find, within an oracle bone database, the images most similar to a given query image, providing scholars with practical tools and methods and advancing the digitization and in-depth study of oracle bone inscriptions. The task is challenging: oracle bone images have complex backgrounds, many samples are damaged or incomplete, and intra-class similarity is low while inter-class similarity is high. To address these problems, this paper proposes a metric learning-based method for oracle bone inscription image retrieval that uses a dual-branch network to measure the similarity between input images, enabling comparison between the query image and the gallery to be searched. To strengthen the network's feature extraction ability, we introduce residual connections into the original VGG network, and we experimentally explore how different ways of connecting the spatial and channel attention modules affect feature extraction. In addition, we construct three retrieval datasets of increasing sample size based on the OBC306 dataset. Finally, to verify the effectiveness of our method, we conduct experiments on these three datasets; the mean average precision (mAP) of our approach reaches 71.38%, 92.33%, and 92.97%, respectively.
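The retrieval and evaluation procedure summarized above can be sketched as follows. This is a minimal illustration, not the paper's implementation: it assumes the dual-branch network has already mapped each image to an embedding vector, and the function names and toy vectors are hypothetical. Gallery images are ranked by distance to the query embedding, and average precision (whose mean over queries gives the reported mAP) is computed from the ranked labels.

```python
import math

def euclidean(a, b):
    # Distance between two embedding vectors produced by the feature extractor
    # (here assumed to be plain lists of floats).
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def rank_gallery(query_emb, gallery):
    # gallery: list of (label, embedding) pairs; sort by ascending distance
    # to the query embedding, i.e. most similar first.
    return sorted(gallery, key=lambda item: euclidean(query_emb, item[1]))

def average_precision(query_emb, query_label, gallery):
    # Average precision for one query: precision is accumulated at every
    # rank where a gallery item of the same class appears.
    ranked = rank_gallery(query_emb, gallery)
    hits, precision_sum = 0, 0.0
    for rank, (label, _) in enumerate(ranked, start=1):
        if label == query_label:
            hits += 1
            precision_sum += hits / rank
    return precision_sum / hits if hits else 0.0

# Toy usage: both same-class items rank ahead of the distractor, so AP = 1.0.
gallery = [("a", [0.0, 1.0]), ("b", [3.0, 3.0]), ("a", [0.5, 0.0])]
ap = average_precision([0.0, 0.0], "a", gallery)
```

The mAP figures reported in the abstract correspond to averaging such per-query AP values over the whole query set of each dataset.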