Specular highlights in images can degrade or completely obliterate the color and texture details of objects, posing significant challenges to many visual tasks. Traditional highlight removal methods often struggle with specular highlights on complexly textured surfaces or depend on stringent, complicated capture conditions. In contrast, deep learning-based methods excel at removing highlights from single images of intricate surfaces thanks to their strong encoding capabilities, yet they still suffer from texture distortion in highlight regions. In this paper, we propose a novel highlight removal method inspired by the observation that specular highlights typically increase brightness and decrease saturation. Our method generates pseudo-SV (saturation-value) modulated image bases, effectively constructing a discrete color space that closely approximates the brightness, saturation, and hue of highlight-free pixels. We employ a dual-network architecture that jointly trains a highlight detection sub-network and a highlight removal sub-network. By leveraging the generated image bases together with highlight positional priors from the detection sub-network, the removal sub-network learns the nuances of texture alteration across different highlight levels and produces high-quality highlight-free images via a weighted fusion process. Our experiments demonstrate that our approach effectively restores texture and color details in highlighted regions and significantly outperforms existing methods, as evidenced by superior PSNR (Peak Signal-to-Noise Ratio) and SSIM (Structural Similarity Index) scores. To encourage future research and collaboration, our source code is publicly available at https://github.com/XufangPANG/Highlight-Removal-based-on-Pesudo-image-bases-fusion.
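The core intuition above — that highlighted pixels show raised brightness (value) and lowered saturation, so a set of SV-modulated copies of the input can approximate the highlight-free appearance — can be sketched as follows. This is a minimal illustrative sketch, not the paper's implementation: the function name `generate_pseudo_sv_bases` and the specific gain pairs are hypothetical choices made here for demonstration; the stdlib `colorsys` conversion is used for clarity rather than speed.

```python
import colorsys
import numpy as np

def generate_pseudo_sv_bases(img, s_gains=(1.2, 1.5), v_gains=(0.8, 0.6)):
    """Produce pseudo-SV modulated image bases from an RGB image in [0, 1].

    For each (saturation gain, value gain) pair, saturation is boosted and
    brightness is dimmed — the opposite of what a specular highlight does —
    so each base approximates a possible highlight-free appearance.
    The gain pairs here are illustrative, not values from the paper.
    """
    h, w, _ = img.shape
    bases = []
    for sg, vg in zip(s_gains, v_gains):
        out = np.empty((h, w, 3), dtype=float)
        for y in range(h):
            for x in range(w):
                r, g, b = img[y, x]
                hh, ss, vv = colorsys.rgb_to_hsv(r, g, b)
                ss = min(1.0, ss * sg)  # raise saturation toward matte appearance
                vv = min(1.0, vv * vg)  # lower brightness to counter the highlight
                out[y, x] = colorsys.hsv_to_rgb(hh, ss, vv)
        bases.append(out)
    return bases
```

In the paper's pipeline, bases like these would then be combined by the removal sub-network's weighted fusion, guided by the detection sub-network's highlight mask; the sketch only shows how the discrete SV-modulated color space is populated.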