Knowledge graphs, valued for their ability to integrate heterogeneous data sources and enrich semantic information, have emerged as a useful form of auxiliary information for recommendation, offering the potential to infer latent relationships within data. However, existing knowledge graph-based recommender systems often overlook the wealth of available multimodal information. To address this gap, this paper introduces a novel model that propagates information over multimodal knowledge graphs. Unlike prior approaches, the proposed model explicitly fuses multimodal information, bridging the gap between recommender systems and multimodal data. To evaluate its effectiveness, a comparative analysis against state-of-the-art models is conducted on two real-world datasets. The results demonstrate the model's efficacy in leveraging multimodal knowledge graphs and underscore its potential to advance the capabilities of contemporary recommender systems.