Text-driven image style transfer offers users intuitive control over artistic style, bypassing the need for reference style images. However, existing approaches struggle to preserve content structure and achieve realistic stylization. In this paper, we present a novel multi-channel correlated diffusion model for text-driven artistic style transfer. We leverage the CLIP model to guide the generation of learnable noise, introduce multi-channel correlation into the diffusion process, and refine the channels to filter out redundant information produced by the multi-channel computation, thereby overcoming the disruptive effect of noise on image texture during diffusion. Furthermore, we design a threshold-constrained contrastive balance text-image matching loss that enforces a strong correlation between textual descriptions and stylized images. Experiments demonstrate that our method outperforms state-of-the-art models, achieving outstanding stylization while preserving content structure and adhering closely to the text style description. Quantitative and qualitative evaluations confirm the effectiveness of our approach. Code is available at https://github.com/shehuiyao-a11y/mccstyler.
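To illustrate the flavor of a threshold-constrained contrastive text-image matching objective, the following is a minimal, hypothetical sketch on toy embedding vectors. It is not the paper's actual loss: the function name, the margin parameter `tau`, and the hinge formulation are assumptions for illustration; in practice the embeddings would come from CLIP's image and text encoders.

```python
import numpy as np

def cosine(a, b):
    """Cosine similarity between two 1-D embedding vectors."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def threshold_contrastive_loss(img_emb, pos_text_emb, neg_text_embs, tau=0.3):
    """Hypothetical sketch (not the paper's exact formulation):
    pull the stylized-image embedding toward the target style text,
    push it away from negative texts, and only penalize pairs whose
    similarity gap falls below a threshold margin tau."""
    pos_sim = cosine(img_emb, pos_text_emb)
    loss = 0.0
    for neg in neg_text_embs:
        gap = pos_sim - cosine(img_emb, neg)
        loss += max(0.0, tau - gap)  # hinge: zero once the margin is met
    return loss / len(neg_text_embs)
```

The threshold keeps well-matched pairs from being pushed further once the margin is satisfied, which balances the contrastive pull against over-stylization.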