Thanks to the development of 2D keypoint detectors, monocular 3D human pose estimation (HPE) via 2D-to-3D lifting has achieved remarkable progress. However, monocular 3D HPE remains challenging due to inherent depth ambiguity and occlusion. Recently, diffusion models have achieved great success in image generation. Inspired by this, we formulate 3D human pose estimation as a reverse diffusion process and propose a dual-branch diffusion model that fully exploits both the global and local correlations between joints. Furthermore, we propose a conditional dual-branch diffusion model to further improve 3D HPE, in which joint-level semantic information serves as the condition of the diffusion model and is integrated into the joint-level representations of the 2D pose to enrich the expression of each joint. The proposed method is evaluated on two widely used datasets, and the experimental results demonstrate its superiority.
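The 2D-to-3D lifting via a reverse diffusion process described above can be sketched as a standard DDPM-style sampling loop conditioned on the detected 2D keypoints. This is a minimal illustrative sketch, not the paper's implementation: the linear noise schedule, the joint count `J = 17`, and the `toy_denoiser` stub standing in for the dual-branch network are all assumptions made here for self-containedness.

```python
import numpy as np

J = 17          # number of joints (Human3.6M-style skeleton; an assumption)
T = 50          # number of diffusion steps (illustrative)
rng = np.random.default_rng(0)

# Linear noise schedule (a common DDPM choice, not necessarily the paper's).
betas = np.linspace(1e-4, 0.02, T)
alphas = 1.0 - betas
alpha_bars = np.cumprod(alphas)

def toy_denoiser(x_t, t, cond_2d):
    """Placeholder for the dual-branch network: it would predict the noise
    in x_t given the timestep and the 2D-pose condition, fusing global and
    local joint correlations. Here it returns zeros so the reverse process
    is runnable end to end."""
    return np.zeros_like(x_t)

def reverse_diffusion(cond_2d):
    """Sample a 3D pose by iterating the DDPM reverse process,
    conditioned on the 2D keypoints."""
    x = rng.standard_normal((J, 3))              # start from Gaussian noise
    for t in reversed(range(T)):
        eps_hat = toy_denoiser(x, t, cond_2d)    # predicted noise
        coef = betas[t] / np.sqrt(1.0 - alpha_bars[t])
        mean = (x - coef * eps_hat) / np.sqrt(alphas[t])
        noise = rng.standard_normal(x.shape) if t > 0 else np.zeros_like(x)
        x = mean + np.sqrt(betas[t]) * noise     # one reverse step
    return x

pose_2d = rng.standard_normal((J, 2))            # dummy 2D detections
pose_3d = reverse_diffusion(pose_2d)
print(pose_3d.shape)                             # (17, 3)
```

In a real model the denoiser would be a learned network and the 2D condition would be fused with joint-level semantic features, but the sampling loop keeps the same shape.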