With the proliferation of unlabeled data, increasing effort has been devoted to unsupervised learning. As one of its most representative branches, contrastive learning has made great progress thanks to its high efficiency. Unfortunately, privacy threats against contrastive learning have become increasingly sophisticated, making it imperative to develop effective defenses. To alleviate the privacy issue in contrastive learning, we propose novel techniques based on differential privacy that reduce the high gradient sensitivity induced in private training by the interactive nature of contrastive learning. Specifically, we apply differentially private protection at the connection point shared by different per-example gradients, which significantly decreases the sensitivity of the gradients. Our experiments on SimCLR and Barlow Twins demonstrate the superiority of our approach, achieving higher accuracy under the same privacy guarantee.
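For context, the per-example gradient sensitivity the abstract refers to is the quantity controlled in the standard DP-SGD recipe: clip each example's gradient to a fixed norm, aggregate, and add Gaussian noise calibrated to that clipping bound. The following is a minimal NumPy sketch of that baseline step only; the function name and parameters are illustrative, and it does not reproduce the paper's connection-point technique:

```python
import numpy as np

def dp_sgd_step(per_example_grads, clip_norm=1.0, noise_multiplier=1.0, rng=None):
    """One DP-SGD aggregation step (illustrative baseline):
    clip each per-example gradient to at most `clip_norm` in L2 norm,
    sum the clipped gradients, add Gaussian noise with standard deviation
    clip_norm * noise_multiplier (matching the sum's sensitivity),
    and return the noisy batch average."""
    rng = np.random.default_rng(0) if rng is None else rng
    clipped = []
    for g in per_example_grads:
        norm = np.linalg.norm(g)
        # Scale down only gradients whose norm exceeds the clipping bound.
        clipped.append(g * min(1.0, clip_norm / (norm + 1e-12)))
    total = np.sum(clipped, axis=0)
    noise = rng.normal(0.0, clip_norm * noise_multiplier, size=total.shape)
    return (total + noise) / len(per_example_grads)
```

Because clipping bounds each example's contribution, the Gaussian noise scale depends only on `clip_norm`, not on the raw gradient magnitudes; the abstract's approach targets the case where interaction between examples in the contrastive loss would otherwise inflate this bound.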