Dynamic graph representation learning is critical for graph-based downstream tasks such as link prediction, node classification, and graph reconstruction. Many graph-neural-network–based methods have emerged recently, but most are incapable of tracing graph evolution patterns over time. To address this problem, we propose a continuous-time dynamic graph framework, the dynamic graph temporal contextual contrasting (DGTCC) model, which integrates both temporal and topological information and mines the latent evolution trend of graph representations. In this model, node representations are first generated by a self-attention–based temporal encoder, which measures the importance weights of neighbor nodes in temporal sub-graphs, and are then stored in a contextual memory module. After sampling node representations from the memory module, the model maximizes the mutual information between representations of the same node observed in two nearby temporal views via a contrastive learning mechanism, which helps track the evolution trend of nodes. In inductive learning settings, results on four real-world datasets demonstrate the superiority of the proposed DGTCC model.
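
As a rough illustration of the contrastive objective described above, the following PyTorch sketch maximizes agreement between embeddings of the same node drawn from two nearby temporal views using an InfoNCE-style loss, a standard lower bound on mutual information. The function and tensor names (`info_nce`, `z_t1`, `z_t2`, `temperature`) are illustrative assumptions, not the paper's actual implementation.

```python
import torch
import torch.nn.functional as F

def info_nce(z_t1: torch.Tensor, z_t2: torch.Tensor,
             temperature: float = 0.1) -> torch.Tensor:
    """InfoNCE-style contrastive loss (assumed form, not the paper's exact loss).

    Rows of z_t1 and z_t2 hold embeddings of the same nodes from two nearby
    temporal views; matching rows form positive pairs, and all other nodes
    in the batch act as negatives.
    """
    z1 = F.normalize(z_t1, dim=-1)            # [N, d] view at time t
    z2 = F.normalize(z_t2, dim=-1)            # [N, d] view at a nearby time t'
    logits = z1 @ z2.t() / temperature        # [N, N] pairwise cosine similarities
    labels = torch.arange(z1.size(0), device=z1.device)  # positives on the diagonal
    # Symmetric cross-entropy: minimizing it maximizes a lower bound on the
    # mutual information between the two temporal views of each node.
    return 0.5 * (F.cross_entropy(logits, labels)
                  + F.cross_entropy(logits.t(), labels))

# Usage with random stand-ins for node embeddings sampled from the memory module
z_a, z_b = torch.randn(64, 128), torch.randn(64, 128)
loss = info_nce(z_a, z_b)
```

Treating all other in-batch nodes as negatives is a common design choice for this kind of objective; it avoids explicit negative sampling while still pushing apart representations of different nodes.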