Ensuring fairness in recommendation systems requires that models do not discriminate against users on the basis of demographic information such as gender and age. Current fairness strategies often apply a uniform fairness intervention, presuming that every user's recommendation results are adversely influenced by sensitive attributes. This approach can diminish both the utility and the fairness of recommendations for certain users. Drawing inspiration from studies of human-like behavior in large language models (LLMs), we investigate whether LLMs can serve as fairness recognizers in recommendation systems; specifically, we explore whether the fairness awareness inherent in LLMs can be harnessed to construct fair recommendations. To this end, we generate recommendation results on the MovieLens and LastFM datasets using a Variational Autoencoder (VAE) and a VAE with integrated fairness strategies. Our findings reveal that LLMs can indeed recognize fair recommendations by evaluating the fairness of users' recommendation results. We then propose a method for constructing fair recommendations with LLMs: for users whose VAE-generated recommendations the LLM identifies as unfair, we replace those results with the ones generated by the fairness-aware VAE. Evaluating the reconstructed recommendations demonstrates that leveraging the fairness recognition capability of LLMs achieves a better balance between effectiveness and fairness.
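The replacement step can be summarized as a minimal sketch, shown below; the helper names and data layout (e.g. `llm_judges_fair`, `vae_recs`, `fair_vae_recs`) are illustrative assumptions, not the paper's implementation. Only users whose standard-VAE results the LLM flags as unfair receive the fairness-aware VAE's output.

```python
# Sketch of the LLM-guided replacement step described above.
# Names are illustrative assumptions, not the authors' actual code.
from typing import Callable, Dict, List


def build_hybrid_recommendations(
    vae_recs: Dict[int, List[int]],          # user_id -> top-k items from the standard VAE
    fair_vae_recs: Dict[int, List[int]],     # user_id -> top-k items from the fairness-aware VAE
    llm_judges_fair: Callable[[int, List[int]], bool],  # LLM prompt wrapper returning a fairness verdict
) -> Dict[int, List[int]]:
    """Keep the standard VAE's list when the LLM deems it fair;
    otherwise fall back to the fairness-aware VAE's list."""
    hybrid = {}
    for user_id, items in vae_recs.items():
        if llm_judges_fair(user_id, items):
            hybrid[user_id] = items
        else:
            hybrid[user_id] = fair_vae_recs[user_id]
    return hybrid
```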