The emergence of large language models (LLMs) that leverage deep learning and web-scale corpora has made it possible for artificial intelligence (AI) to tackle many higher-order cognitive tasks, with critical implications for industry, government, and labor markets in the US and globally. Here, we investigate whether existing, openly available LLMs can influence humans’ political attitudes, an ability until recently regarded as the unique purview of other humans. Across three preregistered experiments featuring diverse samples of Americans (total N = 4,836), we find consistent evidence that messages generated by LLMs (OpenAI’s GPT-3 and GPT-3.5 models) can persuade humans on a range of policy issues, including highly polarized ones such as an assault weapon ban, a carbon tax, and a paid parental-leave program. Overall, LLM-generated messages were as persuasive as messages crafted by lay humans. Because the capabilities of LLMs are expected to improve substantially in the near future, these results suggest that LLMs may reshape political discourse, calling for immediate attention to the identification and regulation of their potential misuse.