Artificial intelligence (AI) systems have increasingly achieved expert-level performance, particularly in medical imaging (1). However, there is growing concern that AI systems will reflect and amplify human bias against under-served subpopulations (2-7). Such biases are especially troubling in the context of underdiagnosis: if an AI system falsely predicts that patients are healthy, those patients are denied care when they need it most. This concern is particularly relevant given existing health disparities, where high underdiagnosis rates for under-served subgroups are well documented (8-11). Although biased underdiagnosis can delay access to medical treatment unequally, underdiagnosis due to AI has remained relatively unexplored. In this work we examine algorithmic underdiagnosis in chest X-ray pathology classifiers and find that these classifiers consistently and selectively underdiagnose under-served patients, actively amplifying the existing biases in clinical care. These effects are worse for intersectional subpopulations, e.g., Black female patients, and persist across three large chest X-ray datasets as well as a multi-source dataset. Our work demonstrates that deploying AI systems risks exacerbating biases present in current care practices. Developers, clinical staff, and regulators must address the serious ethical concerns of -- and barriers to -- effective deployment of these models in the clinic.
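
As a minimal illustration of how such underdiagnosis could be quantified, the sketch below computes a per-subgroup underdiagnosis rate, here taken to be the fraction of truly ill patients whom the classifier labels as having no finding. This is an assumed analysis outline, not the authors' code; the column names (no_finding_true, no_finding_score, race) and the 0.5 threshold are illustrative placeholders.

```python
import pandas as pd

def underdiagnosis_rate(df: pd.DataFrame, group_col: str,
                        threshold: float = 0.5) -> pd.Series:
    """Per-subgroup rate at which patients with at least one true finding
    receive a 'No Finding' score above the decision threshold."""
    ill = df[df["no_finding_true"] == 0]            # patients with >= 1 true finding
    missed = ill["no_finding_score"] >= threshold   # classifier calls them healthy
    return missed.groupby(ill[group_col]).mean()

# Hypothetical usage: compare rates across a demographic column, e.g. 'race'
# rates = underdiagnosis_rate(predictions_df, group_col="race")
# print(rates.sort_values(ascending=False))
```

Comparing these rates across demographic groups (or their intersections, such as race and sex) is one simple way to surface the selective underdiagnosis effect described above.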