Artificial Intelligence (AI) systems provide valuable support for decision-making, with many applications in the medical domain. However, little is known about how human experts interact with AI. Health policy-makers fear uncritical reliance on AI advice. In this multicenter study, twenty-one endoscopists reviewed 504 videos of lesions from real colonoscopies, with and without the assistance of an AI support system. Endoscopists were influenced by AI (OR = 3.05), but not erratically: they followed the AI advice more often when it was correct (OR = 3.48) than when it was incorrect (OR = 1.85). Endoscopists achieved this outcome through a weighted integration of their own and the AI's opinions, based on case-by-case estimates of each agent's reliability. This Bayesian-like rational behavior allowed the hybrid human-AI team to outperform either agent alone. We discuss the features of the interaction that determined this favorable outcome.
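One common way to formalize the reliability-weighted integration described above is log-odds pooling, in which each agent's probability estimate is converted to log-odds and weighted by its estimated reliability before being combined. This is a minimal illustrative sketch of that idea, not the model actually fitted in the study; the function name, example probabilities, and weights are hypothetical.

```python
import math

def combine_opinions(p_human, p_ai, w_human, w_ai):
    """Reliability-weighted log-odds pooling of two probability estimates.

    p_human, p_ai: each agent's probability for a given diagnosis (e.g., neoplastic).
    w_human, w_ai: case-by-case reliability weights (higher = more trusted).
    Returns the pooled probability.
    """
    logit = lambda p: math.log(p / (1 - p))        # probability -> log-odds
    pooled = w_human * logit(p_human) + w_ai * logit(p_ai)
    return 1.0 / (1.0 + math.exp(-pooled))         # log-odds -> probability

# When the AI is judged more reliable for this case, the pooled estimate
# moves toward the AI's opinion (here, toward 0.9 rather than 0.4).
print(combine_opinions(0.4, 0.9, w_human=0.5, w_ai=1.0))
```

Under this scheme, a correct and confident AI pulls the joint decision strongly, while a low-reliability AI opinion is discounted, which is consistent with the asymmetric odds ratios reported above.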