Design flaws, coding errors, or inadequate countermeasures may occur during software development. Some of them lead to vulnerabilities in the code, opening the door to attacks. Assorted techniques have been developed to detect vulnerable code samples, and artificial intelligence techniques, such as Machine Learning (ML), have become common practice. Nonetheless, the security of ML itself is a major concern. This includes the case of ML-based detection whose training process is affected by data poisoning. More generally, vulnerability detection can be evaded unless poisoning attacks are properly handled. This paper tackles this problem. A novel vulnerability detection system based on ML-driven image processing, using a Convolutional Neural Network (CNN), is proposed. The system, hereinafter called IVul, is evaluated in the presence of backdoor attacks, a particular type of poisoning in which a pattern is introduced into the training data to alter the expected behavior of the learned models. IVul is evaluated on more than three thousand code samples from two representative programming languages (C# and PHP). IVul outperforms comparable state-of-the-art vulnerability detectors in the literature, reaching 82% to 99% detection accuracy. Moreover, results show that a given type of attack may affect one language more than another, though, in general, PHP is more resilient to the proposed attacks than C#.