Explainable artificial intelligence in education is an active concern of the research community. In this paper, we introduce explanations into a competency-based system in the context of computing education. Our overall aim is to improve learners' understanding of the automatically computed mastery levels of their competencies, and thereby to increase their trust in the system. In particular, we investigate which of the students' personal characteristics affect their consultation and understanding of the explanations, and what impact these explanations have on their perception of, and behavior within, the system. Our study involves 98 first-year computer science students in higher education and combines qualitative and quantitative analyses. The quantitative analyses show significant correlations between certain of the studied characteristics and, on the one hand, the use, perception, and understanding of explanations and, on the other hand, learners' general perception of the system. For example, students with a low perception of their ability to succeed are less likely to access and understand explanations, and students with low engagement and performance are less likely to understand them. The qualitative analyses, meanwhile, help us identify learner needs regarding the content, form, and timing of explanations: combining local and counterfactual explanations; combining textual and visual formats; and providing explanations in real time.