Jeongganbo notation, the first music representation system in East Asia capable of jointly expressing pitch and duration, has been extensively used in the Korean music tradition since its inception in the 15th century and remains in use today. Consequently, a wealth of musical works survives only as physical sheets, which not only poses a heritage preservation challenge, given the inherent degradation of this format, but also precludes the use of computational tools to study and exploit this music tradition. While Optical Music Recognition (OMR), the research field devoted to devising methods capable of automatically transcribing music sheets into digital formats, has addressed this issue for a number of notations from the Western tradition, no previous research has considered the preservation of Jeongganbo scores. In this context, this work presents the following contributions: (i) the first dataset of real Jeongganbo scores for OMR tasks; (ii) a collection of synthetic data generation and augmentation mechanisms to alleviate the scarcity of manual annotations; and (iii) a neural transcription scheme based on state-of-the-art OMR strategies, specifically tailored to Jeongganbo scores. The experiments performed validate the approach, with success rates close to 90%, and open new research avenues for under-resourced yet challenging music notations.