[1] Sekundo W, Kunert KS, Blum M. Small incision corneal refractive surgery using the small incision lenticule extraction (SMILE) procedure for the correction of myopia and myopic astigmatism: results of a 6 month prospective study. Br J Ophthalmol 2011;95(3):335–339
[2] Shah R, Shah S, Sengupta S. Results of small incision lenticule extraction: all-in-one femtosecond laser refractive surgery. J Cataract Refract Surg 2011;37(1):127–137
[3] Jin H-Y, Wan T, Wu F, Yao K. Comparison of visual results and higher-order aberrations after small incision lenticule extraction (SMILE): high myopia vs. mild to moderate myopia. BMC Ophthalmol 2017;17(1):118
[4] Zhang J, Wang Y, Wu W, Xu L, Li X, Dou R. Vector analysis of low to moderate astigmatism with small incision lenticule extraction (SMILE): results of a 1-year follow-up. BMC Ophthalmol 2015;15(1):1–10
[5] Mosquera SA, Ortueta DD, Verma S. The art of nomograms. Eye Vis 2018;5(1):2
[6] Mrochen M, Hafezi F, Iseli HP, Löffler J, Seiler T. Nomograms for the improvement of refractive outcomes. Ophthalmologe 2006;103:331–338
[7] Liang G, Chen X, Zha X, Zhang F. A nomogram to improve predictability of small-incision lenticule extraction surgery. Med Sci Monit 2017;23:5168–5175
[8] Subbaram MV, MacRae SM. Customized LASIK treatment for myopia based on preoperative manifest refraction and higher order aberrometry: the Rochester nomogram. J Refract Surg 2007;23(5):435–441
[9] Biebesheimer JB, Kang TS, Huang CY, Yu F, Hamilton R. Development of an advanced nomogram for myopic astigmatic wavefront-guided laser in situ keratomileusis (LASIK). Ophthalmic Surg Lasers Imaging 2011;42(3):241–247
[10] Allan BD, Hassan H, Ieong A. Multiple regression analysis in nomogram development for myopic wavefront laser in situ keratomileusis: improving astigmatic outcomes. J Cataract Refract Surg 2015;41(5):1009–1017
[11] Wang M, Zhang Y, Wu W, et al. Predicting refractive outcome of small incision lenticule extraction for myopia using corneal properties. Transl Vis Sci Technol 2018;7(5):11
[12] Bragheeth MA, Dua HS. Effect of refractive and topographic astigmatic axis on LASIK correction of myopic astigmatism. J Refract Surg 2005;21(3):269–275
[13] Evans RS. Electronic health records: then, now, and in the future. Yearb Med Inform 2016;25(Suppl 1):S48–S61
[14] Lu W, Tong Y, Yu Y, Xing Y, Chen C, Shen Y. Applications of artificial intelligence in ophthalmology: general overview. J Ophthalmol 2018;11(9):1555
[15] Yang SH, Van Gelder RN, Pepose JS. Neural network computer program to determine photorefractive keratectomy nomograms. J Cataract Refract Surg 1998;24(7):917–924
[16] Cui T, Wang Y, Ji S, et al. Applying machine learning techniques in nomogram prediction and analysis for SMILE treatment. Am J Ophthalmol 2020;210:71–77
[17] Popov S, Morozov S, Babenko A. Neural oblivious decision ensembles for deep learning on tabular data. arXiv preprint arXiv:1909.06312, 2019
[18] Verma P, Anwar S, Khan S, Mane SB. Network intrusion detection using clustering and gradient boosting. In: 2018 9th International Conference on Computing, Communication and Networking Technologies (ICCCNT). IEEE; 2018:1–7
[19] Abou Omar KB. XGBoost and LGBM for Porto Seguro’s Kaggle challenge: A comparison. Preprint Semester Project, 2018
[20] Bethapudi S, Desai S. Separation of pulsar signals from noise using supervised machine learning algorithms. Astron Comput 2018,23:15–26
[21] Hoyle B, Rau MM, Zitlau R, Seitz S, Weller J. Feature importance for machine learning redshifts applied to SDSS galaxies. Mon Not R Astron Soc 2015;449(2):1275–1283
[22] Sevilla-Noarbe I, Etayo-Sotos P. Effect of training characteristics on object classification: An application using boosted decision trees. Astron Comput 2015;11:64–72
[23] Elorrieta F, Eyheramendy S, Jordan A, et al. A machine learned classifier for RR Lyrae in the VVV survey. Astron Astrophys 2016;595:A82
[24] Acquaviva V. How to measure metallicity from five-band photometry with supervised machine learning algorithms. Mon Not R Astron Soc 2016;456(2):1618–1626
[25] Zitlau R, Hoyle B, Paech K, Weller J, Rau MM, Seitz S. Stacking for machine learning redshifts applied to SDSS galaxies. Mon Not R Astron Soc 2016;460(3):3152–3162
[26] Jhaveri S, Khedkar I, Kantharia Y, Jaswal S. Success prediction using random forest, CatBoost, XGBoost and AdaBoost for Kickstarter campaigns. In: 2019 3rd International Conference on Computing Methodologies and Communication (ICCMC). IEEE; 2019:1170–1173
[27] Sekundo W. Small Incision Lenticule Extraction (SMILE): Principles, Techniques, Complication Management, and Future Concepts. Springer; 2015
[28] Breiman L, Friedman JH, Olshen RA, Stone CJ. Classification and regression trees. 1st ed. New York: Wadsworth International Group; 1984
[29] Drucker H. Improving regressors using boosting techniques. In: Proceedings of the 14th International Conference on Machine Learning; 1997:107–115
[30] Chen T, Guestrin C. XGBoost: A scalable tree boosting system. In: Proceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining; 2016:785–794
[31] Freund Y, Schapire RE. Experiments with a new boosting algorithm. In: Proceedings of the 13th International Conference on Machine Learning; 1996:148–156
[32] Bethapudi S, Desai S. Separation of pulsar signals from noise using supervised machine learning algorithms. Astron Comput 2018;23:15–26
[33] Maaji SS, Cosma G, Taherkhani A, Alani AA, McGinnity TM. On-line voltage stability monitoring using an Ensemble AdaBoost classifier. In: 2018 4th International Conference on Information Management (ICIM). IEEE; 2018:253–259
[34] Abou Omar KB. XGBoost and LGBM for Porto Seguro’s Kaggle challenge: A comparison. Preprint Semester Project, 2018
[35] Khademi F, Akbari M, Jamal SM, Nikoo M. Multiple linear regression, artificial neural network, and fuzzy logic prediction of 28 days compressive strength of concrete. Front Struct Civ Eng 2017;11(1):90–99
[36] LeCun Y, Bengio Y, Hinton G. Deep learning. Nature 2015;521(7553):436–444
[37] Pedregosa F, Varoquaux G, Gramfort A, et al. Scikit-learn: Machine learning in Python. J Mach Learn Res 2011;12:2825–2830
[38] Breiman L. Random forests. Mach Learn 2001;45(1):5–32