Children's intellectual and literacy development benefits from heuristic questioning in educational readings such as fairy tales. However, because not every story comes with expert-crafted questions, machine-generated questions serve as an indispensable supplement that enriches the learning experience. Unfortunately, current text generation models struggle to produce high-cognitive-demand educational questions that are closely tied to the diverse knowledge contained in stories. To address this, we propose a novel framework that employs automatic hybrid prompt learning to enable self-questioning as a way of organizing a story's knowledge. We first design an identifier that locates knowledge-bearing sentences within a given text, and then train a model to generate the corresponding knowledge inferences. Each inference acts as a hard prompt and is concatenated with a soft prompt (\textit{i.e.}, learnable parameters) to construct the hybrid prompt. Equipped with these prompts, pre-trained language models are guided to generate questions and then their answers; these question-answer pairs distill the reading's knowledge. We evaluate the generation performance of our framework on FairytaleQA, an educational question-answering benchmark. Experimental results demonstrate that our framework outperforms baselines on both automatic and manual evaluation metrics, and that it excels in particular at generating diverse heuristic questions. We believe this work can contribute to the advancement of children's education.
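To make the hybrid-prompt construction concrete, the following is a minimal sketch, not the paper's actual implementation: a learnable soft prompt (a matrix of trainable vectors) is prepended to the embedded tokens of the hard prompt (a knowledge inference sentence). All names, dimensions, and the toy vocabulary here are illustrative assumptions.

```python
import numpy as np

EMB_DIM = 8   # hypothetical embedding size
N_SOFT = 4    # hypothetical number of soft-prompt tokens

rng = np.random.default_rng(0)

# Soft prompt: learnable parameters. In a real model these would be
# updated by gradient descent during prompt tuning.
soft_prompt = rng.normal(size=(N_SOFT, EMB_DIM))

# Hypothetical embedding table standing in for the PLM's input embeddings.
vocab = {"the": 0, "wolf": 1, "is": 2, "cunning": 3}
embedding_table = rng.normal(size=(len(vocab), EMB_DIM))

def build_hybrid_prompt(hard_prompt_tokens):
    """Concatenate the soft prompt with the embedded hard-prompt tokens."""
    ids = [vocab[t] for t in hard_prompt_tokens]
    hard_emb = embedding_table[ids]  # shape: (len(tokens), EMB_DIM)
    # The hybrid prompt feeds the PLM: soft vectors first, then the
    # embeddings of the knowledge-inference sentence.
    return np.concatenate([soft_prompt, hard_emb], axis=0)

hybrid = build_hybrid_prompt(["the", "wolf", "is", "cunning"])
print(hybrid.shape)  # 4 soft tokens + 4 hard tokens, each of size EMB_DIM
```

In an actual prompt-tuning setup, the concatenated sequence would be passed to the language model in place of (or alongside) ordinary token embeddings, with only the soft-prompt parameters trained.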