Deep neural network models with higher performance generally have large numbers of parameters, heavy computational costs, and highly nonlinear structures. This obstructs understanding of the internal working mechanism of such models and their deployment on smart terminals and embedded platforms, which hinders the further promotion and application of deep neural network models. It is therefore necessary to design a lightweight and interpretable deep neural network model. This paper first combined multi-scale depthwise and pointwise convolutions to design a lightweight feature-map augmentation (FA) module, and stacked FA modules with residual connections to build feature-map augmentation convolution blocks. Then, following the idea of finding prototype samples, the ILSVRC2012 training set was pre-filtered by measuring the maximum mean discrepancy (MMD) between the prototype data distribution and the overall data distribution, and a representative prototype training set was selected. Next, a prototype dictionary interpretability (PDI) module was designed: during network training, the prototype dictionary is updated by minimising the Euclidean distance between training samples and the prototype samples in the dictionary, which makes the model's classification and reasoning visualisable. Finally, referring to the network architecture of MobileNetV3, a lightweight interpretable deep network model, FAPI-Net, was built from the feature-map augmentation convolution blocks and the PDI module. Experiments were carried out on two image classification datasets, CIFAR-10 and ILSVRC2012. The results verify the effectiveness of the FA module and the interpretability of the PDI module, and show that FAPI-Net also performs well on image classification tasks.
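The prototype pre-filtering step described above scores a candidate prototype subset by its MMD from the overall data distribution. The following is a minimal sketch of that idea, not the paper's implementation: it uses a biased RBF-kernel estimate of squared MMD in pure Python, and the `gamma` bandwidth, the toy feature vectors, and the function names are illustrative assumptions.

```python
import math

def rbf(u, v, gamma=0.5):
    # RBF kernel between two feature vectors (gamma is an assumed bandwidth)
    return math.exp(-gamma * sum((a - b) ** 2 for a, b in zip(u, v)))

def mmd2(X, Y, gamma=0.5):
    # Biased (V-statistic) estimate of squared maximum mean discrepancy:
    # MMD^2 = E[k(x,x')] + E[k(y,y')] - 2 E[k(x,y)]
    kxx = sum(rbf(a, b, gamma) for a in X for b in X) / (len(X) ** 2)
    kyy = sum(rbf(a, b, gamma) for a in Y for b in Y) / (len(Y) ** 2)
    kxy = sum(rbf(a, b, gamma) for a in X for b in Y) / (len(X) * len(Y))
    return kxx + kyy - 2 * kxy

# Toy example: a prototype subset that covers both clusters of the data
# yields a smaller MMD than one drawn from a single cluster.
data = [[0.0, 0.0], [0.1, 0.2], [0.2, 0.1], [1.0, 1.0], [0.9, 1.1]]
covering = [[0.1, 0.1], [1.0, 1.0]]
one_sided = [[0.0, 0.0], [0.1, 0.2]]
print(mmd2(data, covering), mmd2(data, one_sided))
```

In a selection loop, candidate subsets with the smallest `mmd2` against the full training set would be retained as the prototype training set.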