Disordered nanoclusters with multi-electrode input-output functionality have recently been experimentally realized, exhibiting energy-efficient and emergent computational capacity, and an interconnected network of several such nanoclusters has been proposed as a route to artificial neural networks. To that end, here we show that nanocluster functionality can be fit by the simplest dendritic neuron model, in which the only form of nonlinearity is multiplicative interaction between inputs. This work brings into the spotlight higher-order neural networks (known for their efficient encoding of geometric invariances) as an explainable baseline model of nano-networks against which experimentalists can compare more sophisticated models (deep neural networks, or physics-based models such as the lin-min network introduced here); it also provides grounds for designing novel approximate hardware and for a statistical-mechanics analysis of the learning performance of interconnected nanoclusters versus perceptrons (neurons that output a nonlinear function of the weighted sum of their inputs). A network of just 10 higher-order neurons achieves a classification accuracy above 96% on the MNIST handwritten-digit benchmark, a task that required 100 times more neurons in 3-layer perceptrons.
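To make the central idea concrete, the sketch below shows a second-order ("sigma-pi") unit whose only nonlinearity comes from multiplicative pairwise interactions between inputs. This is a minimal illustration of the general higher-order neuron concept; the function name, weight layout, and values are hypothetical and not taken from the paper.

```python
import numpy as np

def higher_order_neuron(x, w1, w2):
    """Illustrative second-order (sigma-pi) unit: output is a weighted sum
    of the inputs plus weighted pairwise products x_i * x_j (i < j).
    The multiplicative terms are the unit's only form of nonlinearity."""
    linear = w1 @ x
    # Outer product yields all pairwise products; keep each pair once (i < j).
    pairwise = np.triu(np.outer(x, x), k=1)
    return linear + np.sum(w2 * pairwise)

# Tiny example: 3 inputs with hand-picked weights.
x = np.array([1.0, 2.0, 3.0])
w1 = np.array([0.5, 0.0, 0.0])          # linear weights
w2 = np.triu(np.ones((3, 3)), k=1)      # unit weight on each pair
print(higher_order_neuron(x, w1, w2))   # 0.5*1 + (1*2 + 1*3 + 2*3) = 11.5
```

Stacking such units and fitting their weights is what allows a small network to capture the multiplicative input-output behavior described above.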