The behavior of data in space and time is usually described relative to a preliminary model of variability. The deviations observed from this model characterize both the model itself and possible data errors. Gaussian error theory is commonly used to describe the distribution of these deviations, but their non-Gaussian character is often visually obvious, for example on a semi-logarithmic plot of the corresponding histogram. As a result, the fraction of large deviations can be significantly greater than Gaussian error theory predicts, so real data require a different method of processing the residuals. If we assume that the variability itself depends on space and time (in the language of probability theory, such behavior is described as a mixture of Gaussian distributions), then the non-Gaussian shapes of the histograms can be explained. For practical purposes, an explicit characterization of the mixing law is of interest, since the share of significant deviations is determined by this very law. The article substantiates an explicit description of the mixing law and shows how to compute its parameters from data, using magnetic data as an example. This approach is also applicable in more general situations.
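The core observation can be illustrated with a minimal simulation: a scale mixture of Gaussians (variance changing between regimes, standing in for variability that depends on space and time) produces far more large deviations than a single Gaussian of the same overall scale. The two-component mixing law below is purely a hypothetical example for illustration; the article derives the actual mixing law from magnetic data.

```python
import math
import random

random.seed(0)
N = 200_000

# Reference sample: a single Gaussian with unit variance.
gauss = [random.gauss(0.0, 1.0) for _ in range(N)]

# Scale mixture: the standard deviation itself varies between regimes.
# This 90%/10% two-sigma mixing law is a hypothetical stand-in, not the
# law estimated in the article.
def mixed_sample():
    sigma = 1.0 if random.random() < 0.9 else 3.0  # occasional high-variability regime
    return random.gauss(0.0, sigma)

mixed = [mixed_sample() for _ in range(N)]

def std(xs):
    m = sum(xs) / len(xs)
    return math.sqrt(sum((x - m) ** 2 for x in xs) / len(xs))

def tail_fraction(xs, k):
    # Share of deviations larger than k sample standard deviations.
    s = std(xs)
    return sum(1 for x in xs if abs(x) > k * s) / len(xs)

# For a pure Gaussian the 3-sigma tail fraction is about 0.0027;
# the mixture's tail fraction is several times larger.
print(tail_fraction(gauss, 3))
print(tail_fraction(mixed, 3))
```

On a semi-logarithmic histogram the mixture's tails bend upward away from the parabola a Gaussian would produce, which is exactly the visual signature described above.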