Both the growing demand to cope with "big data" (based on, or assisted by, artificial intelligence) and the interest in understanding the operation of our brain more completely have stimulated efforts to build biology-mimicking computing systems, either from inexpensive conventional components or as genuinely different ("neuromorphic") systems. On the one hand, those systems require an unusually large number of processors, which introduces performance limitations and nonlinear scaling. On the other hand, neuronal operation differs drastically from conventional workloads. The conduction time (transfer time) is ignored both in conventional computing and in "spatiotemporal" computational models of neural networks, although von Neumann warned: "In the human nervous system the conduction times along the lines (axons) can be longer than the synaptic delays, hence our procedure of neglecting them aside of the processing time would be unsound" [1], section 6.3. This difference alone makes it hard to imitate biological behavior in technical implementations. Besides, recent issues in computing have called attention to the fact that temporal behavior is a general feature of technical computing systems, too, and some of its effects have already been noticed in both biological and technical systems. Instead of introducing "looks like" models, we suggest handling the transfer time correctly. Introducing a temporal logic based on the Minkowski transform gives quantitative insight into the operation of both kinds of computing systems and, furthermore, provides a natural explanation for decades-old empirical phenomena. Without considering their temporal behavior correctly, neither an effective implementation nor a true imitation of biological neural systems is possible.
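
As a quantitative illustration of von Neumann's warning, consider the following minimal sketch in Python. All parameters are hypothetical and chosen only for illustration (a 1 m/s conduction speed typical of slow unmyelinated axons, a 0.5 ms synaptic delay, a 1 ms coincidence window); none come from the source. It compares a coincidence-detecting neuron's behavior when axonal conduction time is included versus neglected.

```python
# Illustrative sketch: neglecting axonal conduction time can change the
# logical outcome of a coincidence-detecting neuron. Parameters hypothetical.

CONDUCTION_SPEED = 1.0       # m/s -- assumed slow, unmyelinated axon
SYNAPTIC_DELAY = 0.5e-3      # s   -- assumed synaptic (processing) delay
COINCIDENCE_WINDOW = 1.0e-3  # s   -- spikes must arrive within this window

def arrival_time(emit_time, axon_length, include_conduction=True):
    """Arrival time [s] at the synapse of a spike emitted at emit_time [s]
    over an axon of axon_length [m]."""
    conduction = axon_length / CONDUCTION_SPEED if include_conduction else 0.0
    return emit_time + conduction + SYNAPTIC_DELAY

# Two presynaptic neurons fire at the same instant, but over axons of
# different length (1 mm and 5 mm here).
for include in (False, True):
    t1 = arrival_time(0.0, 1e-3, include_conduction=include)
    t2 = arrival_time(0.0, 5e-3, include_conduction=include)
    coincident = abs(t1 - t2) <= COINCIDENCE_WINDOW
    label = "with conduction time" if include else "conduction time ignored"
    print(f"{label}: arrivals {t1 * 1e3:.2f} ms / {t2 * 1e3:.2f} ms"
          f" -> coincidence detected: {coincident}")
```

With conduction neglected, the two spikes appear simultaneous (both arrive after 0.5 ms) and coincidence is detected; with it included, the arrivals differ by 4 ms and coincidence detection fails, even though both cells fired at the same instant. Note also that the conduction times in this sketch (1 ms and 5 ms) exceed the synaptic delay (0.5 ms), precisely the situation von Neumann describes.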