Deep learning velocity signals allow quantifying turbulence intensity

Deep learning velocity signals allow quantifying turbulence intensity from very short signals and with extremely high accuracy.


Viscosity estimates via structure functions
In this section we outline the physical argument that we employ in order to estimate the viscosity. In particular, we derive the relation between the second-order structure function, S_2(τ), in the small-τ limit, and the viscosity ν. We apply the standard Eulerian-Lagrangian bridge relation to write the Lagrangian structure functions in the multifractal formalism. The velocity signals from the shell model of turbulence can indeed be regarded as Lagrangian signals, due to the lack of sweeping. In this work we used data from the shell model as well as data from Lagrangian passive tracers in fully resolved 3D DNS; both signals should therefore be compared with Lagrangian statistics.
In the inertial range, velocity differences, δv(l) = v(r + l) − v(r), exhibit a singular, i.e. power-law, behavior as a function of the observation scale l, i.e. δv(l) ∼ l^h, for some 0 < h < 1. Let τ_l be the typical time scale at scale l (i.e. τ_l ∼ l/δv(l) ∼ l^{1−h}); then it holds

δv(τ) ∼ τ^{h/(1−h)}.

The dissipative scale of the system, (η, τ_η), satisfies

δv(η) η / ν ∼ 1,

or, in the Lagrangian perspective, δv(τ_η)^2 τ_η / ν ∼ 1, from which we have

τ_η(h) ∼ ν^{(1−h)/(1+h)}.

Let D(h) be the fractal dimension of the set on which an h-singular behavior is observed, i.e. singularities with Hölder exponent h occur, at time lag τ, with probability

P_h(τ) ∼ τ^{[3−D(h)]/(1−h)}.

This enables us to express the structure functions explicitly as

S_2(τ) ∼ ∫ dh τ^{[2h + 3 − D(h)]/(1−h)},

which in the fully developed turbulence limit, τ ≪ τ_η, satisfies the power law

S_2(τ) ≃ C τ^2 ν^{−α},   (8)

where α is determined by a Legendre (saddle-point) transform as

α = max_h [D(h) − 1 − 4h] / (1 + h).

Indeed, for τ ≪ τ_η(h) the velocity is smooth, δv(τ) ≈ [δv(τ_η)/τ_η] τ, with δv(τ_η)/τ_η ∼ ν^{(2h−1)/(1+h)} and P_h(τ_η) ∼ ν^{[3−D(h)]/(1+h)}, so that S_2(τ) ∼ τ^2 ∫ dh ν^{[4h + 1 − D(h)]/(1+h)} and the saddle point selects the exponent above. The Kolmogorov K41 approach considers D(h) = D = 3, which yields h = 1/3 and α_K41 = 1/2. Conversely, employing the She-Leveque multifractal model for D(h) yields α ≈ 0.57.
Inverting the relation in Eq. (8), in combination with the value of α, yields an estimator of the viscosity based on the second order structure function.
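A minimal numerical sketch of this inversion, assuming Eq. (8) in the form S_2(τ) ≃ C τ^2 ν^{−α} with the prefactor C treated as a hypothetical, pre-calibrated constant (it is not specified here), so that ν ≃ (C τ^2 / S_2(τ))^{1/α} for τ in the far dissipative range:

```python
import numpy as np

def second_order_sf(v, tau):
    """Second-order structure function S2(tau) of a 1d velocity
    signal, with the lag tau expressed in samples."""
    return np.mean((v[tau:] - v[:-tau]) ** 2)

def viscosity_estimate(v, tau, dt, C=1.0, alpha=0.57):
    """Invert S2(tau) ~ C * tau^2 * nu^(-alpha) for nu, valid for
    tau well below the dissipative time scale. C is a hypothetical
    calibration constant; alpha ~ 0.57 from the She-Leveque model."""
    t = tau * dt
    return (C * t**2 / second_order_sf(v, tau)) ** (1.0 / alpha)
```

For a smooth (differentiable) signal v(t) ≈ a t one has S_2(τ) = (aτ)^2 exactly, and the estimator returns (C/a^2)^{1/α}: the dissipative-range curvature of the signal is what sets the ν estimate.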

Width of the inertial range
In the presence of limited statistics, as in the case of relatively short signals, the estimation of the width of the inertial range or, similarly, the estimation of the viscosity, is slaved to large-scale energy fluctuations. On time scales comparable to the large-scale fluctuations, local increments or decrements of the system energy yield almost instantaneous widenings or shortenings of the inertial range. This effect can be naturally interpreted in terms of viscosity: local energy increments play the same role as a lower viscosity on the width of the inertial range (see Figure S.1, where we show this aspect for Eulerian structure functions). We compare a reference case with, respectively, a dynamics characterized by an increased forcing (structure function translated and superimposed, a posteriori, onto the reference) and a dynamics characterized by a decreased viscosity. Both cases yield a higher Reynolds number and thus a wider inertial range.
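The interchangeable roles of forcing and viscosity can be illustrated with the standard K41 dimensional estimates (a sketch of the scaling argument, not part of the estimator itself): η = (ν³/ε)^{1/4} and τ_η = (ν/ε)^{1/2}, so both a lower viscosity ν and a higher energy input rate ε push the dissipative scale down and widen the inertial range.

```python
def kolmogorov_scales(nu, eps):
    """K41 dissipative scales: eta = (nu^3/eps)^(1/4) (length) and
    tau_eta = (nu/eps)^(1/2) (time)."""
    return (nu**3 / eps) ** 0.25, (nu / eps) ** 0.5
```

For instance, halving ν or doubling ε both reduce η, which is why a local energy increment mimics a lower viscosity at the level of the inertial-range width.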
In Figure S.2 we show the effect of the normalization with respect to the integral-scale energy, i.e. the asymptotic value S_2(∞) = 2 v_rms^2 ≈ S_2(T). Each plot reports a collection of 25 structure functions extracted from the training set and associated with viscosity ν = 0.0005. The x-axis is in units of the sampling time, ∆t, as presented to the DNN.
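A sketch of this normalization, assuming the plain time-average definition of S_2: dividing by 2 v_rms² makes the structure function saturate at 1 for lags of the order of the integral time T.

```python
import numpy as np

def normalized_s2(v, taus):
    """S2(tau) normalized by its asymptotic value 2*v_rms^2, so that
    normalized_s2 -> 1 for tau of the order of the integral time T."""
    s2 = np.array([np.mean((v[t:] - v[:-t]) ** 2) for t in taus])
    return s2 / (2.0 * np.var(v))
```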
We include in Table 1 the parameters considered in the shell model simulations by which the training, validation and test datasets have been created. In Figure S.3 we complement Figure 1 by including, for the same three viscosity levels, further features of the considered signals. These are: (a) second-order Eulerian structure functions, S_2,E(n) (where S_p,E(n) = S_p,E(k_n) = ⟨|u_n|^p⟩), showing that changing the viscosity only affects the extension of the inertial range; (b) relevant time scales (computed by inertial scaling) associated with the dynamics of the different shells; (c) signal energy as a function of time. In Figure S.4 we report the diagram of the neural network; relevant structural parameters (e.g. the size of the convolutional filters) are reported in the figure caption.

Table 1:
                              training/validation    test
min(ν)                        2.5 · 10^−5            6.0 · 10^−5
max(ν)                        9.75 · 10^−4           9.6 · 10^−4
increment ν                   2.5 · 10^−5            6.0 · 10^−5
levels                        39                     16
set size                      192,000                6,600
training:validation ratio     75%:25%                N/A
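As a consistency check of Table 1, the viscosity grids can be reconstructed from (min(ν), max(ν), levels); the implied equispaced increments match the tabulated ones:

```python
import numpy as np

# Equispaced viscosity levels implied by Table 1
nu_train = np.linspace(2.5e-5, 9.75e-4, 39)   # training/validation grid
nu_test = np.linspace(6.0e-5, 9.6e-4, 16)     # test grid
```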

Features observed by the DNN
During training, the DNN develops feature detectors. As discussed in the main text, we expect these detectors to select features that, at the same time, strongly correlate with the turbulence intensity and that are insensitive to large scale oscillations. As generally expected in deep learning, detectors are likely specific to the parameter range and statistical properties of the signals contained in the training set.
In this section, to understand which characteristics of the signal our model relies on, we perform an ablation study by systematically altering the content of randomly selected test signals. The modifications considered involve the suppression of frequency components or the random shuffling of the time structure. This enables us to identify features mostly ignored by the DNN and, conversely, to narrow down the set of signal characteristics relevant for the DNN.
In Figure S.5 we consider test signals that have been altered through a high-pass (a) or a band-pass (b) filter. In the case of Lagrangian signals, filtering operations are easily performed by restricting the summation in Eq. (2) to a subset of the shell signals. We select one test signal per viscosity level, we ablate its spectral structure, and we plot the DNN prediction. We notice that the neural network is almost insensitive to the large-scale dynamics, as the estimates remain unaltered when the high-pass filter removes the large-scale shells. We notice, in particular, that any selection of a band of shells that includes the last part of the inertial range yields almost error-free predictions.
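A minimal sketch of this shell-space filtering, assuming (hypothetically) that the velocity signal is reconstructed as a sum of real parts of the complex shell amplitudes u_n(t); the exact reconstruction is the one in Eq. (2). A high- or band-pass filter then simply restricts the sum:

```python
import numpy as np

def shell_velocity(u, shells=None):
    """Reconstruct a velocity signal from complex shell amplitudes
    u[n, t] by summing (a subset of) shells. Restricting `shells`
    acts as an ideal high-/band-pass filter in shell space.
    NOTE: the 2*Re(u_n) reconstruction is an illustrative assumption."""
    if shells is None:
        shells = range(u.shape[0])
    return np.sum([2.0 * np.real(u[n]) for n in shells], axis=0)
```

By linearity, the full signal is exactly the sum of the low-band and high-band reconstructions, so ablating a band removes only that band's contribution.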
Similarly, we can alter the time structure of the signals by partitioning them into disjoint contiguous blocks of length T_B and then randomly mixing these blocks. In Figure S.6 we report the predictions for different block extensions. As long as the block extension remains of the same order as the integral scale, the predictions remain mostly unaltered; they then degrade as the block size becomes comparable to the dissipative time scale. This shows that the training develops feature extractors targeting fine scales and correlations existing around the dissipative end of the inertial range.
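The block-shuffling ablation can be sketched as follows (block length T_B in samples; any trailing remainder shorter than T_B is dropped here for simplicity):

```python
import numpy as np

def block_shuffle(v, T_B, seed=0):
    """Partition v into contiguous blocks of length T_B and randomly
    permute the blocks, preserving the time structure inside each block."""
    n_blocks = len(v) // T_B
    blocks = v[: n_blocks * T_B].reshape(n_blocks, T_B)
    perm = np.random.default_rng(seed).permutation(n_blocks)
    return blocks[perm].reshape(-1)
```

Small T_B destroys correlations beyond T_B while leaving sub-block (fine-scale) structure intact, which is what isolates the dissipative-range features the DNN relies on.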