Neko
0.8.99
A portable framework for high-order spectral element flow simulations

Under development, updated incrementally
Statistics, in the context of Neko, is the common name for fields that are averaged in time and possibly also in space.
The statistics module in Neko computes the temporal average of a wide range of fields.
In this page we use the following convention for a field \( u \).
The temporal average of a field \( u \) is the approximation of the integral
$$ \langle u \rangle_t = \frac{1}{T_N - T_0}\int_{T_0}^{T_N} u \, dt $$
In Neko, this is computed as
$$ \langle u \rangle_t = \frac{1}{T_N - T_0}\sum_{i=0}^N u_i \Delta t_i $$ where \( u_0 \) is the field's value at \( T_0 \) and \( N \) is the number of time steps needed to reach \( T_N \), \( T_N = T_0 + \sum_{i=0}^N \Delta t_i \).
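As a plain-Python sketch of this discrete, time-weighted average (the sampled values and step sizes below are hypothetical, not taken from any Neko run):

```python
# Hypothetical sampled field values u_i and (possibly variable) time steps dt_i
u = [1.0, 2.0, 2.0, 3.0]
dt = [0.1, 0.1, 0.2, 0.1]

# Elapsed time T_N - T_0
T = sum(dt)

# Time-weighted average <u>_t: each sample is weighted by its own step size
u_avg = sum(ui * dti for ui, dti in zip(u, dt)) / T
```

Note that with a variable time step the samples must be weighted by \( \Delta t_i \); a plain arithmetic mean of the samples would bias the average toward intervals with small steps.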
In the statistics in Neko, various averages of the different velocity components, derivatives, and pressure are computed. In total, 44 "raw statistics" are computed, which are required to compute the Reynolds-stress budgets, the mean fields, and the different terms in the turbulent kinetic energy equation.
Statistics are enabled in the case file as follows:

| Name | Description | Admissible values | Default value |
| ---- | ----------- | ----------------- | ------------- |
| enabled | Whether to enable the statistics computation. | true or false | true |
| start_time | Time at which to start gathering statistics. | Positive real | 0 |
| sampling_interval | Interval, in timesteps, for sampling the flow fields for statistics. | Positive integer | 10 |
In addition, the statistics respect the usual controls for the output; each output then contains the averages computed since the last time the statistics were written to file.
For example, if one wants to sample the fields every 4 time steps, compute the averages over time intervals of 20 time units, write the output every 20 time units, and start collecting statistics after an initial transient of 50 time units, the following would work:
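A sketch of the corresponding statistics object in the case file (the `output_control` and `output_value` keywords here follow the usual Neko output conventions and are an assumption; adjust to your case):

```json
"statistics": {
    "enabled": true,
    "start_time": 50.0,
    "sampling_interval": 4,
    "output_control": "simulationtime",
    "output_value": 20
}
```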
When the output is written, one obtains two .fld files called mean_field and stats.
In mean_field, the following averages are stored. The "stored in variable" column indicates in which field one finds the computed statistic when opening the file in ParaView or VisIt.
| Number | Statistic | Stored in variable |
| ------ | --------- | ------------------ |
| 1 | \( \langle p \rangle \) | Pressure |
| 2 | \( \langle u \rangle \) | X-Velocity |
| 3 | \( \langle v \rangle \) | Y-Velocity |
| 4 | \( \langle w \rangle \) | Z-Velocity |
In stats, several other statistics are stored. While not all of them might be interesting for your specific use case, most budgets and quantities of interest can be computed from them. They are stored as follows:
| Number | Statistic | Stored in variable |
| ------ | --------- | ------------------ |
| 1 | \( \langle pp \rangle \) | Pressure |
| 2 | \( \langle uu \rangle \) | X-Velocity |
| 3 | \( \langle vv \rangle \) | Y-Velocity |
| 4 | \( \langle ww \rangle \) | Z-Velocity |
| 5 | \( \langle uv \rangle \) | Temperature |
| 6 | \( \langle uw \rangle \) | Scalar 1 (s1) |
| 7 | \( \langle vw \rangle \) | Scalar 2 (s2) |
| 8 | \( \langle uuu \rangle \) | s3 |
| 9 | \( \langle vvv \rangle \) | s4 |
| 10 | \( \langle www \rangle \) | s5 |
| 11 | \( \langle uuv \rangle \) | s6 |
| 12 | \( \langle uuw \rangle \) | s7 |
| 13 | \( \langle uvv \rangle \) | s8 |
| 14 | \( \langle uvw \rangle \) | s9 |
| 15 | \( \langle vvw \rangle \) | s10 |
| 16 | \( \langle uww \rangle \) | s11 |
| 17 | \( \langle vww \rangle \) | s12 |
| 18 | \( \langle uuuu \rangle \) | s13 |
| 19 | \( \langle vvvv \rangle \) | s14 |
| 20 | \( \langle wwww \rangle \) | s15 |
| 21 | \( \langle ppp \rangle \) | s16 |
| 22 | \( \langle pppp \rangle \) | s17 |
| 23 | \( \langle pu \rangle \) | s18 |
| 24 | \( \langle pv \rangle \) | s19 |
| 25 | \( \langle pw \rangle \) | s20 |
| 26 | \( \langle p \frac{\partial u}{\partial x} \rangle \) | s21 |
| 27 | \( \langle p \frac{\partial u}{\partial y} \rangle \) | s22 |
| 28 | \( \langle p \frac{\partial u}{\partial z} \rangle \) | s23 |
| 29 | \( \langle p \frac{\partial v}{\partial x} \rangle \) | s24 |
| 30 | \( \langle p \frac{\partial v}{\partial y} \rangle \) | s25 |
| 31 | \( \langle p \frac{\partial v}{\partial z} \rangle \) | s26 |
| 32 | \( \langle p \frac{\partial w}{\partial x} \rangle \) | s27 |
| 33 | \( \langle p \frac{\partial w}{\partial y} \rangle \) | s28 |
| 34 | \( \langle p \frac{\partial w}{\partial z} \rangle \) | s29 |
| 35 | \( \langle e11 \rangle \) | s30 |
| 36 | \( \langle e22 \rangle \) | s31 |
| 37 | \( \langle e33 \rangle \) | s32 |
| 38 | \( \langle e12 \rangle \) | s33 |
| 39 | \( \langle e13 \rangle \) | s34 |
| 40 | \( \langle e23 \rangle \) | s35 |
where \( e11, e22, \ldots \) are computed as: $$ \begin{aligned} e11 &= \left(\frac{\partial u}{\partial x}\right)^2 + \left(\frac{\partial u}{\partial y}\right)^2 + \left(\frac{\partial u}{\partial z}\right)^2 \\ e22 &= \left(\frac{\partial v}{\partial x}\right)^2 + \left(\frac{\partial v}{\partial y}\right)^2 + \left(\frac{\partial v}{\partial z}\right)^2 \\ e33 &= \left(\frac{\partial w}{\partial x}\right)^2 + \left(\frac{\partial w}{\partial y}\right)^2 + \left(\frac{\partial w}{\partial z}\right)^2 \\ e12 &= \frac{\partial u}{\partial x} \frac{\partial v}{\partial x} + \frac{\partial u}{\partial y}\frac{\partial v}{\partial y}+ \frac{\partial u}{\partial z}\frac{\partial v}{\partial z} \\ e13 &= \frac{\partial u}{\partial x} \frac{\partial w}{\partial x} + \frac{\partial u}{\partial y}\frac{\partial w}{\partial y}+ \frac{\partial u}{\partial z}\frac{\partial w}{\partial z} \\ e23 &= \frac{\partial v}{\partial x} \frac{\partial w}{\partial x} + \frac{\partial v}{\partial y}\frac{\partial w}{\partial y}+ \frac{\partial v}{\partial z}\frac{\partial w}{\partial z} \\ \end{aligned} $$
Of course, these statistics are only the "raw statistics", in the sense that in general we are not interested in \( \langle uu \rangle \) itself, but rather in, say, the RMS of the velocity fluctuation. For this we need to postprocess the statistics.
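For example, the RMS of the streamwise velocity fluctuation follows from the raw statistics as \( u_{rms} = \sqrt{\langle uu \rangle - \langle u \rangle^2} \). A minimal sketch in Python (the values here are hypothetical point samples; in practice they would be arrays read from the mean_field and stats files):

```python
import math

# Hypothetical point values of the raw statistics:
mean_u = 1.0    # <u>  from the mean_field file
mean_uu = 1.1   # <uu> from the stats file

# Variance of the fluctuation: <u'u'> = <uu> - <u><u>
var_u = mean_uu - mean_u**2

# RMS of the fluctuation
u_rms = math.sqrt(var_u)
```

The same pattern (subtracting products of means from the averaged products) gives the other Reynolds-stress components, e.g. \( \langle u'v' \rangle = \langle uv \rangle - \langle u \rangle \langle v \rangle \).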
Some rudimentary postprocessing is available among the contrib scripts: computing spatial averages of fld files, combining the statistics collected from several runs (computing the average in time), and computing both the mean velocity gradient and the Reynolds stresses. Running a contrib script without any arguments prints a hint on its usage, and the text below gives a guide on how to postprocess the raw statistics. The postprocessing part of Neko is currently expanding and changing quite a lot; we envision primarily using Python for the postprocessing of the final statistics.
Daniele Massaro, Martin Karp (KTH)
1) Run your simulations and collect the mean_field* and stats* files by adding the statistics object to the case file and specifying a suitable write interval.
2) For each RUN_i, you get a set of mean_field* and stats* files. You can average them for each single RUN_i, or average all of them only once (after reordering them properly). If you follow the second approach, go to step 4. Here, for each RUN_i, we compute the averaged means with average_fields_in_time:

- mean

```
srun --unbuffered /your/location/neko/bin/average_fields_in_time meanXX.fld T0 mean_p.fld
```

where T0 is the initial time. To get some hints on the input for the script one can simply run ./average_fields_in_time without any arguments. For RUN_1, the time T0 can be taken from the log of the first simulation, or from the header of the first mean_field* file; in this way you discard that file. For RUN_i, with i>1, it can be taken from the header of the last mean_field* file of the previous simulation RUN_{i-1}. In the command line, for the name "meanXX.fld", XX indicates the number of the nek5000 file. In mean_fieldXX.nek5000 you set the number of the first mean0* file to read and the number of steps corresponding to the number of files. In this way, the code generates a mean_p0.f00000 and mean_post0.nek5000. It is suggested to rename mean_p0.f00000 to mean_p0.f0000i and move it to a separate folder where you take the average with all the others.

- stats

```
srun --unbuffered /your/location/neko/bin/average_fields_in_time statXX.fld T0 stat_p.fld
```

T0 is the same as before. In stat0.nek5000 you set the number of the first stat0* file to read and the number of steps corresponding to the number of files. It is suggested to rename stat_p0.f00000 to stat_p0.f0000i and move it to a separate folder where you take the average with all the others. Repeat this for each RUN_i folder. Eventually, given n RUN_i folders, you will get n mean_p* and stat_p* files.
3) Take the average of the averaged runs. Now the time average over all the n simulations is taken. The procedure is similar, but changing the output name is recommended to avoid overwriting.

- mean

```
srun --unbuffered /your/location/neko/bin/average_fields_in_time mean_p0.fld T0 mean_post.fld
```

where T0 is the initial time which was used to compute mean_p* for RUN_1.

- stats

```
srun --unbuffered /your/location/neko/bin/average_fields_in_time stat_p0.fld T0 stat_post.fld
```

where T0 is the initial time which was used to compute mean_p* for RUN_1.
4) Compute the Reynolds stress tensor and the other statistical moments (see the list above).

```
srun --unbuffered /your/location/neko/bin/postprocess_fluid_stats mesh_file.nmsh mean_post0.fld stat_post0.fld
```
5) We also provide a tool to average the resulting fields in a homogeneous direction: bin/average_field_in_space. The required arguments are shown if one runs the program without any input. Currently it requires the number of elements in the homogeneous direction as an input argument, e.g.

```
./average_field_in_space mesh.nmsh field.fld x 18 outfield.fld
```

if we want to average a field in the x direction on a mesh with 18 elements in x and output the averaged field in outfield0.nek5000.