Neko
0.9.0
A portable framework for high-order spectral element flow simulations
The user file is a Fortran file where the user can implement their own functions to extend the capabilities of the default Neko executable. The user file can be used for setting advanced initial/boundary conditions, source terms, I/O operations, and interactions with the Neko framework.
The user file is a regular Fortran .f90 file that needs to be compiled with makeneko, located in the bin folder of your neko installation. To compile a user file user.f90, run:
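For example, from the directory containing your user file:

```shell
makeneko user.f90
```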
If everything goes well, you should observe the following output:
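The exact output depends on your version and build configuration, but it should look roughly like the following sketch:

```
 N E K O build tool, Version 0.9.0
 (build: ...)

 Building user NEKO ... done!
```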
Compiling your user file with makeneko will create a neko executable, which you will need to execute with your case file as an argument. For example, if your case file is called user.case:
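Run from the directory containing the case file:

```shell
./neko user.case
```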
Or in parallel using MPI:
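For instance, on 8 MPI ranks (the launcher name and its flags depend on your MPI installation):

```shell
mpirun -n 8 ./neko user.case
```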
The current high-level structure of the user file is shown below.
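A sketch of this structure, following the conventions of src/common/user_intf.f90 (the commented registration line is an example, not a requirement):

```fortran
module user
  use neko
  implicit none

contains

  ! Register user-defined functions (see user_intf.f90)
  subroutine user_setup(u)
    type(user_t), intent(inout) :: u
    ! e.g. u%user_check => usercheck
  end subroutine user_setup

  ! Implement your user subroutines here, below user_setup

end module user
```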
The user file implements the user module. The user module contains a subroutine named user_setup, which we use to interface the internal procedures defined in src/common/user_intf.f90 with the subroutines that you will implement in your user file. Each user subroutine should be implemented under the contains statement, below user_setup.
Note: the neko executable generated by makeneko is different from the default neko executable found in bin/neko in your local neko installation folder.

The following user functions, if defined in the user file, will always be executed, regardless of what is set in the case file:

- user_init_modules and user_finalize_modules, for initializing/finalizing user defined variables, objects, or processes
- user_check, executed at the end of every time step
- material_properties, for computing and setting material properties such as rho, mu, cp and lambda
- user_mesh_setup, for applying a deformation to the mesh element nodes before the simulation time loop
- scalar_user_bc, for applying boundary conditions to the scalar on all zones that are not already set to uniform Dirichlet or Neumann values, e.g. d=1. For more information on the scalar, see the relevant section of the case file.

The two subroutines user_init_modules and user_finalize_modules may be used to initialize/finalize any user defined variables, external objects, or processes. They are executed right before and right after the simulation time loop, respectively.
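A sketch of such a pair of routines; the argument lists follow (to the best of our reading) the interfaces in src/common/user_intf.f90, and the bodies are placeholders:

```fortran
subroutine initialize(t, u, v, w, p, coef, params)
  real(kind=rp) :: t
  type(field_t), intent(inout) :: u, v, w, p
  type(coef_t), intent(inout) :: coef
  type(json_file), intent(inout) :: params

  ! Allocate/initialize any user-defined objects here
end subroutine initialize

subroutine finalize(t, params)
  real(kind=rp) :: t
  type(json_file), intent(inout) :: params

  ! Deallocate/finalize any user-defined objects here
end subroutine finalize
```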
In the example above, the subroutines initialize and finalize contain the actual implementations. They must also be interfaced to the internal procedures user_init_modules and user_finalize_modules in user_setup:
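The registration is done with pointer assignments in user_setup (assuming the subroutines are named initialize and finalize):

```fortran
u%user_init_modules => initialize
u%user_finalize_modules => finalize
```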
Note: user_init_modules and user_finalize_modules are independent of each other. Using one does not require the use of the other.

The subroutine user_check is executed at the end of every time step. It can be used for computing and/or outputting your own variables/quantities at every time step.
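A minimal sketch, with the argument list following the user_check interface in src/common/user_intf.f90:

```fortran
subroutine usercheck(t, tstep, u, v, w, p, coef, params)
  real(kind=rp), intent(in) :: t
  integer, intent(in) :: tstep
  type(field_t), intent(inout) :: u, v, w, p
  type(coef_t), intent(inout) :: coef
  type(json_file), intent(inout) :: params

  ! Compute and/or output your own quantities here
end subroutine usercheck
```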
In the example above, the subroutine usercheck contains the actual implementation, and needs to be registered in our user_setup.
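The registration line might look as follows (assuming the procedure pointer component is named user_check, as in src/common/user_intf.f90):

```fortran
u%user_check => usercheck
```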
The subroutine material_properties allows for more complex computations and setting of various material properties, such as rho and mu for the fluid and cp and lambda for the scalar. The example below is taken from the rayleigh-benard-cylinder example.
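A sketch in the spirit of that example; the JSON keys and the Rayleigh-Benard non-dimensionalization shown here are assumptions, so check the actual example for the exact code:

```fortran
subroutine set_material_properties(t, tstep, rho, mu, cp, lambda, params)
  real(kind=rp), intent(in) :: t
  integer, intent(in) :: tstep
  real(kind=rp), intent(inout) :: rho, mu, cp, lambda
  type(json_file), intent(inout) :: params
  real(kind=rp) :: Re, Ra, Pr

  ! Read the governing parameters from the case file (hypothetical keys)
  call json_get(params, "case.fluid.Ra", Ra)
  call json_get(params, "case.scalar.Pr", Pr)

  ! Non-dimensional properties for Rayleigh-Benard convection
  Re = sqrt(Ra / Pr)
  mu = 1.0_rp / Re
  lambda = mu / Pr
  rho = 1.0_rp
  cp = 1.0_rp
end subroutine set_material_properties
```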
And of course not forgetting to register our function in user_setup by adding the following line:
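Assuming the subroutine is named set_material_properties:

```fortran
u%material_properties => set_material_properties
```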
This user function allows for the modification of the mesh at runtime, by acting on the element nodes of the mesh specified in the case file. This function is only called once before the simulation time loop. The example below is taken from the tgv example.
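A sketch in the spirit of the tgv example, rescaling all element nodes by a constant factor (the factor and the points-based loop are illustrative):

```fortran
subroutine user_mesh_scale(msh)
  type(mesh_t), intent(inout) :: msh
  integer :: i
  real(kind=rp) :: d

  ! Rescale every mesh point by a constant factor
  d = 4.0_rp
  do i = 1, size(msh%points)
     msh%points(i)%x(1) = d * msh%points(i)%x(1)
     msh%points(i)%x(2) = d * msh%points(i)%x(2)
     msh%points(i)%x(3) = d * msh%points(i)%x(3)
  end do
end subroutine user_mesh_scale
```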
The registering of the above function in user_setup should then be done as follows:
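Assuming the subroutine is named user_mesh_scale:

```fortran
u%user_mesh_setup => user_mesh_scale
```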
This user function can be used to specify the scalar boundary values, on all zones that are not already set to uniform Dirichlet or Neumann values, e.g. d=1 or n=0. For more information on the scalar, see the relevant section of the case file. The example below sets the scalar boundary condition values to be a linear function of the z coordinate (taken from the rayleigh-benard example).
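A pointwise sketch of such a routine, with the argument list following the scalar_user_bc interface in src/common/user_intf.f90:

```fortran
subroutine set_scalar_boundary_conditions(s, x, y, z, nx, ny, nz, ix, iy, iz, ie, t, tstep)
  real(kind=rp), intent(inout) :: s
  real(kind=rp), intent(in) :: x, y, z
  real(kind=rp), intent(in) :: nx, ny, nz
  integer, intent(in) :: ix, iy, iz, ie
  real(kind=rp), intent(in) :: t
  integer, intent(in) :: tstep

  ! Linear temperature profile in the z coordinate
  s = 1.0_rp - z
end subroutine set_scalar_boundary_conditions
```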
This function will be called on all the points on the relevant boundaries. The registering of the above function in user_setup should be done as follows:
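Assuming the subroutine is named set_scalar_boundary_conditions:

```fortran
u%scalar_user_bc => set_scalar_boundary_conditions
```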
In addition to the case-specific user functions, the user can also define their own simulation components. This can be done by writing a new type which extends the simulation_component_t type, and implementing the necessary functions for the new type. The user can then specify the component in the list of simulation components in the case file. The setting is_user should be set to true in the JSON object for the new simulation component. The typename is used to extract the settings for the simulation component from the JSON file.
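A case-file entry might look like the following sketch (the typename my_simcomp is hypothetical):

```json
"simulation_components": [
  {
    "type": "my_simcomp",
    "is_user": true
  }
]
```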
In the example above, the subroutine user_simcomp contains the actual implementation, and needs to be registered by adding:
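Assuming the procedure pointer component is named init_user_simcomp, as in src/common/user_intf.f90:

```fortran
u%init_user_simcomp => user_simcomp
```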
A full example of a user-defined simulation component can be found in the examples.
As explained in the case file page, certain components of the simulation can be set to be user defined. These components and their associated user functions are:
| Description | User function | JSON Object in the case file |
| --- | --- | --- |
| Fluid initial condition | fluid_user_ic | case.fluid.initial_condition |
| Scalar initial condition | scalar_user_ic | case.scalar.initial_condition |
| Fluid inflow boundary condition | fluid_user_if | case.fluid.inflow_condition |
| Scalar boundary conditions | scalar_user_bc | (user function is always called) |
| Fluid source term | fluid_user_f_vector or fluid_user_f | case.fluid.source_terms |
| Scalar source term | scalar_user_f_vector or scalar_user_f | case.scalar.source_terms |
| Fluid and scalar boundary conditions | field_dirichlet_update | case.fluid.boundary_types and/or case.scalar.boundary_types |
Note that scalar_user_bc is included for completeness but is technically not case-specific.
Enabling user defined initial conditions for the fluid and/or scalar is done by setting the initial_condition.type to "user" in the relevant sections of the case file, case.fluid and/or case.scalar.
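For the fluid, this would look like:

```json
"fluid": {
  "initial_condition": {
    "type": "user"
  }
}
```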
See the relevant sections on the fluid and scalar initial conditions in the case file page for more details.
The associated user functions for the fluid and/or scalar initial conditions can then be added to the user file. An example for the fluid, taken from the advecting cone example, is shown below.
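A sketch of such a routine; the rotating velocity field matches the description of this snippet later on this page, but the exact constants may differ from the actual example:

```fortran
subroutine set_velocity(u, v, w, p, params)
  type(field_t), intent(inout) :: u, v, w, p
  type(json_file), intent(inout) :: params
  integer :: i
  real(kind=rp) :: x, y

  do i = 1, u%dof%size()
     x = u%dof%x(i, 1, 1, 1)
     y = u%dof%y(i, 1, 1, 1)

     ! Rotating velocity field
     u%x(i, 1, 1, 1) = -y * pi
     v%x(i, 1, 1, 1) = x * pi
     w%x(i, 1, 1, 1) = 0.0_rp
  end do

  if (NEKO_BCKND_DEVICE .eq. 1) then
     call device_memcpy(u%x, u%x_d, u%dof%size(), HOST_TO_DEVICE, sync = .false.)
     call device_memcpy(v%x, v%x_d, v%dof%size(), HOST_TO_DEVICE, sync = .false.)
     call device_memcpy(w%x, w%x_d, w%dof%size(), HOST_TO_DEVICE, sync = .true.)
  end if
end subroutine set_velocity
```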
Note the use of the NEKO_BCKND_DEVICE flag, which will be set to 1 if running on GPUs, and the calls to device_memcpy to transfer data between the host and the device. See Running on GPUs for more information on how this works.

The same can be done for the scalar, with the example below also inspired from the advecting cone example:
Note the use of the NEKO_BCKND_DEVICE flag, which will be set to 1 if running on GPUs, and the calls to device_memcpy to transfer data between the host and the device. See Running on GPUs for more information on how this works.

We should also add the following lines in user_setup, registering our user functions set_velocity and set_s_ic to be used as the fluid and scalar initial conditions:
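The pointer components match the user function names listed in the table above:

```fortran
u%fluid_user_ic => set_velocity
u%scalar_user_ic => set_s_ic
```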
Enabling a user defined inflow condition for the fluid is done by setting the case.fluid.inflow_condition.type to "user":
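For example:

```json
"fluid": {
  "inflow_condition": {
    "type": "user"
  }
}
```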
See the relevant section in the case file page for more details. The associated user function for the fluid inflow condition can then be added to the user file. An example inspired from the lid-driven cavity example is shown below.
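A pointwise sketch of such a routine; the argument list follows the fluid_user_if interface in src/common/user_intf.f90, and the constant lid velocity is a placeholder for the smoothed profile used in the actual example:

```fortran
subroutine user_bc(u, v, w, x, y, z, nx, ny, nz, ix, iy, iz, ie, t, tstep)
  real(kind=rp), intent(inout) :: u, v, w
  real(kind=rp), intent(in) :: x, y, z
  real(kind=rp), intent(in) :: nx, ny, nz
  integer, intent(in) :: ix, iy, iz, ie
  real(kind=rp), intent(in) :: t
  integer, intent(in) :: tstep

  ! Constant lid velocity (placeholder profile)
  u = 1.0_rp
  v = 0.0_rp
  w = 0.0_rp
end subroutine user_bc
```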
We should also add the following line in user_setup, registering our user function user_bc to be used as the fluid inflow condition:
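The pointer component matches the user function name listed in the table above:

```fortran
u%fluid_user_if => user_bc
```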
Enabling user defined source terms for the fluid and/or scalar is done by adding JSON objects to the case.fluid.source_terms and/or case.scalar.source_terms lists.
See the relevant sections on the fluid and scalar source terms in the case file page for more details.
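For example, for the fluid:

```json
"fluid": {
  "source_terms": [
    {
      "type": "user_vector"
    }
  ]
}
```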
Note: there are two variants of the source term user functions, postfixed _user_f and _user_f_vector. The former is called when setting "user_pointwise" as the source term type, while the latter requires the use of the "user_vector" keyword in the case file. The pointwise variant, fluid_user_f, is not supported on GPUs. In general, fluid_user_f_vector is the preferred variant, and is the one which will be used in our examples below. The same applies for the scalar source term user functions.

The associated user functions for the fluid and/or scalar source terms can then be added to the user file. An example for the fluid, taken from the rayleigh-benard-cylinder example, is shown below.
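A sketch in the spirit of that example, setting a Boussinesq buoyancy forcing proportional to the temperature in the vertical component; the type and component names follow our reading of src/fluid/fluid_user_source_term.f90, so check the example for the exact code:

```fortran
subroutine set_boussinesq_forcing_term(f, t)
  class(fluid_user_source_term_t), intent(inout) :: f
  real(kind=rp), intent(in) :: t
  type(field_t), pointer :: s

  ! Retrieve the scalar (temperature) field from the registry
  s => neko_field_registry%get_field("s")

  if (NEKO_BCKND_DEVICE .eq. 1) then
     call device_rzero(f%u_d, f%dm%size())
     call device_rzero(f%v_d, f%dm%size())
     call device_copy(f%w_d, s%x_d, f%dm%size())
  else
     call rzero(f%u, f%dm%size())
     call rzero(f%v, f%dm%size())
     call copy(f%w, s%x, f%dm%size())
  end if
end subroutine set_boussinesq_forcing_term
```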
Note the use of neko_field_registry to retrieve the velocity and scalar fields. See Registries for more information about registries in neko. Note also the use of the NEKO_BCKND_DEVICE flag, which will be set to 1 if running on GPUs, and the use of device_ functions. See Running on GPUs for more information on how this works.

The same can be done for the scalar, with the example below taken from the scalar_mms example:
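A sketch in the spirit of that example, applying a pointwise-defined forcing over the whole domain (the forcing function itself is illustrative):

```fortran
subroutine set_source(f, t)
  class(scalar_user_source_term_t), intent(inout) :: f
  real(kind=rp), intent(in) :: t
  integer :: i
  real(kind=rp) :: x

  do i = 1, f%dm%size()
     x = f%dm%x(i, 1, 1, 1)
     ! Manufactured-solution forcing term (illustrative)
     f%s(i, 1, 1, 1) = cos(x)
  end do

  if (NEKO_BCKND_DEVICE .eq. 1) then
     call device_memcpy(f%s, f%s_d, f%dm%size(), HOST_TO_DEVICE, sync = .false.)
  end if
end subroutine set_source
```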
Note the use of the NEKO_BCKND_DEVICE flag, which will be set to 1 if running on GPUs, and the call to device_memcpy to transfer data between the host and the device. See Running on GPUs for more information on how this works.

We should also add the following lines in user_setup, registering our user functions set_boussinesq_forcing_term and set_source to be used as the fluid and scalar source terms:
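The pointer components match the vector-variant user function names listed in the table above:

```fortran
u%fluid_user_f_vector => set_boussinesq_forcing_term
u%scalar_user_f_vector => set_source
```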
This user function can be used to specify Dirichlet boundary values for the velocity components u,v,w, the pressure p, and/or the scalar s. This type of boundary condition allows for time-dependent velocity profiles (currently not possible with a standard user_inflow) or non-uniform pressure profiles to e.g. impose an outlet pressure computed from another simulation.
The selection of such a boundary condition is done in the case.fluid.boundary_types array for the velocities and pressure, and in the case.scalar.boundary_types array for the scalar. The case file outlines which keywords can be used for this purpose:
- d_vel_u for the u component of the velocity field
- d_vel_v for the v component of the velocity field
- d_vel_w for the w component of the velocity field
- d_pres for the pressure field
- d_s for the scalar field (cannot be combined with the above)

The separator "/" can be used to combine the keywords related to u,v,w and p. For example, if one wants to only apply u, v and p values on a given boundary, one should use "d_vel_u/d_vel_v/d_pres". In this case, the w component would be left untouched (not zeroed!). An example of a case file from the cyl-boundary-layer example is shown below.
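A sketch of such a case file fragment; the non-Dirichlet labels on the remaining boundaries ("sym", "w") are illustrative, so check the cyl-boundary-layer example for the exact file:

```json
"fluid": {
  "boundary_types": [
    "d_vel_u/d_vel_v/d_vel_w",
    "d_vel_u/d_vel_v/d_vel_w/d_pres",
    "sym",
    "w"
  ]
},
"scalar": {
  "boundary_types": [
    "d_s",
    "d_s"
  ]
}
```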
In this example, we indicate in case.fluid.boundary_types that we would like to apply a velocity profile on all three components u,v,w on boundary number 1 (in this case, the inlet boundary). On boundary number 2 (the outlet boundary), we also indicate the three velocity components, with the addition of the pressure. In case.scalar.boundary_types, we indicate the same for the scalar on boundaries 1 and 2 (inlet and outlet).
Note: do not confuse the d_s and d=x boundary conditions for the scalar. The latter is to be used to specify a constant Dirichlet value x along the relevant boundary.

Once the appropriate boundaries have been identified and labeled, the user function field_dirichlet_update should be used to compute and apply the desired values to our velocity/pressure/scalar field(s). The prefix "field" in field_dirichlet_update refers to the fact that a list of entire fields is passed down for the user to edit.
The fields that are passed down are tied to the boundary_types keywords passed in the case file. The function field_dirichlet_update is then called internally, once in the fluid solver and once in the scalar solver (if enabled). Finally, depending on which boundary labels were input, the fields given to the user are copied onto the solution field boundaries.
The header of the user function is given in the code snippet below.
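A sketch of the header, with argument types inferred from the descriptions below (check src/common/user_intf.f90 for the authoritative interface):

```fortran
subroutine dirichlet_update(field_bc_list, bc_bc_list, coef, t, tstep, which_solver)
  type(field_list_t), intent(inout) :: field_bc_list
  type(bc_list_t), intent(inout) :: bc_bc_list
  type(coef_t), intent(inout) :: coef
  real(kind=rp), intent(in) :: t
  integer, intent(in) :: tstep
  character(len=*), intent(in) :: which_solver
end subroutine dirichlet_update
```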
The arguments and their purpose are as follows:

- field_bc_list is the list of the fields that can be edited. It is a list of field_t objects.
  - Field i contained in field_bc_list is accessed using field_bc_list%items(i)%ptr and will refer to a field_t object.
  - If which_solver = "fluid", it will contain the 4 fields u,v,w,p. They are retrieved in that order in field_bc_list, i.e. u corresponds to field_bc_list%items(1)%ptr, etc.
  - If which_solver = "scalar", it will only contain the scalar field s.
- bc_bc_list contains a list of the bc_t objects to help access the boundary indices through the boundary mask msk.
  - The bc_t object i contained in bc_bc_list is accessed with bc_bc_list%bc(i)%bcp.
  - The mask of the i-th bc_t object contained in bc_bc_list is accessed with bc_bc_list%bc(i)%bcp%msk. It contains the linear indices of each GLL point on the i-th boundary facets. msk(0) contains the size of the array, and the first boundary index is msk(1).
  - If which_solver = "fluid", it will contain the 4 bc_t objects corresponding to d_vel_u, d_vel_v, d_vel_w, and d_pres. They can be retrieved in that order, in the same way as for field_bc_list.
  - If which_solver = "scalar", it will only contain the 1 bc_t object corresponding to d_s.
- coef is a coef_t object containing various numerical parameters and variables, such as the polynomial order lx, derivatives, facet normals...
- t, tstep are the current simulation time and time step.
- which_solver takes the value "fluid" when the user function is called in the fluid solver, and "scalar" when it is called in the scalar solver.

Links to the documentation to learn more about what the types mentioned above contain and how to use them: src/field/field.f90, src/bc/bc.f90, src/sem/coef.f90.
The user function should be registered in user_setup with the following line:
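Assuming the user subroutine is named dirichlet_update and the pointer component user_dirichlet_update, as in src/common/user_intf.f90:

```fortran
u%user_dirichlet_update => dirichlet_update
```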
A very simple example illustrating the above is shown below, taken from the cyl_boundary_layer example. This example applies constant Dirichlet values at the selected boundaries for the velocity components and pressure. The scalar is applied a function s(y,z) = sin(y)*sin(z) to demonstrate the usage of boundary masks.
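A sketch of such a routine; the argument types are inferred from the descriptions above, and the constant values are illustrative, so check the cyl_boundary_layer example for the exact code:

```fortran
subroutine dirichlet_update(field_bc_list, bc_bc_list, coef, t, tstep, which_solver)
  type(field_list_t), intent(inout) :: field_bc_list
  type(bc_list_t), intent(inout) :: bc_bc_list
  type(coef_t), intent(inout) :: coef
  real(kind=rp), intent(in) :: t
  integer, intent(in) :: tstep
  character(len=*), intent(in) :: which_solver

  type(field_t), pointer :: u, v, w, p, s
  integer :: i, m, k

  ! Only apply the (time independent) values at the first time step
  if (tstep .ne. 1) return

  if (trim(which_solver) .eq. "fluid") then
     u => field_bc_list%items(1)%ptr
     v => field_bc_list%items(2)%ptr
     w => field_bc_list%items(3)%ptr
     p => field_bc_list%items(4)%ptr

     ! Constant Dirichlet values, using the overloaded assignment in field_t
     if (allocated(u%x)) u = 1.0_rp
     if (allocated(v%x)) v = 0.0_rp
     if (allocated(w%x)) w = 0.0_rp
     if (allocated(p%x)) p = 0.0_rp

  else if (trim(which_solver) .eq. "scalar") then
     s => field_bc_list%items(1)%ptr

     ! Use the boundary mask of the d_s bc to set s(y,z) = sin(y)*sin(z)
     m = bc_bc_list%bc(1)%bcp%msk(0)
     do i = 1, m
        k = bc_bc_list%bc(1)%bcp%msk(i)
        s%x(k, 1, 1, 1) = sin(s%dof%y(k, 1, 1, 1)) * sin(s%dof%z(k, 1, 1, 1))
     end do
  end if
end subroutine dirichlet_update
```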
Note: an assignment such as u = 1.0_rp is only possible because of the overloading of the assignment operator = in field_t. In general, a field's array should be accessed and modified with u%x.

Note that we are only applying our boundary values at the first timestep, which is done simply with the line if (tstep .ne. 1) return. This is a trick that can be used for time-independent boundary profiles that require some kind of time-consuming operation, like interpolation or reading from a file, which would add overhead if executed at every time step.

Observe that we always check if the fields are allocated before manipulating them. This is to prevent accidental memory access if only part of the velocity components or pressure are given in case.fluid.boundary_types. Fields in the lists are only allocated if they are present in the case file. For example, if we removed the d_pres condition in the JSON case file code snippet above, the pressure field for our boundary condition would not be allocated (in the example above, allocated(p%x) would never be true). Note that "boundary_types": ["d_vel_u", "d_vel_v"] will allocate the first two fields in field_bc_list, which is the same behaviour as "boundary_types": ["d_vel_u/d_vel_v", ""].
Note: do not forget to call device_memcpy to make sure the device arrays are also updated.

When running on GPUs, special care must be taken when using certain user functions. The short explanation is that the device (GPU) has its own memory and cannot directly access the memory on the host (CPU). This means that data, and more specifically arrays, must be copied manually from the host to the device (see device::device_memcpy).

Note: in some cases, device_memcpy is avoidable. Neko has some device math functions implemented that operate directly on device arrays. If you can decompose whatever operations you are performing in a user function into a set of instructions from the math module (e.g. cadd, cfill, sub2, ...), you may use the corresponding device_math functions to offload work to the GPU. See the fluid forcing code snippet for a simple example. For more advanced examples, see the rayleigh-benard example or the tgv example.

To illustrate this, let us have a look at the fluid initial condition code snippet:
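A sketch of that initial condition routine (the rotating velocity field matches the description below; exact constants may differ from the actual example):

```fortran
subroutine set_velocity(u, v, w, p, params)
  type(field_t), intent(inout) :: u, v, w, p
  type(json_file), intent(inout) :: params
  integer :: i
  real(kind=rp) :: x, y

  ! Part 1: set the velocity on the host (CPU) arrays
  do i = 1, u%dof%size()
     x = u%dof%x(i, 1, 1, 1)
     y = u%dof%y(i, 1, 1, 1)

     u%x(i, 1, 1, 1) = -y * pi
     v%x(i, 1, 1, 1) = x * pi
     w%x(i, 1, 1, 1) = 0.0_rp
  end do

  ! Part 2: copy the host arrays to the device (GPU) arrays
  if (NEKO_BCKND_DEVICE .eq. 1) then
     call device_memcpy(u%x, u%x_d, u%dof%size(), HOST_TO_DEVICE, sync = .false.)
     call device_memcpy(v%x, v%x_d, v%dof%size(), HOST_TO_DEVICE, sync = .false.)
     call device_memcpy(w%x, w%x_d, w%dof%size(), HOST_TO_DEVICE, sync = .true.)
  end if
end subroutine set_velocity
```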
The code above is used to set the fluid initial condition, by specifying the values of fields u,v,w (and p) at all points in the domain. Notice that we have divided the above code into two parts.

In the first part, we set the velocity components u = -y*pi, v = x*pi, and w = 0, which updates the velocity field arrays u%x, v%x, w%x allocated on the host (CPU). If we were to run on GPUs, these lines of code would only act on the velocity arrays on the host (CPU), leaving the device (GPU) arrays untouched.
We take care of this in the second part, for all three velocity arrays. To update the device (GPU) arrays, we use device_memcpy to copy the data contained in a host (CPU) array to a device (GPU) array. Looking at the details of the device_memcpy calls, we note the following:

- The device array is passed by appending the suffix _d to the host array variable name (e.g. u%x and u%x_d). This is the standard in Neko.
- The direction of the transfer is indicated by the flag HOST_TO_DEVICE. Other flags can also be used to move data from device to host (DEVICE_TO_HOST) or device to device (DEVICE_TO_DEVICE). See the accelerators page for more details on this.
- The sync argument is a non-optional argument which dictates whether or not to perform the data transfer synchronously. Use sync = .true. as a starting point.

Finally, observe that we use the flag NEKO_BCKND_DEVICE to check if we are indeed running on GPUs. In that case, NEKO_BCKND_DEVICE would be equal to 1.
Neko uses the concept of a registry as a practical way to retrieve fields and point zones anywhere in the user file.

The field registry neko_field_registry is often used in user functions where certain fields are not directly accessible as arguments. One can retrieve any field in the registry by its name with neko_field_registry%get_field(name). Default fields that are added to the registry are u, v, w, p, and s if running with the scalar enabled. For a practical example of usage, see the rayleigh-benard example.
Other fields may be added to the registry by various simulation components. For example:

- With simulation_components.vorticity enabled, the fields omega_x, omega_y, omega_z will be accessible in the registry.
- With simulation_components.lambda2 enabled, the field lambda2 will be accessible in the registry.

Users can also add their own fields to the registry with neko_field_registry%add_field (see field_registry::field_add).

The point zone registry, neko_point_zone_registry, can be used to retrieve pointers to point_zone_t objects defined in the case file. See using point zones for detailed instructions.
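A minimal usage sketch of the field registry inside any user function:

```fortran
type(field_t), pointer :: u, om_x

! Default field, always present in the registry
u => neko_field_registry%get_field("u")

! Present when simulation_components.vorticity is enabled
om_x => neko_field_registry%get_field("omega_x")
```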