A different version of the existing $S^3$ algorithm, a mesh sampling approach for the efficient processing of big simulation data (the original version is listed in the references below).
When applying $S^3$ to a CFD dataset, the result may look as follows: the upper row shows the original grid and pressure field at an arbitrary time step, while the lower row presents the grid generated by $S^3$ together with the pressure field interpolated onto it.
To get started with $S^3$, the following example notebooks are provided:
# | topic | notebook |
---|---|---|
1 | 2D flow past a cylinder | view |
2 | ONERA OAT15A at high speed stall conditions | view |
3 | How to select the best settings and advanced options | view |
4 | Loading existing s_cube objects and export options | view |
To view all the available workflows, as well as further usage details, refer to the documentation (see below for how to build it).
Note: there are some issues which may arise when visualizing the results of $S^3$ (e.g., in ParaView).
For executing $S^3$, it is recommended to create a virtual environment and install all dependencies:
# install venv
sudo apt update && sudo apt install python3.12-venv
# clone the S^3 repository
git clone https://github.com/JanisGeise/sparseSpatialSampling.git
# create a virtual environment inside the repository
python3.12 -m venv s_cube_venv
# activate the environment and install all dependencies
source s_cube_venv/bin/activate
pip install --upgrade pip
pip install -r requirements.txt
# once everything is installed, leave the environment
deactivate
To check if the installation was successful, activate the virtual environment, start a Python interpreter, and type s_cube.__version__ (this should display the current version).
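A minimal sketch of this check is shown below; the alias s_cube is an assumption, the package itself is installed under the name sparseSpatialSampling (see the import in the SVD example further below):

```python
# minimal sketch of the version check described above; the alias "s_cube" is an assumption
import sparseSpatialSampling as s_cube

print(s_cube.__version__)
```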
For executing the example scripts in examples/, the CFD data must be provided. Further, the paths to the data as well as the setup need to be adjusted accordingly. A script can then be executed as follows:
# start the virtual environment
source s_cube_venv/bin/activate
# add the path to the repository
. source_path
# execute a script
cd examples/
python3 s3_for_cylinder2D.py
The setup for executing $S^3$ on an HPC cluster via SLURM may look as follows:
#!/bin/bash
#SBATCH --nodes=1
#SBATCH --ntasks-per-node=72
#SBATCH --time=08:00:00
#SBATCH --job-name=s_cube
# load python
module load release/24.04 GCCcore/13.3.0
module load Python/3.12.3
# activate venv
source s_cube_venv/bin/activate
# add the path to s_cube
. source_path
# path to the python script
cd examples/
python3 cylinder3D_Re3900.py &> "log.main"
An example jobscript for the Barnard HPC of TU Dresden is provided.
Once the grid is generated and a field is interpolated, an SVD of this field can be computed:
from sparseSpatialSampling.utils import write_svd_s_cube_to_file
# compute SVD on grid generated by S^3 and export the results to HDF5 & XDMF
write_svd_s_cube_to_file(field_names, save_path, save_name, new_file, n_modes, rank)
The HDF5 file will contain the following quantities:
- cell area (2D) / cell volume (3D)
- one field for each exported mode
- all singular values
- all temporal mode coefficients
The singular values and mode coefficients are not referenced in the XDMF file since they don't match the size of the field and can therefore not be visualized in ParaView. Before performing the SVD, the fields are weighted with the square root of the cell areas (volumes) to improve the accuracy and comparability. This weighting is accounted for prior to exporting the results of the SVD; however, it needs to be kept in mind when comparing the results to SVDs computed on the original (potentially unweighted) data.
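The following sketch illustrates the weighting idea for a single scalar field; it is not the library's internal implementation, and the tensor sizes are made up for illustration:

```python
# sketch of the cell-area weighting prior to an SVD (illustration only, not library code)
import torch

n_cells, n_snapshots = 1000, 50               # made-up sizes
snapshots = torch.rand(n_cells, n_snapshots)  # snapshots of a scalar field
areas = torch.rand(n_cells)                   # cell areas (2D) or cell volumes (3D)

# weight each cell with the square root of its area before computing the SVD
weighted = snapshots * areas.sqrt().unsqueeze(-1)
U, s, Vh = torch.linalg.svd(weighted, full_matrices=False)

# remove the weighting from the spatial modes before comparing them to an SVD of unweighted data
modes = U / areas.sqrt().unsqueeze(-1)
```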
The Datawriter class provides a common interface for exporting the results of $S^3$:
- Once the grid is generated, the original fields from the CFD simulation can be interpolated onto this grid using the ExportData class. Each field that should be interpolated has to be provided as a tensor of the size [n_cells, n_dimensions, n_snapshots] (see the sketch after this list):
  - a scalar field has to be of the size [n_cells, 1, n_snapshots]
  - a vector field has to be of the size [n_cells, n_entries, n_snapshots]
- The snapshots can be passed to the export method either all at once, in batches, or each snapshot separately, depending on the size of the dataset and the available RAM.
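The following sketch shows how snapshots could be arranged into the expected shapes; loading the CFD data itself is solver-specific and not part of this example, and the sizes are made up for illustration:

```python
# sketch of the expected tensor shapes (illustration only)
import torch

n_cells, n_snapshots = 1000, 20   # made-up sizes

# scalar field, e.g. pressure: [n_cells, 1, n_snapshots]
p = torch.stack([torch.rand(n_cells) for _ in range(n_snapshots)], dim=-1).unsqueeze(1)

# vector field, e.g. a 3D velocity field: [n_cells, 3, n_snapshots]
u = torch.stack([torch.rand(n_cells, 3) for _ in range(n_snapshots)], dim=-1)

print(p.shape)   # torch.Size([1000, 1, 20])
print(u.shape)   # torch.Size([1000, 3, 20])
```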
The grid generation currently requires storing the complete tree structure when creating the mesh, which is a bottleneck for large meshes (see the known issues section). A solution for this issue will be implemented in future versions of $S^3$.
For the interpolation and export of fields, the RAM needs to be large enough to hold at least the following (a rough size estimate is sketched after this list):
- a single snapshot of the original grid
- the original grid
- the interpolated grid (size depends on the specified target metric)
- the levels of the interpolated grid (size depends on the specified target metric)
- a snapshot of the interpolated field (size depends on the specified target metric)
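As a rough orientation, the following back-of-the-envelope estimate shows the size of a single snapshot of a 3D vector field in double precision; the cell count is made up for illustration:

```python
# rough size estimate for a single snapshot of a 3D vector field (double precision)
n_cells = 150_000_000    # made-up cell count of the original grid
n_components = 3         # e.g. a velocity field
bytes_per_value = 8      # float64

size_gb = n_cells * n_components * bytes_per_value / 1024**3
print(f"single snapshot: ~{size_gb:.1f} GB")   # ~3.4 GB
```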
To build the documentation, sphinx has to be installed:
pip3 install sphinx sphinx_rtd_theme nbsphinx recommonmark
You can then build the documentation by running:
cd docs/
make html
To perform unit tests, pytest
has to be installed via
pip install pytest
To perform all available unit tests, execute
pytest sparseSpatialSampling/tests/
To execute a specific unit test, e.g., for the Dataloader
module, execute
pytest sparseSpatialSampling/tests/test_s_cube_dataloader.py
Note that these commands have to be executed from the top-level of the repository.
If you have any questions or something is not working as expected, feel free to open a new issue. Some known issues are listed below.
- When dealing with a large number of cells that have to be created, $S^3$ will eventually run out of memory. This issue occurs in the order of $N_{cells} \approx 1.5 \times 10^8$ cells, or sooner depending on the available memory. It is caused by $S^3$ currently holding the complete sampling tree of the mesh in memory; a solution for this issue will be implemented in future versions of $S^3$.
As a work-around for now, the number of mesh cells to generate can be limited by the argument n_cells_max, e.g.:
s_cube = SparseSpatialSampling(vertices, metric, [domain, geometry], save_dir, save_name,
                               write_times=write_times, n_jobs=126, n_cells_max=1e8)
In general, if values for both min_metric and n_cells_max are provided, the value for min_metric will be ignored.
- STL files with many points lead to a significant increase in the runtime of $S^3$. To counter this problem, the GeometrySTL3D geometry object provides a compression functionality via the keyword reduce_by, which decreases the number of points within the STL file. Since the final grid will only be an approximation of the geometry, this factor can be set very high (depending on the STL file); values of reduce_by=0.9 ... 0.98 were tested successfully (0 means no compression). A usage sketch is given below this list.
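Only the keyword reduce_by is documented above; the import path and the remaining constructor arguments (a name and the path to the STL file) are assumptions and may differ from the actual signature of GeometrySTL3D:

```python
# sketch only: apart from reduce_by, the import path and constructor arguments are assumptions
from sparseSpatialSampling.geometry import GeometrySTL3D

# reduce the number of points in the STL file by 90 % before using it for the grid generation
geometry = GeometrySTL3D("aircraft", "path/to/geometry.stl", reduce_by=0.9)
```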
- The existing version of the $S^3$ algorithm can be found in: D. Fernex, A. Weiner, B. R. Noack and R. Semaan. Sparse Spatial Sampling: A mesh sampling algorithm for efficient processing of big simulation data, DOI: https://doi.org/10.2514/6.2021-1484 (January 2021).
- Idea & 1D implementation of the current version taken from Andre Weiner
- the flow_data repository containing the implementation of the cylinder2D_Re100 and cylinder3D_Re3900 test cases
- The data for the ONERA OAT15A was kindly provided by research partners at TU Stuttgart; the numerical setup can be found in: J. Kleinert, M. Ehrle, A. Waldmann and T. Lutz. Wake tail plane interactions for a tandem wing configuration in high-speed stall conditions, DOI: https://link.springer.com/article/10.1007/s13272-023-00670-1 (June 2023).