Nalu Post-processing Utilities

Introduce post-processing utility infrastructure and add a utility to calculate
ABL statistics as an example.
sayerhs committed Nov 10, 2017
1 parent 288f8c0 commit 09cfa24
Showing 15 changed files with 861 additions and 3 deletions.
25 changes: 25 additions & 0 deletions doc/manual/source/user/files/nalu_postprocess.yaml
@@ -0,0 +1,25 @@
# Example input file for Nalu Post-processing utility

nalu_postprocess:

  # Name of the solution results or restart database
  input_db: rst/precursor.e

  # List of post-processing tasks to be performed
  tasks:
    - abl_statistics

  # Input parameters for the post-processing tasks
  abl_statistics:
    fluid_parts:
      - Unspecified-2-HEX

    field_map:
      velocity: velocity_raone
      temperature: temperature_raone
      sfs_stress: sfs_stress_raone

    height_info:
      min_height: 0.0
      max_height: 1000.0
      delta_height: 10.0
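
The ``height_info`` block defines a uniform vertical sampling grid. As a quick, self-contained illustration (the ``main`` driver and hard-coded values simply restate the three entries above), this is how the number of levels and their coordinates follow from ``min_height``, ``max_height``, and ``delta_height``, mirroring the ``ABLStatistics::load`` logic added later in this commit:

    // Standalone sketch: derive the sampling heights from the height_info
    // entries shown in the example input file above.
    #include <iostream>
    #include <vector>

    int main()
    {
        const double zmin = 0.0;    // min_height
        const double zmax = 1000.0; // max_height
        const double dz   = 10.0;   // delta_height

        // Same formula as ABLStatistics::load: one level per dz, inclusive of both ends
        const int nheights = static_cast<int>((zmax - zmin) / dz) + 1; // 101 levels

        std::vector<double> heights(nheights);
        for (int i = 0; i < nheights; ++i)
            heights[i] = zmin + i * dz; // 0, 10, 20, ..., 1000

        std::cout << nheights << " sampling heights from " << heights.front()
                  << " m to " << heights.back() << " m" << std::endl;
        return 0;
    }
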
60 changes: 60 additions & 0 deletions doc/manual/source/user/nalu_postprocess.rst
@@ -0,0 +1,60 @@
.. _util_nalu_postprocess_exe:

``nalu_postprocess`` -- Nalu Post-processing Utilities
======================================================

This utility loads an Exodus-II solution file and performs various
post-processing *tasks* on the database. Currently, the following *tasks* have
been implemented within this utility.

==================== ===================================================
Task type Description
==================== ===================================================
``abl_statistics`` Calculate various ABL statistics of interest
==================== ===================================================

The input file (:download:`download <files/nalu_postprocess.yaml>`) must contain
a **nalu_postprocess** section as shown below. Input options for the various *tasks*
are provided as sub-sections within **nalu_postprocess**, keyed by the corresponding
task names listed under ``tasks``.

.. literalinclude:: files/nalu_postprocess.yaml
   :language: yaml

Command line invocation
-----------------------

.. code-block:: bash

   mpiexec -np <N> nalu_postprocess -i [YAML_INPUT_FILE]

.. program:: nalu_postprocess

.. option:: -i, --input-file

   Name of the YAML input file to be used. Default: ``nalu_postprocess.yaml``.

Common input file options
-------------------------

.. confval:: input_db

   Path to an existing Exodus-II mesh database file, e.g., ``ablPrecursor.e``.

.. confval:: tasks

   A list of task names that define the various post-processing tasks that will
   be performed on the input mesh database by this utility. The program expects
   to find an additional section for each task name that provides the inputs for
   that task.


``abl_statistics``
------------------

This task computes various statistics relevant to ABL simulations and
outputs vertical profiles of several quantities of interest.

.. literalinclude:: files/nalu_postprocess.yaml
   :language: yaml
   :lines: 12-25
1 change: 1 addition & 0 deletions src/CMakeLists.txt
@@ -13,6 +13,7 @@

add_subdirectory(core)
add_subdirectory(preprocessing)
add_subdirectory(postprocessing)
add_subdirectory(mesh)

if (ENABLE_WRFTONALU)
5 changes: 3 additions & 2 deletions src/core/CFDMesh.cpp
@@ -48,12 +48,12 @@ CFDMesh::CFDMesh
stkio_(comm)
{}

void CFDMesh::init()
void CFDMesh::init(stk::io::DatabasePurpose db_purpose)
{
if (input_db_ == "")
throw std::runtime_error("CFDMesh::init called on empty database file");

stkio_.add_mesh_database(input_db_, stk::io::READ_MESH);
stkio_.add_mesh_database(input_db_, db_purpose);
stkio_.set_bulk_data(bulk_);
stkio_.create_input_mesh();
stkio_.add_all_mesh_fields_as_input_fields();
@@ -76,6 +76,7 @@ void CFDMesh::write_database
{
size_t fh = stkio_.create_output_mesh(output_db, stk::io::WRITE_RESULTS);
for (auto fname: output_fields_ ) {
std::cerr << fname << std::endl;
stk::mesh::FieldBase* fld = stk::mesh::get_field_by_name(fname, meta_);
if (fld != NULL) {
stkio_.add_field(fh, *fld, fname);
2 changes: 1 addition & 1 deletion src/core/CFDMesh.h
@@ -69,7 +69,7 @@ class CFDMesh
* If an input DB is provided, the mesh is read from the file. The MetaData
* is committed and the BulkData is ready for use/manipulation.
*/
void init();
void init(stk::io::DatabasePurpose db_purpose = stk::io::READ_MESH);

//! Reference to the MPI communicator object
inline stk::ParallelMachine& comm() { return comm_; }
4 changes: 4 additions & 0 deletions src/core/YamlUtils.h
@@ -16,6 +16,8 @@
/** \file
* Miscellaneous utilities for working with YAML C++ library
*/
#ifndef YAMLUTILS_H
#define YAMLUTILS_H

#include "yaml-cpp/yaml.h"

@@ -69,3 +71,5 @@ bool get_optional(const YAML::Node& node, const std::string& key, T& result, con
} // wind_utils
} // nalu
} // sierra

#endif /* YAMLUTILS_H */
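The hunk above shows only the tail of the templated ``get_optional`` helper, so its full signature is not visible here. As a rough, self-contained sketch of the pattern such an include-guarded YAML helper follows (the simplified signature and guard name below are assumptions for illustration, not the actual wind-utils API):

    #ifndef EXAMPLE_YAMLUTILS_H
    #define EXAMPLE_YAMLUTILS_H

    #include "yaml-cpp/yaml.h"
    #include <string>

    // Assign node[key] to result only when the key exists; report whether it did.
    // Signature simplified for illustration; the real helper takes more arguments.
    template <typename T>
    bool get_optional(const YAML::Node& node, const std::string& key, T& result)
    {
        if (node[key]) {
            result = node[key].as<T>();
            return true;
        }
        return false;
    }

    #endif /* EXAMPLE_YAMLUTILS_H */

With a helper of this shape, an optional input such as ``delta_height`` could keep its default when the key is absent, e.g. ``double dz = 10.0; get_optional(htnode, "delta_height", dz);``.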
246 changes: 246 additions & 0 deletions src/postprocessing/ABLStatistics.cpp
@@ -0,0 +1,246 @@
// Copyright 2016 National Renewable Energy Laboratory
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
//

#include "ABLStatistics.h"

#include "stk_util/parallel/ParallelReduce.hpp"

#include <cmath>
#include <fstream>

namespace sierra {
namespace nalu {

REGISTER_DERIVED_CLASS(PostProcessingTask, ABLStatistics, "abl_statistics");

ABLStatistics::ABLStatistics
(
CFDMesh& mesh,
const YAML::Node& node
) : PostProcessingTask(mesh),
meta_(mesh_.meta()),
bulk_(mesh_.bulk()),
fluid_parts_(0),
ndim_(meta_.spatial_dimension())
{
if (ndim_ != 3)
throw std::runtime_error("ABLStatistics is only supported for 3-D meshes");

// Load some default field names
field_map_["velocity"] = "velocity";
field_map_["temperature"] = "temperature";
field_map_["sfs_stress"] = "sfs_stress";
field_map_["temperature_resolved_stress"] = "temperature_resolved_stress";
field_map_["temperature_variance"] = "temperature_variance";
load(node);
}

void ABLStatistics::load(const YAML::Node& node)
{
bool dowrite = (bulk_.parallel_rank() == 0);
auto fluid_partnames = node["fluid_parts"].as<std::vector<std::string>>();

fluid_parts_.resize(fluid_partnames.size());
size_t ii = 0;
for (auto name: fluid_partnames) {
stk::mesh::Part* part = meta_.get_part(name);
if (nullptr == part) {
throw std::runtime_error("ABLStatistics:: Missing fluid part in mesh database: " +
name);
} else {
fluid_parts_[ii++] = part;
}
}

auto& field_name_map = node["field_map"];
for (auto it: field_name_map)
field_map_[it.first.as<std::string>()] = it.second.as<std::string>();

// Implement constant spacing only for now
auto& htnode = node["height_info"];
zmin_ = htnode["min_height"].as<double>();
zmax_ = htnode["max_height"].as<double>();
dz_ = htnode["delta_height"].as<double>();

nheights_ = static_cast<int>((zmax_ - zmin_) / dz_) + 1;
heights_.resize(nheights_);
for (int i=0; i < nheights_; i++)
heights_[i] = zmin_ + i * dz_;

node_counters_.resize(nheights_);
velMean_.resize(nheights_ * ndim_);
tempMean_.resize(nheights_);
sfsMean_.resize(nheights_ * ndim_ * 2);

if (dowrite) {
std::cerr << "ABLStatistics:: Will process data at " << nheights_ << " heights." << std::endl;
}
}

void
ABLStatistics::initialize()
{
auto& velocity = meta_.declare_field<VectorFieldType>(
stk::topology::NODE_RANK, field_map_["velocity"]);
auto& temperature = meta_.declare_field<ScalarFieldType>(
stk::topology::NODE_RANK, field_map_["temperature"]);
stk::mesh::FieldBase* sfs_stress = &meta_.declare_field<stk::mesh::Field<double, stk::mesh::SimpleArrayTag>>(
stk::topology::NODE_RANK, field_map_["sfs_stress"]);

for (auto* part: fluid_parts_) {
stk::mesh::put_field(velocity, *part, ndim_);
stk::mesh::put_field(temperature, *part, 1);
stk::mesh::put_field(*sfs_stress, *part, ndim_*2);
}
}

void
ABLStatistics::populate_solution()
{
bool dowrite = (bulk_.parallel_rank() == 0);
auto num_steps = mesh_.stkio().get_num_time_steps();
auto times = mesh_.stkio().get_time_steps();

auto final_time = times[num_steps - 1];
std::vector<stk::io::MeshField> missing_fields;
auto found_time = mesh_.stkio().read_defined_input_fields(final_time, &missing_fields);

if (missing_fields.size() > 0) {
if (dowrite) {
std::cout << "Missing fields in the solution file: " << std::endl;
for (size_t i=0; i < missing_fields.size(); i++)
std::cout << " -" << missing_fields[i].field()->name() << std::endl;
}
throw std::runtime_error("ABLStatistics:: missing required fields in database");
}

if (dowrite)
std::cout << "ABLStatistics:: Found " << num_steps << " time steps in database. Using time = "
<< found_time << " to calculate ABL statistics" << std::endl;
}

void
ABLStatistics::average_planes()
{
const stk::mesh::Selector sel = stk::mesh::selectUnion(fluid_parts_);
auto bkts = bulk_.get_buckets(stk::topology::NODE_RANK, sel);
const VectorFieldType* coords = meta_.get_field<VectorFieldType>(
stk::topology::NODE_RANK, "coordinates");

const VectorFieldType* velocity = meta_.get_field<VectorFieldType>(
stk::topology::NODE_RANK, field_map_["velocity"]);
const ScalarFieldType* temperature = meta_.get_field<ScalarFieldType>(
stk::topology::NODE_RANK, field_map_["temperature"]);
const stk::mesh::FieldBase* sfs_stress = meta_.get_field(
stk::topology::NODE_RANK, field_map_["sfs_stress"]);

// Initialize mean arrays to zero before we start accumulation
for (int ih=0; ih < nheights_; ih++) {
tempMean_[ih] = 0.0;

for (int d=0; d < ndim_; d++) {
velMean_[ih * ndim_ + d] = 0.0;
}
}

// Process node buckets in this MPI rank and sum up velocities on a plane
for (auto b: bkts) {
for (size_t in =0; in < b->size(); in++) {
auto node = (*b)[in];

// Determine the index from the z-coordinate. Here we are assuming
// constant z-spacing as well as flat terrain.
double* crd = stk::mesh::field_data(*coords, node);
int ih = static_cast<int>(std::floor(((crd[2] - zmin_) + 1.0e-10) / dz_));
node_counters_[ih] += 1;

// Velocity calculations
double* vel_vec = stk::mesh::field_data(*velocity, node);
for (int d=0; d < ndim_; d++)
velMean_[ih * ndim_ + d] += vel_vec[d];

// temperature calculations
double* temp = stk::mesh::field_data(*temperature, node);
tempMean_[ih] += *temp;

// TODO: Add SFS stress calculations here
}
}

// Perform global sum and average
std::vector<double> gVelMean(nheights_ * ndim_, 0.0);
std::vector<double> gTempMean(nheights_, 0.0);
std::vector<int> gNodeCtr(nheights_, 0);

stk::all_reduce_sum(bulk_.parallel(), velMean_.data(), gVelMean.data(), nheights_ * ndim_);
stk::all_reduce_sum(bulk_.parallel(), tempMean_.data(), gTempMean.data(), nheights_);
stk::all_reduce_sum(bulk_.parallel(), node_counters_.data(), gNodeCtr.data(), nheights_);

// Reassign averages to member data structures
for (int ih=0; ih < nheights_; ih++) {
tempMean_[ih] = gTempMean[ih] / gNodeCtr[ih];

for (int d=0; d < ndim_; d++) {
velMean_[ih * ndim_ + d] = gVelMean[ih * ndim_ + d] / gNodeCtr[ih];
}
}
}

void
ABLStatistics::output_averages()
{
// Only output data files on MPI master rank
if (bulk_.parallel_rank() != 0) return;

std::ofstream velfile;
std::ofstream tempfile;

// TODO: Provide user defined output filenames
velfile.open("abl_velocity_stats.dat", std::ofstream::out);
tempfile.open("abl_temperature_stats.dat", std::ofstream::out);

velfile << "# Height, Ux, Uy, Uz" << std::endl;
tempfile << "# Height, T" << std::endl;

// TODO: Provide precision changes
for (int ih=0; ih < nheights_; ih++) {
tempfile << heights_[ih] << " " << tempMean_[ih] << std::endl;

velfile << heights_[ih];
for (int d=0; d < ndim_; d++)
velfile << " " << velMean_[ih * ndim_ + d];
velfile << std::endl;
}
velfile.close();
tempfile.close();

std::cout << "ABLStatistics:: Finished writing statistics files" << std::endl;
}

void
ABLStatistics::run()
{
// Load the solution from the database
populate_solution();

// Process nodes on mesh and compute spatial averages for the fields
average_planes();

// Output to text files
output_averages();
}

} // nalu
} // sierra
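
The ``REGISTER_DERIVED_CLASS`` call near the top of ``ABLStatistics.cpp`` is what hooks the task into the post-processing infrastructure introduced by this commit: the string ``"abl_statistics"`` is the name users list under ``tasks`` in the YAML input file. A hedged sketch of how another task might plug into the same pattern; the header paths, the base-class virtual interface, and the ``ExampleTask`` class are assumptions, and only the constructor shape and the registration macro mirror what appears in ``ABLStatistics.cpp`` above:

    // Illustrative only: header names and the base-class interface are assumed.
    #include "PostProcessingTask.h"   // assumed header for the task base class
    #include "core/CFDMesh.h"
    #include "core/YamlUtils.h"

    namespace sierra {
    namespace nalu {

    class ExampleTask : public PostProcessingTask
    {
    public:
        // Same constructor shape as ABLStatistics: the CFD mesh plus the YAML
        // sub-node keyed by the task name from the input file.
        ExampleTask(CFDMesh& mesh, const YAML::Node& node)
            : PostProcessingTask(mesh)
        {
            // Parse task-specific options from `node` here.
        }

        // Declare and register any fields this task needs on the mesh
        // (assumed to be virtual in the base class, as in ABLStatistics).
        virtual void initialize() {}

        // Read the solution, compute quantities of interest, write output.
        virtual void run() {}
    };

    // Register under the name users would list in the `tasks` section.
    REGISTER_DERIVED_CLASS(PostProcessingTask, ExampleTask, "example_task");

    } // nalu
    } // sierra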
