Releases: parflow/parflow

ParFlow Version 3.11.0

10b8ecc

ParFlow Release Notes 3.11.0


This release contains several bug fixes and minor feature updates.

ParFlow development and bug-fixes would not be possible without contributions of the ParFlow community. Thank you for all the great contributions.

Overview of Changes

  • Improved reading of PFB file in Python PFTools
  • ParFlow Documentation Update
  • PDF User Manual removed from the repository
  • Initialization of evap_trans vector has been moved
  • CUDA fixes
  • OASIS array fix

User Visible Changes

Improved reading of PFB file in Python PFTools

Subgrid header information is read directly from the file to enable reading of files with edge data like the velocity files.

Fixes cases where PFB files with different z-dimension shapes could not be merged in xarray. Notably this happened for surface parameters, which have shape (1, ny, nx) and should be represented in xarray with the z dimension squeezed out. This squeezing now happens transparently in xarray. Loading files with the standard read_pfb or read_pfb_sequence will not auto-squeeze dimensions.

Performance of reading should be improved by using memory-mapped file access, and only the first subgrid header is read when loading a sequence of PFB files. Parallelism in reads should also be better.

The ability to give keys to the pf.read_pfb function for subsetting was added.
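The subgrid-header parsing described above can be sketched with only the standard library. This is an illustrative stand-alone reader of the commonly documented PFB header layout, not the actual Python PFTools implementation:

```python
import struct

# PFB files are big-endian. The main header is:
#   3 doubles (X, Y, Z origin), 3 ints (NX, NY, NZ),
#   3 doubles (DX, DY, DZ), and 1 int (number of subgrids).
MAIN_HEADER = struct.Struct(">3d3i3di")
# Each subgrid header is 9 ints: ix, iy, iz, nx, ny, nz, rx, ry, rz,
# followed by nx*ny*nz big-endian doubles of data.
SUBGRID_HEADER = struct.Struct(">9i")

def parse_main_header(buf):
    """Unpack the 64-byte PFB main header from a bytes buffer."""
    x, y, z, nx, ny, nz, dx, dy, dz, n_subgrids = MAIN_HEADER.unpack_from(buf, 0)
    return {"origin": (x, y, z), "shape": (nx, ny, nz),
            "spacing": (dx, dy, dz), "n_subgrids": n_subgrids}

# Round-trip a synthetic header to show the layout.
raw = MAIN_HEADER.pack(0.0, 0.0, 0.0, 10, 10, 1, 1.0, 1.0, 1.0, 1)
hdr = parse_main_header(raw)

raw_sub = SUBGRID_HEADER.pack(0, 0, 0, 10, 10, 1, 0, 0, 0)
ix, iy, iz, nx, ny, nz, rx, ry, rz = SUBGRID_HEADER.unpack(raw_sub)
```

Reading each subgrid header directly (rather than assuming a uniform layout) is what makes files with edge data, such as velocity files, readable.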

ParFlow Documentation Update

The User Manual is being transitioned from the previous LaTeX manual to ReadTheDocs. This release includes a first pass at converting the ParFlow LaTeX manual to the ReadTheDocs format. The new documentation contains selected sections from the ParFlow LaTeX manual along with Kitware's introduction to Python PFTools and the resulting tutorials. New sections document the Python PFTools Hydrology module and the Data Accessor class, and the PFB reading/writing tutorial has been updated to use the updated PFTools functions instead of parflowio.

The original LaTeX files remain intact for now, as the documentation conversion is not yet complete. This version of the ReadTheDocs site does not generate the Kitware version of the ParFlow keys documentation; as a longer-term task, it can be re-integrated into the new manual.

PDF User Manual removed from the repository

The PDF of the User Manual that was in the repository has been removed. An online version of the User Manual is available on Read the Docs: ParFlow Users Manual. A PDF version is available at ParFlow Users Manual PDF.

Internal/Developer Changes

Initialization of evap_trans has been moved

The Vector evap_trans was made part of the InstanceXtra structure; initialization is done in SetupRichards() and deallocation in TeardownRichards().

CUDA 11.5 update

Starting from CUB 1.14, the CUB_NS_QUALIFIER macro must be specified. The macro was added in the same way it is defined in CUB (see https://github.com/NVIDIA/cub/blob/94a50bf20cc01f44863a524ba36e089fd80f342e/cub/util_namespace.cuh#L99-L109).

CUDA Linux Repository Key Rotation

Updated the NVIDIA CUDA repository keys due to the key rotation, as documented here.

Bug Fixes

Minor improvements/bugfixes for Python I/O

Fixed a bug in xarray indexing that requires squeezing out multiple dimensions. Lazy loading is now implemented natively with changes to the indexing methods.

OASIS array fix

vshape should be a 1D array instead of a 2D array. Its attributes are specified as [INTEGER, DIMENSION(2*id_var_nodims(1)), IN] in the OASIS3-MCT documentation.

Python pftools version parsing

A minor bugfix was needed in Python pftools for parsing versions.

Known Issues

See https://github.com/parflow/parflow/issues for current bug/issue reports.

ParFlow Version 3.10.0

4064e38

ParFlow Release Notes 3.10.0


ParFlow development and bug-fixes would not be possible without
contributions of the ParFlow community. Thank you for all the great
contributions.

These release notes cover changes made in 3.10.0.

Overview of Changes

  • Python dependency is now 3.6
  • Extend velocity calculations to domain boundary faces and option to output velocity
  • Python PFB reader/writer updated
  • Bug fixes

User Visible Changes

Extend velocity calculations to domain boundary faces and option to output velocity (Core)

Extends the calculation of velocity (Darcy flux) values to the
exterior boundary faces of the domain. Here are the highlights:

Values are calculated for simulations using solver_impes (DirichletBC
or FluxBC) or solver_richards (all BCs).

Velocities work with TFG and variable dz options.

The saturated velocity flux calculation (phase_velocity_face.c) has
been added to the accelerated library.

The description of Solver.PrintVelocities in pf-keys/definitions and
the manual has been augmented with more information.

Velocity has been added as a test criterion to the following tests from
parflow/test/tcl:

  • default_single.tcl
  • default_richards.tcl
  • default_overland.tcl
  • LW_var_dz.tcl

Also fixes incorrect application of FluxBC in saturated pressure
discretization. Previously, positive fluxes assigned on a boundary
flowed into the domain, and negative fluxes flowed out of the domain,
regardless of alignment within the coordinate system. The new method
allows more intuitive flux assignment where positive fluxes move up a
coordinate axis and negative fluxes move down a coordinate axis.
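The new sign convention can be sketched as a small bookkeeping function (illustrative only, not ParFlow code): a positive flux always points along the positive coordinate axis, so it is an inflow on a lower face and an outflow on an upper face:

```python
def boundary_inflow(flux, face):
    """Net inflow into the boundary cell for an axis-aligned face.

    `flux` is positive along the +axis direction; `face` is "lower"
    (e.g. the x- face) or "upper" (e.g. the x+ face).
    Illustrative sign bookkeeping only.
    """
    return flux if face == "lower" else -flux

# A positive flux enters through a lower face and leaves through an upper face.
assert boundary_inflow(2.0, "lower") == 2.0
assert boundary_inflow(2.0, "upper") == -2.0
```

Under the old behavior, a positive flux flowed into the domain on both faces regardless of axis alignment.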

Python version dependency update (Python)

Python 3.6 or greater is now required for building and running ParFlow
if Python is being used.

PFB reader/writer updated (Python)

Added simple and fast pure-Python readers and writers for PFB files.
This eliminates the need for the external ParflowIO dependency. A new
backend for the xarray package lets you open both .pfb and .pfmetadata
files directly into xarray data structures, which are very useful for
data wrangling and scientific analysis.

Basic usage of the new functionality:

import parflow as pf
import xarray as xr

# Read a pfb file as a numpy array:
x = pf.read_pfb('/path/to/file.pfb')

# Read a pfb file as an xarray dataset:
ds = xr.open_dataset('/path/to/file.pfb', name='example')

# Write a pfb file and its distfile (p, q, r give the processor topology):
pf.write_pfb('/path/to/new_file.pfb', x,
             p=p, q=q, r=r, dist=True)

SolidFileBuilder simplification (Python)

Support a simple use case in SolidFileBuilder where all work can be
delegated to pfmask-to-pfsol. Added a generate_asc_files argument
(default False) to SolidFileBuilder.write.

Fixed reading of vegm array (Python)

Fixed indices so that the x index of the vegm_array correctly reflects
the columns and y index reflects the rows. The _read_vegm function in
PFTools was inconsistent with parflow-python xy indexing.
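The row/column convention behind the fix can be illustrated with a generic numpy sketch (not the actual _read_vegm code): a file read row by row has y as its leading index and must be transposed to index as [x, y]:

```python
import numpy as np

# Illustrative only: a file laid out row by row (one row per y value)
# reads into shape (ny, nx); to index it as [x, y] it must be transposed.
ny, nx = 2, 3
rows = np.arange(ny * nx).reshape(ny, nx)   # file order: y is the row index
vegm_array = rows.T                          # now vegm_array[x, y]

# The x index walks the columns of the file, y walks the rows.
assert vegm_array[2, 1] == rows[1, 2]
```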

Python PFTools version updates (Python)

Updated Python PFTools dependency to current version 3.6.

Bug Fixes

Fix errors in LW_Test test case (Examples/Tests)

LW_Test runs successfully and works in parallel.

Increased input database maximum value size from 4097 to 65536 (Core)

The maximum input database value length was increased from 4097
to 65536. A bounds check is performed that emits a helpful error
message when a database value is too big.
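The behavior can be sketched in Python (illustrative only; the actual check lives in ParFlow's C input-database code, and the names here are hypothetical):

```python
# New maximum input database value length (raised from 4097).
MAX_VALUE_LEN = 65536

def check_db_value(key, value):
    """Reject database values that exceed the maximum length.

    Hypothetical helper mirroring the bounds check and its error message.
    """
    if len(value) > MAX_VALUE_LEN:
        raise ValueError(
            f"Input database value for key '{key}' is {len(value)} characters; "
            f"the maximum is {MAX_VALUE_LEN}")
    return value

ok = check_db_value("Geom.domain.Lower.X", "0.0")
```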

Python interface: fixed issue where some keys failed to set unless set in a particular order (Python)

  1. Updated some documentation for contributing to pf-keys
  2. Fixed bugs found in pf-keys where some keys failed to set unless set in a particular order
  3. Added a constraint for lists of names

This change lets us express that one list of names should be a subset
of another list of names.

Constraint example: values for PhaseSources.{phase_name}.GeomNames
should be a subset of the values from either
GeomInput.{geom_input_name}.GeomNames or
GeomInput.{geom_input_name}.GeomName. Setting the domain to EnumDomain
expresses that constraint. A more detailed example can be seen in this
test case.
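The subset constraint itself is simple to illustrate generically (this is not ParFlow's validation code; the names are illustrative):

```python
def undeclared_names(values, allowed):
    """Return names used in `values` but not declared in `allowed`, in order."""
    allowed_set = set(allowed)
    return [v for v in values if v not in allowed_set]

# Names assigned to a key must come from the declared geometry names.
geom_names = ["domain", "background"]
bad = undeclared_names(["domain", "typo_name"], geom_names)
# `bad` lists the offending names a validator would report.
```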

Internal/Developer Changes

Cleaned up dependencies for Python (Python)

Diverting ParFlow output to stream (Core)

Added a new method, for use when ParFlow is embedded in another
application, to control the file stream used for ParFlow logging
messages. In the embedded case, logging is disabled by default unless
redirected by the calling application.

This change was required to meet IDEAS Watersheds best practices.

Add keys and generator for Simput (Python)

Added keys and a generator to allow Simput, and applications based on
Simput, to write inputs for ParFlow with a graphical web interface.

Remove use of MPI_COMM_WORLD (Core)

Enables use of a communicator other than MPI_COMM_WORLD for more
general embedding. This meets the IDEAS Watersheds best-practices policy.

Known Issues

See https://github.com/parflow/parflow/issues for current bug/issue reports.

ParFlow Version 3.9.0

bc80e3a

ParFlow Release Notes 3.9.0


ParFlow improvements and bug-fixes would not be possible without
contributions of the ParFlow community. Thank you for all the great
contributions.

Note: Version 3.9.0 is a minor update to v3.8.0. These release notes cover
changes made in both 3.8.0 and 3.9.0. In 3.9.0, we improved our Spack
support, added a smoke test for running under Spack, and created a release
tag for Spack to download. If you have version 3.8.0 installed, the 3.9.0
update does NOT add any significant features or bug fixes.

Overview of Changes

  • ParFlow Google Group
  • Default I/O mode changed to amps_sequential_io
  • CLM Solar Zenith Angle Calculation
  • Parallel NetCDF dataset compression
  • Kokkos support
  • Output of Van Genuchten variables
  • Python interface updates
  • Update Hypre testing to v2.18.2
  • MPI runner change
  • Python Interface
  • Segmentation fault at end of simulation run with Van Genuchten
  • Memory errors when rank contained no active cells
  • PFMGOctree solver
  • GFortran compilation errors
  • CMake CI fixes
  • CLM initialization bug
  • CMake cleanup
  • Fixed compilation issues in sequential amps layer
  • CI has moved to GitHub Actions
  • Sponsors acknowledgment
  • PFModule extended to support output methods
  • Hypre SMG and PFMG
  • Installation of example for smoke test

User Visible Changes

ParFlow Google Group

ParFlow has switched to using a Google group for discussion from the
previous email list server.

https://groups.google.com/g/parflow

Default I/O mode changed to amps_sequential_io

Change the default I/O model to amps_sequential_io because this is the
most common I/O model being used.

CLM Solar Zenith Angle Calculation

Add slope and aspect when determining the solar zenith angle in CLM.
A new key, Solver.CLM.UseSlopeAspect, was added to enable the inclusion
of slopes when determining solar zenith angles.

Parallel NetCDF dataset compression

Added configurable deflate (zlib) compression to the NetCDF-based
writing routines. Parallel data compression only works in combination
with the latest NetCDF4 v4.7.4 release.

pfset NetCDF.Compression True        # Enable deflate-based compression (default: False)
pfset NetCDF.CompressionLevel 1      # Compression level (0-9) (default: 1)

This work was implemented as part of EoCoE-II project (www.eocoe.eu).

Benchmark tests show that the data size of regular NetCDF output files
can be lowered by a factor of three. In addition, since less data is
written to disk, the reduced data size can also lower the overall I/O
footprint on the filesystem. Therefore, depending on the selected
setup, the compression overhead can be balanced by reduced writing
times.

Kokkos support

User instructions on how to use the Kokkos backend can be found in
README-GPU.md.

Add Kokkos accelerator backend support as an alternative to the
native ParFlow CUDA backend to support more accelerator devices. The
implementation does not rely on any CUDA-specific arguments but still
requires Unified Memory support from the accelerator devices. It
should be compatible with AMD GPUs when sufficient Unified Memory
support is available.

The performance of using CUDA through the Kokkos library is slightly
worse in comparison to the ParFlow native CUDA implementation. This is
because a general Kokkos implementation cannot leverage certain CUDA
features such as cudaMemset() for initialization or CUDA pinned
host/device memory for MPI buffers. Also, Kokkos determines grid and
block sizes for compute kernels differently.

The RMM pool allocator for Unified Memory can be used with Kokkos
(when using Kokkos CUDA backend) and improves the performance very
significantly. In the future, a Unified Memory pool allocator that
supports AMD cards is likely needed to achieve good performance.

Performance of the simulation initialization phase has
been improved significantly when using GPUs (with CUDA and Kokkos).

Output of Van Genuchten variables

Add output for the Van Genuchten variables alpha, n, sres, and ssat.
The new output is generated when the print_subsurf_data key is set.
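In a TCL input script this corresponds to enabling subsurface property output, which now also includes the Van Genuchten fields:

```tcl
pfset Solver.PrintSubsurfData True
```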

Python interface updates

Update Hypre testing to v2.18.2

The version of Hypre used for testing was updated to v2.18.2. This
matches XSDK 0.5 version requirements.

MPI runner change

The method used to automatically find the MPI runner (mpiexec, srun,
etc.) is based purely on the CMake FindMPI script. This should be
invisible to most users.

Python Interface

The Beta Python interface continues to be developed. Many
improvements and bugfixes have been made.

  • Add hydrology functions
  • CLM API bug fixes
  • CLM ET calculation
  • Allow clm_output function to return only 2D arrays
  • Add irrigation to CLM variables
  • dx/dy/dz support when writing PFB files
  • Python testing support was added
  • New feature to only show validation results for errors
  • Table builder update: adding databases, cleanup
  • Domain builder helper with examples, docs

Installation of example for smoke testing

The simple single phase test default_single.tcl is installed for smoke testing an installation.

Bug Fixes

Segmentation fault at end of simulation run with Van Genuchten

Fixed a segmentation fault when freeing memory at the end of a simulation.

Memory errors when rank contained no active cells

The computation of the real-space z vector was running beyond the
temporary array (zz), resulting in memory errors.

PFMGOctree solver

PFMGOctree was not inserting the surface coefficients correctly into
the matrix with overland flow enabled.

GFortran compilation errors

Fixed GFortran compilation errors in ifnan.F90 with later GNU releases.
The build was tested against the GNU 10.2.0 compiler suite.

CMake CI fixes

On some systems, any binary compiled with MPI must be executed with
the appropriate MPI runner. Setting PARFLOW_TEST_FORCE_MPIEXEC forces
sequential tests to be executed with the ${MPIEXEC} command using 1
rank.

CLM initialization bug

Fixed CLM bug causing long initialization times.

CMake cleanup

Updated CMake to more current usage patterns and CMake minor bugfixes.

Fixed compilation issues in sequential amps layer

The AMPS sequential layer had several bugs preventing it from
compiling. Tests are passing again with a sequential build.

Internal/Developer Changes

CI has moved to GitHub Actions

TravisCI integration for CI has been replaced with GitHub Actions.

Sponsors acknowledgment

A new file (SPONSORS.md) has been added to acknowledge the sponsors of
ParFlow development. Please feel free to submit a pull request if you
wish to add a sponsor.

Testing framework refactoring

The testing framework has been refactored to support Python. Directory
structure for tests has changed.

PFModule extended to support output methods

Add support in PFModule for module output. Two new methods were added
to the PFModule 'class' to output time variant and time invariant
data. This allows modules to have methods on each instance for
generating output directly from the module. Previously the approach
was to copy data to a problem data variable and output from the copy.

Hypre SMG and PFMG

Refactored common Hypre setup code to a method to keep Hypre setup consistent.

Known Issues

See https://github.com/parflow/parflow/issues for current bug/issue reports.

ParFlow Version 3.8.0

7eb7394

ParFlow Release Notes 3.8.0


ParFlow improvements and bug-fixes would not be possible without
contributions of the ParFlow community. Thank you for all the great
contributions.

Overview of Changes

  • ParFlow Google Group
  • Default I/O mode changed to amps_sequential_io
  • CLM Solar Zenith Angle Calculation
  • Parallel NetCDF dataset compression
  • Kokkos support
  • Output of Van Genuchten variables
  • Python interface updates
  • Update Hypre testing to v2.18.2
  • MPI runner change
  • Python Interface
  • Segmentation fault at end of simulation run with Van Genuchten
  • Memory errors when rank contained no active cells
  • PFMGOctree solver
  • GFortran compilation errors
  • CMake CI fixes
  • CLM initialization bug
  • CMake cleanup
  • Fixed compilation issues in sequential amps layer
  • CI has moved to GitHub Actions
  • Sponsors acknowledgment
  • PFModule extended to support output methods
  • Hypre SMG and PFMG

User Visible Changes

ParFlow Google Group

ParFlow has switched to using a Google group for discussion from the
previous email list server.

https://groups.google.com/g/parflow

Default I/O mode changed to amps_sequential_io

Change the default I/O model to amps_sequential_io because this is the
most common I/O model being used.

CLM Solar Zenith Angle Calculation

Add slope and aspect when determining the solar zenith angle in CLM.
A new key, Solver.CLM.UseSlopeAspect, was added to enable the inclusion
of slopes when determining solar zenith angles.

Parallel NetCDF dataset compression

Added configurable deflate (zlib) compression to the NetCDF-based
writing routines. Parallel data compression only works in combination
with the latest NetCDF4 v4.7.4 release.

pfset NetCDF.Compression True        # Enable deflate-based compression (default: False)
pfset NetCDF.CompressionLevel 1      # Compression level (0-9) (default: 1)

This work was implemented as part of EoCoE-II project (www.eocoe.eu).

Benchmark tests show that the data size of regular NetCDF output files
can be lowered by a factor of three. In addition, since less data is
written to disk, the reduced data size can also lower the overall I/O
footprint on the filesystem. Therefore, depending on the selected
setup, the compression overhead can be balanced by reduced writing
times.

Kokkos support

User instructions on how to use the Kokkos backend can be found in
README-GPU.md.

Add Kokkos accelerator backend support as an alternative to the
native ParFlow CUDA backend to support more accelerator devices. The
implementation does not rely on any CUDA-specific arguments but still
requires Unified Memory support from the accelerator devices. It
should be compatible with AMD GPUs when sufficient Unified Memory
support is available.

The performance of using CUDA through the Kokkos library is slightly
worse in comparison to the ParFlow native CUDA implementation. This is
because a general Kokkos implementation cannot leverage certain CUDA
features such as cudaMemset() for initialization or CUDA pinned
host/device memory for MPI buffers. Also, Kokkos determines grid and
block sizes for compute kernels differently.

The RMM pool allocator for Unified Memory can be used with Kokkos
(when using Kokkos CUDA backend) and improves the performance very
significantly. In the future, a Unified Memory pool allocator that
supports AMD cards is likely needed to achieve good performance.

Performance of the simulation initialization phase has
been improved significantly when using GPUs (with CUDA and Kokkos).

Output of Van Genuchten variables

Add output for the Van Genuchten variables alpha, n, sres, and ssat.
The new output is generated when the print_subsurf_data key is set.

Python interface updates

Update Hypre testing to v2.18.2

The version of Hypre used for testing was updated to v2.18.2. This
matches XSDK 0.5 version requirements.

MPI runner change

The method used to automatically find the MPI runner (mpiexec, srun,
etc.) is based purely on the CMake FindMPI script. This should be
invisible to most users.

Python Interface

The Beta Python interface continues to be developed. Many
improvements and bugfixes have been made.

  • Add hydrology functions
  • CLM API bug fixes
  • CLM ET calculation
  • Allow clm_output function to return only 2D arrays
  • Add irrigation to CLM variables
  • dx/dy/dz support when writing PFB files
  • Python testing support was added
  • New feature to only show validation results for errors
  • Table builder update: adding databases, cleanup
  • Domain builder helper with examples, docs

Bug Fixes

Segmentation fault at end of simulation run with Van Genuchten

Fixed a segmentation fault when freeing memory at the end of a simulation.

Memory errors when rank contained no active cells

The computation of the real-space z vector was running beyond the
temporary array (zz), resulting in memory errors.

PFMGOctree solver

PFMGOctree was not inserting the surface coefficients correctly into
the matrix with overland flow enabled.

GFortran compilation errors

Fixed GFortran compilation errors in ifnan.F90 with later GNU releases.
The build was tested against the GNU 10.2.0 compiler suite.

CMake CI fixes

On some systems, any binary compiled with MPI must be executed with
the appropriate MPI runner. Setting PARFLOW_TEST_FORCE_MPIEXEC forces
sequential tests to be executed with the ${MPIEXEC} command using 1
rank.

CLM initialization bug

Fixed CLM bug causing long initialization times.

CMake cleanup

Updated CMake to more current usage patterns and CMake minor bugfixes.

Fixed compilation issues in sequential amps layer

The AMPS sequential layer had several bugs preventing it from
compiling. Tests are passing again with a sequential build.

Internal/Developer Changes

CI has moved to GitHub Actions

TravisCI integration for CI has been replaced with GitHub Actions.

Sponsors acknowledgment

A new file (SPONSORS.md) has been added to acknowledge the sponsors of
ParFlow development. Please feel free to submit a pull request if you
wish to add a sponsor.

Testing framework refactoring

The testing framework has been refactored to support Python. Directory
structure for tests has changed.

PFModule extended to support output methods

Add support in PFModule for module output. Two new methods were added
to the PFModule 'class' to output time variant and time invariant
data. This allows modules to have methods on each instance for
generating output directly from the module. Previously the approach
was to copy data to a problem data variable and output from the copy.

Hypre SMG and PFMG

Refactored common Hypre setup code to a method to keep Hypre setup consistent.

Known Issues

See https://github.com/parflow/parflow/issues for current bug/issue reports.

ParFlow Version 3.7.0

5e87b16

ParFlow Release Notes 3.7.0


ParFlow improvements and bug-fixes would not be possible without
contributions of the ParFlow community. Thank you for all the great
work.

Overview of Changes

  • Autoconf support has been removed.
  • Support for on-node parallelism using OpenMP and CUDA
  • New overland flow formulations
  • Utility for writing PFB file from R
  • Additional solid file utilities in TCL
  • NetCDF and HDF5 added to Docker instance

User Visible Changes

Autoconf support has been removed

The GNU Autoconf (e.g. configure) support has been dropped. Use CMake
to build ParFlow. See the README.md file for information on building
with CMake.

Support for on-node parallelism using OpenMP and CUDA

ParFlow now has an option to support on-node parallelism in addition to
using MPI. OpenMP and CUDA backends are currently supported.

See the README-CUDA.md and README-OPENMP.md files for information on
how to compile with support for CUDA and OpenMP.

Big thank you goes to Michael Burke and Jaro Hokkanen and the teams at
Boise State, U of Arizona, and FZ-Juelich for their hard work on
adding OpenMP and CUDA support.

CMake dependency on version 3.14

ParFlow now requires CMake version 3.14 or better.

New overland flow formulations

Overland flow saw significant work:

  • OverlandKinematic and OverlandDiffusive BCs per LEC
  • Add OverlandKinematic as a Module
  • Adding new diffusive module
  • Added TFG slope upwind options to Richards Jacobian
  • Added overland eval diffusive module to new OverlandDiffusive BC condition
  • Adding Jacobian terms for diffusive module
  • Updating OverlandDiffusive boundary condition Jacobian
  • Updated documentation for new boundary conditions

Utility for writing PFB file from R

A function was added that takes array inputs and writes them as PFB
files. See the file pftools/prepostproc/PFB-WriteFcn.R.

Additional solid file utilities in TCL

New PFTools were added for creating solid files with irregular top and
bottom surfaces, along with conversion utilities to/from ASCII or
binary solid files. See the user-manual documentation on pfpatchysolid
and pfsolidfmtconvert for information on the new TCL commands.

NetCDF and HDF5 added to Docker instance

The ParFlow Docker instance now includes support for NetCDF and HDF5.

Bug Fixes

Fixed compilation issue with NetCDF

CMake support for NetCDF compilation has been improved.

Memory leaks

Several memory leaks were addressed in ParFlow and PFTools.

Parallel issue with overland flow boundary conditions

Fixed bug in nl_function_eval.c that caused MPI error for some
overland BCs with processors outside computational grid.

pfdist/undist issues

Fixed pfdist/undist issues when using the sequential I/O model.

Internal Changes

Boundary condition refactoring

The loops for boundary conditions were refactored to provide a higher
level of abstraction and be more self-documenting (removed magic
numbers). ForPatchCellsPerFace is a new macro for looping over patch
faces. See nl_function_eval.c for example usage and problem_bc.h for
documentation on the new macros.

PVCopy extended to include boundary cells.

PVCopy now includes boundary cells in the copy.

DockerHub Test

A simple automated test of generated DockerHub instances was added.

Etrace support was added

Support for generating function call traces with Etrace was added. Add
-DPARFLOW_ENABLE_TRACE to CMake configure line.

See https://github.com/elcritch/etrace for additional information.

Compiler warnings treated as errors

Our development process now requires code to compile cleanly with the
-Wall option on GCC. Code submissions that do not compile cleanly will
not be accepted.

Known Issues

See https://github.com/parflow/parflow/issues for current bug/issue reports.

ParFlow Version 3.6.0

d5e89a1

ParFlow Release Notes


IMPORTANT NOTE

Support for GNU Autoconf will be removed in the next release of
ParFlow.  Future releases will only support configuration using CMake.

Overview of Changes

  • New overland flow boundary conditions
  • Flow barrier added
  • Support for metadata file
  • Boundary condition refactoring
  • Bug fixes
  • Coding style update

User Visible Changes

New overland flow boundary conditions

Three new boundary conditions as modules - OverlandKinematic,
OverlandDiffusive and Seepage.

OverlandKinematic is similar to the original OverlandFlow boundary
condition but uses a slightly modified flux formulation based on the
slope magnitude. It was developed to use face-centered slopes (as
opposed to grid-centered) and does the upwinding internally. See the
user manual for additional information on the new boundary conditions.
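A schematic of a Manning-type kinematic-wave flux using the slope magnitude (this is an illustrative formula sketch, not ParFlow's actual discretization):

```python
import math

def kinematic_flux(h, slope_x, slope_y, mannings_n):
    """Schematic Manning kinematic-wave flux magnitude.

    q = sqrt(|S|) * h^(5/3) / n, where |S| is the slope magnitude,
    h the ponded depth, and n the Manning roughness.
    Variable names and formula form are illustrative.
    """
    slope_mag = math.hypot(slope_x, slope_y)
    return math.sqrt(slope_mag) * h ** (5.0 / 3.0) / mannings_n

q = kinematic_flux(0.01, 0.05, 0.0, 3.3e-5)
```

Using the slope magnitude (rather than each component independently) is the key difference from the original OverlandFlow formulation noted above.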

New test cases were added exercising the new boundary conditions:
overland_slopingslab_DWE.tcl, overland_slopingslab_KWE.tcl,
overland_tiltedv_DWE.tcl, overland_tiltedV_KWE.tcl,
Overland_FlatICP.tcl

Two new options were added to the terrain-following grid formulation
to be consistent with the upwinding approach used in the new overland
flow formulation. These are specified with the new
TFGUpwindFormulation keys documented in the manual.

For both OverlandDiffusive and OverlandKinematic, analytical Jacobians
were implemented in the new modules. These were tested and can be
verified in the new test cases noted above.

Flow barrier added

Added the ability to create a flow barrier equivalent to the
hydraulic flow barrier (HFB) or flow-and-transport parameters at
interfaces. The flow barriers are applied to the fluxes as scalar
multipliers between cells (at cell interfaces).

Flow barriers are set using a PFB file, see user manual for additional
information. The flow barrier is turned off by default.

Support for metadata file

A metadata file is written in JSON format summarizing the inputs to a
run and its output files. This file provides ParaView and other
post-processing tools a simple way to aggregate data for
visualizations and analyses.

Metadata is collected during simulation startup and updated to include
timestep information with each step the simulation takes. It is
rewritten with each timestep so that separate processes may observe
simulation progress by watching the file for changes.
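The rewrite-per-timestep idea can be sketched as follows (field names here are illustrative, not the actual metadata schema):

```python
import json
import os
import tempfile

def write_metadata(path, inputs, timestep):
    """Rewrite a small JSON summary each timestep so watchers can poll it."""
    meta = {"inputs": inputs, "last_timestep": timestep}
    with open(path, "w") as f:
        json.dump(meta, f)

path = os.path.join(tempfile.mkdtemp(), "run.metadata.json")
for step in range(3):
    # Each step replaces the file, so the file always reflects current progress.
    write_metadata(path, {"solver": "Richards"}, step)

with open(path) as f:
    meta = json.load(f)
```

A separate process watching the file for changes sees the latest timestep without needing to talk to the running simulation.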

Bug Fixes

Fixed a segmentation fault that occurred when an uninitialized variable
was referenced in cases where a processor is outside of the active domain.

Internal Changes

Boundary condition refactoring

The framework for boundary conditions was significantly refactored to provide a
macro system to simplify adding new boundary conditions. See
bc_pressure.h for additional documentation.

Coding style update

The Uncrustify coding style was updated and the code was reformatted.

Known Issues

See https://github.com/parflow/parflow/issues for current bug/issue reports.

ParFlow Version 3.5.0

9967d2f

ParFlow version 3.5.0 release.

This release contains bug fixes.

Thank you to all the ParFlow contributors, both past and present. Recent contributors may be found here:

https://github.com/parflow/parflow/graphs/contributors

Major upcoming changes:

  • Support for the autoconf configure scripts will be dropped in next release (3.6.0).
    The team does not have resources to support two configuration systems.

Major new features:

  • Added initial support for Docker and automated Docker builds on DockerHub.

Bug Highlights

  • Fixed two bugs causing segmentation faults.

  • Fixed memory leaks; Valgrind is running cleanly on regression tests.

Major outstanding issues:

  • MacOS builds
    We continue to see issues with configuration and building on MacOS, including issues with
    Clang 4.x compiler releases, which are used in a number of XCode releases.

  • Clang 4.x
    The regression tests are failing for us when compiling with Clang 4.x releases. This has been observed
    under both MacOS and Linux. Newer versions of Clang do not exhibit this bug.

ParFlow Version 3.4.0

f811f68

ParFlow version 3.4.0 release.

This release contains minor feature additions and bug fixes.

Thank you to all the ParFlow contributors, both past and present. Recent contributors since we have moved to GitHub may be found here:

https://github.com/parflow/parflow/graphs/contributors

Major upcoming changes:

  • Support for the autoconf configure scripts will be dropped in next release (3.5.0).
    The team does not have resources to support two configuration systems.

Major new features:

  • Removed hard coding of clm layers from fortran code and added RootZoneNZ ParFlow input value.
    Users no longer need to recompile CLM to change the number of root zone layers.

  • Moved docs directory from pftools/docs to docs.
    This should make the source for the documentation easier to find.

  • Added support for building doxygen source documentation.
    This is a work-in-progress, most of the code still has original documentation.

  • Added utilities for creating 3D input from 2D mask files.

  • Added R PFB reader.
    Script PFB-ReadFcn.R is in pftools/prepostproc directory.

  • Added new CSV timing output file.
    A file ending in .out.timing.csv will be written for easier post-processing of timing results.

  • Add Galerkin option to Hypre PFMG solver.
    Key name is RAPType, values allowed are Galerkin and NonGalerkin.

  • Performance improvements for the Octree traversal.
    This improves runtime 0-20% depending on domain and grid.

Bug Highlights

  • Fixed bug in PFMGOctree indices for non-symmetric matrix.

  • pftools: Fixed memory allocation sizes

  • Removed CLM ts value in the drv_clmin.dat file, which was overriding the timestep supplied by ParFlow.

Major outstanding issues:

  • MacOS builds
    We continue to see issues with configuration and building on MacOS, including issues with
    Clang 4.x compiler releases, which are used in a number of XCode releases.

  • Clang 4.x
    The regression tests are failing for us when compiling with Clang 4.x releases. This has been observed
    under both MacOS and Linux. Newer versions of Clang do not exhibit this bug.

ParFlow Version 3.3.1


ParFlow version 3.3.1 release.

ParFlow has undergone some significant additions since the 3.2.1 release plus bug fixes.

Thank you to all the ParFlow contributors, both past and present. Recent contributors since we have moved to GitHub may be found here:

https://github.com/parflow/parflow/graphs/contributors

Major new features:

  • NetCDF support was added.
    Many thanks to Ketan Kulkarni for adding I/O support for NetCDF. See the users manual for how to
    use NetCDF.

  • CMAKE configure support.
    CMAKE support is relatively mature and has been used on multiple platforms for configuring ParFlow.
    Autoconf support will be deprecated in a future ParFlow release but should still work in v3.3.0
    See the README-CMAKE.md file for how to configure with CMAKE.

  • Continuous integration testing.
    ParFlow is using TravisCI to do continuous integration testing on Linux Ubuntu builds.
    Code commits to the master branch should always build and pass the existing regression tests.

  • ParFlow built and tested on a wider range of current HPC systems.
    This version of ParFlow has been successfully compiled on a wider range of HPC systems than v3.2.0.

  • Experimental support for building under Spack.
    See the README-Spack.md file for more information on building with Spack.

Major outstanding issues:

  • MacOS builds
    We are experiencing issues with configuration and building on MacOS, including issues with
    Clang 4.x compiler releases, which are used in a number of XCode releases. Also CMAKE is not

  • Clang 4.x
    The regression tests are failing for us when compiling with Clang 4.x releases. This has been observed
    under both MacOS and Linux.

Initial release of ParFlow on Github


First release of ParFlow on Github.

Versioning is being changed over to a semantic-versioning style from the previous version scheme used in the Subversion repository. Releases will no longer include the SVN revision in the version number scheme.

This release contains the latest subversion 3.1.600 version plus some minor enhancements:

  • User Manual should now build using the makefile
  • Readme updated to use markdown tags so it will look better on Github
  • Basic git ignore file was added
  • Autoconf improvements
    • CPP was not being initialized correctly, causing some configure steps to fail
    • Endian check was not working correctly
    • Improved searching for 64 bit versions of TCL
  • Better C++ and ANSI C compliance, compiler warning messages fixed.
  • Sequential compilation issues were fixed, with additional guards for MPI-IO calls
  • Fixed compiler issues with IBM XLC/XLF for Blue Gene systems