Releases: parflow/parflow
ParFlow Version 3.13.0
ParFlow Release Notes 3.13.0
This release contains several bug fixes and minor feature updates.
ParFlow development and bug-fixes would not be possible without contributions of the ParFlow community. Thank you for all the great contributions.
Overview of Changes
User Visible Changes
Building ParFlow-CLM only is supported
Configuration flags have been added to support building only ParFlow-CLM for use cases where only CLM is desired.
Kokkos version support updated to version 4.2.01
The Kokkos supported version has been updated to version 4.2.01. This is the version used in our regression suite. Other versions may or may not work.
OASIS version support updated to version 5.1.
The OASIS version used in the regression test suite was updated to OASIS 5.1.
vegm file reading performance improvements in Python pftools
Improve speed of reading large vegm files in the Python read_vegm function.
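For context, a vegm file conventionally starts with two header lines followed by one whitespace-separated numeric row per grid cell. The sketch below is an illustration of that layout in pure Python, not the PFTools implementation (the function name is made up; the faster read_vegm operates on the same format):

```python
def parse_vegm(text):
    """Parse vegm-style text: two header lines, then rows of
    x, y (integer grid indices) followed by float columns."""
    rows = []
    for line in text.splitlines()[2:]:  # skip the two header lines
        fields = line.split()
        if not fields:
            continue
        rows.append([int(fields[0]), int(fields[1])]
                    + [float(v) for v in fields[2:]])
    return rows
```
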
Documentation updates
Clarified the sign conventions for top flux boundary conditions (e.g., OverlandFlow, SeepageFace) and for EvapTrans files. Fixed a typo in the Haverkamp saturation formula: alpha replaced with A. The key name "TensorByFileX" was renamed to the correct "TensorFileX".
Bug Fixes
Python pftools StartCount incorrect bounds check
The StartCount input value was incorrectly restricted to -1 or 0, which prevented setting it to a larger value for a restart. Larger values are now allowed.
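With the fix, a restart can set StartCount to the output step being resumed. A hedged sketch of the relevant keys in a Python input script (run is a pre-existing Run object; the pressure file name is a placeholder):

```python
# Resume from output step 100 (file name below is a placeholder).
run.TimingInfo.StartCount = 100       # values > 0 are now accepted
run.TimingInfo.StartTime = 100.0
run.ICPressure.Type = "PFBFile"
run.Geom.domain.ICPressure.FileName = "myrun.out.press.00100.pfb"
```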
Python pftools reading porosity from a file not working
The Python input was throwing an error if the porosity input type was set to PFBFile. This has been fixed; using a PFB file for porosity input now works in Python.
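With the fix, porosity can be read from a PFB file in a Python input script along these lines (run is a pre-existing Run object; the file name is a placeholder):

```python
run.Geom.Porosity.GeomNames = "domain"
run.Geom.domain.Porosity.Type = "PFBFile"
run.Geom.domain.Porosity.FileName = "porosity.pfb"
```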
Memory corruption when using the PressureFile option
The PressureFile option (and others) caused memory corruption due to multiple frees of the same filenames. The incorrect free calls in the shutdown logic were removed, fixing segmentation faults seen by some users.
Internal/Developer Changes
Dirichlet boundary condition fix in nl_function_eval
Z_mult was incorrectly being divided by 2 in nl_function_eval.
GitHub Actions updated
The CI testing suite was using outdated GitHub Actions modules; the modules have been updated.
Added Python CI test result checks
The Python tests were not checking run results; a test passed as long as the run completed. Checks have been added, as in the TCL test suite, to compare output results for regressions. See the pf_test_file and pf_test_file_with_abs Python methods.
Python CI tests for optional external package dependencies
Python CI tests are now guarded for optional package dependencies such as Hypre, Silo, etc.
See the pf_test_file and pf_test_file_with_abs Python methods.
Compilation with Intel-OneAPI compiler fixes
The Intel-OneAPI compiler in fast floating-point mode does not support isnan() (it always evaluates to false). The NaN sentinel value was replaced with FLT_MIN.
Improvements to C/C++ standards compliance
Minor code cleanup to remove old K&R style definitions and declarations.
Updated etrace
Update the etrace script to work with Python3.
Known Issues
See https://github.com/parflow/parflow/issues for current bug/issue reports.
ParFlow Version 3.12.0
ParFlow Release Notes 3.12.0
This release contains several bug fixes and minor feature updates.
ParFlow development and bug-fixes would not be possible without contributions of the ParFlow community. Thank you for all the great contributions.
Overview of Changes
- Documentation updates
User Visible Changes
DockerHub Container has Python and TCL support.
The DockerHub Container has been updated to support both Python and TCL input scripts. Previously only TCL was supported. The type of script is determined by the file extension, so use .tcl for TCL and .py for Python, per standard file extension naming.
Simple examples using Docker:
docker run --rm -v $(pwd):/data parflow/parflow:version-3.12.0 default_single.py
docker run --rm -v $(pwd):/data parflow/parflow:version-3.12.0 default_single.tcl 1 1 1
Dependency Updates
We have tested and updated some dependencies in ParFlow to use more current releases. The following are used in our continuous integration builds and tests.
Ubuntu 22.04
Ubuntu 20.04
CMake 3.25.1
Hypre 2.26.0
Silo 4.11
NetCDF-C 4.9.0
NetCDF-Fortran 4.5.5
CUDA 11.8.0 (with OpenMPI 4.0.3)
UCX 1.13.1
RMM 0.10
Kokkos 3.3.01
Dependencies not listed come from the Ubuntu packages. We try to have as few version-specific dependencies as possible, so other releases may work.
Surface Pressure Threshold
The surface pressure may now have a threshold applied. This is controlled with several keys.
pfset Solver.ResetSurfacePressure True ## TCL syntax
<runname>.Solver.ResetSurfacePressure = "True" ## Python syntax
This key changes any surface pressure greater than a threshold value to another value in between solver timesteps. It works differently from the SpinUp keys; it is intended to help with slope errors and provides some diagnostic information. The threshold keys are specified below.
The threshold value is specified with ThresholdPressure.
pfset Solver.ResetSurfacePressure.ThresholdPressure 10.0 ## TCL syntax
<runname>.Solver.ResetSurfacePressure.ThresholdPressure = 10.0 ## Python syntax
The Solver.SpinUp key removes surface pressure in between solver timesteps.
pfset Solver.SpinUp True ## TCL syntax
<runname>.Solver.SpinUp = "True" ## Python syntax
Top of domain indices output
The capability to output the Top Z index and Top Patch Index have been added to allow easier processing of surface values. The new input keys are PrintTop and WriteSiloTop.
pfset Solver.PrintTop False ## TCL syntax
<runname>.Solver.PrintTop = False ## Python syntax
pfset Solver.WriteSiloTop True ## TCL syntax
<runname>.Solver.WriteSiloTop = True ## Python syntax
The keys turn on printing of the top-of-domain data. 'TopZIndex' is an NX * NY file with the Z index of the top of the domain. 'TopPatch' is the patch index for the top of the domain; a value of -1 indicates an (i,j) column does not intersect the domain. The data is written in PFB or Silo format.
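For example, the Z index output can be used to pull surface values out of a 3D field with plain numpy. The helper below is an illustrative sketch, not part of PFTools:

```python
import numpy as np

def extract_surface(field, top_z):
    """Select top-of-domain values from a (nz, ny, nx) field using a
    (ny, nx) array of top Z indices; -1 marks columns outside the domain."""
    surface = np.full(top_z.shape, np.nan)
    jj, ii = np.nonzero(top_z >= 0)  # columns that intersect the domain
    surface[jj, ii] = field[top_z[jj, ii], jj, ii]
    return surface
```
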
Documentation Updates
The read-the-docs manual has been cleaned up; many formatting issues and typos from the LaTeX conversion have been fixed.
Bug Fixes
CLM
Fixed an issue identified by @danielletijerina where some bare soil on vegetated surfaces wasn't being beta-limited in CLM; fixes to clm_thermal.F90 were implemented. At the same time, CLM snow additions and dew corrections by LBearup were added: a snow-age fix for deep snow was implemented along with canopy dew.
Python PFtools
The _overland_flow_kinematic method was updated to match the outflow of ParFlow along the edges of irregular domains, which the prior Hydrology Python PFTools did not.
a) The slopes in both x and y are corrected outside the mask edges at lower x and y (by copying the corresponding value from inside the mask), as both come into play through "slope".
b) Because the correction is applied at both the lower x and lower y edges, slopes could be overwritten for grid cells outside both edges. To avoid this, the calculation in x (q_x, qeast) is done first, after adjusting slopes outside the lower x edges, and the calculation in y (q_y, qnorth) is done second, after adjusting slopes outside the lower y edges.
Internal/Developer Changes
CI Testing Updates
The GitHub Actions tests have been updated to use later Ubuntu releases. The 18.04 tests were removed and tests were moved to 22.04. Currently testing is done with both 20.04 and 22.04.
Dependencies have been updated for NetCDF, Hypre, and GCC.
NetCDF Testing
The NetCDF testing has been updated to unify the GitHub Actions for OASIS3 tests and the other regression tests.
Regression Test Comparison Directory
The TCL script pfTestFile used for regression testing has been updated to enable setting the directory for the regression test comparison files. Example usage:
set correct_output_dir "../../correct_output/clm_output"
pftestFile clm.out.press.$i_string.pfb "Max difference in Pressure for timestep $i_string" $sig_digits $correct_output_dir
Known Issues
See https://github.com/parflow/parflow/issues for current bug/issue reports.
ParFlow Version 3.11.0
ParFlow Release Notes 3.11.0
This release contains several bug fixes and minor feature updates.
ParFlow development and bug-fixes would not be possible without contributions of the ParFlow community. Thank you for all the great contributions.
Overview of Changes
- Improved reading of PFB file in Python PFTools
- ParFlow Documentation Update
- PDF User Manual removed from the repository
- Initialization of evap_trans vector has been moved
- CUDA fixes
- OASIS array fix
User Visible Changes
Improved reading of PFB file in Python PFTools
Subgrid header information is read directly from the file to enable reading of files with edge data like the velocity files.
Fixes some cases where PFB files with different z-dimension shapes could not be merged together in xarray. Notably this happened for surface parameters, which have shape (1, ny, nx) and really should be represented in xarray with the z dimension squeezed out. This now happens transparently in xarray. Loading files with the standard read_pfb or read_pfb_sequence will not auto-squeeze dimensions.
Performance of reading should be improved by using memory-mapped reads, and only the first subgrid header is read when loading a sequence of PFB files. Parallelism in the read should also be better.
The ability to give keys to the pf.read_pfb function for subsetting was added.
ParFlow Documentation Update
The User Manual is being transitioned from the previous LaTeX manual to ReadTheDocs. This is a first pass at the conversion of the ParFlow LaTeX manual to ReadTheDocs format. The new documentation contains selected sections from the ParFlow LaTeX manual, along with Kitware's introduction to Python PFTools and the resulting tutorials. New sections were added documenting the Python PFTools Hydrology module and the Data Accessor class, and the PFB reading/writing tutorial was updated to use the updated PFTools functions instead of parflowio.
The original LaTeX files remain intact for now, as the documentation conversion isn't fully complete. Currently this version of ReadTheDocs does not generate the Kitware version of the ParFlow keys documentation, but as a longer-term task it can be re-integrated into the new manual.
PDF User Manual removed from the repository
The PDF of the User Manual that was in the repository has been removed. An online version of the manual is available on Read the Docs: Parflow Users Manual. A PDF version is available at Parflow Users Manual PDF.
Internal/Developer Changes
Initialization of evap_trans has been moved
The Vector evap_trans was made part of the InstanceXtra structure; initialization is done in SetupRichards() and deallocation in TeardownRichards().
CUDA 11.5 update
Starting from CUB 1.14, the CUB_NS_QUALIFIER macro must be specified. The CUB_NS_QUALIFIER macro was added in the same way it was added in CUB (see https://github.com/NVIDIA/cub/blob/94a50bf20cc01f44863a524ba36e089fd80f342e/cub/util_namespace.cuh#L99-L109).
CUDA Linux Repository Key Rotation
Updating NVidia CUDA repository keys due to rotation as documented here
Bug Fixes
Minor improvements/bugfix for python IO
Fixed a bug in xarray indexing that required squeezing out multiple dimensions. Lazy loading is now implemented natively with changes to the indexing methods.
OASIS array fix
vshape should be a 1D array instead of a 2D array. Its attributes are specified as [INTEGER, DIMENSION(2*id_var_nodims(1)), IN] based on the OASIS3-MCT docs.
Python pftools version parsing
Minor bugfix was needed in Python pftools for parsing versions.
Known Issues
See https://github.com/parflow/parflow/issues for current bug/issue reports.
ParFlow Version 3.10.0
ParFlow Release Notes 3.10.0
ParFlow development and bug-fixes would not be possible without
contributions of the ParFlow community. Thank you for all the great
contributions.
These release notes cover changes made in 3.10.0.
Overview of Changes
- Python dependency is now 3.6
- Extend velocity calculations to domain boundary faces and option to output velocity
- Python PFB reader/writer updated
- Bug fixes
User Visible Changes
Extend velocity calculations to domain boundary faces and option to output velocity (Core)
Extends the calculation of velocity (Darcy flux) values to the
exterior boundary faces of the domain. Here are the highlights:
Values are calculated for simulations using solver_impes (DirichletBC
or FluxBC) or solver_richards (all BCs).
Velocities work with TFG and variable dz options.
The saturated velocity flux calculation (phase_velocity_face.c) has
been added to the accelerated library.
The description of Solver.PrintVelocities in pf-keys/definitions and
the manual has been augmented with more information.
Velocity has been added as a test criterion to the following tests from
parflow/test/tcl :
- default_single.tcl
- default_richards.tcl
- default_overland.tcl
- LW_var_dz.tcl
Also fixes incorrect application of FluxBC in saturated pressure
discretization. Previously, positive fluxes assigned on a boundary
flowed into the domain, and negative fluxes flowed out of the domain,
regardless of alignment within the coordinate system. The new method
allows more intuitive flux assignment where positive fluxes move up a
coordinate axis and negative fluxes move down a coordinate axis.
Python version dependency update (Python)
Python 3.6 or greater is now required for building and running ParFlow
if Python is being used.
PFB reader/writer updated (Python)
Add simple and fast pure-Python readers and writers of PFB files. This
eliminates the need for the external ParflowIO dependency. A new
backend for the xarray package was implemented that lets you open both
.pfb and .pfmetadata files directly into xarray data structures, which
are very useful for data wrangling and scientific analysis.
Basic usage of the new functionality:
import parflow as pf
import xarray as xr

# Read a pfb file as a numpy array:
x = pf.read_pfb('/path/to/file.pfb')

# Read a pfb file as an xarray dataset:
ds = xr.open_dataset('/path/to/file.pfb', name='example')

# Write a pfb file with a distfile:
pf.write_pfb('/path/to/new_file.pfb', x, p=p, q=q, r=r, dist=True)
SolidFileBuilder simplification (Python)
Support the simple use case in SolidFileBuilder where all work can be
delegated to pfmask-to-pfsol. Added a generate_asc_files argument
(default False) to SolidFileBuilder.write.
Fixed reading of vegm array (Python)
Fixed indices so that the x index of the vegm_array correctly reflects
the columns and y index reflects the rows. The _read_vegm function in
PFTools was inconsistent with parflow-python xy indexing.
Python PFTools version updates (Python)
Updated Python PFTools dependency to current version 3.6.
Bug Fixes
Fix errors in LW_Test test case (Examples/Tests)
LW_Test runs successfully and works in parallel.
Increased input database maximum value size from 4097 to 65536 (Core)
The maximum input database value length was increased from 4097
to 65536. A bounds check is performed that emits a helpful error
message when a database value is too big.
Python interface: fixed issue where some keys failed to set unless set in a particular order (Python)
- Updated some documentation for contributing to pf-keys
- Fixed bugs in pf-keys where some keys failed to set unless set in a particular order
- Added a constraint for lists of names
This change lets us express that one list of names should be a subset
of another list of names. Constraint example:
Values for PhaseSources.{phase_name}.GeomNames should be a subset of
values from either GeomInput.{geom_input_name}.GeomNames or
GeomInput.{geom_input_name}.GeomName. Setting the domain to EnumDomain
like so expresses that constraint. A more detailed example can be seen
in this test case.
Internal/Developer Changes
Cleaned up dependencies for Python (Python)
Diverting ParFlow output to stream (Core)
Added a new method, for use when ParFlow is embedded in another
application, to control the file stream used for ParFlow logging
messages. In the embedded case, logging is disabled by default unless
redirected by the calling application.
Change required to meet IDEAS Watersheds best practices.
Add keys and generator for Simput (Python)
Added keys and a generator to allow Simput, and applications based on
Simput, to write inputs for ParFlow with a graphical web interface.
Remove use of MPI_COMM_WORLD (Core)
Enable use of a communicator other than MPI_COMM_WORLD for more
general embedding. This meets the IDEAS Watersheds best practices policy.
Known Issues
See https://github.com/parflow/parflow/issues for current bug/issue reports.
New Contributors
- @DrewLazzeriKitware made their first contribution in #352
- @arbennett made their first contribution in #365
- @aureliayang made their first contribution in #380
ParFlow Version 3.9.0
ParFlow Release Notes 3.9.0
ParFlow improvements and bug-fixes would not be possible without
contributions of the ParFlow community. Thank you for all the great
contributions.
Note: Version 3.9.0 is a minor update to v3.8.0. These release notes cover
changes made in both 3.8.0 and 3.9.0. In 3.9.0, we improved our Spack
support, added a smoke test for running under Spack, and created a release
tag for Spack to download. If you have version 3.8.0 installed, the 3.9.0
update does NOT add any significant features or bug fixes.
Overview of Changes
- ParFlow Google Group
- Default I/O mode changed to amps_sequential_io
- CLM Solar Zenith Angle Calculation
- Parallel NetCDF dataset compression
- Kokkos support
- Output of Van Genuchten variables
- Python interface updates
- Update Hypre testing to v2.18.2
- MPI runner change
- Python Interface
- Segmentation fault at end of simulation run with Van Genuchten
- Memory errors when rank contained no active cells
- PFMGOctree solver
- GFortran compilation errors
- CMake CI fixes
- CLM initialization bug
- CMake cleanup
- Fixed compilation issues in sequential amps layer
- CI has moved to Google Actions
- Sponsors acknowledgment
- PFModule extended to support output methods
- Hypre SMG and PFMG
- Installation of example for smoke test
User Visible Changes
ParFlow Google Group
ParFlow has switched to using a Google group for discussion from the
previous email list server.
https://groups.google.com/g/parflow
Default I/O mode changed to amps_sequential_io
Change the default I/O model to amps_sequential_io because this is the
most common I/O model being used.
CLM Solar Zenith Angle Calculation
Add slope and aspect when determining the solar zenith angle in CLM.
A new key, Solver.CLM.UseSlopeAspect, was added to enable the inclusion
of slopes when determining solar zenith angles.
Parallel NetCDF dataset compression
Added configurable deflate (zlib) compression capabilities to the
NetCDF-based writing routines. Parallel data compression works only in
combination with the latest NetCDF4 release, v4.7.4.
pfset NetCDF.Compression True ## Enable deflate based compression (default: False)
pfset NetCDF.CompressionLevel 1 ## Compression level (0-9) (default: 1)
This work was implemented as part of the EoCoE-II project (www.eocoe.eu).
Benchmark tests show that the data size for regular NetCDF output files
could be lowered by a factor of three. In addition, as less data is
written to disk, the reduced data size can also lower the overall I/O
footprint on the filesystem. Therefore, depending on the selected
setup, the compression overhead can be balanced by reduced writing
times.
Kokkos support
User instructions on how to use the Kokkos backend can be found in
README-GPU.md.
Add Kokkos accelerator backend support as an alternative to the
native ParFlow CUDA backend to support more accelerator devices. The
implementation does not rely on any CUDA-specific arguments but still
requires Unified Memory support from the accelerator devices. It
should be compatible with AMD GPUs when sufficient Unified Memory
support is available.
The performance of using CUDA through the Kokkos library is slightly
worse in comparison to the ParFlow native CUDA implementation. This is
because a general Kokkos implementation cannot leverage certain CUDA
features such as cudaMemset() for initialization or CUDA pinned
host/device memory for MPI buffers. Also, Kokkos determines grid and
block sizes for compute kernels differently.
The RMM pool allocator for Unified Memory can be used with Kokkos
(when using Kokkos CUDA backend) and improves the performance very
significantly. In the future, a Unified Memory pool allocator that
supports AMD cards is likely needed to achieve good performance.
Performance of the simulation initialization phase has
been improved significantly when using GPUs (with CUDA and Kokkos).
Output of Van Genuchten variables
Add output for Van Genuchten values alpha, n, sres, ssat. The new
output will be generated when the print_subsurf_data key is set.
Python interface updates
Update Hypre testing to v2.18.2
The version of Hypre used for testing was updated to v2.18.2. This
matches XSDK 0.5 version requirements.
MPI runner change
The method used to automatically find the MPI runner (mpiexec,
srun, etc.) is based purely on the CMake FindMPI script. This should
be invisible to most users.
Python Interface
The Beta Python interface continues to be developed. Many
improvements and bugfixes have been made.
- Add hydrology functions
- CLM API bug fixes
- CLM ET calculation
- Allow clm_output function to return only 2D arrays
- Add irrigation to CLM variables
- dx/dy/dz support when writing PFB files
- Python testing support was added
- New feature to only show validation results for errors
- Table builder update: adding databases, cleanup
- Domain builder helper with examples, docs
Installation of example for smoke testing
The simple single phase test default_single.tcl is installed for smoke testing an installation.
Bug Fixes
Segmentation fault at end of simulation run with Van Genuchten
A segmentation fault when freeing memory at the end of a simulation has been fixed.
Memory errors when rank contained no active cells
The computation of the real space z vector was running beyond the
temporary array (zz), resulting in memory errors.
PFMGOctree solver
PFMGOctree was not inserting the surface coefficients correctly into
the matrix with overland flow enabled.
GFortran compilation errors
Fixed GFortran compilation errors in ifnan.F90 with later GNU releases.
The build was tested against the GNU 10.2.0 compiler suite.
CMake CI fixes
On some systems, any binary compiled with MPI must be executed with
the appropriate ${MPIEXEC} command. Setting
PARFLOW_TEST_FORCE_MPIEXEC forces sequential tests to be executed with
the ${MPIEXEC} command with 1 rank.
CLM initialization bug
Fixed CLM bug causing long initialization times.
CMake cleanup
Updated CMake to more current usage patterns and CMake minor bugfixes.
Fixed compilation issues in sequential amps layer
The AMPS sequential layer had several bugs preventing it from
compiling. Tests are passing again with a sequential build.
Internal/Developer Changes
CI has moved to Google Actions
TravisCI integration for CI has been replaced with Google Actions.
Sponsors acknowledgment
A new file (SPONSORS.md) has been added to acknowledge the sponsors
of ParFlow development. Please feel free to submit a pull request
if you wish to add a sponsor.
Testing framework refactoring
The testing framework has been refactored to support Python. Directory
structure for tests has changed.
PFModule extended to support output methods
Add support in PFModule for module output. Two new methods were added
to the PFModule 'class' to output time variant and time invariant
data. This allows modules to have methods on each instance for
generating output directly from the module. Previously the approach
was to copy data to a problem data variable and output from the copy.
Hypre SMG and PFMG
Refactored common Hypre setup code to a method to keep Hypre setup consistent.
Known Issues
See https://github.com/parflow/parflow/issues for current bug/issue reports.
ParFlow Version 3.8.0
ParFlow Release Notes 3.8.0
ParFlow improvements and bug-fixes would not be possible without
contributions of the ParFlow community. Thank you for all the great
contributions.
Overview of Changes
- ParFlow Google Group
- Default I/O mode changed to amps_sequential_io
- CLM Solar Zenith Angle Calculation
- Parallel NetCDF dataset compression
- Kokkos support
- Output of Van Genuchten variables
- Python interface updates
- Update Hypre testing to v2.18.2
- MPI runner change
- Python Interface
- Segmentation fault at end of simulation run with Van Genuchten
- Memory errors when rank contained no active cells
- PFMGOctree solver
- GFortran compilation errors
- CMake CI fixes
- CLM initialization bug
- CMake cleanup
- Fixed compilation issues in sequential amps layer
- CI has moved to Google Actions
- Sponsors acknowledgment
- PFModule extended to support output methods
- Hypre SMG and PFMG
User Visible Changes
ParFlow Google Group
ParFlow has switched to using a Google group for discussion from the
previous email list server.
https://groups.google.com/g/parflow
Default I/O mode changed to amps_sequential_io
Change the default I/O model to amps_sequential_io because this is the
most common I/O model being used.
CLM Solar Zenith Angle Calculation
Add slope and aspect when determining the solar zenith angle in CLM.
A new key, Solver.CLM.UseSlopeAspect, was added to enable the inclusion
of slopes when determining solar zenith angles.
Parallel NetCDF dataset compression
Added configurable deflate (zlib) compression capabilities to the
NetCDF-based writing routines. Parallel data compression works only in
combination with the latest NetCDF4 release, v4.7.4.
pfset NetCDF.Compression True ## Enable deflate based compression (default: False)
pfset NetCDF.CompressionLevel 1 ## Compression level (0-9) (default: 1)
This work was implemented as part of the EoCoE-II project (www.eocoe.eu).
Benchmark tests show that the data size for regular NetCDF output files
could be lowered by a factor of three. In addition, as less data is
written to disk, the reduced data size can also lower the overall I/O
footprint on the filesystem. Therefore, depending on the selected
setup, the compression overhead can be balanced by reduced writing
times.
Kokkos support
User instructions on how to use the Kokkos backend can be found in
README-GPU.md.
Add Kokkos accelerator backend support as an alternative to the
native ParFlow CUDA backend to support more accelerator devices. The
implementation does not rely on any CUDA-specific arguments but still
requires Unified Memory support from the accelerator devices. It
should be compatible with AMD GPUs when sufficient Unified Memory
support is available.
The performance of using CUDA through the Kokkos library is slightly
worse in comparison to the ParFlow native CUDA implementation. This is
because a general Kokkos implementation cannot leverage certain CUDA
features such as cudaMemset() for initialization or CUDA pinned
host/device memory for MPI buffers. Also, Kokkos determines grid and
block sizes for compute kernels differently.
The RMM pool allocator for Unified Memory can be used with Kokkos
(when using Kokkos CUDA backend) and improves the performance very
significantly. In the future, a Unified Memory pool allocator that
supports AMD cards is likely needed to achieve good performance.
Performance of the simulation initialization phase has
been improved significantly when using GPUs (with CUDA and Kokkos).
Output of Van Genuchten variables
Add output for Van Genuchten values alpha, n, sres, ssat. The new
output will be generated when the print_subsurf_data key is set.
Python interface updates
Update Hypre testing to v2.18.2
The version of Hypre used for testing was updated to v2.18.2. This
matches XSDK 0.5 version requirements.
MPI runner change
The method used to automatically find the MPI runner (mpiexec,
srun, etc.) is based purely on the CMake FindMPI script. This should
be invisible to most users.
Python Interface
The Beta Python interface continues to be developed. Many
improvements and bugfixes have been made.
- Add hydrology functions
- CLM API bug fixes
- CLM ET calculation
- Allow clm_output function to return only 2D arrays
- Add irrigation to CLM variables
- dx/dy/dz support when writing PFB files
- Python testing support was added
- New feature to only show validation results for errors
- Table builder update: adding databases, cleanup
- Domain builder helper with examples, docs
Bug Fixes
Segmentation fault at end of simulation run with Van Genuchten
A segmentation fault when freeing memory at the end of a simulation has been fixed.
Memory errors when rank contained no active cells
The computation of the real space z vector was running beyond the
temporary array (zz), resulting in memory errors.
PFMGOctree solver
PFMGOctree was not inserting the surface coefficients correctly into
the matrix with overland flow enabled.
GFortran compilation errors
Fixed GFortran compilation errors in ifnan.F90 with later GNU releases.
The build was tested against the GNU 10.2.0 compiler suite.
CMake CI fixes
On some systems, any binary compiled with MPI must be executed with
the appropriate ${MPIEXEC} command. Setting
PARFLOW_TEST_FORCE_MPIEXEC forces sequential tests to be executed with
the ${MPIEXEC} command with 1 rank.
CLM initialization bug
Fixed CLM bug causing long initialization times.
CMake cleanup
Updated CMake to more current usage patterns and CMake minor bugfixes.
Fixed compilation issues in sequential amps layer
The AMPS sequential layer had several bugs preventing it from
compiling. Tests are passing again with a sequential build.
Internal/Developer Changes
CI has moved to Google Actions
TravisCI integration for CI has been replaced with Google Actions.
Sponsors acknowledgment
A new file (SPONSORS.md) has been added to acknowledge the sponsors
of ParFlow development. Please feel free to submit a pull request
if you wish to add a sponsor.
Testing framework refactoring
The testing framework has been refactored to support Python. Directory
structure for tests has changed.
PFModule extended to support output methods
Add support in PFModule for module output. Two new methods were added
to the PFModule 'class' to output time variant and time invariant
data. This allows modules to have methods on each instance for
generating output directly from the module. Previously the approach
was to copy data to a problem data variable and output from the copy.
Hypre SMG and PFMG
Refactored common Hypre setup code to a method to keep Hypre setup consistent.
Known Issues
See https://github.com/parflow/parflow/issues for current bug/issue reports.
ParFlow Version 3.7.0
ParFlow Release Notes 3.7.0
ParFlow improvements and bug-fixes would not be possible without
contributions of the ParFlow community. Thank you for all the great
work.
Overview of Changes
- Autoconf support has been removed.
- Support for on-node parallelism using OpenMP and CUDA
- New overland flow formulations
- Utility for writing PFB file from R
- Additional solid file utilities in TCL
- NetCDF and HDF5 added to Docker instance
User Visible Changes
Autoconf support has been removed
The GNU Autoconf (e.g. configure) support has been dropped. Use CMake
to build ParFlow. See the README.md file for information on building
with CMake.
Support for on-node parallelism using OpenMP and CUDA
ParFlow now has options to support on-node parallelism in addition to
using MPI. OpenMP and CUDA backends are currently supported.
See the README-CUDA.md and README-OPENMP.md files for information on
how to compile with support for CUDA and OpenMP.
A big thank you goes to Michael Burke, Jaro Hokkanen, and the teams at
Boise State, U of Arizona, and FZ-Juelich for their hard work on
adding OpenMP and CUDA support.
CMake dependency on version 3.14
ParFlow now requires CMake version 3.14 or newer.
New overland flow formulations
Overland flow saw significant work:
- OverlandKinematic and OverlandDiffusive BCs per LEC
- Add OverlandKinematic as a Module
- Adding new diffusive module
- Added TFG slope upwind options to Richards Jacobian
- Added overland eval diffusive module to new OverlandDiffusive BC condition
- Adding Jacobian terms for diffusive module
- Updating OverlandDiffusive boundary condition Jacobian
- Updated documentation for new boundary conditions
Utility for writing PFB file from R
A function was added to take array inputs and write them as PFB files. See the file:
pftools/prepostproc/PFB-WriteFcn.R
Additional solid file utilities in TCL
New PF tools were added for creating solid files with irregular top and bottom
surfaces, along with utilities to convert solid files to/from ASCII or binary
formats. See the user-manual documentation on pfpatchysolid and
pfsolidfmtconvert for information on the new TCL commands.
NetCDF and HDF5 added to Docker instance
The ParFlow Docker instance now includes support for NetCDF and HDF5.
Bug Fixes
Fixed compilation issue with NetCDF
CMake support for NetCDF compilation has been improved.
Memory leaks
Several memory leaks were addressed in ParFlow and PFTools.
Parallel issue with overland flow boundary conditions
Fixed a bug in nl_function_eval.c that caused an MPI error for some
overland BCs when processors lie outside the computational grid.
pfdist/undist issues
Fixed pfdist/undist issues when using the sequential I/O model.
Internal Changes
Boundary condition refactoring
The loops for boundary conditions were refactored to provide a higher
level of abstraction and be more self-documenting (removed magic
numbers). ForPatchCellsPerFace is a new macro for looping over patch
faces. See nl_function_eval.c for example usage and problem_bc.h for
documentation on the new macros.
PVCopy extended to include boundary cells
PVCopy now includes boundary cells in the copy.
DockerHub Test
A simple automated test of generated DockerHub instances was added.
Etrace support was added
Support for generating function call traces with Etrace was added. Add
-DPARFLOW_ENABLE_TRACE to the CMake configure line.
See https://github.com/elcritch/etrace for additional information.
Compiler warnings treated as errors
Our development process now requires that code compile cleanly with the
-Wall option on GCC. Code submissions that do not compile cleanly will
not be accepted.
Known Issues
See https://github.com/parflow/parflow/issues for current bug/issue reports.
ParFlow Version 3.6.0
ParFlow Release Notes
IMPORTANT NOTE
Support for GNU Autoconf will be removed in the next release of
ParFlow. Future releases will only support configuration using CMake.
Overview of Changes
- New overland flow boundary conditions
- Flow barrier added
- Support for metadata file
- Boundary condition refactoring
- Bug fixes
- Coding style update
User Visible Changes
New overland flow boundary conditions
Three new boundary conditions were added as modules: OverlandKinematic,
OverlandDiffusive, and Seepage.
OverlandKinematic is similar to the original OverlandFlow boundary
condition but uses a slightly modified flux formulation based on the
slope magnitude; it is designed to use face-centered slopes (as
opposed to grid-centered) and does the upwinding internally. See the user
manual for additional information on the new boundary conditions.
New test cases were added exercising the new boundary conditions:
overland_slopingslab_DWE.tcl, overland_slopingslab_KWE.tcl,
overland_tiltedv_DWE.tcl, overland_tiltedV_KWE.tcl,
Overland_FlatICP.tcl
Two new options were added to the terrain following grid formulation
to be consistent with the upwinding approach used in the new overland
flow formulation. These are specified with the new
TFGUpwindFormulation keys documented in the manual.
Analytical Jacobians were implemented in the new OverlandDiffusive and
OverlandKinematic modules; these were tested and can be verified in
the new test cases noted above.
Flow barrier added
Added a flow barrier capability equivalent to the hydraulic flow
barrier (HFB) or flow and transport parameters at interfaces. The flow
barriers act on the fluxes as scalar multipliers between cells (at
cell interfaces).
Flow barriers are set using a PFB file; see the user manual for additional
information. The flow barrier is turned off by default.
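To illustrate the idea of a scalar multiplier on the interface flux (a sketch only; the function and its arguments are hypothetical, not ParFlow internals):

```python
def interface_flux(k_eff, h_left, h_right, dx, barrier=1.0):
    """Darcy-type flux across a cell interface, scaled by a flow
    barrier multiplier (1.0 = no barrier, 0.0 = fully impermeable).

    k_eff: effective conductivity at the interface,
    h_left/h_right: heads in the adjacent cells, dx: cell spacing.
    """
    return -barrier * k_eff * (h_right - h_left) / dx
```

Because the multiplier sits directly on the interface flux, setting it to 0.0 blocks flow between two cells without altering the cell properties themselves.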
Support for metadata file
A metadata file is written in JSON format summarizing the inputs to a
run and its output files. This file provides ParaView and other
post-processing tools a simple way to aggregate data for
visualizations and analyses.
Metadata is collected during simulation startup and updated to include
timestep information with each step the simulation takes. It is
rewritten with each timestep so that separate processes may observe
simulation progress by watching the file for changes.
Bug Fixes
Fixed a segmentation fault that occurred when an uninitialized variable was
referenced in cases where processors lie outside of the active domain.
Internal Changes
Boundary condition refactoring
The framework for boundary conditions was significantly refactored to provide a
macro system to simplify adding new boundary conditions. See
bc_pressure.h for additional documentation.
Coding style update
The Uncrustify coding style was updated and the code was reformatted.
Known Issues
ParFlow Version 3.5.0
ParFlow version 3.5.0 release.
This release contains bug fixes.
Thank you to all the ParFlow contributors, both past and present. Recent contributors may be found here:
https://github.com/parflow/parflow/graphs/contributors
Major upcoming changes:
- Support for the autoconf configure scripts will be dropped in the next release (3.6.0).
The team does not have resources to support two configuration systems.
Major new features:
- Added initial support for Docker and automated Docker builds on DockerHub.
Bug Highlights
- Fixed two bugs causing segmentation faults.
- Fixed memory leaks; Valgrind is running cleanly on regression tests.
Major outstanding issues:
- MacOS builds
We continue to see issues with configuration and building on MacOS, including issues with Clang 4.x
compiler releases, which are used in a number of XCode releases.
- Clang 4.x
The regression tests are failing for us when compiling with Clang 4.x releases. This has been observed
under both MacOS and Linux. Newer versions of Clang do not exhibit this bug.
ParFlow Version 3.4.0
ParFlow version 3.4.0 release.
This release contains minor feature additions and bug fixes.
Thank you to all the ParFlow contributors, both past and present. Recent contributors since we moved to GitHub may be found here:
https://github.com/parflow/parflow/graphs/contributors
Major upcoming changes:
- Support for the autoconf configure scripts will be dropped in the next release (3.5.0).
The team does not have resources to support two configuration systems.
Major new features:
- Removed hard-coding of CLM layers from the Fortran code and added the RootZoneNZ ParFlow input value.
  Users no longer need to recompile CLM to change the number of root zone layers.
- Moved the docs directory from pftools/docs to docs.
  This should make the source for the documentation easier to find.
- Added support for building Doxygen source documentation.
  This is a work in progress; most of the code still has the original documentation.
- Added utilities for creating 3D input from 2D mask files.
- Added an R PFB reader.
  The script PFB-ReadFcn.R is in the pftools/prepostproc directory.
- Added a new CSV output file.
  A file with the .out.timing.csv suffix will be written for easier post-processing of timing results.
- Added a Galerkin option to the Hypre PFMG solver.
  The key name is RAPType; allowed values are Galerkin and NonGalerkin.
- Performance improvements for the Octree traversal.
  This improves runtime 0-20% depending on the domain and grid.
Bug Highlights
- Fixed a bug in PFMGOctree indices for non-symmetric matrices.
- Fixed memory allocation sizes in pftools.
- Removed the CLM ts value in the drv_clmin.dat file, which was overriding the timestep supplied by ParFlow.
Major outstanding issues:
- MacOS builds
We continue to see issues with configuration and building on MacOS, including issues with Clang 4.x
compiler releases, which are used in a number of XCode releases.
- Clang 4.x
The regression tests are failing for us when compiling with Clang 4.x releases. This has been observed
under both MacOS and Linux. Newer versions of Clang do not exhibit this bug.