- Task snapshots: the current state of a task can be saved to the checkpoint buffer and task processing continued (instead of storing only the final state of the task)
- ICE check: if the master node finds the `mechanic.ice` file, it aborts the run (the current checkpoint state is stored in the master file)
- Runtime configuration is stored as attributes attached to the task board. There is no `/Config` dataset anymore
- The `setup` object has been removed from the API. Runtime configuration may now be modified per task pool through the `MReadOption` and `MWriteOption` macros
- New core configuration options: x/y/z-axis element, x/y/z-label, as well as common module-like options such as `debug`, `dense`, etc.
- The user may now create an additional library with the runtime mode (this is an advanced use of the Mechanic, and we suggest looking into the core task-farm mode first). The library prefix is `libmechanic_mode_${LIBRARY}` and it should contain the `Master()` and `Worker()` functions. No core fallback is provided. This fully fits the original idea of the Mechanic. You can switch to a different runtime mode with the `--mode` option.
- Pool stages are now possible inside a pool, i.e. the CPU-expensive part of the computation can be split up (some genetic algorithms may require this). Pool stages are created with the `POOL_STAGE` return code and may be reset with the `POOL_STAGE_RESET` code.
- The `BoardPrepare()` hook allows changing the number of tasks to compute during the pool reset loop. This reduces the CPU overhead for applications that do not require computing all tasks at each reset.
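The idea can be sketched as follows; the signature is a simplified stand-in for the real hook, which receives the pool structures rather than plain integers, and the reduction factor is an arbitrary example:

```c
/* Simplified stand-in for the BoardPrepare() hook: return how many
 * tasks should be computed in the current pool reset loop */
int BoardPrepare(int reset_count, int total_tasks) {
  /* full board on the first pass, only a subset on each later reset
   * (the factor 10 is an arbitrary example) */
  if (reset_count == 0) return total_tasks;
  return total_tasks / 10;
}
```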
- New hooks: `Send()` and `Receive()`, invoked after `MPI_Send` and `MPI_Recv` respectively, on each node
- Support for numeric and string attributes
- Initial work on the Python postprocessing pipeline
- Mechanic now ships with the RNGS random number library
- Full unit test coverage
- Documentation updates
- Bug fixes
The most important change is the new memory handler. It allows using different datatypes and dimensionality for datasets. The Module API had to be changed to reflect core changes. See the examples for the in-depth usage of the new Module API.
- Contiguous memory allocation for different datatypes and dimensionality
- Generic-type allocation macros `MAllocate*` and the corresponding functions `Allocate*` (up to rank 4)
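The core idea behind contiguous multidimensional allocation can be sketched for rank 2 as below; the function names are illustrative, not the actual `Allocate*` prototypes:

```c
#include <stdlib.h>

/* Illustrative rank-2 contiguous allocator: an array of row pointers
 * plus one contiguous data block, so &array[0][0] can be handed
 * directly to HDF5 or MPI calls */
double **AllocateDouble2D(size_t rows, size_t cols) {
  double **array = malloc(rows * sizeof(double *));
  if (!array) return NULL;
  array[0] = malloc(rows * cols * sizeof(double));
  if (!array[0]) { free(array); return NULL; }
  for (size_t i = 1; i < rows; i++)
    array[i] = array[0] + i * cols;
  return array;
}

void FreeDouble2D(double **array) {
  if (array) { free(array[0]); free(array); }
}
```

With this layout, `array[i][j]` indexing works as usual while the whole buffer stays contiguous in memory.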
- Support for all basic (native) datatypes
- Multidimensional datasets (from rank 2 up to `H5S_MAX_RANK`)
- HDF5 attributes fully handled through the Storage API
- Task board is now 3D. The fourth dimension is preserved for additional task information
- Libreadconfig is now shipped with the core and has been moved to the Config API
- New advanced hooks: `NodePrepare()`, `NodeProcess()`, `LoopPrepare()` and `LoopProcess()`
- New public functions for reading and writing data: `ReadData()`, `WriteData()`, `ReadAttr()`, `WriteAttr()`
- Corresponding generic-type macros for reading and writing data: `MReadData()`, `MWriteData()`, `MReadAttr()`, `MWriteAttr()`
- Build system improvements
- Several bug fixes and minor improvements
- Documentation updates and new examples (e.g. Fortran)
- Examples are installed in `share/mechanic/examples`
- The MPI blocking communication mode (`--blocking` option)
- New hooks: `DatasetPrepare()` and `DatasetProcess()`, which may be used for advanced operations on HDF5 datasets (such as attributes)
- New options: `xmin`, `xmax`, `ymin`, `ymax`, `xorigin` and `yorigin`, common for many scientific modules
- Better memory management and a smaller memory footprint
- Pool based simulations
- MPI Nonblocking communication mode
- Support for different datasets (and memory banks)
- Support for module configuration (with command line)
- Support for different storage types (e.g. for further postprocessing in Gnuplot, Matplotlib, etc.)