diff --git a/CHANGELOG.md b/CHANGELOG.md index 1b8c2427d..ca39be69d 100644 --- a/CHANGELOG.md +++ b/CHANGELOG.md @@ -4,6 +4,15 @@ Notable changes to Conduit are documented in this file. The format is based on [Keep a Changelog](https://keepachangelog.com/en/1.0.0/), and this project aspires to adhere to [Semantic Versioning](https://semver.org/spec/v2.0.0.html). +## Unreleased + +### Added + +#### Relay +- Added Relay HDF5 support for reading and writing to an HDF5 dataset with offset. +- Added `conduit::relay::io::hdf5::read_info` which allows you to obtain metadata from an HDF5 file. + + ## [0.7.1] - Released 2021-02-11 ### Fixed @@ -18,17 +27,17 @@ and this project aspires to adhere to [Semantic Versioning](https://semver.org/s ### Changed -#### General +#### General - Conduit now requires C++11 support. - Python Node repr string construction now uses `Node.to_summary_string()` ### Added - CMake: Added extra check for include dir vs fully resolved hdf5 path. -#### General +#### General - Added a builtin sandboxed header-only version of fmt. The namespace and directory paths were changed to `conduit_fmt` to avoid potential symbol collisions with other codes using fmt. Downstream software can use by including `conduit_fmt/conduit_fmt.h`. - Added support for using C++11 initializer lists to set Node and DataArray values from numeric arrays. See C++ tutorial docs (https://llnl-conduit.readthedocs.io/en/latest/tutorial_cpp_numeric.html#c-11-initializer-lists) for more details. -- Added a Node::describe() method. This method creates a new node that mirrors the current Node, however each leaf is replaced by summary stats and a truncated display of the values. For use cases with large leaves, printing the describe() output Node is much more helpful for debugging and understanding vs wall of text from other to_string() methods. +- Added a Node::describe() method. This method creates a new node that mirrors the current Node, however each leaf is replaced by summary stats and a truncated display of the values. For use cases with large leaves, printing the describe() output Node is much more helpful for debugging and understanding vs wall of text from other to_string() methods. - Added conduit::utils::format methods. These methods use fmt to format strings that include fmt style patterns. The formatting arguments are passed as a conduit::Node tree. The `args` case allows named arguments (args passed as object) or ordered args (args passed as list). The `maps` case also supports named or ordered args and works in conjunction with a `map_index`. The `map_index` is used to fetch a value from an array, or list of strings, which is then passed to fmt. The `maps` style of indexed indirection supports generating path strings for non-trivial domain partition mappings in Blueprint. This functionality is also available in Python, via the `conduit.utils.format` method. - Added `DataArray::fill` method, which set all elements of a DataArray to a given value. - Added `Node::to_summary_string` methods, which allow you to create truncated strings that describe a node tree, control the max number of children and max number of elements shown. @@ -69,7 +78,7 @@ and this project aspires to adhere to [Semantic Versioning](https://semver.org/s ### Added -#### General +#### General - Added support for children with names that include `/`. Since slashes are part of Conduit's hierarchical path mechanism, you must use explicit methods (add_child(), child(), etc) to create and access children with these types of names. 
These names are also supported in all basic i/o cases (JSON, YAML, Conduit Binary). - Added Node::child and Schema::child methods, which provide access to existing children by name. - Added Node::fetch_existing and Schema::fetch_existing methods, which provide access to existing paths or error when given a bad path. @@ -106,7 +115,7 @@ and this project aspires to adhere to [Semantic Versioning](https://semver.org/s ### Fixed -#### General +#### General - Updated to newer BLT to resolve BLT/FindMPI issues with rpath linking commands when using OpenMPI. - Fixed internal object name string for the Python Iterator object. It used to report `Schema`, which triggered both puzzling and concerned emotions. - Fixed a bug with `Node.set` in the Python API that undermined setting NumPy arrays with sliced views and complex striding. General slices should now work with `set`. No changes to the `set_external` case, which requires 1-D effective striding and throws an exception when more complex strides are presented. @@ -120,9 +129,9 @@ and this project aspires to adhere to [Semantic Versioning](https://semver.org/s ### Changed -#### General +#### General - Conduit's main git branch was renamed from `master` to `develop`. To allow time for folks to migrate, the `master` branch is active but frozen and will be removed during the `0.7.0` release. -- We recommend a C++11 (or newer) compiler, support for older C++ standards is deprecated and will be removed in a future release. +- We recommend a C++11 (or newer) compiler, support for older C++ standards is deprecated and will be removed in a future release. - Node::fetch_child and Schema::fetch_child are deprecated in favor of the more clearly named Node::fetch_existing and Schema::fetch_existing. fetch_child variants still exist, but will be removed in a future release. - Python str() methods for Node, Schema, and DataType now use their new to_string() methods. - DataArray::to_json(std::ostring &) is deprecated in favor DataArray::to_json_stream. to_json(std::ostring &) will be removed in a future release. @@ -153,14 +162,14 @@ and this project aspires to adhere to [Semantic Versioning](https://semver.org/s ### Added -#### General +#### General - Added Node::parse() method, (C++, Python and Fortran) which supports common json and yaml parsing use cases without creating a generator instance. - Use FOLDER target property to group targets for Visual Studio - Added Node load(), and save() support to the C and Fortran APIs ### Changed -#### General +#### General - Node::load() and Node::save() now auto detect which protocol to use when protocol argument is an empty string - Changed Node::load() and Node::save() default protocol value to empty (default now is to auto detect) - Changed Python linking strategy to defer linking for our compiler modules @@ -169,7 +178,7 @@ and this project aspires to adhere to [Semantic Versioning](https://semver.org/s ### Fixed -#### General +#### General - Fixed install paths for CMake exported target files to follow standard CMake find_package() search conventions. Also perserved duplicate files to support old import path structure for this release. - python: Fixed Node.set_external() to accept conduit nodes as well as numpy arrays - Fixed dll install locations for Windows @@ -179,7 +188,7 @@ and this project aspires to adhere to [Semantic Versioning](https://semver.org/s ### Added -#### General +#### General - Added support to parse YAML into Conduit Nodes and to create YAML from Conduit Nodes. 
Support closely follows the "json" protocol, making similar choices related to promoting YAML string leaves to concrete data types. - Added several more Conduit Node methods to the C and Fortran APIs. Additions are enumerated here: https://github.com/LLNL/conduit/pull/426 - Added Node set support for Python Tuples and Lists with numeric and string entires @@ -188,19 +197,19 @@ and this project aspires to adhere to [Semantic Versioning](https://semver.org/s #### Blueprint -- Added support for a "zfparray" blueprint that holds ZFP compressed array data. +- Added support for a "zfparray" blueprint that holds ZFP compressed array data. - Added the the "specsets" top-level section to the Blueprint schema, which can be used to represent multi-dimensional per-material quantities (most commonly per-material atomic composition fractions). - Added explicit topological data generation functions for points, lines, and faces - Added derived topology generation functions for element centroids, sides, and corners - Added the basic example function to the conduit.mesh.blueprint.examples module #### Relay -- Added optional ZFP support to relay, that enables wrapping and unwraping zfp arrays into conduit Nodes. +- Added optional ZFP support to relay, that enables wrapping and unwraping zfp arrays into conduit Nodes. - Extended relay HDF5 I/O support to read a wider range of HDF5 string representations including H5T_VARIABLE strings. ### Changed -#### General +#### General - Conduit's automatic build process (uberenv + spack) now defaults to using Python 3 - Improved CMake export logic to make it easier to find and use Conduit install in a CMake-based build system. (See using-with-cmake example for new recipe) @@ -212,7 +221,7 @@ and this project aspires to adhere to [Semantic Versioning](https://semver.org/s ### Fixed -#### General +#### General - Fixed bug that caused memory access after free during Node destruction #### Relay @@ -227,11 +236,11 @@ and this project aspires to adhere to [Semantic Versioning](https://semver.org/s #### General - Added Generic IO Handle class (relay::io::IOHandle) with C++ and Python APIs, tests, and docs. -- Added ``rename_child`` method to Schema and Node +- Added ``rename_child`` method to Schema and Node - Added generation and install of conduit_config.mk for using-with-make example - Added datatype helpers for long long and long double - Added error for empty path fetch -- Added C functions for setting error, warning, info handlers. +- Added C functions for setting error, warning, info handlers. - Added limited set of C bindings for DataType - Added C bindings for relay IO - Added several more functions to conduit node python interfaces @@ -250,7 +259,7 @@ and this project aspires to adhere to [Semantic Versioning](https://semver.org/s - Added a relay::mpi::io library that mirrors the API of relay::io, except that all functions take an MPI communicator. The functions are implemented in parallel for the ADIOS protocol. For other protocols, they will behave the same as the serial functions in relay::io. For the ADIOS protocol, the save() and save_merged() functions operate collectively within a communicator to enable multiple MPI ranks to save data to a single file as separate "domains". - Added an add_time_step() function to that lets the caller append data collectively to an existing ADIOS file - Added a function to query the number of time steps and the number of domains in a ADIOS file. -- Added versions of save and save_merged that take an options node. 
+- Added versions of save and save_merged that take an options node. - Added C API for new save, save_merged functions. - Added method to list an HDF5 group's child names - Added save and append methods to the HDF5 I/O interface @@ -259,7 +268,7 @@ and this project aspires to adhere to [Semantic Versioning](https://semver.org/s ### Changed -#### General +#### General - Changed mapping of c types to bit-width style to be compatible with C++11 std bit-width types when C++11 is enabled - Several improvements to uberenv, our automated build process, and building directions @@ -268,16 +277,16 @@ and this project aspires to adhere to [Semantic Versioning](https://semver.org/s #### Relay - Improvements to the Silo mesh writer -- Refactor to support both relay::io and relay::mpi::io namespaces. +- Refactor to support both relay::io and relay::mpi::io namespaces. - Refactor to add support for steps and domains to I/O interfaces -- Changed to only use ``libver latest`` setting for for hdf5 1.8 to minimize compatibility issues +- Changed to only use ``libver latest`` setting for for hdf5 1.8 to minimize compatibility issues ### Fixed -#### General +#### General - Fixed bugs with std::vector gap methods -- Fixed A few C function names in conduit_node.h +- Fixed A few C function names in conduit_node.h - Fixed bug in python that was requesting unsigned array for signed cases - Fixed issue with Node::diff failing for string data with offsets - Fixes for building on BlueOS with the xl compiler @@ -314,7 +323,7 @@ and this project aspires to adhere to [Semantic Versioning](https://semver.org/s - Updated uberenv to use a newer spack and removed several custom packages - C++ ``Node::set`` methods now take const pointers for data -### Fixed +### Fixed #### General @@ -333,7 +342,7 @@ and this project aspires to adhere to [Semantic Versioning](https://semver.org/s ## [0.3.0] - Released 2017-08-21 -### Added +### Added #### General @@ -366,7 +375,7 @@ and this project aspires to adhere to [Semantic Versioning](https://semver.org/s #### General -- Fixed memory leaks in *conduit* +- Fixed memory leaks in *conduit* - Bug fixes to support building on Visual Studio 2013 - Bug fixes for `conduit::Nodes` in the List Role @@ -380,7 +389,7 @@ and this project aspires to adhere to [Semantic Versioning](https://semver.org/s #### Blueprint -- Added support to the blueprint python module for the mesh and mcarray protocol methods +- Added support to the blueprint python module for the mesh and mcarray protocol methods - Added stand alone blueprint verify executable ### Changed @@ -414,7 +423,7 @@ and this project aspires to adhere to [Semantic Versioning](https://semver.org/s #### General - Added const access to conduit::Node's children and a new NodeConstIterator -- Added support for building on Windows +- Added support for building on Windows - Added more Python, C, and Fortran API support #### Blueprint @@ -433,20 +442,20 @@ and this project aspires to adhere to [Semantic Versioning](https://semver.org/s #### General -- Changes to clarify concepts in the conduit::Node API +- Changes to clarify concepts in the conduit::Node API - Improved unit test coverage -- Renamed source and header files for clarity and to avoid potential conflicts with other projects +- Renamed source and header files for clarity and to avoid potential conflicts with other projects #### Relay -- Changed I/O protocol string names for clarity -- Refactored the relay::WebServer and the Conduit Node Viewer application +- Changed I/O protocol string names 
for clarity +- Refactored the relay::WebServer and the Conduit Node Viewer application ### Fixed #### General -- Resolved several bugs across libraries +- Resolved several bugs across libraries - Resolved compiler warnings and memory leaks ## 0.1.0 - Released 2016-03-30 diff --git a/src/docs/sphinx/index.rst b/src/docs/sphinx/index.rst index c5cecc94e..7294f49cf 100644 --- a/src/docs/sphinx/index.rst +++ b/src/docs/sphinx/index.rst @@ -115,6 +115,7 @@ Contributors - Arlie Capps (LLNL) - Mark Miller (LLNL) - Todd Gamblin (LLNL) +- Kevin Huynh (LLNL) - Brad Whitlock (Intelligent Light) - George Aspesi (Harvey Mudd) - Justin Bai (Harvey Mudd) diff --git a/src/libs/relay/conduit_relay_io_hdf5.cpp b/src/libs/relay/conduit_relay_io_hdf5.cpp index cff8a8653..8c0827d27 100644 --- a/src/libs/relay/conduit_relay_io_hdf5.cpp +++ b/src/libs/relay/conduit_relay_io_hdf5.cpp @@ -124,12 +124,12 @@ namespace io static std::string conduit_hdf5_list_attr_name = "__conduit_list"; - + //----------------------------------------------------------------------------- // Private class used to hold options that control hdf5 i/o params. -// +// // These values are read by about(), and are set by io::hdf5_set_options() -// +// // //----------------------------------------------------------------------------- @@ -147,15 +147,15 @@ class HDF5Options static int compression_level; public: - + //------------------------------------------------------------------------ static void set(const Node &opts) { - + if(opts.has_child("compact_storage")) { const Node &compact = opts["compact_storage"]; - + if(compact.has_child("enabled")) { std::string enabled = compact["enabled"].as_string(); @@ -190,23 +190,23 @@ class HDF5Options { chunking_enabled = true; } - + } - + if(chunking.has_child("threshold")) { chunk_threshold = chunking["threshold"].to_value(); } - + if(chunking.has_child("chunk_size")) { chunk_size = chunking["chunk_size"].to_value(); } - + if(chunking.has_child("compression")) { const Node &comp = chunking["compression"]; - + if(comp.has_child("method")) { compression_method = comp["method"].as_string(); @@ -232,7 +232,7 @@ class HDF5Options { opts["compact_storage/enabled"] = "false"; } - + opts["compact_storage/threshold"] = compact_storage_threshold; if(chunking_enabled) @@ -243,7 +243,7 @@ class HDF5Options { opts["chunking/enabled"] = "false"; } - + opts["chunking/threshold"] = chunk_threshold; opts["chunking/chunk_size"] = chunk_size; @@ -261,7 +261,7 @@ bool HDF5Options::compact_storage_enabled = true; int HDF5Options::compact_storage_threshold = 1024; bool HDF5Options::chunking_enabled = true; -int HDF5Options::chunk_size = 1000000; // 1 mb +int HDF5Options::chunk_size = 1000000; // 1 mb int HDF5Options::chunk_threshold = 2000000; // 2 mb std::string HDF5Options::compression_method = "gzip"; @@ -284,9 +284,9 @@ hdf5_options(Node &opts) //----------------------------------------------------------------------------- // Private class used to suppress HDF5 error messages. -// -// Creating an instance of this class will disable the current HDF5 error -// callbacks. The default HDF5 callback print error messages we probing +// +// Creating an instance of this class will disable the current HDF5 error +// callbacks. The default HDF5 callback print error messages we probing // properties of the HDF5 tree. When the instance is destroyed, the previous // error state is restored. 
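As background for the class that follows: the standard HDF5 idiom for temporarily silencing the default error printer (which prints error messages while we probe properties of the HDF5 tree) is to save the current handler with H5Eget_auto, install a NULL handler with H5Eset_auto, and restore the saved handler afterward. The sketch below is illustrative only and mirrors what the HDF5ErrorStackSupressor class defined next does; the function name probe_quietly is invented for the example.

```cpp
// Illustrative sketch of scoped HDF5 error-message suppression.
// This mirrors the HDF5ErrorStackSupressor class below; it is not
// part of the diff itself.
#include <hdf5.h>

void probe_quietly(hid_t file_id)
{
    H5E_auto2_t old_func;          // saved error callback
    void       *old_client_data;   // saved callback data

    // save the current error handler, then disable error printing
    H5Eget_auto(H5E_DEFAULT, &old_func, &old_client_data);
    H5Eset_auto(H5E_DEFAULT, NULL, NULL);

    // ... probe objects that may legitimately fail (e.g. H5Oopen on a
    // path that may not exist) without spewing error text ...

    // restore the previous error state
    H5Eset_auto(H5E_DEFAULT, old_func, old_client_data);
}
```

Wrapping the save/restore pair in a constructor/destructor, as the class below does, guarantees the handler is restored on every exit path from the probing code.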
//----------------------------------------------------------------------------- @@ -299,7 +299,7 @@ class HDF5ErrorStackSupressor { disable_hdf5_error_func(); } - + ~HDF5ErrorStackSupressor() { restore_hdf5_error_func(); @@ -307,14 +307,14 @@ class HDF5ErrorStackSupressor private: // saves current error func. - // for hdf5's default setup this disable printed error messages + // for hdf5's default setup this disable printed error messages // that occur when we are probing properties of the hdf5 tree void disable_hdf5_error_func() { H5Eget_auto(H5E_DEFAULT, - &herr_func, + &herr_func, &herr_func_client_data); - + H5Eset_auto(H5E_DEFAULT, NULL, NULL); @@ -331,7 +331,7 @@ class HDF5ErrorStackSupressor // callback used for hdf5 error interface H5E_auto2_t herr_func; // data container for hdf5 error interface callback - void *herr_func_client_data; + void *herr_func_client_data; }; //----------------------------------------------------------------------------- @@ -339,7 +339,7 @@ class HDF5ErrorStackSupressor //----------------------------------------------------------------------------- //----------------------------------------------------------------------------- -// helpers for finding hdf5 object filename and constructing ref paths for +// helpers for finding hdf5 object filename and constructing ref paths for // errors //----------------------------------------------------------------------------- void hdf5_filename_from_hdf5_obj_id(hid_t hdf5_id, @@ -354,7 +354,7 @@ void hdf5_ref_path_with_filename(hid_t hdf5_id, //----------------------------------------------------------------------------- //----------------------------------------------------------------------------- -// helpers for checking if compatible +// helpers for checking if compatible //----------------------------------------------------------------------------- //----------------------------------------------------------------------------- @@ -364,24 +364,28 @@ void hdf5_ref_path_with_filename(hid_t hdf5_id, bool check_if_conduit_leaf_is_compatible_with_hdf5_obj(const DataType &dtype, const std::string &ref_path, hid_t hdf5_id, + const Node &opts, std::string &incompat_details); //----------------------------------------------------------------------------- bool check_if_conduit_object_is_compatible_with_hdf5_tree(const Node &node, const std::string &ref_path, hid_t hdf5_id, + const Node &opts, std::string &incompat_details); //----------------------------------------------------------------------------- bool check_if_conduit_node_is_compatible_with_hdf5_tree(const Node &node, const std::string &ref_path, hid_t hdf5_id, + const Node &opts, std::string &incompat_details); //----------------------------------------------------------------------------- bool check_if_conduit_list_is_compatible_with_hdf5_tree(const Node &node, const std::string &ref_path, hid_t hdf5_id, + const Node &opts, std::string &incompat_details); //----------------------------------------------------------------------------- @@ -395,7 +399,8 @@ bool check_if_hdf5_group_has_conduit_list_attribute(hid_t hdf5_group_id); hid_t create_hdf5_dataset_for_conduit_leaf(const DataType &dt, const std::string &ref_path, hid_t hdf5_group_id, - const std::string &hdf5_dset_name); + const std::string &hdf5_dset_name, + bool extendible); //----------------------------------------------------------------------------- hid_t create_hdf5_group_for_conduit_node(const Node &node, @@ -406,18 +411,21 @@ hid_t create_hdf5_group_for_conduit_node(const Node &node, 
//----------------------------------------------------------------------------- void write_conduit_leaf_to_hdf5_dataset(const Node &node, const std::string &ref_path, - hid_t hdf5_dset_id); + hid_t hdf5_dset_id, + const Node &opts); //----------------------------------------------------------------------------- void write_conduit_leaf_to_hdf5_group(const Node &node, const std::string &ref_path, hid_t hdf5_group_id, - const std::string &hdf5_dset_name); + const std::string &hdf5_dset_name, + const Node &opts); //----------------------------------------------------------------------------- void write_conduit_node_children_to_hdf5_group(const Node &node, const std::string &ref_path, - hid_t hdf5_group_id); + hid_t hdf5_group_id, + const Node &opts); //----------------------------------------------------------------------------- void write_conduit_hdf5_list_attribute(hid_t hdf5_group_id, @@ -434,23 +442,29 @@ void remove_conduit_hdf5_list_attribute(hid_t hdf5_group_id, //----------------------------------------------------------------------------- void read_hdf5_dataset_into_conduit_node(hid_t hdf5_dset_id, const std::string &ref_path, + bool only_get_metadata, + const Node &opts, Node &dest); //----------------------------------------------------------------------------- void read_hdf5_group_into_conduit_node(hid_t hdf5_group_id, const std::string &ref_path, + bool only_get_metadata, + const Node &opts, Node &dest); //----------------------------------------------------------------------------- void read_hdf5_tree_into_conduit_node(hid_t hdf5_id, const std::string &ref_path, + bool only_get_metadata, + const Node &opts, Node &dest); //----------------------------------------------------------------------------- -// helper used to properly create a new ref_path for a child +// helper used to properly create a new ref_path for a child std::string join_ref_paths(const std::string &parent, const std::string &child) { @@ -481,7 +495,7 @@ conduit_dtype_to_hdf5_dtype(const DataType &dt, { hid_t res = -1; - // // This code path enables writing strings in a way that is friendlier + // // This code path enables writing strings in a way that is friendlier // // to hdf5 command line tools like hd5dump and h5ls. 
However // // using this path we *cannot* compress that string data, so // // is currently disabled @@ -527,27 +541,27 @@ conduit_dtype_to_hdf5_dtype(const DataType &dt, case DataType::INT16_ID: res = H5T_STD_I16LE; break; case DataType::INT32_ID: res = H5T_STD_I32LE; break; case DataType::INT64_ID: res = H5T_STD_I64LE; break; - + case DataType::UINT8_ID: res = H5T_STD_U8LE; break; case DataType::UINT16_ID: res = H5T_STD_U16LE; break; case DataType::UINT32_ID: res = H5T_STD_U32LE; break; case DataType::UINT64_ID: res = H5T_STD_U64LE; break; - + case DataType::FLOAT32_ID: res = H5T_IEEE_F32LE; break; case DataType::FLOAT64_ID: res = H5T_IEEE_F64LE; break; - - case DataType::CHAR8_STR_ID: + + case DataType::CHAR8_STR_ID: CONDUIT_HDF5_ERROR(ref_path, "conduit::DataType to HDF5 Leaf DataType " << "Conversion:" - << dt.to_json() + << dt.to_json() << " needs to be handled with string logic"); break; default: CONDUIT_HDF5_ERROR(ref_path, "conduit::DataType to HDF5 Leaf DataType " << "Conversion:" - << dt.to_json() + << dt.to_json() << " is not a leaf data type"); }; } @@ -559,27 +573,27 @@ conduit_dtype_to_hdf5_dtype(const DataType &dt, case DataType::INT16_ID: res = H5T_STD_I16BE; break; case DataType::INT32_ID: res = H5T_STD_I32BE; break; case DataType::INT64_ID: res = H5T_STD_I64BE; break; - + case DataType::UINT8_ID: res = H5T_STD_U8BE; break; case DataType::UINT16_ID: res = H5T_STD_U16BE; break; case DataType::UINT32_ID: res = H5T_STD_U32BE; break; case DataType::UINT64_ID: res = H5T_STD_U64BE; break; - + case DataType::FLOAT32_ID: res = H5T_IEEE_F32BE; break; case DataType::FLOAT64_ID: res = H5T_IEEE_F64BE; break; - - case DataType::CHAR8_STR_ID: + + case DataType::CHAR8_STR_ID: CONDUIT_HDF5_ERROR(ref_path, "conduit::DataType to HDF5 Leaf DataType " << "Conversion:" - << dt.to_json() + << dt.to_json() << " needs to be handled with string logic"); break; default: CONDUIT_HDF5_ERROR(ref_path, "conduit::DataType to HDF5 Leaf DataType " << "Conversion:" - << dt.to_json() + << dt.to_json() << " is not a leaf data type"); }; } @@ -598,12 +612,12 @@ conduit_dtype_to_hdf5_dtype_cleanup(hid_t hdf5_dtype_id, { // NOTE: This cleanup won't be triggered when we use thee // based H5T_C_S1 with a data space that encodes # of elements - // (Our current path, given our logic to encode string size in the + // (Our current path, given our logic to encode string size in the // hdf5 type is disabled ) - - // if this is a string using a custom type we need to cleanup + + // if this is a string using a custom type we need to cleanup // the conduit_dtype_to_hdf5_dtype result - if( (! H5Tequal(hdf5_dtype_id, H5T_C_S1) ) && + if( (! 
H5Tequal(hdf5_dtype_id, H5T_C_S1) ) && (H5Tget_class(hdf5_dtype_id) == H5T_STRING ) ) { CONDUIT_CHECK_HDF5_ERROR_WITH_REF_PATH(H5Tclose(hdf5_dtype_id), @@ -615,14 +629,14 @@ conduit_dtype_to_hdf5_dtype_cleanup(hid_t hdf5_dtype_id, //----------------------------------------------------------------------------- -DataType +DataType hdf5_dtype_to_conduit_dtype(hid_t hdf5_dtype_id, index_t num_elems, const std::string &ref_path) { // TODO: there may be a more straight forward way to do this using // hdf5's data type introspection methods - + DataType res; //----------------------------------------------- // signed ints @@ -750,7 +764,7 @@ hdf5_dtype_to_conduit_dtype(hid_t hdf5_dtype_id, // extended string reps else if( H5Tget_class(hdf5_dtype_id) == H5T_STRING ) { - // for strings of this type, the length + // for strings of this type, the length // is encoded in the hdf5 type not the hdf5 data space index_t hdf5_strlen = H5Tget_size(hdf5_dtype_id); // check for variable type first @@ -825,15 +839,16 @@ bool check_if_conduit_leaf_is_compatible_with_hdf5_obj(const DataType &dtype, const std::string &ref_path, hid_t hdf5_id, + const Node& opts, std::string &incompat_details) { bool res = true; H5O_info_t h5_obj_info; herr_t h5_status = H5Oget_info(hdf5_id, &h5_obj_info); - + // make sure it is a dataset ... - if( CONDUIT_HDF5_STATUS_OK(h5_status) && + if( CONDUIT_HDF5_STATUS_OK(h5_status) && ( h5_obj_info.type == H5O_TYPE_DATASET ) ) { // get the hdf5 dataspace for the passed hdf5 obj @@ -841,7 +856,7 @@ check_if_conduit_leaf_is_compatible_with_hdf5_obj(const DataType &dtype, if( H5Sget_simple_extent_type(h5_test_dspace) == H5S_NULL ) { - // a dataset with H5S_NULL data space is only compatible with + // a dataset with H5S_NULL data space is only compatible with // conduit empty if(!dtype.is_empty()) { @@ -862,21 +877,23 @@ check_if_conduit_leaf_is_compatible_with_hdf5_obj(const DataType &dtype, // get the hdf5 datatype that matchs the conduit dtype hid_t h5_dtype = conduit_dtype_to_hdf5_dtype(dtype, ref_path); - + // get the hdf5 datatype for the passed hdf5 obj hid_t h5_test_dtype = H5Dget_type(hdf5_id); - + // we will check the 1d-properties of the hdf5 dataspace hssize_t h5_test_num_ele = H5Sget_simple_extent_npoints(h5_test_dspace); - - + + hsize_t dataset_max_dims[1]; + H5Sget_simple_extent_dims(h5_test_dspace, NULL, dataset_max_dims); + // string case is special, check it first - + // if the dataset in the file is a custom string type // check the type's size vs the # of elements if( ( ! H5Tequal(h5_test_dtype, H5T_C_S1) && ( H5Tget_class(h5_test_dtype) == H5T_STRING ) && - ( H5Tget_class(h5_dtype) == H5T_STRING ) ) && + ( H5Tget_class(h5_dtype) == H5T_STRING ) ) && // if not shorted out, we have a string w/ custom type // check length to see if compat // note: both hdf5 and conduit dtypes include null term in string size @@ -886,7 +903,7 @@ check_if_conduit_leaf_is_compatible_with_hdf5_obj(const DataType &dtype, oss << "Conduit Node (string leaf) at path '" << ref_path << "'" << " is not compatible with given HDF5 Dataset at path" << " '" << ref_path << "'" - << "\nConduit leaf String Node length (" + << "\nConduit leaf String Node length (" << dtype.number_of_elements() << ")" << " != HDF5 Dataset size (" << H5Tget_size(h5_test_dtype) << ")"; @@ -894,15 +911,29 @@ check_if_conduit_leaf_is_compatible_with_hdf5_obj(const DataType &dtype, res = false; } - else if( ! ( (H5Tequal(h5_dtype, h5_test_dtype) > 0) && - (dtype.number_of_elements() == h5_test_num_ele) ) ) + else if( ! 
(H5Tequal(h5_dtype, h5_test_dtype) > 0) ) { + + std::ostringstream oss; + oss << "Conduit Node (leaf) at path '" << ref_path << "'" + << " is not compatible with given HDF5 Dataset at path" + << " '" << ref_path << "'"; + + incompat_details = oss.str(); + + res = false; + } + else if( dataset_max_dims[0] != H5S_UNLIMITED && !opts.has_child("offset") + && !opts.has_child("stride") && dtype.number_of_elements() + != h5_test_num_ele ) + { + std::ostringstream oss; oss << "Conduit Node (leaf) at path '" << ref_path << "'" << " is not compatible with given HDF5 Dataset at path" << " '" << ref_path << "'" - << "\nConduit leaf Node number of elements (" - << dtype.number_of_elements() << ")" + << "\nConduit leaf Node number of elements (" + << dtype.number_of_elements() << " " << h5_test_num_ele << ")" << " != HDF5 Dataset size (" << H5Tget_size(h5_test_dtype) << ")"; incompat_details = oss.str(); @@ -946,15 +977,16 @@ bool check_if_conduit_object_is_compatible_with_hdf5_tree(const Node &node, const std::string &ref_path, hid_t hdf5_id, - std::string &incompat_details) + const Node &opts, + std::string &incompat_details) { bool res = true; - - // make sure we have a group ... - + + // make sure we have a group ... + H5O_info_t h5_obj_info; herr_t h5_status = H5Oget_info(hdf5_id, &h5_obj_info); - + // make sure it is a group ... if( CONDUIT_HDF5_STATUS_OK(h5_status) && (h5_obj_info.type == H5O_TYPE_GROUP) ) @@ -966,29 +998,30 @@ check_if_conduit_object_is_compatible_with_hdf5_tree(const Node &node, { const Node &child = itr.next(); - // check if the HDF5 group has child with same name + // check if the HDF5 group has child with same name // as the node's child - + hid_t h5_child_obj = H5Oopen(hdf5_id, itr.name().c_str(), H5P_DEFAULT); - + std::string chld_ref_path = join_ref_paths(ref_path,itr.name()); if( CONDUIT_HDF5_VALID_ID(h5_child_obj) ) { - // if a child does exist, we need to make sure the child is + // if a child does exist, we need to make sure the child is // compatible with the conduit node res = check_if_conduit_node_is_compatible_with_hdf5_tree(child, chld_ref_path, h5_child_obj, + opts, incompat_details); - + CONDUIT_CHECK_HDF5_ERROR_WITH_FILE_AND_REF_PATH(H5Oclose(h5_child_obj), hdf5_id, ref_path, "Failed to close HDF5 Object: " << h5_child_obj); } - // no child exists with this name, we are ok (it can be created + // no child exists with this name, we are ok (it can be created // to match) check the next child } } @@ -1005,7 +1038,7 @@ check_if_conduit_object_is_compatible_with_hdf5_tree(const Node &node, res = false; } - + return res; } @@ -1014,12 +1047,13 @@ bool check_if_conduit_list_is_compatible_with_hdf5_tree(const Node &node, const std::string &ref_path, hid_t hdf5_id, + const Node &opts, std::string &incompat_details) { bool res = true; - - // make sure we have a group ... - + + // make sure we have a group ... + H5O_info_t h5_obj_info; herr_t h5_status = H5Oget_info(hdf5_id, &h5_obj_info); @@ -1027,9 +1061,9 @@ check_if_conduit_list_is_compatible_with_hdf5_tree(const Node &node, if( CONDUIT_HDF5_STATUS_OK(h5_status) && (h5_obj_info.type == H5O_TYPE_GROUP) ) { - // TODO: should we force the group should have our att that signals a + // TODO: should we force the group should have our att that signals a // list ? 
- + // if(!check_if_hdf5_group_has_conduit_list_attribute(hdf5_id,ref_path)) // { // // we don't have a list @@ -1061,23 +1095,24 @@ check_if_conduit_list_is_compatible_with_hdf5_tree(const Node &node, H5_ITER_INC, itr.index(), H5P_DEFAULT); - + std::string chld_ref_path = join_ref_paths(ref_path,itr.name()); if( CONDUIT_HDF5_VALID_ID(h5_child_obj) ) { - // if a child does exist, we need to make sure the child is + // if a child does exist, we need to make sure the child is // compatible with the conduit node res = check_if_conduit_node_is_compatible_with_hdf5_tree(child, chld_ref_path, h5_child_obj, + opts, incompat_details); - + CONDUIT_CHECK_HDF5_ERROR_WITH_FILE_AND_REF_PATH(H5Oclose(h5_child_obj), hdf5_id, ref_path, "Failed to close HDF5 Object: " << h5_child_obj); } - // no child exists with this index, we are ok (it can be created + // no child exists with this index, we are ok (it can be created // to match) } } @@ -1094,7 +1129,7 @@ check_if_conduit_list_is_compatible_with_hdf5_tree(const Node &node, res = false; } - + return res; } @@ -1105,6 +1140,7 @@ bool check_if_conduit_node_is_compatible_with_hdf5_tree(const Node &node, const std::string &ref_path, hid_t hdf5_id, + const Node &opts, std::string &incompat_details) { bool res = true; @@ -1116,6 +1152,7 @@ check_if_conduit_node_is_compatible_with_hdf5_tree(const Node &node, res = check_if_conduit_leaf_is_compatible_with_hdf5_obj(dt, ref_path, hdf5_id, + opts, incompat_details); } else if(dt.is_object()) @@ -1123,6 +1160,7 @@ check_if_conduit_node_is_compatible_with_hdf5_tree(const Node &node, res = check_if_conduit_object_is_compatible_with_hdf5_tree(node, ref_path, hdf5_id, + opts, incompat_details); } else if(dt.is_list()) @@ -1130,6 +1168,7 @@ check_if_conduit_node_is_compatible_with_hdf5_tree(const Node &node, res = check_if_conduit_list_is_compatible_with_hdf5_tree(node, ref_path, hdf5_id, + opts, incompat_details); } else // not supported @@ -1177,7 +1216,7 @@ create_hdf5_compact_plist_for_conduit_leaf() hid_t h5_cprops_id = H5Pcreate(H5P_DATASET_CREATE); H5Pset_layout(h5_cprops_id,H5D_COMPACT); - + return h5_cprops_id; } @@ -1189,10 +1228,10 @@ create_hdf5_chunked_plist_for_conduit_leaf(const DataType &dtype) hid_t h5_cprops_id = H5Pcreate(H5P_DATASET_CREATE); // Turn on chunking - - // hdf5 sets chunking in elements, not bytes, + + // hdf5 sets chunking in elements, not bytes, // our options are in bytes, so convert to # of elems - hsize_t h5_chunk_size = (hsize_t) (HDF5Options::chunk_size / dtype.element_bytes()); + hsize_t h5_chunk_size = (hsize_t) (HDF5Options::chunk_size / dtype.element_bytes()); H5Pset_chunk(h5_cprops_id, 1, &h5_chunk_size); @@ -1212,26 +1251,35 @@ hid_t create_hdf5_dataset_for_conduit_leaf(const DataType &dtype, const std::string &ref_path, hid_t hdf5_group_id, - const std::string &hdf5_dset_name) + const std::string &hdf5_dset_name, + bool extendible) { hid_t res = -1; - + hid_t h5_dtype = conduit_dtype_to_hdf5_dtype(dtype,ref_path); hsize_t num_eles = (hsize_t) dtype.number_of_elements(); hid_t h5_cprops_id = H5P_DEFAULT; - - if( HDF5Options::compact_storage_enabled && + bool unlimited_dim = false; + + if (extendible && !HDF5Options::chunking_enabled) + { + CONDUIT_ERROR("Chunking must be enabled to create an extendible array."); + } + + // if an offset is supplied, we will default to creating an extendible array + if( !extendible && HDF5Options::compact_storage_enabled && dtype.bytes_compact() <= HDF5Options::compact_storage_threshold) { h5_cprops_id = 
create_hdf5_compact_plist_for_conduit_leaf(); } - else if( HDF5Options::chunking_enabled && - dtype.bytes_compact() > HDF5Options::chunk_threshold) + else if( extendible || (HDF5Options::chunking_enabled && + dtype.bytes_compact() > HDF5Options::chunk_threshold)) { h5_cprops_id = create_hdf5_chunked_plist_for_conduit_leaf(dtype); + unlimited_dim = true; } CONDUIT_CHECK_HDF5_ERROR_WITH_FILE_AND_REF_PATH(h5_cprops_id, @@ -1240,9 +1288,9 @@ create_hdf5_dataset_for_conduit_leaf(const DataType &dtype, "Failed to create HDF5 property list"); hid_t h5_dspace_id = -1; - + // string a scalar with size embedded in type is disabled - // b/c this path undermines compression + // b/c this path undermines compression // if(dtype.is_string()) // { // h5_dspace_id = H5Screate(H5S_SCALAR); @@ -1254,9 +1302,19 @@ create_hdf5_dataset_for_conduit_leaf(const DataType &dtype, // NULL); // } - h5_dspace_id = H5Screate_simple(1, - &num_eles, - NULL); + if (unlimited_dim) + { + hsize_t unlimited_dims[1] = {H5S_UNLIMITED}; + h5_dspace_id = H5Screate_simple(1, + &num_eles, + unlimited_dims); + } + else + { + h5_dspace_id = H5Screate_simple(1, + &num_eles, + NULL); + } CONDUIT_CHECK_HDF5_ERROR_WITH_FILE_AND_REF_PATH(h5_dspace_id, @@ -1276,8 +1334,8 @@ create_hdf5_dataset_for_conduit_leaf(const DataType &dtype, CONDUIT_CHECK_HDF5_ERROR_WITH_FILE_AND_REF_PATH(res, hdf5_group_id, ref_path, - "Failed to create HDF5 Dataset " - << hdf5_group_id << " " + "Failed to create HDF5 Dataset " + << hdf5_group_id << " " << hdf5_dset_name); // cleanup if custom data type was used @@ -1290,7 +1348,7 @@ create_hdf5_dataset_for_conduit_leaf(const DataType &dtype, hdf5_group_id, ref_path, "Failed to close HDF5 compression " - "property list " + "property list " << h5_cprops_id); } @@ -1298,13 +1356,15 @@ create_hdf5_dataset_for_conduit_leaf(const DataType &dtype, CONDUIT_CHECK_HDF5_ERROR_WITH_FILE_AND_REF_PATH(H5Sclose(h5_dspace_id), hdf5_group_id, ref_path, - "Failed to close HDF5 Dataspace " + "Failed to close HDF5 Dataspace " << h5_dspace_id); return res; } + + //---------------------------------------------------------------------------// hid_t create_hdf5_dataset_for_conduit_empty(hid_t hdf5_group_id, @@ -1315,7 +1375,7 @@ create_hdf5_dataset_for_conduit_empty(hid_t hdf5_group_id, // for conduit empty, use an opaque data type with zero size; hid_t h5_dtype_id = H5Tcreate(H5T_OPAQUE, 1); hid_t h5_dspace_id = H5Screate(H5S_NULL); - + CONDUIT_CHECK_HDF5_ERROR_WITH_FILE_AND_REF_PATH(h5_dspace_id, hdf5_group_id, ref_path, @@ -1333,8 +1393,8 @@ create_hdf5_dataset_for_conduit_empty(hid_t hdf5_group_id, CONDUIT_CHECK_HDF5_ERROR_WITH_FILE_AND_REF_PATH(res, hdf5_group_id, ref_path, - "Failed to create HDF5 Dataset " - << hdf5_group_id + "Failed to create HDF5 Dataset " + << hdf5_group_id << " " << hdf5_dset_name); // close our datatype CONDUIT_CHECK_HDF5_ERROR_WITH_FILE_AND_REF_PATH(H5Tclose(h5_dtype_id), @@ -1345,7 +1405,7 @@ create_hdf5_dataset_for_conduit_empty(hid_t hdf5_group_id, CONDUIT_CHECK_HDF5_ERROR_WITH_FILE_AND_REF_PATH(H5Sclose(h5_dspace_id), hdf5_group_id, ref_path, - "Failed to close HDF5 Dataspace " + "Failed to close HDF5 Dataspace " << h5_dspace_id); return res; @@ -1367,7 +1427,7 @@ create_hdf5_group_for_conduit_node(const Node &node, << " list"); // track creation order - herr_t h5_status = H5Pset_link_creation_order(h5_gc_plist, + herr_t h5_status = H5Pset_link_creation_order(h5_gc_plist, ( H5P_CRT_ORDER_TRACKED | H5P_CRT_ORDER_INDEXED) ); CONDUIT_CHECK_HDF5_ERROR_WITH_FILE_AND_REF_PATH(h5_status, @@ -1375,7 +1435,7 @@ 
create_hdf5_group_for_conduit_node(const Node &node, ref_path, "Failed to set group link creation property"); - // prefer compact group storage + // prefer compact group storage // https://support.hdfgroup.org/HDF5/doc/RM/RM_H5G.html#Group-GroupStyles h5_status = H5Pset_link_phase_change(h5_gc_plist, 32, // max for compact storage @@ -1432,7 +1492,7 @@ create_hdf5_group_for_conduit_node(const Node &node, hdf5_parent_group_id, ref_path, "Failed to close HDF5 H5P_GROUP_CREATE " - << "property list: " + << "property list: " << h5_gc_plist); return h5_child_id; @@ -1440,38 +1500,207 @@ create_hdf5_group_for_conduit_node(const Node &node, //---------------------------------------------------------------------------// -void +void write_conduit_leaf_to_hdf5_dataset(const Node &node, const std::string &ref_path, - hid_t hdf5_dset_id) + hid_t hdf5_dset_id, + const Node &opts) { DataType dt = node.dtype(); - + hid_t h5_dtype_id = conduit_dtype_to_hdf5_dtype(dt,ref_path); herr_t h5_status = -1; - // if the node is compact, we can write directly from its data ptr - if(dt.is_compact()) + int offset = 0; + if(opts.has_child("offset")) { - // write data - h5_status = H5Dwrite(hdf5_dset_id, - h5_dtype_id, - H5S_ALL, - H5S_ALL, - H5P_DEFAULT, - node.data_ptr()); + offset = opts["offset"].to_value(); + } + + int stride = 1; + if(opts.has_child("stride")) + { + stride = opts["stride"].to_value(); + } + + if (offset < 0) + { + CONDUIT_ERROR("Offset must be non-negative."); + } + else if (stride < 1) + { + CONDUIT_ERROR("Stride must be greater than zero."); + } + + // get dimensions of dset + hid_t dataspace = H5Dget_space(hdf5_dset_id); + hsize_t dataset_dim = H5Sget_simple_extent_npoints(dataspace); + hsize_t dataset_max_dims[1]; + H5Sget_simple_extent_dims(dataspace, NULL, dataset_max_dims); + + // if the layout is fixed and no offset/stride is supplied, + // the entire array is overwriten + if (dataset_max_dims[0] != H5S_UNLIMITED && offset == 0 && stride == 1) + { + + // if the node is compact, we can write directly from its data ptr + if(dt.is_compact()) + { + // write data + h5_status = H5Dwrite(hdf5_dset_id, + h5_dtype_id, + H5S_ALL, + H5S_ALL, + H5P_DEFAULT, + node.data_ptr()); + } + else + { + // otherwise, we need to compact our data first + Node n; + node.compact_to(n); + h5_status = H5Dwrite(hdf5_dset_id, + h5_dtype_id, + H5S_ALL, + H5S_ALL, + H5P_DEFAULT, + n.data_ptr()); + } } - else + + // otherwise, any fixed datasets are converted into extendible datasets + // and the first n_elements of the entire array are overwritten. 
+ else { - + + // get the node dset size + hsize_t node_size[1] = {(hsize_t) dt.number_of_elements()}; + hid_t nodespace = H5Screate_simple(1, node_size, NULL); + + hsize_t offsets[1] = {(hsize_t) offset}; + hsize_t strides[1] = {(hsize_t) stride}; + + // convert the fixed dataset to an extendible dataset if necessary + if (dataset_max_dims[0] != H5S_UNLIMITED) + { + + if (!HDF5Options::chunking_enabled) + { + CONDUIT_ERROR("Chunking must be enabled to create an " + << "extendible array."); + } + + // read the hdf5 dataset into memory since node may only contain + // part of the hdf5 dataset + Node dset_to_node, opts_read; + read_hdf5_dataset_into_conduit_node(hdf5_dset_id, + ref_path, + false, + opts_read, + dset_to_node); + + // get dset's name + ssize_t hdf5_i_sz = H5Iget_name(hdf5_dset_id, NULL, 0 ); + std::vector<char> hdf5_i_buff(hdf5_i_sz+1, 0); + H5Iget_name(hdf5_dset_id, &hdf5_i_buff[0], hdf5_i_sz+1); + std::string hdf5_dset_path = std::string(&hdf5_i_buff[0]); + + // get the hdf5 file ID containing dset + hid_t hdf5_id = H5Iget_file_id(hdf5_dset_id); + + // get dset's name and parent group name + std::string hdf5_dset_parent_name; + std::string hdf5_dset_name; + conduit::utils::rsplit_file_path(hdf5_dset_path, + hdf5_dset_name, + hdf5_dset_parent_name); + + if(hdf5_dset_parent_name.size() == 0) + { + hdf5_dset_parent_name = "/"; + } + + // get dset's parent group ID + hid_t hdf5_dset_parent_id = H5Oopen(hdf5_id, + hdf5_dset_parent_name.c_str(), H5P_DEFAULT); + + // delete old dset (space is made inaccessible, lost, + // and not reclaimed) + hdf5_remove_path(hdf5_id, hdf5_dset_path); + + // create new extendible dset + Node opts_create; + opts_create["offset"] = 0; + write_conduit_leaf_to_hdf5_group(dset_to_node, + ref_path, + hdf5_dset_parent_id, + hdf5_dset_name, + opts_create); + + // close the old dataset to prevent the old identifier from + // interfering + H5Oclose(hdf5_dset_id); + H5Oclose(hdf5_dset_parent_id); + + hdf5_dset_id = H5Oopen(hdf5_id, + hdf5_dset_path.c_str(), H5P_DEFAULT); + H5Fclose(hdf5_id); + + H5Sclose(dataspace); + dataspace = H5Dget_space(hdf5_dset_id); + } + + // get the dimensions required to fit the node in the dset + hsize_t required_dim[1] = {(hsize_t) (offset + + dt.number_of_elements() + (dt.number_of_elements() - 1) * (stride - 1))}; + + // extend the dataset if necessary + if (dataset_dim < required_dim[0]) + { + h5_status = H5Dset_extent(hdf5_dset_id, required_dim); + + // check extend result + CONDUIT_CHECK_HDF5_ERROR_WITH_FILE_AND_REF_PATH(h5_status, + hdf5_dset_id, + ref_path, + "Failed to extend HDF5 Dataset " + << hdf5_dset_id); + + // get new dataspace after extending + H5Sclose(dataspace); + dataspace = H5Dget_space(hdf5_dset_id); + } + + // select indices to write to + H5Sselect_hyperslab(dataspace, H5S_SELECT_SET, offsets, + strides, node_size, NULL); + + // if the node is compact, we can write directly from its data ptr + if(dt.is_compact()) + { + // write data + h5_status = H5Dwrite(hdf5_dset_id, + h5_dtype_id, + nodespace, + dataspace, + H5P_DEFAULT, + node.data_ptr()); + } + else + { + // otherwise, we need to compact our data first + Node n; + node.compact_to(n); + h5_status = H5Dwrite(hdf5_dset_id, + h5_dtype_id, + nodespace, + dataspace, + H5P_DEFAULT, + n.data_ptr()); + } + + H5Sclose(nodespace); + H5Sclose(dataspace); } // check write result @@ -1487,11
+1716,12 @@ write_conduit_leaf_to_hdf5_dataset(const Node &node, //---------------------------------------------------------------------------// -void +void write_conduit_leaf_to_hdf5_group(const Node &node, const std::string &ref_path, hid_t hdf5_group_id, - const std::string &hdf5_dset_name) + const std::string &hdf5_dset_name, + const Node &opts) { // data set case ... @@ -1507,7 +1737,7 @@ write_conduit_leaf_to_hdf5_group(const Node &node, if( CONDUIT_HDF5_STATUS_OK(h5_info_status) ) { // if it does exist, we assume it is compatible - // (this private method will only be called after a + // (this private method will only be called after a // compatibility check) h5_child_id = H5Dopen(hdf5_group_id, hdf5_dset_name.c_str(), @@ -1525,10 +1755,16 @@ write_conduit_leaf_to_hdf5_group(const Node &node, else { // if the hdf5 dataset does not exist, we need to create it + bool extendible = false; + if (opts.has_child("offset")) + { + extendible = true; + } h5_child_id = create_hdf5_dataset_for_conduit_leaf(node.dtype(), ref_path, hdf5_group_id, - hdf5_dset_name); + hdf5_dset_name, + extendible); CONDUIT_CHECK_HDF5_ERROR_WITH_FILE_AND_REF_PATH(h5_child_id, hdf5_group_id, @@ -1539,13 +1775,14 @@ write_conduit_leaf_to_hdf5_group(const Node &node, << " name: " << hdf5_dset_name); } - + std::string chld_ref_path = join_ref_paths(ref_path,hdf5_dset_name); // write the data write_conduit_leaf_to_hdf5_dataset(node, chld_ref_path, - h5_child_id); - + h5_child_id, + opts); + CONDUIT_CHECK_HDF5_ERROR_WITH_FILE_AND_REF_PATH(H5Dclose(h5_child_id), hdf5_group_id, ref_path, @@ -1554,7 +1791,7 @@ write_conduit_leaf_to_hdf5_group(const Node &node, } //---------------------------------------------------------------------------// -void +void write_conduit_empty_to_hdf5_group(hid_t hdf5_group_id, const std::string &ref_path, const std::string &hdf5_dset_name) @@ -1572,7 +1809,7 @@ write_conduit_empty_to_hdf5_group(hid_t hdf5_group_id, if( CONDUIT_HDF5_STATUS_OK(h5_info_status) ) { // if it does exist, we assume it is compatible - // (this private method will only be called after a + // (this private method will only be called after a // compatibility check) } else @@ -1589,13 +1826,13 @@ write_conduit_empty_to_hdf5_group(hid_t hdf5_group_id, << " parent: " << hdf5_group_id << " name: " << hdf5_dset_name); - CONDUIT_CHECK_HDF5_ERROR_WITH_FILE_AND_REF_PATH(H5Dclose(h5_child_id), + CONDUIT_CHECK_HDF5_ERROR_WITH_FILE_AND_REF_PATH(H5Dclose(h5_child_id), hdf5_group_id, ref_path, "Failed to close HDF5 Dataset: " << h5_child_id); } - + } @@ -1624,12 +1861,13 @@ setup_hdf5_group_atts_for_conduit_node(const Node &node, //---------------------------------------------------------------------------// -// assume this is called only if we know the hdf5 state is compatible +// assume this is called only if we know the hdf5 state is compatible //---------------------------------------------------------------------------// void write_conduit_node_children_to_hdf5_group(const Node &node, const std::string &ref_path, - hid_t hdf5_group_id) + hid_t hdf5_group_id, + const Node &opts) { // make sure our special atts are setup correctly setup_hdf5_group_atts_for_conduit_node(node, @@ -1650,7 +1888,8 @@ write_conduit_node_children_to_hdf5_group(const Node &node, write_conduit_leaf_to_hdf5_group(child, ref_path, hdf5_group_id, - child_name.c_str()); + child_name.c_str(), + opts); } else if(dt.is_empty()) { @@ -1669,9 +1908,9 @@ write_conduit_node_children_to_hdf5_group(const Node &node, child_name.c_str(), &h5_obj_info, H5P_DEFAULT); - + 
hid_t h5_child_id = -1; - + if( CONDUIT_HDF5_STATUS_OK(h5_info_status) ) { // if the hdf5 group exists, open it @@ -1696,21 +1935,22 @@ write_conduit_node_children_to_hdf5_group(const Node &node, } - // traverse + // traverse write_conduit_node_children_to_hdf5_group(child, ref_path, - h5_child_id); + h5_child_id, + opts); CONDUIT_CHECK_HDF5_ERROR_WITH_FILE_AND_REF_PATH(H5Gclose(h5_child_id), hdf5_group_id, ref_path, - "Failed to close HDF5 Group " + "Failed to close HDF5 Group " << h5_child_id); } else { CONDUIT_HDF5_WARN(ref_path, - "DataType \'" + "DataType \'" << DataType::id_to_name(dt.id()) <<"\' not supported for relay HDF5 I/O"); } @@ -1724,22 +1964,25 @@ write_conduit_node_children_to_hdf5_group(const Node &node, void write_conduit_node_to_hdf5_tree(const Node &node, const std::string &ref_path, - hid_t hdf5_id) + hid_t hdf5_id, + const Node &opts) { DataType dt = node.dtype(); - // we support a leaf or a group + // we support a leaf or a group if( dt.is_number() || dt.is_string() ) { write_conduit_leaf_to_hdf5_dataset(node, ref_path, - hdf5_id); + hdf5_id, + opts); } else if( dt.is_object() || dt.is_list() ) { write_conduit_node_children_to_hdf5_group(node, ref_path, - hdf5_id); + hdf5_id, + opts); } else // not supported { @@ -1749,16 +1992,18 @@ write_conduit_node_to_hdf5_tree(const Node &node, "HDF5 write doesn't support EMPTY_ID nodes."); } } + + //---------------------------------------------------------------------------// void write_conduit_hdf5_list_attribute(hid_t hdf5_group_id, const std::string &ref_path) { // We really just use the presence of the attribute, we don't need - // data associated with it. + // data associated with it. // // I tried to write a null att (null hdf5 dt, etc) but that didn't work. - // H5Awrite fails with message about null data. I could't find any + // H5Awrite fails with message about null data. I could't find any // examples that demoed this either -- it may not be supported. // // So, we write a single meaningless int as the attribute data. @@ -1766,9 +2011,9 @@ write_conduit_hdf5_list_attribute(hid_t hdf5_group_id, // or find a way to eliminate it. 
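On the read side, the presence check that pairs with this marker attribute can be done with H5Aexists. The helper check_if_hdf5_group_has_conduit_list_attribute is declared earlier in this file but its body is not shown in this diff, so the sketch below is an assumption of how such a check can be written, not code from the diff; the attribute name "__conduit_list" is taken from conduit_hdf5_list_attr_name above.

```cpp
// Illustrative presence test for the "__conduit_list" marker attribute;
// a hedged sketch of the check, not the helper's actual body.
#include <hdf5.h>

bool has_conduit_list_marker(hid_t group_id)
{
    // H5Aexists returns > 0 if the attribute exists, 0 if it does not,
    // and a negative value on error
    return H5Aexists(group_id, "__conduit_list") > 0;
}
```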
int att_value = 1; - + hid_t h5_dspace_id = H5Screate(H5S_SCALAR); - + hid_t h5_attr_id = H5Acreate(hdf5_group_id, conduit_hdf5_list_attr_name.c_str(), H5T_NATIVE_INT, @@ -1779,20 +2024,20 @@ write_conduit_hdf5_list_attribute(hid_t hdf5_group_id, CONDUIT_CHECK_HDF5_ERROR_WITH_FILE_AND_REF_PATH(h5_attr_id, hdf5_group_id, ref_path, - "Failed to create HDF5 Attribute " - << hdf5_group_id + "Failed to create HDF5 Attribute " + << hdf5_group_id << " " << conduit_hdf5_list_attr_name.c_str()); - + hid_t h5_status = H5Awrite(h5_attr_id, H5T_NATIVE_INT, &att_value); CONDUIT_CHECK_HDF5_ERROR_WITH_FILE_AND_REF_PATH(h5_status, hdf5_group_id, ref_path, - "Failed to write HDF5 Attribute " - << hdf5_group_id + "Failed to write HDF5 Attribute " + << hdf5_group_id << " " << conduit_hdf5_list_attr_name.c_str()); @@ -1800,14 +2045,14 @@ write_conduit_hdf5_list_attribute(hid_t hdf5_group_id, CONDUIT_CHECK_HDF5_ERROR_WITH_FILE_AND_REF_PATH(H5Sclose(h5_dspace_id), hdf5_group_id, ref_path, - "Failed to close HDF5 Dataspace " + "Failed to close HDF5 Dataspace " << h5_dspace_id); // close our attribute CONDUIT_CHECK_HDF5_ERROR_WITH_FILE_AND_REF_PATH(H5Aclose(h5_attr_id), hdf5_group_id, ref_path, - "Failed to close HDF5 Attribute " + "Failed to close HDF5 Attribute " << h5_attr_id); } @@ -1824,8 +2069,8 @@ remove_conduit_hdf5_list_attribute(hid_t hdf5_group_id, CONDUIT_CHECK_HDF5_ERROR_WITH_FILE_AND_REF_PATH(h5_status, hdf5_group_id, ref_path, - "Failed to remove HDF5 Attribute " - << hdf5_group_id + "Failed to remove HDF5 Attribute " + << hdf5_group_id << " " << conduit_hdf5_list_attr_name.c_str()); } @@ -1840,21 +2085,21 @@ remove_conduit_hdf5_list_attribute(hid_t hdf5_group_id, //---------------------------------------------------------------------------// //---------------------------------------------------------------------------// //---------------------------------------------------------------------------// -/// Data structures and callbacks that allow us to read an HDF5 hierarchy -/// via H5Literate +/// Data structures and callbacks that allow us to read an HDF5 hierarchy +/// via H5Literate /// (adapted from: h5ex_g_traverse) //---------------------------------------------------------------------------// //---------------------------------------------------------------------------// //---------------------------------------------------------------------------// -/// -/// I also reviewed / partially tried the seemingly more straight forward +/// +/// I also reviewed / partially tried the seemingly more straight forward /// approach in: /// https://www.hdfgroup.org/ftp/HDF5/examples/misc-examples/h5_info.c /// /// But since H5Gget_objtype_by_idx and H5Gget_objname_by_idx are deprecated /// there in favor of H5Lget_name_by_idx & H5Oget_info_by_idx, there isn't a /// direct way to check for links which could create cycles. -/// +/// /// It appears that the H5Literate method (as demonstrated in the /// h5ex_g_traverse example) is the preferred way to read an hdf5 location /// hierarchically, even if it seems overly complex. 
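For readers unfamiliar with the pattern the comment above recommends, here is a minimal, self-contained H5Literate sketch, separate from the Conduit opdata plumbing that follows; the names visit_child and list_children and the printf reporting are illustrative only.

```cpp
// Minimal H5Literate example (illustrative; not part of this diff).
// Visits each link in a group and dispatches on the resolved object
// type -- the same overall shape as h5l_iterate_traverse_op_func below.
#include <hdf5.h>
#include <cstdio>

static herr_t visit_child(hid_t loc_id,
                          const char *name,
                          const H5L_info_t *,  // link info (unused here)
                          void *)              // user op_data (unused here)
{
    H5O_info_t info;
    // resolve the link to learn what kind of object it points to
    if( H5Oget_info_by_name(loc_id, name, &info, H5P_DEFAULT) < 0 )
    {
        return -1;  // negative return aborts the iteration
    }

    switch(info.type)
    {
        case H5O_TYPE_GROUP:   std::printf("group:   %s\n", name); break;
        case H5O_TYPE_DATASET: std::printf("dataset: %s\n", name); break;
        default:               std::printf("other:   %s\n", name); break;
    }
    return 0;  // zero continues to the next link
}

void list_children(hid_t group_id)
{
    // iterate in increasing name order; the Conduit code below switches
    // to H5_INDEX_CRT_ORDER when creation-order tracking is available
    H5Literate(group_id,
               H5_INDEX_NAME,
               H5_ITER_INC,
               NULL,          // start at the first link
               visit_child,
               NULL);         // no op_data needed for this sketch
}
```

Passing state through the void* op_data argument, as the h5_read_opdata struct below does, is what lets the callback build up a Conduit tree and track visited group addresses to avoid link cycles.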
@@ -1884,9 +2129,13 @@ struct h5_read_opdata struct h5_read_opdata *prev; /* Pointer to previous opdata */ haddr_t addr; /* Group address */ - // pointer to conduit node, anchors traversal to + // pointer to conduit node, anchors traversal to Node *node; + const Node *opts; std::string ref_path; + + // whether to only get metadata + bool metadata_only; }; //---------------------------------------------------------------------------// @@ -1938,8 +2187,8 @@ h5l_iterate_traverse_op_func_get_child(Node &node, // we need the child index, use name to index for now // not sure if it is possible to get iteration index // from h5literate - - // Either the child already exists in conduit + + // Either the child already exists in conduit // (compat case), or we need to append to add // a new child @@ -1950,7 +2199,7 @@ h5l_iterate_traverse_op_func_get_child(Node &node, if(node.number_of_children() <= child_idx ) { - node.append(); + node.append(); } chld_node_ptr = &node.child(child_idx); @@ -1961,7 +2210,7 @@ h5l_iterate_traverse_op_func_get_child(Node &node, // only be called on groups, which will correspond // to either objects or lists } - + return chld_node_ptr; } @@ -2004,11 +2253,11 @@ h5l_iterate_traverse_op_func(hid_t hdf5_id, hdf5_id, h5_od->ref_path, "Error fetching HDF5 Object info: " - << " parent: " << hdf5_id + << " parent: " << hdf5_id << " path:" << hdf5_path) ; - std::string chld_ref_path = h5_od->ref_path + - std::string("/") + + std::string chld_ref_path = h5_od->ref_path + + std::string("/") + std::string(hdf5_path); switch (h5_info_buf.type) @@ -2039,9 +2288,9 @@ h5l_iterate_traverse_op_func(hid_t hdf5_id, hdf5_id, h5_od->ref_path, "Error opening HDF5 " - << "Group: " + << "Group: " << " parent: " - << hdf5_id + << hdf5_id << " path:" << hdf5_path); @@ -2051,6 +2300,8 @@ h5l_iterate_traverse_op_func(hid_t hdf5_id, read_hdf5_group_into_conduit_node(h5_group_id, chld_ref_path, + h5_od->metadata_only, + *h5_od->opts, *chld_node_ptr); // close the group @@ -2078,22 +2329,23 @@ h5l_iterate_traverse_op_func(hid_t hdf5_id, hdf5_id, h5_od->ref_path, "Error opening HDF5 " - << " Dataset: " + << " Dataset: " << " parent: " - << hdf5_id + << hdf5_id << " path:" << hdf5_path); - read_hdf5_dataset_into_conduit_node(h5_dset_id, chld_ref_path, + h5_od->metadata_only, + *h5_od->opts, *chld_node_ptr); - + // close the dataset CONDUIT_CHECK_HDF5_ERROR_WITH_FILE_AND_REF_PATH(H5Dclose(h5_dset_id), hdf5_id, h5_od->ref_path, "Error closing HDF5 " - << " Dataset: " + << " Dataset: " << h5_dset_id); break; } @@ -2111,6 +2363,8 @@ h5l_iterate_traverse_op_func(hid_t hdf5_id, void read_hdf5_group_into_conduit_node(hid_t hdf5_group_id, const std::string &ref_path, + bool only_get_metadata, + const Node &opts, Node &dest) { // get info, we need to get the obj addr for cycle tracking @@ -2139,9 +2393,20 @@ read_hdf5_group_into_conduit_node(hid_t hdf5_group_id, h5_od.addr = h5_info_buf.addr; // attach the pointer to our node h5_od.node = &dest; + h5_od.opts = &opts; // keep ref path h5_od.ref_path = ref_path; + // whether to only get metadata + if (only_get_metadata) + { + h5_od.metadata_only = true; + } + else + { + h5_od.metadata_only = false; + } + H5_index_t h5_grp_index_type = H5_INDEX_NAME; // check for creation order index using propertylist @@ -2164,17 +2429,17 @@ read_hdf5_group_into_conduit_node(hid_t hdf5_group_id, h5_grp_index_type = H5_INDEX_CRT_ORDER; } } - + CONDUIT_CHECK_HDF5_ERROR_WITH_FILE_AND_REF_PATH(H5Pclose(h5_gc_plist), hdf5_group_id, ref_path, "Failed to close HDF5 " << 
"H5P_GROUP_CREATE " - << "property list: " + << "property list: " << h5_gc_plist); } - - + + // use H5Literate to traverse h5_status = H5Literate(hdf5_group_id, @@ -2197,13 +2462,15 @@ read_hdf5_group_into_conduit_node(hid_t hdf5_group_id, void read_hdf5_dataset_into_conduit_node(hid_t hdf5_dset_id, const std::string &ref_path, + bool only_get_metadata, + const Node &opts, Node &dest) { hid_t h5_dspace_id = H5Dget_space(hdf5_dset_id); CONDUIT_CHECK_HDF5_ERROR_WITH_FILE_AND_REF_PATH(h5_dspace_id, hdf5_dset_id, ref_path, - "Error reading HDF5 Dataspace: " + "Error reading HDF5 Dataspace: " << hdf5_dset_id); // check for empty case @@ -2214,116 +2481,174 @@ read_hdf5_dataset_into_conduit_node(hid_t hdf5_dset_id, } else { - hid_t h5_dtype_id = H5Dget_type(hdf5_dset_id); - + hid_t h5_dtype_id = H5Dget_type(hdf5_dset_id); + CONDUIT_CHECK_HDF5_ERROR_WITH_FILE_AND_REF_PATH(h5_dtype_id, hdf5_dset_id, ref_path, "Error reading HDF5 Datatype: " << hdf5_dset_id); - index_t nelems = H5Sget_simple_extent_npoints(h5_dspace_id); - - // Note: string case is handed properly in hdf5_dtype_to_conduit_dtype - DataType dt = hdf5_dtype_to_conduit_dtype(h5_dtype_id, - nelems, - ref_path); + hid_t h5_status = 0; - // if the endianness of the dset in the file doesn't - // match the current machine we always want to convert it - // on read. + index_t nelems = H5Sget_simple_extent_npoints(h5_dspace_id); - // check endianness - // Note: string cases never land here b/c they are - // created with default endianness - if(!dt.endianness_matches_machine()) + int offset = 0; + if(opts.has_child("offset")) { - // if they don't match, modify the dt - // and get the proper hdf5 data type handle - dt.set_endianness(Endianness::machine_default()); - - // clean up our old handle - CONDUIT_CHECK_HDF5_ERROR_WITH_FILE_AND_REF_PATH(H5Tclose(h5_dtype_id), - hdf5_dset_id, - ref_path, - "Error closing HDF5 Datatype: " - << h5_dtype_id); - - // get ref to standard variant of this dtype - h5_dtype_id = conduit_dtype_to_hdf5_dtype(dt, - ref_path); - - CONDUIT_CHECK_HDF5_ERROR_WITH_FILE_AND_REF_PATH(h5_dtype_id, - hdf5_dset_id, - ref_path, - "Error creating HDF5 Datatype"); - - // copy since the logic after read will cleanup - h5_dtype_id = H5Tcopy(h5_dtype_id); - CONDUIT_CHECK_HDF5_ERROR_WITH_FILE_AND_REF_PATH(h5_dtype_id, - hdf5_dset_id, - ref_path, - "Error copying HDF5 Datatype"); - // cleanup our ref from conduit_dtype_to_hdf5_dtype if necessary - conduit_dtype_to_hdf5_dtype_cleanup(h5_dtype_id); + offset = opts["offset"].to_value(); } + int stride = 1; + if(opts.has_child("stride")) + { + stride = opts["stride"].to_value(); + } - hid_t h5_status = 0; + if (offset < 0) + { + CONDUIT_ERROR("Offset must be non-negative."); + } + else if (stride < 1) + { + CONDUIT_ERROR("Stride must be greater than zero."); + } - // check for string special case, H5T_VARIABLE string - if( H5Tis_variable_str(h5_dtype_id) ) + // get the number of elements in the dataset given the offset and stride + int nelems_from_offset = (nelems - offset) / stride; + if ((nelems - offset) % stride != 0) { - //special case for reading variable string data - // hdf5 reads the data onto its heap, and - // gives us a pointer to that location - - char *read_ptr[1] = {NULL}; - h5_status = H5Dread(hdf5_dset_id, - h5_dtype_id, - H5S_ALL, - H5S_ALL, - H5P_DEFAULT, - read_ptr); - - // copy the data out to the conduit node - dest.set_string(read_ptr[0]); + nelems_from_offset++; } - // check for bad # of elements - else if( dt.number_of_elements() < 0 ) + + int nelems_to_read = 
nelems_from_offset; + if(opts.has_child("size")) { - CONDUIT_HDF5_ERROR(ref_path, - "Cannot read dataset with # of elements < 0"); + nelems_to_read = opts["size"].to_value(); + if (nelems_to_read < 1) + { + CONDUIT_ERROR("Size must be greater than zero."); + } } - else if(dest.dtype().is_compact() && - dest.dtype().compatible(dt) ) + + // copy metadata to the node under hard-coded keys + if (only_get_metadata) { - // we can read directly from hdf5 dataset if compact - // & compatible - h5_status = H5Dread(hdf5_dset_id, - h5_dtype_id, - H5S_ALL, - H5S_ALL, - H5P_DEFAULT, - dest.data_ptr()); + dest["num_elements"] = (int) nelems_from_offset; } else { - // we create a temp Node b/c we want read to work for - // strided data - // - // the hdf5 data will always be compact, source node we are - // reading will not unless it's already compatible and compact. - Node n_tmp(dt); - h5_status = H5Dread(hdf5_dset_id, - h5_dtype_id, - H5S_ALL, - H5S_ALL, - H5P_DEFAULT, - n_tmp.data_ptr()); - - // copy out to our dest - dest.set(n_tmp); + // Note: string case is handed properly in hdf5_dtype_to_conduit_dtype + DataType dt = hdf5_dtype_to_conduit_dtype(h5_dtype_id, + nelems_to_read, + ref_path); + + // if the endianness of the dset in the file doesn't + // match the current machine we always want to convert it + // on read. + + // check endianness + // Note: string cases never land here b/c they are + // created with default endianness + if(!dt.endianness_matches_machine()) + { + // if they don't match, modify the dt + // and get the proper hdf5 data type handle + dt.set_endianness(Endianness::machine_default()); + + // clean up our old handle + CONDUIT_CHECK_HDF5_ERROR_WITH_FILE_AND_REF_PATH(H5Tclose(h5_dtype_id), + hdf5_dset_id, + ref_path, + "Error closing HDF5 Datatype: " + << h5_dtype_id); + + // get ref to standard variant of this dtype + h5_dtype_id = conduit_dtype_to_hdf5_dtype(dt, + ref_path); + + CONDUIT_CHECK_HDF5_ERROR_WITH_FILE_AND_REF_PATH(h5_dtype_id, + hdf5_dset_id, + ref_path, + "Error creating HDF5 Datatype"); + + // copy since the logic after read will cleanup + h5_dtype_id = H5Tcopy(h5_dtype_id); + CONDUIT_CHECK_HDF5_ERROR_WITH_FILE_AND_REF_PATH(h5_dtype_id, + hdf5_dset_id, + ref_path, + "Error copying HDF5 Datatype"); + // cleanup our ref from conduit_dtype_to_hdf5_dtype if necessary + conduit_dtype_to_hdf5_dtype_cleanup(h5_dtype_id); + } + + hsize_t node_size[1] = {(hsize_t) nelems_to_read}; + hsize_t offsets[1] = {(hsize_t) offset}; + hsize_t strides[1] = {(hsize_t) stride}; + hid_t nodespace = H5Screate_simple(1,node_size,NULL); + hid_t dataspace = H5Dget_space(hdf5_dset_id); + + // select hyperslab + H5Sselect_hyperslab(dataspace, H5S_SELECT_SET, offsets, + strides, node_size, NULL); + + // check for string special case, H5T_VARIABLE string + if( H5Tis_variable_str(h5_dtype_id) ) + { + //special case for reading variable string data + // hdf5 reads the data onto its heap, and + // gives us a pointer to that location + + char *read_ptr[1] = {NULL}; + h5_status = H5Dread(hdf5_dset_id, + h5_dtype_id, + nodespace, + dataspace, + H5P_DEFAULT, + read_ptr); + + // copy the data out to the conduit node + dest.set_string(read_ptr[0]); + } + // check for bad # of elements + else if( dt.number_of_elements() < 0 ) + { + CONDUIT_HDF5_ERROR(ref_path, + "Cannot read dataset with # of elements < 0"); + } + else if(dest.dtype().is_compact() && + dest.dtype().compatible(dt) ) + { + // we can read directly from hdf5 dataset if compact + // & compatible + h5_status = H5Dread(hdf5_dset_id, + 
h5_dtype_id, + nodespace, + dataspace, + H5P_DEFAULT, + dest.data_ptr()); + } + else + { + // we create a temp Node b/c we want read to work for + // strided data + // + // the hdf5 data will always be compact, source node we are + // reading will not unless it's already compatible and compact. + Node n_tmp(dt); + h5_status = H5Dread(hdf5_dset_id, + h5_dtype_id, + nodespace, + dataspace, + H5P_DEFAULT, + n_tmp.data_ptr()); + + // copy out to our dest + dest.set(n_tmp); + } + + H5Sclose(nodespace); + H5Sclose(dataspace); } CONDUIT_CHECK_HDF5_ERROR_WITH_FILE_AND_REF_PATH(h5_status, @@ -2331,7 +2656,7 @@ read_hdf5_dataset_into_conduit_node(hid_t hdf5_dset_id, ref_path, "Error reading HDF5 Dataset: " << hdf5_dset_id); - + CONDUIT_CHECK_HDF5_ERROR_WITH_FILE_AND_REF_PATH(H5Tclose(h5_dtype_id), hdf5_dset_id, ref_path, @@ -2352,18 +2677,20 @@ read_hdf5_dataset_into_conduit_node(hid_t hdf5_dset_id, void read_hdf5_tree_into_conduit_node(hid_t hdf5_id, const std::string &ref_path, + bool only_get_metadata, + const Node &opts, Node &dest) { herr_t h5_status = 0; H5O_info_t h5_info_buf; - + h5_status = H5Oget_info(hdf5_id,&h5_info_buf); CONDUIT_CHECK_HDF5_ERROR_WITH_FILE_AND_REF_PATH(h5_status, hdf5_id, ref_path, "Error fetching HDF5 object " - << "info from: " + << "info from: " << hdf5_id); switch (h5_info_buf.type) @@ -2374,6 +2701,8 @@ read_hdf5_tree_into_conduit_node(hid_t hdf5_id, { read_hdf5_group_into_conduit_node(hdf5_id, ref_path, + only_get_metadata, + opts, dest); break; } @@ -2383,13 +2712,15 @@ read_hdf5_tree_into_conduit_node(hid_t hdf5_id, { read_hdf5_dataset_into_conduit_node(hdf5_id, ref_path, + only_get_metadata, + opts, dest); break; } // unsupported types case H5O_TYPE_UNKNOWN: { - // we only construct these strings + // we only construct these strings // when an error occurs, to avoid overhead // for healthy fetches std::string hdf5_err_ref_path; @@ -2441,26 +2772,26 @@ read_hdf5_tree_into_conduit_node(hid_t hdf5_id, hid_t create_hdf5_file_access_plist() { - // create property list and set use latest lib ver settings + // create property list and set use latest lib ver settings hid_t h5_fa_props = H5Pcreate(H5P_FILE_ACCESS); - + CONDUIT_CHECK_HDF5_ERROR(h5_fa_props, "Failed to create H5P_FILE_ACCESS " << " property list"); - + unsigned int major_num=0; unsigned int minor_num=0; unsigned int release_num=0; - + herr_t h5_status = H5get_libversion(&major_num, &minor_num,&release_num); - + CONDUIT_CHECK_HDF5_ERROR(h5_status, "Failed to fetch HDF5 library version info "); // most of our use cases are still using 1.8. // to allow hdf5 1.8 readers to read from hdf5 1.10 writers, - // we want to pin to hdf5 1.8 features for now. 
+ // There isn't a way to select 1.8, // https://forum.hdfgroup.org/t/seconding-the-request-for-h5pset-libver-bounds-1-8-x-file-compat-option/4056 // so only enable H5F_LIBVER_LATEST if we are using hdf5 1.8 @@ -2489,7 +2820,7 @@ create_hdf5_file_create_plist() "Failed to create H5P_FILE_CREATE " << " property list"); - herr_t h5_status = H5Pset_link_creation_order(h5_fc_props, + herr_t h5_status = H5Pset_link_creation_order(h5_fc_props, ( H5P_CRT_ORDER_TRACKED | H5P_CRT_ORDER_INDEXED) ); CONDUIT_CHECK_HDF5_ERROR(h5_status, @@ -2512,7 +2843,7 @@ hdf5_create_file(const std::string &file_path) { // disable hdf5 error stack HDF5ErrorStackSupressor supress_hdf5_errors; - + hid_t h5_fc_plist = create_hdf5_file_create_plist(); hid_t h5_fa_plist = create_hdf5_file_access_plist(); @@ -2523,7 +2854,7 @@ hdf5_create_file(const std::string &file_path) h5_fa_plist); CONDUIT_CHECK_HDF5_ERROR(h5_file_id, - "Error opening HDF5 file for writing: " + "Error opening HDF5 file for writing: " << file_path); CONDUIT_CHECK_HDF5_ERROR(H5Pclose(h5_fc_plist), @@ -2533,9 +2864,9 @@ hdf5_create_file(const std::string &file_path) CONDUIT_CHECK_HDF5_ERROR(H5Pclose(h5_fa_plist), "Failed to close HDF5 H5P_FILE_ACCESS " << "property list: " << h5_fa_plist); - + return h5_file_id; - + // enable hdf5 error stack } @@ -2548,18 +2879,33 @@ hdf5_close_file(hid_t hdf5_id) "Error closing HDF5 file handle: " << hdf5_id); } + + //---------------------------------------------------------------------------// void hdf5_write(const Node &node, hid_t hdf5_id, const std::string &hdf5_path) +{ + Node opts; + hdf5_write(node,hdf5_id,hdf5_path,opts); +} + + + +//---------------------------------------------------------------------------// +void +hdf5_write(const Node &node, + hid_t hdf5_id, + const std::string &hdf5_path, + const Node &opts) { // disable hdf5 error stack HDF5ErrorStackSupressor supress_hdf5_errors; // TODO: we only want to support abs paths if hdf5_id is a file - // if ( (not hdf5 file) && - // (hdf5_path.size() > 0) && + // if ( (not hdf5 file) && + // (hdf5_path.size() > 0) && // (hdf5_path[0] == "/") ) //{ // CONDUIT_ERROR("HDF5 id must represent a file to use a HDF5 " @@ -2572,27 +2918,27 @@ hdf5_write(const Node &node, size_t pos = 0; size_t len = hdf5_path.size(); - if( (hdf5_path.size() > 0) && + if( (hdf5_path.size() > 0) && (hdf5_path[0] == '/' ) ) { pos = 1; len--; } - + // only trim right side if we are sure there is more than one char // (avoid "/" case, which would already have been trimmed ) - if( (hdf5_path.size() > 1 ) && + if( (hdf5_path.size() > 1 ) && (hdf5_path[hdf5_path.size()-1] == '/') ) { len--; } - + std::string path = hdf5_path.substr(pos,len); - - // TODO: Creating the external tree is inefficient but the compatibility + + // TODO: Creating the external tree is inefficient but the compatibility // checks and write methods handle node paths easily handle this case. // revisit if this is too slow - + Node n; if(path.size() > 0) { @@ -2612,10 +2958,11 @@ hdf5_write(const Node &node, if(check_if_conduit_node_is_compatible_with_hdf5_tree(n, "", hdf5_id, + opts, incompat_details)) { // write if we are compat - write_conduit_node_to_hdf5_tree(n,"",hdf5_id); + write_conduit_node_to_hdf5_tree(n,"",hdf5_id,opts); } else { @@ -2624,7 +2971,7 @@ hdf5_write(const Node &node, hdf5_path, hdf5_error_ref_path); - CONDUIT_ERROR("Failed to write node to " + CONDUIT_ERROR("Failed to write node to " << "\"" << hdf5_error_ref_path << "\", " << "existing HDF5 tree is " << "incompatible with the Conduit Node." 
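The hunk above threads the new opts Node through the handle-based write path. As a rough usage sketch (the open dataset handle h5_dset_id is illustrative; this mirrors the tests added later in this patch):

```cpp
using namespace conduit;

// assumes h5_dset_id is an open handle to an existing HDF5 dataset
Node n, opts;
n.set(DataType::c_short(2));
short_array vals = n.value();
vals[0] = 1;
vals[1] = 2;

opts["offset"] = 2; // start writing at element index 2
opts["stride"] = 1; // write contiguous elements

// writes n's two values into elements 2 and 3 of the dataset
relay::io::hdf5_write(n, h5_dset_id, opts);
```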
@@ -2635,35 +2982,50 @@ hdf5_write(const Node &node, } + //---------------------------------------------------------------------------// void hdf5_write(const Node &node, hid_t hdf5_id) +{ + Node opts; + hdf5_write(node,hdf5_id,opts); +} + + + +//---------------------------------------------------------------------------// +void +hdf5_write(const Node &node, + hid_t hdf5_id, + const Node &opts) { // disable hdf5 error stack // TODO: we may only need to use this in an outer level variant // of check_if_conduit_node_is_compatible_with_hdf5_tree HDF5ErrorStackSupressor supress_hdf5_errors; - + std::string incompat_details; - + // check compat if(check_if_conduit_node_is_compatible_with_hdf5_tree(node, "", hdf5_id, + opts, incompat_details)) { // write if we are compat write_conduit_node_to_hdf5_tree(node, "", - hdf5_id); + hdf5_id, + opts); } else { std::string hdf5_fname; hdf5_filename_from_hdf5_obj_id(hdf5_id, hdf5_fname); - CONDUIT_ERROR("Failed to write node to " + CONDUIT_ERROR("Failed to write node to " << "\"" << hdf5_fname << "\", " << "existing HDF5 tree is " << "incompatible with the Conduit Node." @@ -2672,12 +3034,24 @@ hdf5_write(const Node &node, // restore hdf5 error stack } + //---------------------------------------------------------------------------// -void +void hdf5_save(const Node &node, const std::string &path) { - hdf5_write(node,path,false); + Node opts; + hdf5_write(node,path,opts,false); +} + + +//---------------------------------------------------------------------------// +void +hdf5_save(const Node &node, + const std::string &path, + const Node &opts) +{ + hdf5_write(node,path,opts,false); } //---------------------------------------------------------------------------// @@ -2686,16 +3060,39 @@ hdf5_save(const Node &node, const std::string &file_path, const std::string &hdf5_path) { - hdf5_write(node,file_path,hdf5_path,false); + Node opts; + hdf5_write(node,file_path,hdf5_path,opts,false); } //---------------------------------------------------------------------------// -void +void +hdf5_save(const Node &node, + const std::string &file_path, + const std::string &hdf5_path, + const Node &opts) +{ + hdf5_write(node,file_path,hdf5_path,opts,false); +} + + +//---------------------------------------------------------------------------// +void hdf5_append(const Node &node, const std::string &path) { - hdf5_write(node,path,true); + Node opts; + hdf5_write(node,path,opts,true); +} + + +//---------------------------------------------------------------------------// +void +hdf5_append(const Node &node, + const std::string &path, + const Node &opts) +{ + hdf5_write(node,path,opts,true); } //---------------------------------------------------------------------------// @@ -2704,14 +3101,37 @@ hdf5_append(const Node &node, const std::string &file_path, const std::string &hdf5_path) { - hdf5_write(node,file_path,hdf5_path,true); + Node opts; + hdf5_write(node,file_path,hdf5_path,opts,true); +} + +//---------------------------------------------------------------------------// +void +hdf5_append(const Node &node, + const std::string &file_path, + const std::string &hdf5_path, + const Node &opts) +{ + hdf5_write(node,file_path,hdf5_path,opts,true); +} + + +//---------------------------------------------------------------------------// +void +hdf5_write(const Node &node, + const std::string &path, + bool append) +{ + Node opts; + hdf5_write(node,path,opts,append); } //---------------------------------------------------------------------------// -void +void hdf5_write(const Node 
&node, const std::string &path, + const Node &opts, bool append) { // check for ":" split @@ -2733,6 +3153,7 @@ hdf5_write(const Node &node, hdf5_write(node, file_path, hdf5_path, + opts, append); } @@ -2743,6 +3164,19 @@ hdf5_write(const Node &node, const std::string &file_path, const std::string &hdf5_path, bool append) +{ + Node opts; + hdf5_write(node,file_path,hdf5_path,opts,append); +} + + +//---------------------------------------------------------------------------// +void +hdf5_write(const Node &node, + const std::string &file_path, + const std::string &hdf5_path, + const Node &opts, + bool append) { // disable hdf5 error stack HDF5ErrorStackSupressor supress_hdf5_errors; @@ -2761,7 +3195,8 @@ hdf5_write(const Node &node, hdf5_write(node, h5_file_id, - hdf5_path); + hdf5_path, + opts); // close the hdf5 file CONDUIT_CHECK_HDF5_ERROR(H5Fclose(h5_file_id), @@ -2786,22 +3221,22 @@ hdf5_open_file_for_read(const std::string &file_path) { // disable hdf5 error stack HDF5ErrorStackSupressor supress_hdf5_errors; - + hid_t h5_fa_plist = create_hdf5_file_access_plist(); - + // open the hdf5 file for reading hid_t h5_file_id = H5Fopen(file_path.c_str(), H5F_ACC_RDONLY, h5_fa_plist); CONDUIT_CHECK_HDF5_ERROR(h5_file_id, - "Error opening HDF5 file for reading: " + "Error opening HDF5 file for reading: " << file_path); CONDUIT_CHECK_HDF5_ERROR(H5Pclose(h5_fa_plist), "Failed to close HDF5 H5P_FILE_ACCESS " << "property list: " << h5_fa_plist); - + return h5_file_id; // restore hdf5 error stack @@ -2815,20 +3250,20 @@ hdf5_open_file_for_read_write(const std::string &file_path) HDF5ErrorStackSupressor supress_hdf5_errors; hid_t h5_fa_plist = create_hdf5_file_access_plist(); - + // open the hdf5 file for read + write hid_t h5_file_id = H5Fopen(file_path.c_str(), H5F_ACC_RDWR, h5_fa_plist); CONDUIT_CHECK_HDF5_ERROR(h5_file_id, - "Error opening HDF5 file for reading: " + "Error opening HDF5 file for reading: " << file_path); CONDUIT_CHECK_HDF5_ERROR(H5Pclose(h5_fa_plist), "Failed to close HDF5 H5P_FILE_ACCESS " << "property list: " << h5_fa_plist); - + return h5_file_id; // restore hdf5 error stack @@ -2840,15 +3275,26 @@ void hdf5_read(hid_t hdf5_id, const std::string &hdf5_path, Node &dest) +{ + Node opts; + hdf5_read(hdf5_id,hdf5_path,opts,dest); +} + +//---------------------------------------------------------------------------// +void +hdf5_read(hid_t hdf5_id, + const std::string &hdf5_path, + const Node &opts, + Node &dest) { // disable hdf5 error stack HDF5ErrorStackSupressor supress_hdf5_errors; - + // get hdf5 object at path, then call read_hdf5_tree_into_conduit_node hid_t h5_child_obj = H5Oopen(hdf5_id, hdf5_path.c_str(), H5P_DEFAULT); - + CONDUIT_CHECK_HDF5_ERROR_WITH_FILE_AND_REF_PATH(h5_child_obj, hdf5_id, hdf5_path, @@ -2857,31 +3303,46 @@ hdf5_read(hid_t hdf5_id, read_hdf5_tree_into_conduit_node(h5_child_obj, hdf5_path, + false, + opts, dest); - + CONDUIT_CHECK_HDF5_ERROR_WITH_FILE_AND_REF_PATH(H5Oclose(h5_child_obj), hdf5_id, hdf5_path, "Failed to close HDF5 Object: " << h5_child_obj); - + // restore hdf5 error stack } + //---------------------------------------------------------------------------// void hdf5_read(const std::string &file_path, const std::string &hdf5_path, Node &node) +{ + Node opts; + hdf5_read(file_path,hdf5_path,opts,node); +} + +//---------------------------------------------------------------------------// +void +hdf5_read(const std::string &file_path, + const std::string &hdf5_path, + const Node &opts, + Node &node) { // note: hdf5 error stack is suppressed in 
these calls - + // open the hdf5 file for reading hid_t h5_file_id = hdf5_open_file_for_read(file_path); hdf5_read(h5_file_id, hdf5_path, + opts, node); - + // close the hdf5 file CONDUIT_CHECK_HDF5_ERROR(H5Fclose(h5_file_id), "Error closing HDF5 file: " << file_path); @@ -2891,11 +3352,21 @@ hdf5_read(const std::string &file_path, void hdf5_read(const std::string &path, Node &node) +{ + Node opts; + hdf5_read(path,opts,node); +} + +//---------------------------------------------------------------------------// +void +hdf5_read(const std::string &path, + const Node &opts, + Node &node) { // check for ":" split std::string file_path; std::string hdf5_path; - + conduit::utils::split_file_path(path, std::string(":"), file_path, @@ -2906,10 +3377,11 @@ hdf5_read(const std::string &path, { hdf5_path = "/"; } - + // note: hdf5 error stack is suppressed in this call hdf5_read(file_path, hdf5_path, + opts, node); } @@ -2918,14 +3390,172 @@ hdf5_read(const std::string &path, void hdf5_read(hid_t hdf5_id, Node &dest) +{ + Node opts; + hdf5_read(hdf5_id,opts,dest); +} + + +//---------------------------------------------------------------------------// +void +hdf5_read(hid_t hdf5_id, + const Node &opts, + Node &dest) +{ + // disable hdf5 error stack + HDF5ErrorStackSupressor supress_hdf5_errors; + + read_hdf5_tree_into_conduit_node(hdf5_id, + "", + false, + opts, + dest); + + // restore hdf5 error stack +} + + +//---------------------------------------------------------------------------// +void +hdf5_read_info(hid_t hdf5_id, + const std::string &hdf5_path, + Node &dest) +{ + Node opts; + hdf5_read_info(hdf5_id,hdf5_path,opts,dest); +} + +//---------------------------------------------------------------------------// +void +hdf5_read_info(hid_t hdf5_id, + const std::string &hdf5_path, + const Node &opts, + Node &dest) { // disable hdf5 error stack HDF5ErrorStackSupressor supress_hdf5_errors; - + + // get hdf5 object at path, then call read_hdf5_tree_into_conduit_node + hid_t h5_child_obj = H5Oopen(hdf5_id, + hdf5_path.c_str(), + H5P_DEFAULT); + + CONDUIT_CHECK_HDF5_ERROR_WITH_FILE_AND_REF_PATH(h5_child_obj, + hdf5_id, + hdf5_path, + "Failed to fetch HDF5 object from: " + << hdf5_id << ":" << hdf5_path); + + read_hdf5_tree_into_conduit_node(h5_child_obj, + hdf5_path, + true, + opts, + dest); + + CONDUIT_CHECK_HDF5_ERROR_WITH_FILE_AND_REF_PATH(H5Oclose(h5_child_obj), + hdf5_id, + hdf5_path, + "Failed to close HDF5 Object: " + << h5_child_obj); + + // restore hdf5 error stack +} + +//---------------------------------------------------------------------------// +void +hdf5_read_info(const std::string &file_path, + const std::string &hdf5_path, + Node &node) +{ + Node opts; + hdf5_read_info(file_path,hdf5_path,opts,node); +} + +//---------------------------------------------------------------------------// +void +hdf5_read_info(const std::string &file_path, + const std::string &hdf5_path, + const Node &opts, + Node &node) +{ + // note: hdf5 error stack is suppressed in these calls + + // open the hdf5 file for reading + hid_t h5_file_id = hdf5_open_file_for_read(file_path); + + hdf5_read_info(h5_file_id, + hdf5_path, + opts, + node); + + // close the hdf5 file + CONDUIT_CHECK_HDF5_ERROR(H5Fclose(h5_file_id), + "Error closing HDF5 file: " << file_path); +} + +//---------------------------------------------------------------------------// +void +hdf5_read_info(const std::string &path, + Node &node) +{ + Node opts; + hdf5_read_info(path,opts,node); +} + 
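A quick sketch of how these hdf5_read_info entry points are intended to be used (file and dataset names are illustrative). As in the dataset read path above, metadata is reported under the hard-coded "num_elements" key:

```cpp
using namespace conduit;

Node info;

// dataset case: the element count lands under "num_elements"
relay::io::hdf5_read_info("out.hdf5:mydata", info);
int nele = (int) info["num_elements"].to_value();

// group case: the metadata tree mirrors the hierarchy,
// e.g. tree_info["a/b/num_elements"] for dataset "a/b"
Node tree_info;
relay::io::hdf5_read_info("out.hdf5", tree_info);
```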
+//---------------------------------------------------------------------------// +void +hdf5_read_info(const std::string &path, + const Node &opts, + Node &node) +{ + // check for ":" split + std::string file_path; + std::string hdf5_path; + + conduit::utils::split_file_path(path, + std::string(":"), + file_path, + hdf5_path); + + // We will read the root if no hdf5_path is given. + if(hdf5_path.size() == 0) + { + hdf5_path = "/"; + } + + // note: hdf5 error stack is suppressed in this call + hdf5_read_info(file_path, + hdf5_path, + opts, + node); +} + + +//---------------------------------------------------------------------------// +void +hdf5_read_info(hid_t hdf5_id, + Node &dest) +{ + Node opts; + hdf5_read_info(hdf5_id,opts,dest); +} + + +//---------------------------------------------------------------------------// +void +hdf5_read_info(hid_t hdf5_id, + const Node &opts, + Node &dest) +{ + // disable hdf5 error stack + HDF5ErrorStackSupressor supress_hdf5_errors; + read_hdf5_tree_into_conduit_node(hdf5_id, "", + true, + opts, dest); - + // restore hdf5 error stack } @@ -2937,18 +3567,18 @@ hdf5_has_path(hid_t hdf5_id, { // disable hdf5 error stack HDF5ErrorStackSupressor supress_hdf5_errors; - + int res = H5Lexists(hdf5_id,hdf5_path.c_str(),H5P_DEFAULT); - - + + // H5Lexists returns: // a positive value if the link exists. // // 0 if it doesn't exist // // a negative # in some cases when it doesn't exist, and in some cases - // where there is an error. - // For our cases, we treat 0 and negative as does not exist. + // where there is an error. + // For our cases, we treat 0 and negative as does not exist. return (res > 0); // restore hdf5 error stack @@ -2961,7 +3591,7 @@ hdf5_remove_path(hid_t hdf5_id, { // disable hdf5 error stack HDF5ErrorStackSupressor supress_hdf5_errors; - + CONDUIT_CHECK_HDF5_ERROR_WITH_FILE_AND_REF_PATH(H5Ldelete(hdf5_id, hdf5_path.c_str(), H5P_DEFAULT), @@ -2982,18 +3612,18 @@ is_hdf5_file(const std::string &file_path) { // disable hdf5 error stack HDF5ErrorStackSupressor supress_hdf5_errors; - + bool res = false; // open the file for read to check if it is valid hdf5 // - // don't use H5F_ACC_RDWR, b/c if we already have a file handle open + // don't use H5F_ACC_RDWR, b/c if we already have a file handle open // that is RDONLY, the open will fail // // use H5F_ACC_RDONLY b/c it will work with open file handles hid_t h5_file_id = H5Fopen(file_path.c_str(), H5F_ACC_RDONLY, H5P_DEFAULT); - + if( h5_file_id >= 0) { res = true; @@ -3008,10 +3638,10 @@ is_hdf5_file(const std::string &file_path) void hdf5_group_list_child_names(hid_t hdf5_id, const std::string &hdf5_path, std::vector &res) -{ +{ // disable hdf5 error stack HDF5ErrorStackSupressor supress_hdf5_errors; - + res.clear(); // first, hdf5_id + path must be a group in order to have children @@ -3027,39 +3657,39 @@ void hdf5_group_list_child_names(hid_t hdf5_id, hdf5_id, "", "Error fetching HDF5 Object info: " - << " parent: " << hdf5_id + << " parent: " << hdf5_id << " path:" << hdf5_path) ; if( h5_info_buf.type != H5O_TYPE_GROUP ) - { + { // not a group, child names will be empty - // we could also choose to throw an error in the future + // we could also choose to throw an error in the future return; } - // we have a group + // we have a group // we don't care about links in this case, we want // the child names regardless, so we don't have to use H5Literate // - // we can use H5Lget_name_by_idx, as demoed in + // we can use H5Lget_name_by_idx, as demoed in // 
https://support.hdfgroup.org/ftp/HDF5/examples/examples-by-api/hdf5-examples/1_10/C/H5G/h5ex_g_corder.c - // - + // + hid_t h5_group_id = H5Gopen(hdf5_id, hdf5_path.c_str(), H5P_DEFAULT); - + CONDUIT_CHECK_HDF5_ERROR_WITH_FILE_AND_REF_PATH(h5_group_id, hdf5_id, "", "Error opening HDF5 " - << "Group: " + << "Group: " << " parent: " - << hdf5_id + << hdf5_id << " path:" << hdf5_path); - + // get group info so we can find the # of children H5G_info_t h5_group_info; h5_status = H5Gget_info(h5_group_id, &h5_group_info); @@ -3067,7 +3697,7 @@ void hdf5_group_list_child_names(hid_t hdf5_id, // buffer for child names, if the names are bigger than this // buffer can hold, we will fall back to a malloc char name_buff[512]; - + for (hsize_t i=0; i < h5_group_info.nlinks; i++) { char *name_buff_ptr = name_buff; @@ -3102,7 +3732,7 @@ void hdf5_group_list_child_names(hid_t hdf5_id, name_buff_tmp = (char*)malloc(sizeof(char)*name_size); name_buff_ptr = name_buff_tmp; } - + name_size = H5Lget_name_by_idx(h5_group_id, ".", H5_INDEX_CRT_ORDER, H5_ITER_INC, @@ -3110,7 +3740,7 @@ void hdf5_group_list_child_names(hid_t hdf5_id, name_buff_ptr, name_size, H5P_DEFAULT); - + res.push_back(std::string(name_buff_ptr)); if(name_buff_tmp) @@ -3119,11 +3749,11 @@ void hdf5_group_list_child_names(hid_t hdf5_id, name_buff_tmp = NULL; } } - + CONDUIT_CHECK_HDF5_ERROR_WITH_FILE_AND_REF_PATH(H5Gclose(h5_group_id), hdf5_id, "", - "Failed to close HDF5 Group " + "Failed to close HDF5 Group " << h5_group_id); // restore hdf5 error stack diff --git a/src/libs/relay/conduit_relay_io_hdf5_api.hpp b/src/libs/relay/conduit_relay_io_hdf5_api.hpp index 65ffd02a8..51f33539b 100644 --- a/src/libs/relay/conduit_relay_io_hdf5_api.hpp +++ b/src/libs/relay/conduit_relay_io_hdf5_api.hpp @@ -31,51 +31,51 @@ /// When writing a node to a HDF5 hierarchy there are two steps: /// (1) Given a destination path, find (or create) the proper HDF5 parent group /// (2) Write the node's data into the parent group -/// +/// /// /// 1) To find (or create) the parent group there are two cases to finalize /// the desired parent's HDF5 path: /// -/// A) If the input node is a leaf type, the last part of the path -/// (the portion after the last slash '/', or the entire path if +/// A) If the input node is a leaf type, the last part of the path +/// (the portion after the last slash '/', or the entire path if /// the path doesn't contain any slashes '/') is reserved for the -/// name of the HDF5 dataset that will be used to store the node's data. -/// The rest of the path will be used to find or create the proper parent +/// name of the HDF5 dataset that will be used to store the node's data. +/// The rest of the path will be used to find or create the proper parent /// group. -/// -/// B) If the input node is an Object, the full input path will be used to +/// +/// B) If the input node is an Object, the full input path will be used to /// find or create the proper parent group. /// -/// Given the desired parent path: -/// -/// If the desired parent path exists in the HDF5 tree and resolves to +/// Given the desired parent path: +/// +/// If the desired parent path exists in the HDF5 tree and resolves to /// a group, that group will be used as the parent. /// -/// If the desired parent path does not exist, or partially exists +/// If the desired parent path does not exist, or partially exists /// relay will attempt to create a hierarchy of HDF5 groups to represent it. 
/// -/// During this process, if any part of the path exists and is not a HDF5 +/// During this process, if any part of the path exists and is not a HDF5 /// group, an error is thrown and nothing will be modified. /// (This error check happens before anything is modified in HDF5) /// /// /// 2) For writing the data, there are two cases: /// A) If the input node is an Object, the children of the node will be -/// written to the parent group. If children correspond to existing +/// written to the parent group. If children correspond to existing /// HDF5 entries and they are incompatible (e.g. a group vs a dataset) // an error is thrown and nothing will be written. /// (This error check happens before anything is modified in HDF5) /// -/// B) If the input node is a leaf type, the last part of the path will be -/// used as the name for a HDF5 dataset that will hold the node's data. +/// B) If the input node is a leaf type, the last part of the path will be +/// used as the name for a HDF5 dataset that will hold the node's data. /// If a child with this name already exists in the parent group, and -/// it is not compatible (e.g. a group vs a dataset) an error is thrown +/// it is not compatible (e.g. a group vs a dataset) an error is thrown /// and nothing will be written. /// //----------------------------------------------------------------------------- /// Note: HDF5 I/O is not implemented for Conduit Nodes in the List role. /// We believe this will require a non-standard HDF5 convention, and -/// we want to focus on the Object and Leave cases since they are +/// we want to focus on the Object and Leaf cases since they are /// compatible with HDF5's data model. //----------------------------------------------------------------------------- @@ -94,62 +94,85 @@ void CONDUIT_RELAY_API hdf5_close_file(hid_t hdf5_id); /// Save node data to a given path. /// /// Save Semantics: Existing file will be overwritten -/// /// Calls hdf5_write(node,path,false); +/// Calls hdf5_write(node,path,opts,false); /// /// /// This method supports a file system and hdf5 path, joined using a ":" /// ex: "/path/on/file/system.hdf5:/path/inside/hdf5/file" -/// +/// //----------------------------------------------------------------------------- void CONDUIT_RELAY_API hdf5_save(const Node &node, const std::string &path); +void CONDUIT_RELAY_API hdf5_save(const Node &node, + const std::string &path, + const Node &opts); + //----------------------------------------------------------------------------- /// Save node data to given file system path and internal hdf5 path /// /// Save Semantics: Existing file will be overwritten -/// Calls hdf5_write(node,file_path,hdf5_path,false); +/// Calls hdf5_write(node,file_path,hdf5_path,opts,false); //----------------------------------------------------------------------------- void CONDUIT_RELAY_API hdf5_save(const Node &node, const std::string &file_path, const std::string &hdf5_path); +void CONDUIT_RELAY_API hdf5_save(const Node &node, + const std::string &file_path, + const std::string &hdf5_path, + const Node &opts); + //----------------------------------------------------------------------------- /// Write node data to a given path. 
/// /// Append Semantics: Append to existing file -/// Calls hdf5_write(node,path,true); +/// Calls hdf5_write(node,path,opts,true); /// /// /// This method supports a file system and hdf5 path, joined using a ":" /// ex: "/path/on/file/system.hdf5:/path/inside/hdf5/file" -/// +/// //----------------------------------------------------------------------------- void CONDUIT_RELAY_API hdf5_append(const Node &node, const std::string &path); +void CONDUIT_RELAY_API hdf5_append(const Node &node, + const std::string &path, + const Node &opts); + //----------------------------------------------------------------------------- /// Write node data to given file system path and internal hdf5 path /// /// Append Semantics: Append to existing file -/// Calls hdf5_write(node,file_path,hdf5_path,true); +/// Calls hdf5_write(node,file_path,hdf5_path,opts,true); //----------------------------------------------------------------------------- void CONDUIT_RELAY_API hdf5_append(const Node &node, const std::string &file_path, const std::string &hdf5_path); +void CONDUIT_RELAY_API hdf5_append(const Node &node, + const std::string &file_path, + const std::string &hdf5_path, + const Node &opts); + //----------------------------------------------------------------------------- /// Write node data to a given path in an existing file. /// /// This method supports a file system and hdf5 path, joined using a ":" /// ex: "/path/on/file/system.hdf5:/path/inside/hdf5/file" -/// +/// //----------------------------------------------------------------------------- void CONDUIT_RELAY_API hdf5_write(const Node &node, const std::string &path, bool append=false); +void CONDUIT_RELAY_API hdf5_write(const Node &node, + const std::string &path, + const Node &opts, + bool append=false); + //----------------------------------------------------------------------------- /// Write node data to given file system path and internal hdf5 path //----------------------------------------------------------------------------- @@ -158,23 +181,38 @@ void CONDUIT_RELAY_API hdf5_write(const Node &node, const std::string &hdf5_path, bool append=false); +void CONDUIT_RELAY_API hdf5_write(const Node &node, + const std::string &file_path, + const std::string &hdf5_path, + const Node &opts, + bool append=false); + //----------------------------------------------------------------------------- -/// Write node data to the hdf5_path relative to group represented by -/// hdf5_id +/// Write node data to the hdf5_path relative to group represented by +/// hdf5_id //----------------------------------------------------------------------------- void CONDUIT_RELAY_API hdf5_write(const Node &node, hid_t hdf5_id, const std::string &hdf5_path); +void CONDUIT_RELAY_API hdf5_write(const Node &node, + hid_t hdf5_id, + const std::string &hdf5_path, + const Node &opts); + //----------------------------------------------------------------------------- /// Write node data to group represented by hdf5_id -/// +/// /// Note: this only works for Conduit Nodes in the Object role. /// //----------------------------------------------------------------------------- void CONDUIT_RELAY_API hdf5_write(const Node &node, hid_t hdf5_id); +void CONDUIT_RELAY_API hdf5_write(const Node &node, + hid_t hdf5_id, + const Node &opts); + //----------------------------------------------------------------------------- /// Open a hdf5 file for reading, using conduit's selected hdf5 plists. 
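To make the save vs. append semantics above concrete, a short sketch against the declared overloads (the file name is illustrative; the offset option assumes the target dataset already exists with enough room, or can be extended):

```cpp
using namespace conduit;

Node n, opts;
n["data"].set(DataType::c_double(4));

// save: any existing file is overwritten
relay::io::hdf5_save(n, "out.hdf5");

// append: reopens the existing file; with an offset option the
// leaf is written into the existing "data" dataset starting at
// element index 4 instead of element 0
opts["offset"] = 4;
relay::io::hdf5_append(n, "out.hdf5", opts);
```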
@@ -183,8 +221,8 @@ hid_t CONDUIT_RELAY_API hdf5_open_file_for_read(const std::string &file_path); hid_t CONDUIT_RELAY_API hdf5_open_file_for_read_write(const std::string &file_path); //----------------------------------------------------------------------------- -/// Read hdf5 data from given path into the output node -/// +/// Read hdf5 data from given path into the output node +/// /// This method supports a file system and hdf5 path, joined using a ":" /// ex: "/path/on/file/system.hdf5:/path/inside/hdf5/file" /// @@ -192,45 +230,109 @@ hid_t CONDUIT_RELAY_API hdf5_open_file_for_read_write(const std::string &file_pa void CONDUIT_RELAY_API hdf5_read(const std::string &path, Node &node); +void CONDUIT_RELAY_API hdf5_read(const std::string &path, + const Node &opts, + Node &node); + //----------------------------------------------------------------------------- -/// Read hdf5 data from given file system path and internal hdf5 path into +/// Read hdf5 data from given file system path and internal hdf5 path into /// the output node //----------------------------------------------------------------------------- void CONDUIT_RELAY_API hdf5_read(const std::string &file_path, const std::string &hdf5_path, Node &node); +void CONDUIT_RELAY_API hdf5_read(const std::string &file_path, + const std::string &hdf5_path, + const Node &opts, + Node &node); + //----------------------------------------------------------------------------- /// Read from hdf5 path relative to the hdf5 id into the output node //----------------------------------------------------------------------------- void CONDUIT_RELAY_API hdf5_read(hid_t hdf5_id, const std::string &hdf5_path, Node &node); + +void CONDUIT_RELAY_API hdf5_read(hid_t hdf5_id, + const std::string &hdf5_path, + const Node &opts, + Node &node); + //----------------------------------------------------------------------------- /// Read from hdf5 id into the output node //----------------------------------------------------------------------------- void CONDUIT_RELAY_API hdf5_read(hid_t hdf5_id, Node &node); +void CONDUIT_RELAY_API hdf5_read(hid_t hdf5_id, + const Node &opts, + Node &node); + //----------------------------------------------------------------------------- -/// Read from hdf5 id into the output node +/// Read hdf5 info from given path into the output node +/// +/// This method supports a file system and hdf5 path, joined using a ":" +/// ex: "/path/on/file/system.hdf5:/path/inside/hdf5/file" +/// //----------------------------------------------------------------------------- -void CONDUIT_RELAY_API hdf5_read(hid_t hdf5_id, +void CONDUIT_RELAY_API hdf5_read_info(const std::string &path, + Node &node); + +void CONDUIT_RELAY_API hdf5_read_info(const std::string &path, + const Node &opts, + Node &node); + +//----------------------------------------------------------------------------- +/// Read hdf5 info from given file system path and internal hdf5 path into +/// the output node +//----------------------------------------------------------------------------- +void CONDUIT_RELAY_API hdf5_read_info(const std::string &file_path, + const std::string &hdf5_path, + Node &node); + +void CONDUIT_RELAY_API hdf5_read_info(const std::string &file_path, + const std::string &hdf5_path, + const Node &opts, + Node &node); + +//----------------------------------------------------------------------------- +/// Read info from hdf5 path relative to the hdf5 id into the output node 
+//----------------------------------------------------------------------------- +void CONDUIT_RELAY_API hdf5_read_info(hid_t hdf5_id, + const std::string &hdf5_path, + Node &node); + +void CONDUIT_RELAY_API hdf5_read_info(hid_t hdf5_id, + const std::string &hdf5_path, + const Node &opts, + Node &node); + +//----------------------------------------------------------------------------- +/// Read info from hdf5 id into the output node +//----------------------------------------------------------------------------- +void CONDUIT_RELAY_API hdf5_read_info(hid_t hdf5_id, + Node &node); + +void CONDUIT_RELAY_API hdf5_read_info(hid_t hdf5_id, + const Node &opts, Node &node); //----------------------------------------------------------------------------- /// Helpers for converting between hdf5 dtypes and conduit dtypes -/// +/// /// Throughout the relay hdf5 implementation, we use DataType::Empty when /// the hdf5 data space is H5S_NULL, regardless of what the hdf5 data type is. /// That isn't reflected in these helper functions, they handle /// mapping of endianness and leaf types other than empty. -/// -/// conduit_dtype_to_hdf5_dtype uses default HDF5 datatypes except for +/// +/// conduit_dtype_to_hdf5_dtype uses default HDF5 datatypes except for /// the string case. String case result needs to be cleaned up with -/// H5Tclose(). You can use conduit_dtype_to_hdf5_dtype_cleanup() to -/// properly cleanup in all cases. -/// +/// H5Tclose(). You can use conduit_dtype_to_hdf5_dtype_cleanup() to +/// properly cleanup in all cases. +/// /// You also can detect the custom string type case with: /// /// if( ! H5Tequal(hdf5_dtype_id, H5T_C_S1) && @@ -278,7 +380,7 @@ bool CONDUIT_RELAY_API hdf5_has_path(hid_t hdf5_id, const std::string &path); /// Remove a hdf5 path, if it exists /// /// Note: This does not necessarily reclaim the space used, however -/// it does allow you to write new data to this path, avoiding errors +/// it does allow you to write new data to this path, avoiding errors /// related to incompatible groups or data sets. //----------------------------------------------------------------------------- void CONDUIT_RELAY_API hdf5_remove_path(hid_t hdf5_id, const std::string &path); diff --git a/src/tests/relay/t_relay_io_hdf5.cpp b/src/tests/relay/t_relay_io_hdf5.cpp index 6fc13a8d1..c2cb676ce 100644 --- a/src/tests/relay/t_relay_io_hdf5.cpp +++ b/src/tests/relay/t_relay_io_hdf5.cpp @@ -224,7 +224,6 @@ TEST(conduit_relay_io_hdf5, write_and_read_conduit_leaf_to_hdf5_dataset_handle) // this should succeed io::hdf5_write(n,h5_dset_id); - // this should also succeed vals[1] = 16; @@ -246,9 +245,442 @@ TEST(conduit_relay_io_hdf5, write_and_read_conduit_leaf_to_hdf5_dataset_handle) H5Dclose(h5_dset_id); H5Fclose(h5_file_id); +} + + +//----------------------------------------------------------------------------- +TEST(conduit_relay_io_hdf5, write_and_read_conduit_leaf_to_extendible_hdf5_dataset_handle_with_offset) +{ + std::string ofname = "tout_hdf5_wr_conduit_leaf_to_hdf5_extendible_dataset_handle_with_offset.hdf5"; + + hid_t h5_file_id = H5Fcreate(ofname.c_str(), + H5F_ACC_TRUNC, + H5P_DEFAULT, + H5P_DEFAULT); + + // create a dataset for a 16-bit signed integer array with 2 elements + + + hid_t h5_dtype = H5T_NATIVE_SHORT; + + hsize_t num_eles = 2; + hsize_t dims[1] = {H5S_UNLIMITED}; + + hid_t h5_dspace_id = H5Screate_simple(1, + &num_eles, + dims); + + /* + * Modify dataset creation properties, i.e. enable chunking. 
+ */ + hid_t cparms; + hsize_t chunk_dims[1] = {1}; + + cparms = H5Pcreate (H5P_DATASET_CREATE); + H5Pset_chunk(cparms, 1, chunk_dims); + + + // create new dataset + hid_t h5_dset_id = H5Dcreate1(h5_file_id, + "mydata", + h5_dtype, + h5_dspace_id, + cparms); + + Node n, opts; + n.set(DataType::c_short(2)); + short_array vals = n.value(); + + vals[0] = -16; + vals[1] = -15; + + // this should succeed + io::hdf5_write(n,h5_dset_id); + + vals[0] = 1; + vals[1] = 2; + opts["offset"] = 2; + opts["stride"] = 1; + + io::hdf5_write(n,h5_dset_id,opts); + + Node n_read, opts_read; + + io::hdf5_read_info(h5_dset_id,opts_read,n_read); + EXPECT_EQ(4,(int) n_read["num_elements"].to_value()); + + io::hdf5_read(h5_dset_id,opts_read,n_read); + + // check values of data + short_array read_vals = n_read.value(); + EXPECT_EQ(-16,read_vals[0]); + EXPECT_EQ(-15,read_vals[1]); + EXPECT_EQ(1,read_vals[2]); + EXPECT_EQ(2,read_vals[3]); + + opts_read["offset"] = 2; + opts_read["stride"] = 1; + io::hdf5_read(h5_dset_id,opts_read,n_read); + + // check values of data + read_vals = n_read.value(); + EXPECT_EQ(1,read_vals[0]); + EXPECT_EQ(2,read_vals[1]); + + vals[0] = -1; + vals[1] = -3; + opts["offset"] = 0; + opts["stride"] = 2; + + io::hdf5_write(n,h5_dset_id,opts); + + opts_read["offset"] = 0; + opts_read["stride"] = 1; + io::hdf5_read(h5_dset_id,opts_read,n_read); + + // check values of data + read_vals = n_read.value(); + EXPECT_EQ(-1,read_vals[0]); + EXPECT_EQ(-15,read_vals[1]); + EXPECT_EQ(-3,read_vals[2]); + EXPECT_EQ(2,read_vals[3]); + + vals[0] = 5; + vals[1] = 6; + opts["offset"] = 7; + opts["stride"] = 1; + + io::hdf5_write(n,h5_dset_id,opts); + + opts_read["offset"] = 0; + opts_read["stride"] = 1; + io::hdf5_read(h5_dset_id,opts_read,n_read); + // check values of data + read_vals = n_read.value(); + EXPECT_EQ(-1,read_vals[0]); + EXPECT_EQ(-15,read_vals[1]); + EXPECT_EQ(-3,read_vals[2]); + EXPECT_EQ(2,read_vals[3]); + EXPECT_EQ(0,read_vals[4]); + EXPECT_EQ(0,read_vals[5]); + EXPECT_EQ(0,read_vals[6]); + EXPECT_EQ(5,read_vals[7]); + EXPECT_EQ(6,read_vals[8]); + + opts["offset"] = -1; + opts["stride"] = 2; + opts_read["offset"] = -1; + opts_read["stride"] = 2; + + //this should fail + EXPECT_THROW(io::hdf5_write(n,h5_dset_id,opts),Error); + EXPECT_THROW(io::hdf5_read(h5_dset_id,opts_read,n_read),Error); + + opts["offset"] = 0; + opts["stride"] = 0; + opts_read["offset"] = 0; + opts_read["stride"] = 0; + + //this should fail + EXPECT_THROW(io::hdf5_write(n,h5_dset_id,opts),Error); + EXPECT_THROW(io::hdf5_read(h5_dset_id,opts_read,n_read),Error); + + H5Sclose(h5_dspace_id); + H5Dclose(h5_dset_id); + H5Fclose(h5_file_id); + + +} + + +//----------------------------------------------------------------------------- +TEST(conduit_relay_io_hdf5, write_and_read_conduit_leaf_to_fixed_hdf5_dataset_handle_with_offset) +{ + std::string ofname = "tout_hdf5_wr_conduit_leaf_to_fixed_hdf5_dataset_handle_with_offset.hdf5"; + + hid_t h5_file_id = H5Fcreate(ofname.c_str(), + H5F_ACC_TRUNC, + H5P_DEFAULT, + H5P_DEFAULT); + + // create a dataset for a 16-bit signed integer array with 2 elements + + + hid_t h5_dtype = H5T_NATIVE_SHORT; + + hsize_t num_eles = 2; + + hid_t h5_dspace_id = H5Screate_simple(1, + &num_eles, + NULL); + + // create new dataset + hid_t h5_dset_id = H5Dcreate(h5_file_id, + "mydata", + h5_dtype, + h5_dspace_id, + H5P_DEFAULT, + H5P_DEFAULT, + H5P_DEFAULT); + + Node n, opts; + n.set(DataType::c_short(2)); + short_array vals = n.value(); + + vals[0] = -16; + vals[1] = -15; + + // this should succeed + 
io::hdf5_write(n,h5_dset_id); + + vals[0] = 1; + vals[1] = 2; + opts["offset"] = 2; + opts["stride"] = 1; + + io::hdf5_write(n,h5_dset_id,opts); + + Node n_read, opts_read; + io::hdf5_read_info(h5_dset_id,opts_read,n_read); + EXPECT_EQ(4,(int) n_read["num_elements"].to_value()); + + io::hdf5_read(h5_dset_id,opts_read,n_read); + + // check values of data + short_array read_vals = n_read.value(); + EXPECT_EQ(-16,read_vals[0]); + EXPECT_EQ(-15,read_vals[1]); + EXPECT_EQ(1,read_vals[2]); + EXPECT_EQ(2,read_vals[3]); + + opts_read["offset"] = 2; + opts_read["stride"] = 1; + io::hdf5_read(h5_dset_id,opts_read,n_read); + + // check values of data + read_vals = n_read.value(); + EXPECT_EQ(1,read_vals[0]); + EXPECT_EQ(2,read_vals[1]); + + vals[0] = -1; + vals[1] = -3; + opts["offset"] = 0; + opts["stride"] = 2; + + io::hdf5_write(n,h5_dset_id,opts); + + opts_read["offset"] = 0; + opts_read["stride"] = 1; + io::hdf5_read(h5_dset_id,opts_read,n_read); + + // check values of data + read_vals = n_read.value(); + EXPECT_EQ(-1,read_vals[0]); + EXPECT_EQ(-15,read_vals[1]); + EXPECT_EQ(-3,read_vals[2]); + EXPECT_EQ(2,read_vals[3]); + + vals[0] = 5; + vals[1] = 6; + opts["offset"] = 7; + opts["stride"] = 1; + + io::hdf5_write(n,h5_dset_id,opts); + + opts_read["offset"] = 0; + opts_read["stride"] = 1; + io::hdf5_read(h5_dset_id,opts_read,n_read); + // check values of data + read_vals = n_read.value(); + EXPECT_EQ(-1,read_vals[0]); + EXPECT_EQ(-15,read_vals[1]); + EXPECT_EQ(-3,read_vals[2]); + EXPECT_EQ(2,read_vals[3]); + EXPECT_EQ(0,read_vals[4]); + EXPECT_EQ(0,read_vals[5]); + EXPECT_EQ(0,read_vals[6]); + EXPECT_EQ(5,read_vals[7]); + EXPECT_EQ(6,read_vals[8]); + + opts["offset"] = -1; + opts["stride"] = 2; + opts_read["offset"] = -1; + opts_read["stride"] = 2; + + //this should fail + EXPECT_THROW(io::hdf5_write(n,h5_dset_id,opts),Error); + EXPECT_THROW(io::hdf5_read(h5_dset_id,opts_read,n_read),Error); + + opts["offset"] = 0; + opts["stride"] = 0; + opts_read["offset"] = 0; + opts_read["stride"] = 0; + + //this should fail + EXPECT_THROW(io::hdf5_write(n,h5_dset_id,opts),Error); + EXPECT_THROW(io::hdf5_read(h5_dset_id,opts_read,n_read),Error); + + H5Sclose(h5_dspace_id); + H5Dclose(h5_dset_id); + H5Fclose(h5_file_id); + } + + +//----------------------------------------------------------------------------- +TEST(conduit_relay_io_hdf5, write_conduit_object_to_hdf5_group_handle_with_offset) +{ + std::string ofname = "tout_hdf5_wr_conduit_object_to_hdf5_group_handle_with_offset.hdf5"; + + hid_t h5_file_id = H5Fcreate(ofname.c_str(), + H5F_ACC_TRUNC, + H5P_DEFAULT, + H5P_DEFAULT); + + hid_t h5_group_id = H5Gcreate(h5_file_id, + "mygroup", + H5P_DEFAULT, + H5P_DEFAULT, + H5P_DEFAULT); + + + Node n, opts; + n["a/b"].set(DataType::int16(2)); + int16_array vals = n["a/b"].value(); + vals[0] =-16; + vals[1] =-16; + + // this should succeed + io::hdf5_write(n,h5_group_id); + + n["a/c"] = "mystring"; + + // this should also succeed + vals[1] = 16; + + io::hdf5_write(n,h5_group_id); + + Node n_read; + io::hdf5_read(h5_group_id,n_read); + + n["a/b"].set(DataType::int16(10)); + // this should fail + EXPECT_THROW(io::hdf5_write(n,h5_group_id),Error); + + n["a/b"].set(DataType::int16(10)); + vals = n["a/b"].value(); + opts["offset"] = 5; + for (int i = 0; i < 10; i++) { + vals[i] = i + 1; + } + + io::hdf5_write(n,h5_group_id,opts); + + io::hdf5_read(h5_group_id,n_read); + + // check values of data with offset + int16_array read_vals = n_read["a/b"].value(); + for (int i = 0; i < 10; i++) { + EXPECT_EQ(i + 1, read_vals[i + 
5]); + } + + // this is also offset + EXPECT_EQ("mystrmystring",n_read["a/c"].as_string()); + + opts["offset"] = 20; + opts["stride"] = 2; + for (int i = 0; i < 10; i++) { + vals[i] = i + 1; + } + n["a/d"].set(DataType::int16(5)); + int16_array vals2 = n["a/d"].value(); + for (int i = 0; i < 5; i++) { + vals2[i] = (i + 1) * -1; + } + + io::hdf5_write(n,h5_group_id,opts); + + io::hdf5_read(h5_group_id,n_read); + + // check values of data + read_vals = n_read["a/b"].value(); + for (int i = 0; i < 10; i++) { + EXPECT_EQ(i + 1, read_vals[i + 5]); + } + for (int i = 0; i < 10; i++) { + EXPECT_EQ(i + 1, read_vals[2*i + 20]); + } + read_vals = n_read["a/d"].value(); + for (int i = 0; i < 5; i++) { + EXPECT_EQ((i + 1) * -1, read_vals[2*i + 20]); + } + + Node n_read_info; + io::hdf5_read_info(h5_group_id,n_read_info); + EXPECT_EQ(39,(int) n_read_info["a/b/num_elements"].to_value()); + EXPECT_EQ(37,(int) n_read_info["a/c/num_elements"].to_value()); + EXPECT_EQ(29,(int) n_read_info["a/d/num_elements"].to_value()); + + // this doesn't change because the null-terminated character + // wasn't overwritten + EXPECT_EQ("mystrmystring",n_read["a/c"].as_string()); + + Node opts_read; + opts_read["offset"] = 5; + io::hdf5_read_info(h5_group_id,opts_read,n_read_info); + io::hdf5_read(h5_group_id,opts_read,n_read); + + // check values of data + read_vals = n_read["a/b"].value(); + for (int i = 0; i < 10; i++) { + EXPECT_EQ(i + 1, read_vals[i]); + } + EXPECT_EQ("mystring",n_read["a/c"].as_string()); + EXPECT_EQ(34,(int) n_read_info["a/b/num_elements"].to_value()); + EXPECT_EQ(32,(int) n_read_info["a/c/num_elements"].to_value()); + + opts_read["offset"] = 20; + opts_read["stride"] = 2; + io::hdf5_read_info(h5_group_id,opts_read,n_read_info); + io::hdf5_read(h5_group_id,opts_read,n_read); + + // check values of data + read_vals = n_read["a/b"].value(); + for (int i = 0; i < 10; i++) { + EXPECT_EQ(i + 1, read_vals[i]); + } + read_vals = n_read["a/d"].value(); + for (int i = 0; i < 5; i++) { + EXPECT_EQ((i + 1) * -1, read_vals[i]); + } + EXPECT_EQ(10,(int) n_read_info["a/b/num_elements"].to_value()); + EXPECT_EQ(5,(int) n_read_info["a/d/num_elements"].to_value()); + + opts_read["offset"] = 20; + opts_read["stride"] = 3; + io::hdf5_read_info(h5_group_id,opts_read,n_read_info); + io::hdf5_read(h5_group_id,opts_read,n_read); + + // check values of data + read_vals = n_read["a/b"].value(); + EXPECT_EQ(1, read_vals[0]); + EXPECT_EQ(4, read_vals[2]); + EXPECT_EQ(7, read_vals[4]); + EXPECT_EQ(10, read_vals[6]); + read_vals = n_read["a/d"].value(); + EXPECT_EQ(-1, read_vals[0]); + EXPECT_EQ(-4, read_vals[2]); + EXPECT_EQ(7,(int) n_read_info["a/b/num_elements"].to_value()); + EXPECT_EQ(3,(int) n_read_info["a/d/num_elements"].to_value()); + + H5Gclose(h5_group_id); + H5Fclose(h5_file_id); +} + + + //----------------------------------------------------------------------------- TEST(conduit_relay_io_hdf5, write_conduit_object_to_hdf5_group_handle) { @@ -1348,7 +1780,7 @@ TEST(conduit_relay_io_hdf5, conduit_hdf5_list) TEST(conduit_relay_io_hdf5, conduit_hdf5_compat_with_empty) { std::string tout_std = "tout_hdf5_empty_compat.hdf5"; - + Node n; n["myval"] = 42; io::save(n,tout_std); @@ -1361,8 +1793,6 @@ TEST(conduit_relay_io_hdf5, conduit_hdf5_compat_with_empty) Node n_load, n_diff_info; io::load(tout_std,"hdf5",n_load); n_load.print(); - + EXPECT_FALSE(n.diff(n_load,n_diff_info)); } - - diff --git a/src/tests/relay/t_relay_io_hdf5_slab.cpp b/src/tests/relay/t_relay_io_hdf5_slab.cpp index 8a9141de3..01cb87b58 100644 --- 
a/src/tests/relay/t_relay_io_hdf5_slab.cpp +++ b/src/tests/relay/t_relay_io_hdf5_slab.cpp @@ -243,4 +243,48 @@ TEST(conduit_relay_io_hdf5, hdf5_dset_slab_read_test) } +//----------------------------------------------------------------------------- +TEST(conduit_relay_io_hdf5, hdf5_dset_slab_read_test_using_opts) +{ + // create a simple buffer of doubles + Node n; + + n["full_data"].set(DataType::c_double(20)); + + double *vin = n["full_data"].value(); + + for(int i=0;i<20;i++) + { + vin[i] = i; + } + + CONDUIT_INFO("Example Full Data"); + + n.print(); + io::hdf5_write(n,"tout_hdf5_slab_opts.hdf5"); + + // read 10 strided entries starting at index 1 + // (as the slab test above, but using hdf5 read options) + + Node opts; + opts["offset"] = 1; + opts["stride"] = 2; + opts["size"] = 10; + + Node nload; + io::hdf5_read("tout_hdf5_slab_opts.hdf5:full_data",opts,nload); + + CONDUIT_INFO("Load Result"); + nload.print(); + + double *vload = nload.value(); + for(int i=0;i<10;i++) + { + EXPECT_NEAR(vload[i],1.0 + i * 2.0,1e-3); + } +} 
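For reference, the offset/stride/size options exercised by this test reduce to a 1-D hyperslab selection in raw HDF5. A condensed sketch of the equivalent direct calls (identifiers are illustrative and error checks are omitted):

```cpp
#include <hdf5.h>

// read 10 doubles from dset_id, starting at index 1 with stride 2
void read_slab(hid_t dset_id, double *out)
{
    hsize_t count[1]  = {10}; // "size": number of elements to read
    hsize_t offset[1] = {1};  // "offset": index of the first element
    hsize_t stride[1] = {2};  // "stride": step between selected elements

    // select the strided slab within the file dataspace...
    hid_t dspace = H5Dget_space(dset_id);
    H5Sselect_hyperslab(dspace, H5S_SELECT_SET,
                        offset, stride, count, NULL);

    // ...and read it into a compact 10-element memory dataspace
    hid_t mspace = H5Screate_simple(1, count, NULL);
    H5Dread(dset_id, H5T_NATIVE_DOUBLE, mspace, dspace,
            H5P_DEFAULT, out);

    // dataspace ids are closed with H5Sclose
    H5Sclose(mspace);
    H5Sclose(dspace);
}
```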