Merging a partitioned ascent blueprint extract to single partition #1270

Closed
mlohry opened this issue Apr 15, 2024 · 4 comments

mlohry commented Apr 15, 2024

I have a Blueprint HDF5 extract, made using Ascent with MPI, which writes a partitioned set of HDF5 files following the pattern

mesh.root
mesh/
mesh/domain_000.hdf5
mesh/domain_001.hdf5
...
mesh/domain_XYZ.hdf5

and I would like to merge these into a single domain. Reading the .root file like so:

  auto h5_id = conduit::relay::io::hdf5_open_file_for_read("mesh.root");
  conduit::Node node;
  conduit::relay::io::hdf5_read(h5_id, node);
  conduit::relay::io::hdf5_close_file(h5_id);
  node.print();

shows field names and partition maps as expected. Then I try to merge these into one partition:

  conduit::Node partition_options;
  partition_options["target"] = 1;
  conduit::Node merged_mesh;
 // conduit::blueprint::mesh::partition(node, partition_options, merged_mesh);
 // conduit::Error Cannot access non-existent child "topologies" from Node(blueprint_index),
 // needs to be passed a subnode apparently.  
  conduit::blueprint::mesh::partition(node["blueprint_index/mesh"], partition_options, merged_mesh);

and it fails with

terminate called after throwing an instance of 'conduit::Error'
message:
Cannot access non-existent child "elements" from Node(blueprint_index/mesh/topologies/topo)

It does not seem to be reading the per-domain HDF5 files, so it can't see the "elements" of the individual domains. What am I doing wrong here?

If I open one of the individual HDF5 files directly, I see the "real" topology of that domain as expected, so conceptually I could manually load all these partitions independently and merge them.
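
A rough sketch of that manual approach (untested; it assumes each domain file holds a complete blueprint mesh at its root, and the domain count and %03d file pattern are just illustrative):

  // Rough sketch of the manual fallback (untested): load each domain file
  // into a child of a multi-domain node, then merge with the partitioner.
  // "num_domains" and the %03d file pattern are assumptions for illustration.
  #include <cstdio>
  #include "conduit_blueprint.hpp"
  #include "conduit_relay.hpp"

  int main()
  {
      const int num_domains = 4; // hypothetical domain count

      conduit::Node multi_domain;
      for(int d = 0; d < num_domains; d++)
      {
          char path[256];
          std::snprintf(path, sizeof(path), "mesh/domain_%03d.hdf5", d);
          // each list child of multi_domain is treated as one domain
          conduit::relay::io::load(path, "hdf5", multi_domain.append());
      }

      conduit::Node options, merged;
      options["target"] = 1; // merge everything into a single domain
      conduit::blueprint::mesh::partition(multi_domain, options, merged);

      merged.print();
      return 0;
  }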


cyrush commented Apr 15, 2024

Hi @mlohry, here is what you need:

You can load a mesh, using the info from the root file plus all of the other data files it references, with the method:

conduit::relay::io::blueprint::read_mesh

https://llnl-conduit.readthedocs.io/en/latest/blueprint_mesh.html#loading-meshes-from-files

conduit::relay::io::blueprint::read_mesh(const std::string &root_file_path,
                                         conduit::Node &mesh);

After this, the node passed as mesh will contain a verified blueprint mesh that you can feed to re-partition.
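
For example, a minimal sketch (untested; header names are from memory, and it reuses the mesh.root file and target = 1 option from above):

  // Minimal sketch (untested): read the root file plus all referenced
  // domain files, then merge everything into one domain.
  #include "conduit_blueprint.hpp"
  #include "conduit_relay_io_blueprint.hpp"

  int main()
  {
      conduit::Node mesh;
      conduit::relay::io::blueprint::read_mesh("mesh.root", mesh);

      conduit::Node options, merged;
      options["target"] = 1; // single output domain
      conduit::blueprint::mesh::partition(mesh, options, merged);

      merged.print();
      return 0;
  }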


cyrush commented Apr 15, 2024

For more context, the root file has metadata that reflects the larger structure of the domain-decomposed mesh, and that info points to locations in other HDF5 files. The read_mesh method handles all of the bookkeeping to read things back in.

There is also an MPI version that allows you to read the mesh but have the domains distributed across MPI tasks.
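
A rough sketch of that path (untested; the exact namespaces and signatures of the MPI variants are assumptions here, so check them against your Conduit version):

  // Rough sketch (untested): domains are read distributed across MPI tasks,
  // then the MPI-aware partitioner merges them to a single domain.
  // conduit::relay::mpi::io::blueprint::read_mesh and
  // conduit::blueprint::mpi::mesh::partition are assumed names -- verify.
  #include <mpi.h>
  #include "conduit_blueprint_mpi.hpp"
  #include "conduit_relay_mpi_io_blueprint.hpp"

  int main(int argc, char **argv)
  {
      MPI_Init(&argc, &argv);
      MPI_Comm comm = MPI_COMM_WORLD;

      conduit::Node mesh;
      conduit::relay::mpi::io::blueprint::read_mesh("mesh.root", mesh, comm);

      conduit::Node options, merged;
      options["target"] = 1;
      conduit::blueprint::mpi::mesh::partition(mesh, options, merged, comm);

      MPI_Finalize();
      return 0;
  }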


mlohry commented Apr 15, 2024

Thank you @cyrush , got it.

mlohry closed this as completed Apr 15, 2024

mlohry commented Apr 30, 2024

@cyrush , I got that aspect working, but does this partition merging know about ascent_ghosts? I'm getting holes in the mesh along partition lines when merging to one partition. The partitioned extract is the result of giving Blueprint + Ascent each partition in 0-based local ordering.
