
Add function to build mesh from imported connectivity data #135

Merged

Conversation

fmahebert
Contributor

This is for use in applications like JCSDA's JEDI, where the mesh connectivity has been constructed externally to atlas (e.g. by a forecast model that does not use atlas to construct its mesh) and we want to reuse this connectivity to build an atlas NodeColumns FunctionSpace.

This function takes local (per MPI task) data about the nodes and cell-to-node connectivities, and sets up the corresponding fields in a Mesh.
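
As a purely illustrative sketch (not part of the PR), the per-task inputs for a tiny single-partition mesh of two quads sharing an edge might look like this; the variable names are placeholders rather than the final argument names:

// Illustrative only: per-task node data and cell-to-node connectivity
// for two quads sharing an edge; atlas::gidx_t is the global index type.
std::vector<double> lons = {0.0, 10.0, 20.0, 0.0, 10.0, 20.0};
std::vector<double> lats = {0.0, 0.0, 0.0, 10.0, 10.0, 10.0};
std::vector<atlas::gidx_t> node_global_indices = {1, 2, 3, 4, 5, 6};      // 1-based global node IDs
std::vector<std::array<atlas::gidx_t, 4>> quad_to_node = {{{1, 2, 5, 4}},  // each quad lists its four
                                                          {{2, 3, 6, 5}}}; // corner nodes by global ID
std::vector<atlas::gidx_t> quad_global_indices = {1, 2};                   // global cell IDs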

@FussyDuck

FussyDuck commented May 9, 2023

CLA assistant check
All committers have signed the CLA.

@fmahebert
Contributor Author

@wdeconinck Here's the mesh-building function I've been working with. Let me know if you think this is the right approach or if the functionality should be repackaged in a different way...

@wdeconinck
Member

Hi @fmahebert, thanks, that looks like a good approach.

As a first comment I'm not so keen on std::vector<QuadConnectivityData> or std::vector<TriConnectivityData>.
Perhaps we could instead have extra arguments like:
const std::vector<std::array<idx_t,4>>& quad_nodes, const std::vector<gidx_t>& quad_global_index, const std::vector<idx_t>& quad_remote_index
and similar for triangle_nodes, triangle_global_index, triangle_remote_index.

Also, how come you provide remote_index for cells but not for nodes? The nodes' remote_index can also be computed afterwards; is that what you rely on? In the case of halo==0 there should also be no need for cells_remote_index, as it matches the local numbering.

@codecov-commenter

codecov-commenter commented May 9, 2023

Codecov Report

Patch coverage: 100.00% and project coverage change: +0.03% 🎉

Comparison is base (63005d9) 77.53% compared to head (b48c753) 77.57%.

Additional details and impacted files
@@             Coverage Diff             @@
##           develop     #135      +/-   ##
===========================================
+ Coverage    77.53%   77.57%   +0.03%     
===========================================
  Files          808      807       -1     
  Lines        58119    51418    -6701     
===========================================
- Hits         45065    39888    -5177     
+ Misses       13054    11530    -1524     
Impacted Files Coverage Δ
src/atlas/mesh/MeshBuilder.cc 100.00% <100.00%> (ø)
src/tests/mesh/helper_mesh_builder.h 100.00% <100.00%> (ø)
src/tests/mesh/test_mesh_builder.cc 100.00% <100.00%> (ø)
src/tests/mesh/test_mesh_builder_parallel.cc 100.00% <100.00%> (ø)

... and 578 files with indirect coverage changes


@fmahebert
Contributor Author

As a first comment I'm not so keen on std::vector<QuadConnectivityData> or std::vector<TriConnectivityData>.

Sounds good, I will split the structs and pass extra arguments as you suggest.

Also, how come you provide remote_index for cells but not for nodes?

I don't remember how I got here, most likely ignorance?

What do you think of this plan:

  1. Add a new argument passing in the remote_index for the nodes.
  2. Remove the remote_index for the cells, because as you say, this code is assuming halo==0 anyway right now. (In the future the interface could be expanded to allow the user to pass more nodes and more cell connectivities, some of which could be marked as halo...)

Does this match your suggestion or have I misunderstood?

Finally, an unrelated question: what is the atlas policy on copyright headers? Should I put UCAR's header?

@wdeconinck
Member

That matches the suggestions indeed.
Yes, please add the UCAR copyright header to new files or files with a significant contribution.

@fmahebert
Contributor Author

Thanks for the feedback @wdeconinck! I've made some updates here, and will check how the new version integrates with fv3-jedi tomorrow. I'll report back then.

@fmahebert
Contributor Author

I've tested importing the fv3-jedi grid into an atlas Mesh via the updated interface, and it works just as well (and is easier). Happy on my end.

* Some limitations of the current implementation (but not inherent):
* - can only set up triangle and quad cells.
* - cannot import halos, i.e., cells owned by other MPI tasks; halos can still be subsequently
* computed by calling the BuildMesh action.
Contributor Author

@fmahebert fmahebert May 10, 2023

@ytremolet the current design of this PR means we'll ask the JEDI models to call the build_halo action after creating the mesh but before constructing the functionspace. It's one very simple line of code, so not a big deal.

We could streamline a little either by...

  1. adding a bool option here to call build_halo internally to this function
  2. expanding the interface so models can import their halos directly

Personally I like the current design, because it keeps the responsibilities clear and makes the halo-building step easy to spot. I think we could support (2) in the future if the teams are interested in reusing their own halo structure. Do you have a different opinion as to whether either of these options is significantly better than the current state?
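
For context, the flow described here (create the mesh, call build_halo, then construct the function space) might look roughly like the sketch below. This is illustrative only: build_halo and NodeColumns are existing atlas APIs, but the surrounding function is a made-up wrapper, not code from this PR.

// Illustrative sketch only: given a Mesh already created from the imported
// connectivities, build the halo, then construct the NodeColumns function space.
void setup_function_space(atlas::Mesh& mesh) {
    atlas::mesh::actions::build_halo(mesh, 1);                            // the one extra, explicit line
    atlas::functionspace::NodeColumns fs(mesh, atlas::option::halo(1));   // halo-aware function space
}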

@ytremolet
Contributor

It seems fine. The test doesn't cover that part (or I missed it). That's ok because it's not part of the new function, but do you have an example (in fv3 or other?) to get a better idea what it looks like from the user point of view?

@fmahebert
Contributor Author

@ytremolet I don't have something very clean, but on the fv3-jedi branch feature/fmahebert_test_atlas_mesh_builder you can see (note to others: this is not a public repo) a code block in Geometry.cc that sets up the mesh, builds a halo, and tests some halo exchange; in the Fortran you can see how the connectivity is inferred from the grid structure. All of the above is mixed in with other small changes I was testing but didn't take the time to clean out (sorry).

@ytremolet
Contributor

I had a look at your example. It looks reasonable given the information we want to get from the models. And for now the extra halo call is not a problem: this is code that is only needed once for each model and won't be repeated all over the place, so I would be happy with the current implementation.

@wdeconinck
Member

Perfect.

Note that we also need to add a "base" argument for remote_indices.
It is "1" if the field is referring to Fortran numbering.
Subtract the base from each value before assigning it to the ridx index view.
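
A minimal sketch of that base handling inside the implementation might look like the following, assuming an atlas IndexView over the nodes' remote_index field (illustrative only; variable names mirror the arguments discussed in this thread):

// Illustrative: convert incoming remote indices (e.g. 1-based Fortran numbering)
// to atlas's internal numbering by subtracting the supplied base.
auto ridx = atlas::array::make_indexview<idx_t, 1>(mesh.nodes().remote_index());
for (size_t i = 0; i < nb_nodes; ++i) {
    ridx(i) = remote_indices[i] - base;
}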

Now fine-tuning the API.

How about creating a class with operator() in the atlas::mesh namespace?

Also, let's make the implementation use C-style arrays, and have the previous std::vector-based argument list delegate to it. This way we can possibly support different container types, and create Fortran interfaces using Fortran arrays without incurring extra copies.

I mean something like the following.

class MeshBuilder {
public:
    MeshBuilder(const eckit::Configuration& = util::NoConfig()) {
        // Cater for extra configuration if needed, e.g. MPI communicator in coming development
    }
    
    Mesh operator()(size_t nb_nodes,
                    const double lons[], const double lats[],
                    const int ghosts[], const gidx_t global_indices[],
                    const idx_t remote_indices[], const idx_t base, const int partitions[],
                    size_t nb_tris,  const gidx_t tri_boundary_nodes[],  const gidx_t tri_global_indices[],
                    size_t nb_quads, const gidx_t quad_boundary_nodes[], const gidx_t quad_global_indices[]) {
        // THE IMPLEMENTATION
    }

    Mesh operator()(const std::vector<double>& lons, const std::vector<double>& lats,
                    const std::vector<int>& ghosts, const std::vector<gidx_t>& global_indices,
                    const std::vector<idx_t>& remote_indices, const idx_t base, const std::vector<int>& partitions,
                    const std::vector<std::array<gidx_t, 3>>& tri_boundary_nodes,
                    const std::vector<gidx_t>& tri_global_indices,
                    const std::vector<std::array<gidx_t, 4>>& quad_boundary_nodes,
                    const std::vector<gidx_t>& quad_global_indices) {
         // CALL C-STYLE OPERATOR
             
         size_t nb_nodes = global_indices.size();
         size_t nb_tris  = tri_global_indices.size();
         size_t nb_quads = quad_global_indices.size();
         // assert all related vectors have expected size.
         return operator()(nb_nodes, lons.data(), lats.data(),
                           ghosts.data(), global_indices.data(),
                           remote_indices.data(), base, partitions.data(),
                           nb_tris,
                           reinterpret_cast<const double*>(tri_boundary_nodes.data()),
                           tri_global_indices.data(),
                           nb_quads,
                           reinterpret_cast<const double*>(quad_boundary_nodes.data()),
                           quad_global_indices.data());
    }
};
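
For illustration, a hypothetical usage of the std::vector overload sketched above might look like this; the data values are made up and the call follows the suggested signature rather than any finalized API:

// Hypothetical usage of the vector-based overload sketched above, on a single
// partition with two triangles and no quads (all data values are illustrative).
std::vector<double> lons = {0.0, 90.0, 90.0, 0.0};
std::vector<double> lats = {0.0, 0.0, 45.0, 45.0};
std::vector<int> ghosts = {0, 0, 0, 0};                          // all nodes owned locally
std::vector<atlas::gidx_t> global_indices = {1, 2, 3, 4};        // 1-based global node numbering
std::vector<atlas::idx_t> remote_indices = {0, 1, 2, 3};         // equal to local indices, base 0
std::vector<int> partitions = {0, 0, 0, 0};                      // single MPI task
std::vector<std::array<atlas::gidx_t, 3>> tri_boundary_nodes = {{{1, 2, 3}}, {{1, 3, 4}}};
std::vector<atlas::gidx_t> tri_global_indices = {1, 2};
std::vector<std::array<atlas::gidx_t, 4>> quad_boundary_nodes;   // no quads in this example
std::vector<atlas::gidx_t> quad_global_indices;

atlas::mesh::MeshBuilder mesh_builder;
atlas::Mesh mesh = mesh_builder(lons, lats, ghosts, global_indices,
                                remote_indices, /*base=*/0, partitions,
                                tri_boundary_nodes, tri_global_indices,
                                quad_boundary_nodes, quad_global_indices);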

@wdeconinck wdeconinck self-requested a review May 12, 2023 08:59
@fmahebert
Contributor Author

fmahebert commented May 12, 2023

Thanks @wdeconinck — these suggestions all make sense; I'll make the changes.

One clarifying question though about your suggested interface: in the C++ operator(), what is the reason to reinterpret the vector<array<gidx_t>> as a double* when passing to the C-style interface?

Edit to clarify: I would expect reinterpreting as a gidx_t* as this would better match the input and output argument types... could this be what you intended?

@fmahebert
Contributor Author

Changes made (with the reinterpret_cast changed from double* to gidx_t*) and tested in the fv3-jedi testbed.

@wdeconinck
Copy link
Member

Thanks @wdeconinck — these suggestions all make sense; I'll make the changes.

One clarifying question though about your suggested interface: in the C++ operator(), what is the reason to reinterpret the vector<array<gidx_t>> as a double* when passing to the C-style interface?

Edit to clarify: I would expect reinterpreting as a gidx_t* as this would better match the input and output argument types... could this be what you intended?

Indeed, just a mistake on my part :)

@wdeconinck wdeconinck left a comment
Member

Thanks @fmahebert, this looks great, except I'd like it in the atlas::mesh namespace please.

@fmahebert
Contributor Author

Done!

@wdeconinck wdeconinck force-pushed the feature/build_mesh_from_connectivities branch from b48c753 to 14dc611 on May 22, 2023 07:42
@wdeconinck wdeconinck merged commit 1859b46 into ecmwf:develop May 22, 2023
20 checks passed
wdeconinck added a commit to JCSDA-internal/atlas that referenced this pull request Jun 6, 2023
* develop: (56 commits)
  Implement field::for_each capabilities (ecmwf#139)
  Avoid harmless FE_INVALID with nvhpc build
  Avoid harmless FE_INVALID with nvhpc build
  Remove ATLAS_FPE=0 environment variable as not needed anymore now
  Avoid harmless FE_DIVBYZERO with nvhpc build
  Optimize modules and dependencies for caching
  Add HPC build options matrix
  Workaround compiler bug in nvhpc-22.11-release build
  Update GHA nvidia-21.9 to nvidia-22.11
  Avoid and suppress some compiler warnings with nvhpc
  Fix bug where DelaunayMeshGenerator with 1 partition was not setting the grid in the mesh (ecmwf#143)
  Use Eigen 3.4
  Disable floating point signals for tests on nvidia
  Add nvidia compiler specific HPC build config
  Add function to build mesh from imported connectivity data (ecmwf#135)
  Disable GHA "linux gnu-12" OpenMP for CXX as "omp.h" header is not found :(
  Add gnu-12 ci to github actions (github-hosted runners)
  Add gnu-7 ci to github actions with github-hosted runners (ecmwf#136)
  Setup horizontal_dimensions() for BlockStructuredColumns fields
  Add function Field::horizontal_dimension() -> std::vector<idx_t>
  ...