Add OMP parallelization for cusp correction #1643

Merged: 10 commits from cusp_omp2 into QMCPACK:develop on Jun 19, 2019

Conversation

markdewing (Contributor)

Add timers for cusp correction.

Parallelize over both the MO's and centers.

The diff view is more complicated than necessary because of an indentation change to a large section of code from adding the #pragma omp parallel scope. The changes in the body are:

  1. Add timer start/stop around minimizeForRc
  2. Declare and initialize local (private) versions of targetPtcl, sourcePtcl, phi, and eta.
  3. Change targetPtcl, sourcePtcl, phi, and eta to their local variable equivalents (localTargetPtcl, localSourcePtcl, local_phi, and local_eta).

In theory, targetPtcl and sourcePtcl could be declared firstprivate and the compiler should use the copy constructor to generate the private versions. This works with clang, but causes an internal compiler error with Intel 19.0.0.120.
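
A minimal, self-contained sketch of this pattern, with a stand-in HeavyObject in place of the real ParticleSet/LCAOrbitalSet objects (the actual types, loop structure, and timer calls in the PR differ):

#include <vector>

struct HeavyObject                    // stand-in for ParticleSet / LCAOrbitalSet
{
  std::vector<double> data;
  HeavyObject() : data(1000, 0.0) {}
  HeavyObject(const HeavyObject& other) = default;  // copy constructor used for the private copies
};

void cuspSketch(const HeavyObject& targetPtcl, const HeavyObject& sourcePtcl, int start_mo, int end_mo)
{
  // Explicit private copies inside the parallel region instead of firstprivate,
  // which reportedly triggers an internal compiler error with Intel 19.0.0.120.
  #pragma omp parallel
  {
    HeavyObject localTargetPtcl(targetPtcl);
    HeavyObject localSourcePtcl(sourcePtcl);

    #pragma omp for
    for (int mo_idx = start_mo; mo_idx < end_mo; mo_idx++)
    {
      // timer start; minimizeForRc(...) on the local copies; timer stop
      localTargetPtcl.data[0] += mo_idx + localSourcePtcl.data[0];  // placeholder work on the private copies
    }
  }
}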

The MO and center loops are interchanged again. In theory, splitting the MO's into eta and phi only needs to be done once per center. By putting the MO loop on the inside, hopefully some data will still be in cache when the same split is done again. The time spent splitting into eta and phi isn't large enough to justify a more complicated scheme to avoid recomputing it.

Parallelize over centers and MO's on each MPI rank.
Left over from experiments in manually flattening the MO/center loops.
markdewing (Contributor Author)

test this please

prckent (Contributor) commented Jun 17, 2019

@PDoakORNL do you have time to look at this?

PDoakORNL (Contributor)

I will look at this today.

PDoakORNL (Contributor)

test this please

PDoakORNL (Contributor) left a comment

At a minimum: more documentation, use const, and do not log from multiple threads and profile at the same time.

Does the cache argument combined with dynamic scheduling really make sense?

src/QMCWaveFunctions/lcao/CuspCorrectionConstruction.cpp
@@ -211,55 +223,84 @@ void generateCuspInfo(int orbital_set_size,
int start_mo = offset[Comm.rank()];
int end_mo = offset[Comm.rank() + 1];
app_log() << " Number of molecular orbitals to compute correction on this rank: " << end_mo - start_mo << std::endl;
for (int mo_idx = start_mo; mo_idx < end_mo; mo_idx++)

#pragma omp parallel
Contributor:

I'd like to see us move toward abstract or C++ concurrency constructions. The implicit capture of variables by the omp parallel pragma contributes to hazy scope and fears about race conditions. It makes it very convenient to write monster methods that build up a huge local namespace.
Is the potential confusion about thread scope worth it just to have what I think is initialization code be faster?
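
For context, one possible shape of such an abstract construction, sketched with the C++17 parallel algorithms and explicit lambda captures (purely illustrative, not code from this PR, and it requires a standard library with parallel-algorithm support):

#include <algorithm>
#include <execution>
#include <numeric>
#include <vector>

void correctOrbitals(int start_mo, int end_mo)
{
  std::vector<int> mo_indices(end_mo - start_mo);
  std::iota(mo_indices.begin(), mo_indices.end(), start_mo);

  // Explicit captures make the shared state visible at the call site,
  // unlike the implicit capture of an omp parallel region.
  std::for_each(std::execution::par, mo_indices.begin(), mo_indices.end(),
                [](int mo_idx) {
                  (void)mo_idx;  // per-MO cusp correction work would go here
                });
}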

Contributor:

Background info: The Cusp Correction construction/calculation is distressingly slow, so maintainable improvements are definitely welcome imo.

Contributor Author:

Integrated the parallel scope with the for loop scope. This will result in additional allocations/deallocations of the objects, but it shouldn't affect the run time.
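
In OpenMP terms the change amounts to something like the sketch below (stand-in type again, not the literal PR code):

struct HeavyObject { /* stand-in for the ParticleSet / LCAOrbitalSet copies */ };

void cuspSketch(const HeavyObject& targetPtcl, int start_mo, int end_mo)
{
  // Parallel and worksharing scopes combined: the private copy is now
  // constructed and destroyed once per iteration instead of once per thread.
  #pragma omp parallel for
  for (int mo_idx = start_mo; mo_idx < end_mo; mo_idx++)
  {
    HeavyObject localTargetPtcl(targetPtcl);
    (void)localTargetPtcl;  // per-MO work on the private copy goes here
  }
}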

Contributor:

Sorry, I still have a question with respect to the last change.

#pragma omp parallel
{
  ParticleSet localTargetPtcl(targetPtcl);
  #pragma omp for
  for (...)
  {
    // do something on localTargetPtcl
  }
}

This is valid OpenMP code, and it is a recommended optimization to avoid repeated allocation and deallocation. I have been corrected a couple of times by OpenMP experts: my expectation that "#omp for" meant concurrent execution of the loop iterations was wrong. "#omp for" is a worksharing construct, indicating that the loop can be distributed among the threads spawned by the parallel region. It does not say that the loop iterations are independent and can be executed concurrently. Instead, OpenMP 5.0 introduces the loop construct, which does indicate that the loop iterations are independent; this definition aligns with the concept of a concurrent loop in C++ and Fortran.

Going back to the example, the thread scope is the parallel region and localTargetPtcl is defined in the right scope.
It should behave no differently from the equivalent code without OpenMP:

{
  ParticleSet localTargetPtcl;
  for (...) // The effect of "omp for" is to change the lower and upper bounds of the loop, although it depends on the scheduling
  {
    // do something on localTargetPtcl
  }
}

The actual code did suffer from allocation/deallocation overhead, and the imbalance between iterations amplified it. I remember that when I profiled the cusp correction, the constructor and destructor took a lot of time. For this reason, I think the old way from @markdewing of structuring the parallel region is preferred.
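
For reference, a sketch of the OpenMP 5.0 loop construct mentioned above, which (unlike "omp for") does assert that the iterations are independent; compiler support for it still varies:

void loopSketch(int n)
{
  #pragma omp parallel
  {
    #pragma omp loop
    for (int i = 0; i < n; i++)
    {
      // iterations are declared independent, so they may be executed concurrently
    }
  }
}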

Contributor:

@PDoakORNL can we be a bit more flexible and allow this optimization to reduce overhead? It does introduce more data in the thread scope, but the scope and lifetime are quite clear.

Contributor Author:

I did some profiling on my laptop, and there doesn't seem to be significant overhead in creating the LCAOrbitalSet copies.

Contributor:

@ye-luo I would rather leave the code as is, less dependent on OpenMP syntax. Is that acceptable?

I also think that we'd be better off refactoring egregious design issues when they are the cause of performance issues, although according to Mark this isn't really the hotspot. @markdewing can you tell what it is?

Contributor:

As @markdewing confirmed, the overhead is small. Putting the object within the innermost scope is the cleanest way. If this really impacts a workload, we can revisit it.

Contributor Author:

The bulk of the time (85%) is in DGEMM, called from LCAOrbitalSet::evaluate, called from OneMolecularOrbital::phi_vgl, called from getCurrentLocalEnergy.
This is for a larger system, and I killed the run part way through the cusp correction (so pretty much all the time in the run was spent doing cusp correction).

Contributor:

This is not the type of overhead I was worried about. I meant measuring the time of the cusp construction with and without the copy optimization.

markdewing and others added 5 commits June 18, 2019 15:02
Simplifies the number of scopes. The added allocations/deallocations for each iteration (vs. once per thread) should have minimal performance impact.
ye-luo (Contributor) left a comment

Hold on a second. I have a few questions.

PDoakORNL merged commit ede0ab1 into QMCPACK:develop on Jun 19, 2019
markdewing deleted the cusp_omp2 branch on August 19, 2019