
Add OMP parallelization for cusp correction #1643

Merged
merged 10 commits into from Jun 19, 2019

Conversation

@markdewing
Contributor

commented Jun 17, 2019

Add timers for cusp correction.

Parallelize over both the MO's and centers.

The diff view is more complicated than necessary because of an indentation change to a large section of code from adding the `pragma omp parallel` scope. The changes in the body are:

  1. Add timer start/stop around minimizeForRc
  2. Declare and initialize local (private) versions of targetPtcl, sourcePtcl, phi, and eta.
  3. Change targetPtcl, sourcePtcl, phi, and eta to their local equivalents (localTargetPtcl, localSourcePtcl, local_phi, and local_eta).

In theory, targetPtcl and sourcePtcl could be declared firstprivate, and the compiler should use the copy constructor to generate the private versions. This works with clang, but causes an internal compiler error with Intel 19.0.0.120.

The MO and center loops are interchanged again. In theory, splitting the MOs into eta and phi only needs to be done once per center. By putting the MO loop on the inside, some of that data may still be in cache when the same split is done again. The time spent splitting into eta and phi isn't large enough to justify a more complicated scheme to avoid recomputing it.

markdewing added some commits Jun 17, 2019

Add OMP parallelization for cusp correction
Parallelize over centers and MO's on each MPI rank.
Remove some unused variables.
Left over from experiments in manually flattening the MO/center loops.
@markdewing

Contributor Author

commented Jun 17, 2019

test this please

@prckent

Contributor

commented Jun 17, 2019

@PDoakORNL do you have time to look at this?

@PDoakORNL

Contributor

commented Jun 18, 2019

I will look at this today.

@PDoakORNL

Contributor

commented Jun 18, 2019

test this please

@PDoakORNL
Contributor

left a comment

At a minimum: more documentation, use of const, and do not log from multiple threads while profiling at the same time.

Does the cache argument combined with dynamic scheduling really make sense?

@@ -211,55 +223,84 @@ void generateCuspInfo(int orbital_set_size,
int start_mo = offset[Comm.rank()];
int end_mo = offset[Comm.rank() + 1];
app_log() << " Number of molecular orbitals to compute correction on this rank: " << end_mo - start_mo << std::endl;
for (int mo_idx = start_mo; mo_idx < end_mo; mo_idx++)

#pragma omp parallel

@PDoakORNL

PDoakORNL Jun 18, 2019

Contributor

I'd like to see us move toward abstract or C++ concurrency constructions. The implicit capture of variables by the omp parallel pragma contributes to hazy scope and fears about race conditions. It makes it very convenient to write monster methods that build up a huge local namespace.
Is the potential confusion about thread scope worth it just to make what I think is initialization code faster?

@prckent

prckent Jun 18, 2019

Contributor

Background info: The Cusp Correction construction/calculation is distressingly slow, so maintainable improvements are definitely welcome imo.

@markdewing

markdewing Jun 18, 2019

Author Contributor

Integrated the parallel scope with the for loop scope. This will result in additional allocations/deallocations of the objects, but it shouldn't affect the run time.

@ye-luo

ye-luo Jun 19, 2019

Contributor

Sorry, I still have a question with respect to the last change.

#pragma omp parallel
{
  ParticleSet localTargetPtcl(targetPtcl);
  #pragma omp for
  for(..)
  {
    do something on localTargetPtcl;
  }
}

This is valid OpenMP code, and it is a recommended optimization to avoid allocation and deallocation. OpenMP experts have corrected me a couple of times: my expectation that "#omp for" executes the loop iterations concurrently was wrong. "#omp for" is a worksharing construct, indicating only that the loop can be distributed among the threads spawned by the parallel region. It does not say that the loop iterations are independent and can be executed concurrently. Instead, OpenMP 5.0 introduces the loop construct, which does indicate that the iterations are independent; this definition aligns with the concept of a concurrent loop in C++ and Fortran.

Going back to the example, the thread scope is the parallel region, and localTargetPtcl is defined in the right scope.
It should behave no differently from the same code without OpenMP:

{
  ParticleSet localTargetPtcl;
  for(..) // The effect of "omp for" is changing the lower and upper bound of the loop, although it depends on the scheduling
  {
    do something on localTargetPtcl;
  }
}

The actual code did suffer from allocation/deallocation, and the imbalance between iterations amplifies all the overhead. I remember that when I profiled the cusp correction, the constructor and destructor took a lot of time. For this reason, I think the old way from @markdewing of setting up the parallel region is preferred.

@ye-luo

ye-luo Jun 19, 2019

Contributor

@PDoakORNL can we be a bit more flexible, allowing this optimization to reduce overhead? It does introduce more data into the thread scope, but the scope and lifetime are quite clear.

@markdewing

markdewing Jun 19, 2019

Author Contributor

I did some profiling on my laptop, and there doesn't seem to be significant overhead in creating the LCAOrbitalSet copies.

@PDoakORNL

PDoakORNL Jun 19, 2019

Contributor

@ye-luo I would rather leave the code as is, less dependent on OpenMP syntax. Is that acceptable?

I also think that we'd be better off refactoring egregious design issues when they are the cause of performance problems. Although, according to Mark, this isn't really the hotspot. @markdewing can you tell what it is?

@ye-luo

ye-luo Jun 19, 2019

Contributor

As @markdewing confirmed, the overhead is small. Putting the object within the innermost scope is the cleanest way. If this really impacts a workload, we can revisit it.

@markdewing

markdewing Jun 19, 2019

Author Contributor

The bulk of the time (85%) is in DGEMM called from LCAOrbitalSet::evaluate, called from OneMolecularOrbital::phi_vgl, called from getCurrentLocalEnergy.
This is for a larger system, and I killed it partway through the cusp correction (so pretty much all of the run time was spent doing cusp correction).

@ye-luo

ye-luo Jun 19, 2019

Contributor

This is not the type of overhead I worried about. I mean measuring the time of the cusp construction with and without the copy optimization.

markdewing and others added some commits Jun 18, 2019

Merge parallel scope with for loop scope
Simplifies the number of scopes.
The added allocation/deallocations for each iteration (vs. once per thread) should
have minimal performance impact.
@ye-luo
Contributor

left a comment

Hold a second. I have a few questions.

@ye-luo

ye-luo approved these changes Jun 19, 2019

@PDoakORNL PDoakORNL merged commit ede0ab1 into QMCPACK:develop Jun 19, 2019

3 checks passed

rhea-cpu
Details
rhea-cuda-experimental
Details
rhea-gpu
Details

@markdewing markdewing deleted the markdewing:cusp_omp2 branch Aug 19, 2019
