Use std::max and std::min #2521

Merged: 2 commits merged into nest:master on Nov 7, 2022
Conversation

@med-ayssar (Contributor) commented Nov 3, 2022

Replace the while-loop in VPManager::get_end_rank_per_thread with std::max.

thread
VPManager::get_end_rank_per_thread( const thread rank_start, const thread num_assigned_ranks_per_thread ) const
{
  thread rank_end = rank_start + num_assigned_ranks_per_thread;

  // if we have more threads than ranks, or if ranks cannot be
  // distributed evenly on threads, we need to make sure that all
  // threads care only about existing ranks
  while ( rank_end > kernel().mpi_manager.get_num_processes() )
  {
    --rank_end;
    // we use rank_end == rank_start as a sign that this thread
    // does not do any work
    if ( rank_end == rank_start )
    {
      break;
    }
  }

  return rank_end;
}

Here we first check whether rank_end is greater than the number of MPI processes (henceforth MPI). If so, we keep decreasing rank_end by one until it reaches either MPI or rank_start; otherwise we do nothing and keep the initial value of rank_end.

If rank_start is greater than MPI, then rank_end is also greater than MPI, so decrementing rank_end will reach rank_start first. Otherwise, when rank_start is less than MPI but rank_end is greater than MPI, decrementing rank_end will reach MPI first.

Therefore, whenever rank_end is greater than MPI, the final value of rank_end is simply the maximum of rank_start and MPI, so no while-loop is needed.
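
A minimal sketch of the loop-free version, based on the reasoning above (not necessarily the exact code in the merged commit; std::max comes from <algorithm>, and the explicit template argument is only a guard in case get_num_processes() returns a type other than thread):

thread
VPManager::get_end_rank_per_thread( const thread rank_start, const thread num_assigned_ranks_per_thread ) const
{
  thread rank_end = rank_start + num_assigned_ranks_per_thread;

  // if this thread's range extends past the last existing rank, clamp it;
  // rank_end == rank_start still signals that this thread has no work to do
  if ( rank_end > kernel().mpi_manager.get_num_processes() )
  {
    rank_end = std::max< thread >( rank_start, kernel().mpi_manager.get_num_processes() );
  }

  return rank_end;
}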

Replace the if-else in SimulationManager::run with std::min.

  delay end_sim = from_step_ + to_do_;
  if ( kernel().connection_manager.get_min_delay() < end_sim )
  {
    to_step_ = kernel().connection_manager.get_min_delay(); // update to end of time slice
  }
  else
  {
    to_step_ = end_sim; // update to end of simulation time
  }

Here, if end_sim is larger than the min-delay, then to_step_ is set to the min-delay; otherwise to_step_ is set to end_sim. In short, to_step_ is set to the minimum of end_sim and the min-delay.
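
A sketch of the equivalent one-liner, again illustrating the idea rather than quoting the merged diff (std::min also comes from <algorithm>; the explicit template argument assumes to_step_ and end_sim are of type delay and only guards against a mismatch with the return type of get_min_delay()):

  delay end_sim = from_step_ + to_do_;
  // advance either to the end of the current time slice or to the end of the
  // simulation time, whichever comes first
  to_step_ = std::min< delay >( kernel().connection_manager.get_min_delay(), end_sim );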

@med-ayssar marked this pull request as ready for review November 3, 2022 11:15
@heplesser added the labels S: Normal, T: Maintenance, and I: No breaking change Nov 4, 2022
@heplesser added this to "PRs in progress" in Kernel via automation Nov 4, 2022
@hakonsbm (Contributor) left a comment:
LGTM!

@hakonsbm merged commit d6474fb into nest:master Nov 7, 2022
Kernel automation moved this from PRs in progress to Done (PRs and issues) Nov 7, 2022