Exchanger and MPI Compile Flag #50
base: master
Conversation
seed generation to the point before the seed is actually placed in the program, to ensure that the different MPI processes are actually doing something different.
and out of the logic of other classes
Codecov Report
@@            Coverage Diff             @@
##           master      #50       +/-  ##
==========================================
- Coverage    87.1%    58.6%   -28.51%
==========================================
  Files         178      134       -44
  Lines       16266    10621     -5645
  Branches     7849     3300     -4549
==========================================
- Hits        14168     6224     -7944
- Misses       2097     4397     +2300
+ Partials        1        0        -1
Continue to review full report at Codecov.
Please address the following issues.
@@ -380,7 +381,7 @@ struct EvolutionaryParameters {
   double gamma;
   size_t edge_frequency_amount;
   bool dynamic_population_size;
-  double dynamic_population_amount_of_time;
+  float dynamic_population_amount_of_time;
Why float?
  equal_sequential_time,
  equal_to_the_number_of_mpi_processes,
  as_usual
};
We need better names for these three strategies.
as_usual = 15_percent_of_total_time
equal_sequential_time = 15_percent_of_total_time_times_num_procs
equal_to_the_number_of_mpi_processes = equal_to_the_number_of_mpi_processes
- add some comments that briefly describe the strategies
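A renaming along the lines proposed above might look like the following sketch. The names and comments are suggestions only, not the final API; the 15% figure comes from the proposal in this comment:

```cpp
// Sketch of the renamed strategy enum (names are suggestions, not final).
enum class MigrationTimeStrategy {
  // formerly "as_usual": each process may spend 15% of the total time limit.
  percent_of_total_time,
  // formerly "equal_sequential_time": 15% of the total time limit times the
  // number of MPI processes, matching the budget of a sequential run.
  percent_of_total_time_times_num_procs,
  // unchanged meaning: the budget equals the number of MPI processes.
  equal_to_the_number_of_mpi_processes
};
```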
kahypar/partition/evo_partitioner.h
  DBG << " ";
}
DBG << "";
inline std::string preface(const Context& context) {
This should probably be a method of the communicator.
@@ -55,12 +63,12 @@ class Population {
     }
   }
   inline size_t forceInsert(Individual&& individual, const size_t position) {
-    DBG << V(position) << V(individual.fitness());
+    DBG << " MPIRank " << _mpi_rank << ":" << V(position) << V(individual.fitness());
If you make preface a method of the communicator, it can also be used here.
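A hypothetical sketch of the suggestion: `preface()` as a member of the communicator wrapper, so every class that holds a communicator prefixes its debug output uniformly. The class shape and the prefix format are assumptions, not the actual KaHyPar interface:

```cpp
#include <sstream>
#include <string>

// Hypothetical sketch: preface() as a member of the MPI communicator wrapper.
// Any class holding a Communicator can then reuse the same debug prefix.
class Communicator {
 public:
  explicit Communicator(int rank) : _rank(rank) { }

  // Returns a uniform debug-output prefix, e.g. "[MPI Rank 3] ".
  std::string preface() const {
    std::ostringstream oss;
    oss << "[MPI Rank " << _rank << "] ";
    return oss.str();
  }

 private:
  int _rank;
};
```

The Population debug line above could then read `DBG << _communicator.preface() << V(position) << V(individual.fitness());` instead of formatting the rank inline.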
evo_part.partition(hypergraph, context);
std::vector<double> times = Timer::instance().evolutionaryResult().evolutionary;
double total_time = Timer::instance().evolutionaryResult().total_evolutionary;
ASSERT_GT(total_time, context.partition.time_limit);
ASSERT_LT(total_time - times.at(times.size() - 1), context.partition.time_limit);
// TODO verify
What about this TODO?
}
inline void broadcastPopulationSize(Context& context) {
  MPI_Bcast(&context.evolutionary.population_size, 1, MPI_INT, 0, MPI_COMM_WORLD);
  MPI_Barrier(MPI_COMM_WORLD);
Why do you need this barrier?
}
inline int getRandomOpenTarget() { |
This method seems like overkill. Why don't you just choose a random number and check whether or not _individual_already_sent_to[current_target] is false?
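The rejection-sampling variant described here could look like the sketch below. It takes the sent-flags as a parameter for self-containment; in the actual class this would presumably read the `_individual_already_sent_to` member instead:

```cpp
#include <random>
#include <vector>

// Sketch: pick a random target rank that has not yet received the individual,
// by sampling and re-rolling instead of first building a list of open targets.
// Precondition: at least one entry is false, otherwise this loops forever.
int getRandomOpenTarget(const std::vector<bool>& individual_already_sent_to,
                        std::mt19937& rng) {
  std::uniform_int_distribution<int> dist(
      0, static_cast<int>(individual_already_sent_to.size()) - 1);
  int current_target = dist(rng);
  // Re-roll until we hit a rank that is still open.
  while (individual_already_sent_to[current_target]) {
    current_target = dist(rng);
  }
  return current_target;
}
```

With few MPI processes and at least one open target, the expected number of re-rolls stays small, which is why the simpler loop is usually sufficient here.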
// Permutates the vector so that no element is left on the original position in the vector.
inline void degenerate(std::vector<int>& vector) {
This method needs a better name. Also, why didn't you just use: https://en.cppreference.com/w/cpp/algorithm/random_shuffle
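One caveat with `std::random_shuffle` (or `std::shuffle`): it does not guarantee that every element moves. If the fixed-point-free property documented in the comment is actually required, Sattolo's algorithm is a drop-in alternative: a Fisher-Yates variant that yields a uniformly random *cyclic* permutation, which by construction leaves no element in place. A sketch (the name `derange` is a suggestion, not the existing code):

```cpp
#include <random>
#include <utility>
#include <vector>

// Sattolo's algorithm: like Fisher-Yates, but the swap index j is drawn
// strictly below i. The result is a random cyclic permutation, so no
// element stays at its original position.
inline void derange(std::vector<int>& vec, std::mt19937& rng) {
  if (vec.size() < 2) return;  // nothing to derange
  for (std::size_t i = vec.size() - 1; i > 0; --i) {
    std::uniform_int_distribution<std::size_t> dist(0, i - 1);  // j < i
    std::swap(vec[i], vec[dist(rng)]);
  }
}
```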
int sending_to = permutation_of_mpi_process_numbers[_rank];
int receiving_from = findCorrespondingSender(permutation_of_mpi_process_numbers);

std::vector<PartitionID> outgoing_partition = population.individualAt(population.randomIndividual()).partition();
Use std::vector& to prevent unnecessary vector copy - or do you really need a copy here because of async send?
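To illustrate the trade-off (sketch; `PartitionID` and the `Individual` accessor are stand-ins for the real KaHyPar types): binding a const reference avoids the copy for a blocking `MPI_Send`, whereas a non-blocking `MPI_Isend` needs its buffer to stay valid and unmodified until the matching `MPI_Wait`, so a copy can be the safer choice there.

```cpp
#include <vector>

using PartitionID = int;  // placeholder for the real KaHyPar typedef

struct Individual {
  std::vector<PartitionID> part;
  // Assumed accessor: returns the stored partition by const reference.
  const std::vector<PartitionID>& partition() const { return part; }
};

// Blocking send:  a const reference avoids the copy entirely.
//   const std::vector<PartitionID>& outgoing = individual.partition();
// Async send:     the buffer must outlive MPI_Isend up to MPI_Wait,
//                 and the individual must not mutate meanwhile -> copy.
```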
  &received_partition_vector[0], 1, _MPI_Partition, receiving_from, 0, _communicator, &st);

hg.reset();
Again, reset is not necessary because it is called in setPartition.
std::vector<PartitionID> outgoing_partition = population.individualAt(population.randomIndividual()).partition();
std::vector<PartitionID> received_partition_vector;
received_partition_vector.resize(hg.initialNumNodes());
DBG << preface() << "receiving from " << receiving_from << " sending to " << sending_to << "quick_start";
Allocate the vector directly with initialNumNodes() entries.
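That is, use the sized constructor, which value-initializes every entry in a single allocation instead of default-constructing and then resizing. A minimal sketch (`PartitionID` is a stand-in for the real typedef; the helper function exists only to make the snippet self-contained):

```cpp
#include <cstddef>
#include <vector>

using PartitionID = int;  // placeholder for the real KaHyPar typedef

// Replaces the default-construct + resize() pair:
//   std::vector<PartitionID> received_partition_vector(hg.initialNumNodes());
std::vector<PartitionID> makeReceiveBuffer(std::size_t initial_num_nodes) {
  return std::vector<PartitionID>(initial_num_nodes);
}
```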
… in command line options
…omize, introduced a local partition buffer
and out of the logic of other classes
Introduces the MPI exchange protocol and the infrastructure necessary to encapsulate MPI