Exchanger and MPI Compile Flag #50

Open · wants to merge 69 commits into base: master

Conversation

@MalleusMalleficarum (Collaborator):

Introduces the MPI exchange protocol and the infrastructure needed to encapsulate MPI.

codecov bot commented Jan 12, 2020

Codecov Report

Merging #50 into master will decrease coverage by 28.5%.
The diff coverage is 78.21%.

Impacted file tree graph

@@            Coverage Diff             @@
##           master     #50       +/-   ##
==========================================
- Coverage    87.1%   58.6%   -28.51%     
==========================================
  Files         178     134       -44     
  Lines       16266   10621     -5645     
  Branches     7849    3300     -4549     
==========================================
- Hits        14168    6224     -7944     
- Misses       2097    4397     +2300     
+ Partials        1       0        -1
Impacted Files Coverage Δ
tests/partition/evolutionary/combine_test.cc 100% <ø> (ø) ⬆️
kahypar/partition/evolutionary/combine.h 100% <ø> (ø) ⬆️
kahypar/datastructure/hypergraph.h 75.56% <ø> (-18.02%) ⬇️
kahypar/partition/partitioner.h 63.2% <ø> (ø) ⬆️
kahypar/partition/context_enum_classes.h 41.81% <0%> (-6.82%) ⬇️
...efinement/flow/policies/flow_region_build_policy.h 0% <0%> (-93.34%) ⬇️
...tition/refinement/flow/most_balanced_minimum_cut.h 0% <0%> (-99.45%) ⬇️
kahypar/partition/fixed_vertices.h 0% <0%> (-66.67%) ⬇️
kahypar/utils/randomize.h 88.88% <0%> (-11.12%) ⬇️
...n/refinement/flow/quotient_graph_block_scheduler.h 0% <0%> (-100%) ⬇️
... and 131 more

Continue to review full report at Codecov.

Legend: Δ = absolute <relative> (impact), ø = not affected, ? = missing data
Last update 5b57e34...3133c74.

@SebastianSchlag (Member) left a comment:

Please address the following issues.

@@ -380,7 +381,7 @@ struct EvolutionaryParameters {
   double gamma;
   size_t edge_frequency_amount;
   bool dynamic_population_size;
-  double dynamic_population_amount_of_time;
+  float dynamic_population_amount_of_time;
@SebastianSchlag (Member):

Why float?

equal_sequential_time,
equal_to_the_number_of_mpi_processes,
as_usual
};
@SebastianSchlag (Member):

We need better names for these 3 strategies.

@MalleusMalleficarum (Collaborator, Author):

as_usual = 15_percent_of_total_time
equal_sequential_time = 15_percent_of_total_time_times_num_procs
equal_to_the_number_of_mpi_processes = equal_to_the_number_of_mpi_processes

  • add some comments that briefly describe the strategies (see the sketch below)
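
A minimal sketch of such comments, using the renames proposed above (the enum name PopulationSizeStrategy and the exact wording are illustrative, not part of the PR):

enum class PopulationSizeStrategy {
  // Size the population so that roughly 15% of the total time limit is spent
  // on building it, times the number of MPI processes
  // (previously "equal_sequential_time").
  percent_of_total_time_times_num_procs,
  // Use exactly one individual per MPI process.
  equal_to_the_number_of_mpi_processes,
  // Size the population so that roughly 15% of the total time limit is spent
  // on building it (previously "as_usual").
  percent_of_total_time
};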

DBG << " ";
}
DBG << "";
inline std::string preface(const Context& context) {
@SebastianSchlag (Member):

This should probably be a method of the communicator.
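
A minimal sketch of that refactoring; the Communicator class name and its _rank member are assumptions based on the surrounding snippets:

#include <string>

class Communicator {
 public:
  // Debug-output prefix identifying the emitting MPI process.
  std::string preface() const {
    return "[MPI Rank " + std::to_string(_rank) + "] ";
  }

 private:
  int _rank = 0;
};

Call sites such as the DBG statements above could then write DBG << _communicator.preface() << ... instead of assembling the rank string themselves.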

@@ -55,12 +63,12 @@ class Population {
    }
  }
  inline size_t forceInsert(Individual&& individual, const size_t position) {
-    DBG << V(position) << V(individual.fitness());
+    DBG << " MPIRank " << _mpi_rank << ":" << V(position) << V(individual.fitness());
@SebastianSchlag (Member):

If you make preface a method of the communicator, it can also be used here.

evo_part.partition(hypergraph, context);
std::vector<double> times = Timer::instance().evolutionaryResult().evolutionary;
double total_time = Timer::instance().evolutionaryResult().total_evolutionary;
ASSERT_GT(total_time, context.partition.time_limit);
ASSERT_LT(total_time - times.at(times.size() - 1), context.partition.time_limit);
// TODO verify
@SebastianSchlag (Member):

What about this TODO?

}
inline void broadcastPopulationSize(Context& context) {
MPI_Bcast(&context.evolutionary.population_size, 1, MPI_INT, 0, MPI_COMM_WORLD);
MPI_Barrier(MPI_COMM_WORLD);
@SebastianSchlag (Member):

Why do you need this barrier?
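
For context: MPI_Bcast is itself a blocking collective, so each rank already holds the root's value once its own call returns; the barrier adds a full synchronization that the broadcast does not need. A minimal sketch without it:

inline void broadcastPopulationSize(Context& context) {
  // Blocking collective: on return, every rank holds the root's population_size.
  MPI_Bcast(&context.evolutionary.population_size, 1, MPI_INT, 0, MPI_COMM_WORLD);
}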

}


inline int getRandomOpenTarget() {
@SebastianSchlag (Member):

This method seems to be overkill. Why don't you just choose a random number and check whether or not _individual_already_sent_to[current_target] is false?
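
A minimal sketch of the suggested rejection-sampling variant, using std::mt19937 instead of the project's Randomize utility (the free-function signature is illustrative):

#include <random>
#include <vector>

// Draws random targets until one is found that has not been sent to yet.
// Assumes at least one entry of individual_already_sent_to is still false.
inline int getRandomOpenTarget(const std::vector<bool>& individual_already_sent_to,
                               std::mt19937& rng) {
  std::uniform_int_distribution<int> dist(
    0, static_cast<int>(individual_already_sent_to.size()) - 1);
  int current_target = dist(rng);
  while (individual_already_sent_to[current_target]) {
    current_target = dist(rng);
  }
  return current_target;
}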



// Permutes the vector so that no element remains at its original position.
inline void degenerate(std::vector<int>& vector) {
@SebastianSchlag (Member):

This method needs a better name. Also, why didn't you just use: https://en.cppreference.com/w/cpp/algorithm/random_shuffle
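
Worth noting: std::random_shuffle is deprecated since C++14 and removed in C++17 (std::shuffle is the replacement), and a plain shuffle does not guarantee the documented no-fixed-point property. If that property is actually required, Sattolo's algorithm produces a uniformly random cyclic permutation, which by construction moves every element. A minimal sketch (the name derange is illustrative):

#include <random>
#include <utility>
#include <vector>

// Sattolo's algorithm: generates a random cyclic permutation,
// so no element remains at its original position.
inline void derange(std::vector<int>& vec, std::mt19937& rng) {
  if (vec.size() < 2) {
    return;  // a single element cannot be moved
  }
  for (std::size_t i = vec.size() - 1; i > 0; --i) {
    // Picking strictly below i (never i itself) is what distinguishes
    // Sattolo's algorithm from Fisher-Yates and rules out fixed points.
    std::uniform_int_distribution<std::size_t> dist(0, i - 1);
    std::swap(vec[i], vec[dist(rng)]);
  }
}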

int sending_to = permutation_of_mpi_process_numbers[_rank];
int receiving_from = findCorrespondingSender(permutation_of_mpi_process_numbers);

std::vector<PartitionID> outgoing_partition = population.individualAt(population.randomIndividual()).partition();
@SebastianSchlag (Member):

Use std::vector& to prevent an unnecessary vector copy - or do you really need a copy here because of the async send?
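
For context: with a nonblocking MPI_Isend, the send buffer must stay valid and unmodified until the matching MPI_Wait completes, so a reference into the population is only safe if the individual cannot change or be evicted in the meantime; otherwise the copy is justified. A minimal sketch of the reference variant, reusing names from the surrounding snippets and assuming the MPI-3 const-correct MPI_Isend signature:

const std::vector<PartitionID>& outgoing_partition =
  population.individualAt(population.randomIndividual()).partition();
MPI_Request request;
// The referenced buffer must remain untouched until MPI_Wait returns.
MPI_Isend(outgoing_partition.data(), 1, _MPI_Partition, sending_to, 0,
          _communicator, &request);
// ... post the matching receive here ...
MPI_Wait(&request, MPI_STATUS_IGNORE);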

&received_partition_vector[0], 1, _MPI_Partition, receiving_from, 0, _communicator, &st);


hg.reset();
@SebastianSchlag (Member):

Again, the reset is not necessary because it is already called in setPartition.

std::vector<PartitionID> outgoing_partition = population.individualAt(population.randomIndividual()).partition();
std::vector<PartitionID> received_partition_vector;
received_partition_vector.resize(hg.initialNumNodes());
DBG << preface() << "receiving from " << receiving_from << " sending to " << sending_to << "quick_start";
@MalleusMalleficarum (Collaborator, Author):

Allocate the vector directly with initialNumNodes() entries.
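
A minimal sketch of that change, replacing the default construction plus resize in the snippet above:

std::vector<PartitionID> received_partition_vector(hg.initialNumNodes());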

Robin Andre and others added 30 commits January 25, 2020 11:21
…omize, introduced a local partition buffer
Labels: none yet
Projects: none yet
Linked issues: none yet
2 participants