Continuous release #17

Open · keksklauer4 wants to merge 15 commits into master

Conversation

keksklauer4
Member

No description provided.

Splines and others added 8 commits November 7, 2021 15:16
TL;DR:
- Refactor graph Python module
- Init Sphinx documentation: https://majorminer.readthedocs.io/

* [Py] Implement BFS and DFS for initialization

BFS: Breadth-first search
DFS: Depth-first search
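
For reference, a minimal, self-contained sketch of the two traversals over an adjacency-dict graph, roughly as they might be used to seed an initial embedding (not the repository's actual implementation; the function names and the adjacency format are illustrative):

```python
# Minimal sketch: BFS and DFS over an adjacency-dict graph.
from collections import deque

def bfs(adjacency, start):
    """Yield nodes in breadth-first order starting from `start`."""
    visited, queue = {start}, deque([start])
    while queue:
        node = queue.popleft()
        yield node
        for neighbor in adjacency[node]:
            if neighbor not in visited:
                visited.add(neighbor)
                queue.append(neighbor)

def dfs(adjacency, start):
    """Yield nodes in depth-first order starting from `start`."""
    visited, stack = set(), [start]
    while stack:
        node = stack.pop()
        if node in visited:
            continue
        visited.add(node)
        yield node
        stack.extend(n for n in adjacency[node] if n not in visited)

# Example: graph = {0: [1, 2], 1: [0, 2], 2: [0, 1]}; list(bfs(graph, 0)) -> [0, 1, 2]
```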

* Refactor undirected Graph and add docstrings.

* Init Sphinx ReadTheDocs documentation

* Add documentation badge

* Add docstrings to graph

* Init Chimera graph docs

* Add ReadTheDocs configuration to fix doc errors

Use Python 3.9 instead of version 3.7 for building.

* Add docstrings to embedding graph

This partially breaks some chain logic, so we will need to adjust
some functions one abstraction layer above.

* Fix get_embedded_nodes

* Add overview section to docs including diagrams

* Add more docs

* Add warning for outdated Readme
* Extend from Chimera cell to whole Chimera lattice

- Also outsource test graphs
- Do evolution with multiple mutation steps

* Simplify connected test graph generation

* Try to embed endlessly until embedding found

Also save intermediate results as SVG

* Add more chain colors and SVG export option

* Init logging instead of prints

* Save intermediate SVG graphs

* Do not return multiple chains (instead use set)

* Add typing to util function

* Only use from_nodes which are not in a chain

* Tighten error handling

* Allow mutation for to_nodes that are in a chain

* Include max_total as bound for main_loop passes

* Export step-by-step SVG

For this, we moved the local maximum technique out of the mutation itself.
It now has to be called from outside (evolution.py).

* Change draw direction to horizontal
* ⚠ Overcomplicate logic for random chains

We want to allow for more chain scenarios, e.g. add a random chain where
both source and target node are in chains themselves. This led to
overcomplicated code having to deal with chains that are placed on edges
instead of nodes. Thus, this commit is a working example for better
chain placement (although not perfect), but definitely needs to be
redone in the next commits (e.g. using the already-used mapping from H
to G instead of essentially redundant chains on edges).

* Rename from_node->source, to_node->target

This only affects the embedding and solver module.

* Shift from edge to node supernode encoding

Beforehand, we encoded supernodes using edge costs: an edge with cost 0 was
no chain, while an edge with cost 1 indicated that its two nodes mapped to
the same node_H and thus belong to the same supernode.

However, this was essentially redundant, as we also store the mapping
between input graph H and hardware graph G in the mapping class. We now
leverage this class and encode supernodes directly, thus getting rid
of edge chains (remnants are still there, will clean up later); see the
sketch below. It might be worth keeping the term "chain" for supernodes
that consist of more than one node.

This change makes the embedding itself worse, e.g. K4 does not work
anymore. This is because we now also allow merging random nodes into
supernodes where both source and target are in chains. We still need to
implement the evolutionary strategy that selects the mutation yielding the
best result, in our case the one where the most edges can be embedded
(to achieve a "real" local maximum).

Also changed:
- Changed lists to sets in various places
- Adapted drawing class to the new changes (to properly draw supernodes)
- Made logging output more precise
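
For illustration, a hedged sketch of the node-based supernode encoding described above; the class and method names (SupernodeMapping, assign, supernode, owner) are assumptions, not the repo's actual API:

```python
# Sketch: encode supernodes directly as node_H -> set of hardware nodes,
# plus the inverse mapping, instead of marking chains via edge costs.
class SupernodeMapping:
    def __init__(self):
        self._h_to_g = {}   # node_H -> set of hardware nodes forming its supernode
        self._g_to_h = {}   # hardware node -> node_H it currently represents

    def assign(self, node_H, node_G):
        """Add hardware node node_G to the supernode of node_H."""
        self._h_to_g.setdefault(node_H, set()).add(node_G)
        self._g_to_h[node_G] = node_H

    def supernode(self, node_H):
        """All hardware nodes mapped to node_H (a 'chain' if more than one)."""
        # Return a copy, cf. the set-by-reference fix described in a later commit.
        return set(self._h_to_g.get(node_H, set()))

    def owner(self, node_G):
        """Input node the hardware node belongs to, or None if it is free."""
        return self._g_to_h.get(node_G)
```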

* Rewrite extend_random_supernode and fix bugs

Split the method into multiple pieces for better readability and fixed
several bugs that occurred due to a wrong order of execution. It is still
quirky that we can't embed edges freely, since all nodes on such an edge
must belong to a supernode (in the current implementation). That's why there
are some special constructs, e.g. we need to discard nodes from candidate
sets so that the results do not change after the nodes have been embedded
in supernodes. There is room for improvement here.

Fixed a really annoying bug: Python hands out sets by reference, so we got
weird results when we called discard on a returned set while assuming we
were only altering a local variable. Now we copy the set (illustrated below).
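
A small illustration of this bug class and the fix (the class and attribute names are made up):

```python
# Returning an internal set by reference lets callers mutate internal state.
class Embedding:
    def __init__(self):
        self._free_neighbors = {1, 2, 3}

    def free_neighbors_buggy(self):
        return self._free_neighbors          # reference to internal state

    def free_neighbors_fixed(self):
        return set(self._free_neighbors)     # independent copy

emb = Embedding()
candidates = emb.free_neighbors_buggy()
candidates.discard(2)                        # silently mutates emb._free_neighbors too
assert 2 not in emb._free_neighbors          # the surprising side effect

emb2 = Embedding()
candidates = emb2.free_neighbors_fixed()
candidates.discard(2)                        # only the local copy changes
assert 2 in emb2._free_neighbors
```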

Fixed another potential bug where we accidentally added self-loops to
the embedding view graph.

Also improved error messages.

* Get rid of chain remnants

Note that at this point we still sometimes use the word "chain";
however, chains are no longer encoded on edges but on the nodes themselves.

TODO: Unify usage of words "chain" and "supernode" throughout the code.

* Explicitly use sets instead of lists

* Draw supernode colors on nodes (not only on edges)

Also slightly shifted labels of Chimera graph for better readability.

* Change color brightness instead of transparency

* Fix bug not operating on playground embedding

* Fix bug: forgot to remove target from previous supernode

* Init basic hill climbing (evolution)

After each generation we also try to remove unnecessary edges.

* Move supernode connectedness check to embedding

* Remove redundant nodes probabilistically

* Implement articulation point algorithm (cut node)

This way, we don't remove nodes whose removal would disconnect the subgraph
induced by the respective supernode.

See https://www.geeksforgeeks.org/articulation-points-or-cut-vertices-in-a-graph/
and https://youtu.be/jFZsDDB0-vo for explanations; a sketch follows below.
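
A sketch of the standard DFS low-link algorithm for articulation points, operating on an adjacency dict; this is the textbook version, not necessarily the exact implementation in this PR (networkx also ships nx.articulation_points as a ready-made alternative):

```python
# Articulation points (cut vertices) of an undirected graph via DFS low-links.
def articulation_points(adjacency):
    """Return the set of cut vertices of a graph given as {node: iterable of neighbors}."""
    visited, depth, low, cut = set(), {}, {}, set()

    def dfs(node, parent, d):
        visited.add(node)
        depth[node] = low[node] = d
        child_count = 0
        for neighbor in adjacency[node]:
            if neighbor == parent:
                continue
            if neighbor in visited:
                low[node] = min(low[node], depth[neighbor])   # back edge
            else:
                child_count += 1
                dfs(neighbor, node, d + 1)
                low[node] = min(low[node], low[neighbor])
                # Non-root: cut vertex if no back edge from the subtree climbs above node.
                if parent is not None and low[neighbor] >= depth[node]:
                    cut.add(node)
        # Root: cut vertex iff it has more than one DFS child.
        if parent is None and child_count > 1:
            cut.add(node)

    for start in adjacency:
        if start not in visited:
            dfs(start, None, 0)
    return cut
```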

* Fix removal of redundant nodes

Previously, we removed all nodes that were not articulation points
for every supernode in one go. However, as nodes are removed, the
articulation points may change, so we need to recalculate them after
every single removal (see the sketch below).
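
A sketch of the corrected removal loop, assuming an adjacency dict of sets; the function and variable names are illustrative, and articulation_points refers to the sketch above:

```python
# Recompute articulation points after every single removal instead of once.
def remove_redundant_nodes(supernode_subgraph, removable):
    """Remove nodes from `removable` as long as they are not cut vertices."""
    removed = []
    while True:
        cut_vertices = articulation_points(supernode_subgraph)
        candidates = [n for n in removable
                      if n in supernode_subgraph and n not in cut_vertices]
        if not candidates:
            break
        node = candidates[0]
        supernode_subgraph.pop(node)                  # drop the node...
        for neighbors in supernode_subgraph.values():
            neighbors.discard(node)                   # ...and its incident edges
        removed.append(node)
    return removed
```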

* Add strategy to extend supernode without shifting

We don't shift the target node in this strategy.

* Fix articulation point (don't include removed nodes)

* Remove redundant nodes as last resort in the end

Also switched to 5x5 Chimera grid

At this point K6 embedding on a 3x3 cell is working. Tried K12 on a
5x5 Chimera grid, which is not yet yielding good valid embeddings.
TODO: Code cleanup, especially of the embedding solver, which has become
kind of cluttered.
* [Cpp] Implemented visualizer for generic graphs.

* [Cpp] Started reworking parallelization concept for iterative local improvement.

* [Cpp] Adjusted extend operator.

* [Cpp] Continued with mutation manager.

* [Cpp] Fundamental rework of the structure

* [Cpp] Continued refactoring.

* [Cpp] Still refactoring.

* [Cpp] Refactoring...

* [Cpp] Refactored imports.

* [Cpp] Code running again - at least a bit.

* [Cpp] Continued fixing problems

* [Cpp] Refactoring.

* [Cpp] Refactored embedding state and added generic iteration methods.

* [Cpp] Some more loop replacements

* [Cpp] Fixing shifting operator.

* [Cpp] Implemented random gen and started working on shifting.

* [Cpp] Working on shifting.

* [Cpp] Fixed mutations.

* [Cpp] Started implementing annealing-based super vertex reducer.

* [Cpp] Continuing super vertex reducer.

* [Cpp] Fixed embedding invalidating bug.

* [Cpp] Working on extend.

* [Cpp] Implemented reducer as mutation.

* [Cpp] Started implementing evolutionary csc reducer.

* [Cpp] Continuing csc reducer.

* [Cpp] Added K15 for testing.

* [Cpp] Implemented evolutionary CSC reducer.

* [Cpp] Fixed some csc evo bugs.

* [Cpp] Added super vertex replacer.

* [Cpp] Fixed bugs.
* [Cpp] Mistakes were made...

* [Cpp] Adding image feature for last iteration.

* [Cpp] Improving reducer.

* [Cpp] Fixed minor bug.

* [Cpp] Fixing reducer bugs.

* [Cpp] Measuring execution time.

* [Cpp] Integrating evo reducer into main suite.

* [Cpp] Testing times and starting to improve ns reduction.

* [Cpp] Improved ns reduction.

* [Cpp] Added thread pool

* [Cpp] Better stats.

* [Cpp] More parallelization.

* [Cpp] Minor changes.

* [Cpp] Replace through network simplex.

* [Cpp] Started LMRP heuristic.

* [Cpp] Working on LMRP heuristic.

* [Cpp] Still working on LMRP heuristic.

* [Cpp] Progressing with LMRP heuristic.

* [Cpp] LMRP heuristic

* [Cpp] Working on LMRP heuristic and related chimera subgraph locking.

* [Cpp] Starting to test LMRP heuristic.

* [Cpp] Debugging LMRP heuristic.

* [Cpp] Fixed most LMRP bugs.

Splines and others added 7 commits May 10, 2022 18:06

* Outsource Initialization to own class

* Refactor evolution and outsource mutation algorithms

* Modularize supernode extension mutation algorithm

Split into multiple methods for easier understanding.

* Establish "extend supernode without shift" as own mutation

Beforehand, this mutation was only a fallback when the other strategy
didn't work. We now establish it as a fully-fledged mutation and switch
between the two mutations based on a random value.

This also means that we now make sure in the shifting/bumping mutation
that source and target are not from the same supernode, since that case
is exactly what the other strategy covers.

* Fix wrong reset logic

* Add "before all fails" strategy (remove redundancy)

This will remove redundant supernode nodes if no mutation could be found
at all in a generation. We will then try to generate new children again.

* Adjust drawing of graph and disable logging

We disable logging for better performance and save fewer SVG images
(every x-th image instead of one per generation).
Also tried to tune the evolutionary params to get K12 embedded
(with no success so far).

* Add bias to random node selection

We now favor nodes whose supernode has the lowest ratio of edges embedded
in G to edges that should be embedded to other supernodes. These
percentages (actual divided by expected) reweight the otherwise uniform
distribution of numpy's random choice function (sketched below).

We now use this new method for both strategies:
- extend_random_supernode_to_free_neighbors as well as
- extend_random_supernode (by "bumping")
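
An illustrative sketch of this reweighting, assuming per-node embedded and expected edge counts are available; the function and parameter names are made up, not the repo's API:

```python
# Bias numpy's random choice towards nodes whose supernode has embedded
# few of its expected edges.
import numpy as np

def biased_node_choice(nodes, embedded_degree, expected_degree, rng=None):
    """Pick one node, favoring a low embedded/expected edge ratio."""
    rng = rng or np.random.default_rng()
    nodes = list(nodes)
    ratios = np.array([
        min(embedded_degree[n] / expected_degree[n], 1.0)  # clamp, cf. the ">1.0" fix below
        for n in nodes
    ])
    weights = 1.0 - ratios             # fully satisfied supernodes get weight 0
    if weights.sum() == 0.0:           # guard against division by zero, cf. the div-by-0 fix
        weights = np.ones(len(nodes))
    index = rng.choice(len(nodes), p=weights / weights.sum())
    return nodes[index]

# Example: biased_node_choice([0, 1], {0: 1, 1: 3}, {0: 4, 1: 4}) favors node 0.
```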

* Fix illegal choice of target node

For the extend_random_supernode ("bumping") strategy, we didn't check
whether the target is an articulation point within its own supernode.
This led to a "break-up" of the subgraphs induced by the respective
supernodes of the input graph H.

* Favor smaller supernodes when extending

Also plot the chances with matplotlib

* Simplify remove redundancy
* Save and plot degree percentages

* Keep plot open after program exit

* Avoid overlapping lines by shifting them a bit

* Draw DP lines and super vertices in same color

* Outsource color utils
* Create logging dir if not already present

* Outsource params for chimera cell sizes

m, n and shore size t
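
For context, a hedged sketch of what these parameters mean, illustrated with dwave_networkx (using that library here is an assumption; the repo may construct the lattice itself):

```python
# m x n grid of Chimera unit cells, each a K_{t,t} with shore size t.
import dwave_networkx as dnx

M, N, SHORE_SIZE = 5, 5, 4
chimera = dnx.chimera_graph(M, N, SHORE_SIZE)
# A 5x5 grid with t=4 has 5 * 5 * 2 * 4 = 200 nodes.
print(chimera.number_of_nodes())    # 200
```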

* Add crossed house test graph

In Germany, this graph is known as "Haus vom Nikolaus"

* Beautify and fix drawing

e.g. lighter colors for nodes that are not embedded;
also fix logic where colors were assigned incorrectly

* Keep plot open also when we found an embedding

* Add vscode settings (autopep args)

* Fix non-viable mutation detection logic

Forgot to return None
* Fix degree percentages bigger than 1.0

We just take the minimum of the actual value and 1.0.
However, this is only a temporary fix; we should investigate
why degree percentages exceed 1.0 in the first place
(unnecessary edges embedded that are not present in H?)

* Fix div by 0 in selection chances calculation

* Add "how many generations needed" plot

Using the "Haus vom Nikolaus" (crossed house puzzle)
Included: scripts & results

* Draw K graphs

* Add 2x2 chimera cell results and plot subplots

* Add multiprocessing version of evolution

This is just used to process 1000 iterations of the same graph faster,
which is necessary to get quicker results for plotting.
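
A minimal sketch of such a multiprocessing setup; run_single_evolution is a placeholder, not the repo's API:

```python
# Run many independent evolution runs of the same input graph in parallel
# to collect statistics for plotting faster.
from multiprocessing import Pool

def run_single_evolution(seed):
    """Placeholder for one evolution run; returns e.g. the generations needed."""
    # ... construct the solver for the fixed input graph, evolve, return metrics ...
    return seed % 7  # dummy value so the sketch is runnable

if __name__ == "__main__":
    with Pool() as pool:
        generations_needed = pool.map(run_single_evolution, range(1000))
    print(sum(generations_needed) / len(generations_needed))
```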

* Add more data (how many generations) for 5x5 grid

* Dockerize plotting tasks

Note that you need to disable the logger for all plot output,
otherwise you will get errors. Just enable the line "disable_logger()"
in python/src/util/logging.py (line 32); a sketch follows below.
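
A hypothetical sketch of what such a helper could boil down to using only the standard library; the actual disable_logger() in python/src/util/logging.py may differ:

```python
# Silence all log output so it does not interleave with the plot output.
import logging

def disable_logger():
    """Disable every log record up to and including CRITICAL."""
    logging.disable(logging.CRITICAL)
```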

* Add 2x2 Chimera "How many generations?" data

(including execution times)

* Add results for different k8 grid sizes

* Add different population sizes data (all for K8)

* Evaluate different population sizes

Also init a plot of how many valid embeddings were found, including the
average number of generations needed (per population size).
All results refer to K8 with max_generations=600 and a 5x5 Chimera grid.

* Evaluate K8 with different probabilities

(for the "extend to free neighbor" mutation)

* Docker: Make more params adjustable

* ❗ Fix important bug with overwritten playground

* Prettify drawing

- Make color selection deterministic
- Better filenames for graph output
- Use LaTeX for graphs and adjust font sizes

* Fix node colors (did not save them in dict)

* Add new data after bug fix (K8 popsizes)

* Plot "two axes" for "How many embedded"

* Add script for probability testing

* Add probability testing results

* Add scripts for evaluation on K graphs

* Prettify selection chances plot

* Add data for K embeddings on 2x2 Chimera grid

* Add data for 5x5 population sizes (1 to 12)

* Add data for k graphs (5x5 K6-K14 & 16x16 K6-K11)

* Add more data for K graph embedding on 2x2 grid

* Double the dataset for 5x5 K-graph embeddings

* Add more 16x16 grid data for K graphs

* Add results from server (probabilities and population size)

Bumps [numpy](https://github.com/numpy/numpy) from 1.19.3 to 1.21.0.
- [Release notes](https://github.com/numpy/numpy/releases)
- [Changelog](https://github.com/numpy/numpy/blob/main/doc/HOWTO_RELEASE.rst.txt)
- [Commits](numpy/numpy@v1.19.3...v1.21.0)

---
updated-dependencies:
- dependency-name: numpy
  dependency-type: direct:production
...

Signed-off-by: dependabot[bot] <support@github.com>

Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>