
Routing table tools #211

Merged (5 commits into master from routing-table-tools, Jan 13, 2016)

Conversation

@mundya (Member) commented Dec 16, 2015

Adds some tools for working with routing tables.

  • Method for finding common Xs in routing table entries
  • Methods for expanding routing table entries to remove Xs (see the sketch after this list)
  • Tool for checking whether one routing table is a functional subset of another
  • Routing table minimisation
  • Add to documentation
  • Update SC&MP to return routing-table availability in system info
  • Update ChipInfo object to include routing table usage
  • Add utility function in place_and_route.utils (originally routing_table.utils) which converts a SystemInfo into a table-availability lookup.
  • Integrate into P&R wrapper(?)
  • Merge in #213 (Improve minimisation API) with improvements to this API
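As a rough illustration of the "expanding routing table entries to remove Xs" item above, the sketch below enumerates the concrete keys an entry matches; the Entry layout and the 32-bit key width are assumptions for illustration, not the API added by this PR.

```python
# Illustrative sketch only: "X" (don't-care) bits are those where the mask
# is 0; expanding an entry enumerates every concrete key it matches.
from collections import namedtuple

Entry = namedtuple("Entry", "key mask route")


def expand_entry(entry, width=32):
    """Yield one fully-specified Entry per key matched by `entry`.

    Assumes the conventional invariant that key bits are 0 wherever the
    mask is 0.
    """
    x_bits = [b for b in range(width) if not entry.mask & (1 << b)]
    full_mask = (1 << width) - 1

    for i in range(1 << len(x_bits)):
        key = entry.key
        for j, bit in enumerate(x_bits):
            if i & (1 << j):
                key |= 1 << bit
        yield Entry(key, full_mask, entry.route)
```

For example, expand_entry(Entry(0b1000, 0b1110, "east"), width=4) yields the keys 0b1000 and 0b1001.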

@mundya force-pushed the routing-table-tools branch 5 times, most recently from 83376dc to f120051 (December 18, 2015 17:12)
merge = get_best_merge(routing_table, aliases)

# If there is no merge then stop
if merge.goodness < 0:

@mundya (Member Author) commented on the diff:

This should be if merge.goodness <= 0
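
For context, the surrounding greedy loop with the corrected test might look roughly like this; get_best_merge and the Merge interface are taken from the snippet above, and the details are assumptions rather than the merged code.

```python
# Sketch of the greedy minimisation loop with the corrected stopping test.
# Using <= 0 stops both when no merge exists and when the best candidate
# merge would not actually shrink the table (goodness == 0).
def minimise_greedily(routing_table, aliases, get_best_merge):
    while True:
        merge = get_best_merge(routing_table, aliases)

        # If there is no beneficial merge then stop
        if merge.goodness <= 0:
            break

        # Otherwise apply the merge and look for another
        routing_table = merge.apply(routing_table)

    return routing_table
```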

@mundya (Member Author) commented Dec 19, 2015

@mossblaser I think this is ready for review/would benefit from your thoughts. Phase 2 of routing table minimisation may come along in a later PR.

@mundya (Member Author) commented Dec 20, 2015

@mossblaser - I'm tempted to rewrite the minimiser to allow it to work with routing trees as well so that I can make use of default entries.

Scrap that, for the moment I'm going to argue that the likelihood of using default entries having any serious benefit when also using minimisation is slim. I'll re-address this later if necessary.

@mundya (Member Author) commented Dec 20, 2015

Some stats for anyone following this:

| Table name | # entries | # unique routes | Time to minimise (target length=0) / s | # minimised entries |
|------------|-----------|-----------------|----------------------------------------|---------------------|
| pynn1      | 2005      | 220             | 14.74                                  | 566                 |
| pynn2      | 2007      | 210             | 15.98                                  | 527                 |
| nengo_1739 | 1738      | 28              | 0.54                                   | 38                  |

These are just some routing tables that I had lying around - I'll go hunting for a larger dataset when this floats back to the top of my priority queue.

That said, this appears to minimise routing tables somewhat effectively.

@mundya (Member Author) commented Dec 20, 2015

With target_length=1023 rather than 0.

| Table name | # entries | # unique routes | Time to minimise (target length=1023) / s | # minimised entries |
|------------|-----------|-----------------|-------------------------------------------|---------------------|
| pynn1      | 2005      | 220             | 2.71                                      | 1017                |
| pynn2      | 2007      | 210             | 3.40                                      | 1013                |
| nengo_1739 | 1738      | 28              | 0.29                                      | 955                 |

@mossblaser (Member) commented:

> Scrap that, for the moment I'm going to argue that the likelihood of using default entries having any serious benefit when also using minimisation is slim.

I'd certainly be interested in seeing some "ball-park" numbers as to what you lose from not using default routes. I vaguely remember the Southampton guys mentioning this sort of thing too...

> These are just some routing tables that I had lying around - I'll go hunting for a larger dataset when this floats back to the top of my priority queue.

I think this would be worth a look for more up-to-date nengo things since you've changed a lot of late...

> That said, this appears to minimise routing tables somewhat effectively.

Yes indeed! Particularly like the performance numbers when you just say "stop at 1024". On this note we should really work out what this number is by asking the machine... It does seem to me, though, that there is little incentive to do so since "sharing" a routing table is such a liability...

On this note, would it be worth including this tool as part of the P&R wrapper?

@@ -277,6 +277,10 @@ def build_routing_tables(routes, net_keys, omit_default_routes=True):
If a routing tree has a terminating vertex whose route is set to None,
that vertex is ignored.

.. note::
:py:func:`~.build_and_minimise_routing_tables` can be used to get

@mossblaser (Member) commented on the diff:

"can be used to minimise routing tables produced by this function."

I'd include an example too (especially if you need to turn off default-route omission!)

I am, of course, an idiot who should read more carefully...
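
An example along the lines being requested might look something like the following; apart from omit_default_routes (visible in the diff above), the argument names are guesses rather than the documented API.

```python
# Hypothetical usage example; signatures are guesses based on the names in
# this diff, not the documented API.
# Keep default-routed entries in the table so the minimiser can merge them...
tables = build_routing_tables(routes, net_keys, omit_default_routes=False)

# ...or build and minimise in a single step.
minimised = build_and_minimise_routing_tables(routes, net_keys)
```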

@mundya (Member Author) commented Dec 21, 2015

> I'd certainly be interested in seeing some "ball-park" numbers as to what you lose from not using default routes. I vaguely remember the Southampton guys mentioning this sort of thing too...

I have no numbers, but as this algorithm generates entries with large numbers of Xs, my gut feeling is that even if a routing entry which could be default-routed wasn't merged with anything else, you probably wouldn't be able to remove it from the table anyway.

> I think this would be worth a look for more up-to-date nengo things since you've changed a lot of late...

Again, no numbers, but this is sufficiently effective that the larger circular convolution models actually fit on the machine.

> On this note we should really work out what this number is by asking the machine... It does seem to me, though, that there is little incentive to do so since "sharing" a routing table is such a liability...

I'd agree that sharing routing tables is a Bad Idea™ unless you can guarantee that the entries which you have to share with are higher priority than the entries you want to minimise and that the unminimised table (including the other entries) is orthogonal.

+1 for asking the machine how many entries are available! Hard-coding 1023 makes me feel very unhappy.

> On this note, would it be worth including this tool as part of the P&R wrapper?

I'm happy to do that provided we have a flag to turn it off and we document it thoroughly... but of course these are things you'd do anyway ;)

@mossblaser (Member) commented:

> Again, no numbers, but this is sufficiently effective that the larger circular convolution models actually fit on the machine.

Already got a nengo prototype running? Super; I'd call that better than numbers ;).

> +1 for asking the machine how many entries are available! Hard-coding 1023 makes me feel very unhappy.

How should this information propagate? Via the P&R infrastructure (e.g. constraints) or via a side channel (e.g. directly passing in a system-info or a raw dictionary?)

could not be produced.
"""
# Build the tables and then minimise them
tables = {

@mossblaser (Member) commented on the diff:

Due to the CPU-bound nature of the problem and the trivial parallelism available here, it may be worth just dropping in https://docs.python.org/2/library/multiprocessing.html#using-a-pool-of-workers as a quick and easy speedup.
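
A rough sketch of that suggestion, assuming the single-table minimiser is exposed as a picklable, module-level callable (the stand-in minimise_table below); this is not necessarily what was merged.

```python
# Sketch only: minimise each chip's table in a worker process, since the
# work is CPU-bound and independent per chip. `minimise_table` is a
# stand-in for whatever single-table minimiser the code above calls.
from multiprocessing import Pool


def _minimise_one(item):
    chip, entries = item
    return chip, minimise_table(entries)


def minimise_tables_in_parallel(routing_tables, processes=None):
    """Return {chip: minimised entries}, minimising the tables in parallel."""
    pool = Pool(processes)
    try:
        return dict(pool.map(_minimise_one, routing_tables.items()))
    finally:
        pool.close()
        pool.join()
```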

@mundya (Member Author) commented Dec 21, 2015

> How should this information propagate? Via the P&R infrastructure (e.g. constraints) or via a side channel (e.g. directly passing in a system-info or a raw dictionary?)

I think it feels somewhat similar to the SDRAM constraint, so I guess via the P&R infrastructure... But I have no strong preference. I was secretly hoping you'd push the API for this into shape ;)

"Ordered Covering"
==================

The algorithm implemented here, "Ordered Covering", provides the following

@mossblaser (Member) commented on the diff:

If we're likely to have multiple algorithms available for minimisation (just as we have them available for P&R) it may be a good idea to rename this file to reflect the name of the algorithm and then provide an alias (as you have done) in the top level. Will the different algorithms be sufficiently overlapping in compatibility that this would be a reasonable idea?

@mundya (Member Author) replied:

Yes, this is probably a reasonable idea.

@mundya (Member Author) replied:

And is done.
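
For reference, the layout being agreed on here might look something like the sketch below; the module and function names are illustrative rather than the exact ones used.

```python
# rig/routing_table/ordered_covering.py (illustrative name): the algorithm
# lives in a module named after it...
def minimise(entries, target_length):
    """Minimise a single routing table using Ordered Covering."""
    ...

# rig/routing_table/__init__.py: ...and the package top level re-exports it
# under a generic name so callers need not name the algorithm.
from rig.routing_table.ordered_covering import minimise
```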

@mossblaser (Member) commented:

> How should this information propagate? Via the P&R infrastructure (e.g. constraints) or via a side channel (e.g. directly passing in a system-info or a raw dictionary?)
>
> I think it feels somewhat similar to the SDRAM constraint, so I guess via the P&R infrastructure... But I have no strong preference. I was secretly hoping you'd push the API for this into shape ;)

Yeah, the other difficulty is that no vertex is realistically going to consume routing table entries, so putting this as a resource seems slightly awkward. Doing it via constraints also seems like a good idea, but only if this module were part of the P&R module, and I'm still unsure whether this belongs in P&R or separately... Doing it via a dictionary seems like by far the cleanest solution when you look at the function in isolation, though...

I'll continue to think about this...

==================

The algorithm implemented here, "Ordered Covering", provides the following
rules:

@mossblaser (Member) commented on the diff:

If I understand correctly, these rules are written from the perspective of something generating routing table entries sequentially(?) rather than as a property of a complete routing table. This may be worth stating explicitly.

@mundya (Member Author) replied:

I'm not sure I'd use the term "generating", but if the table you receive is in order of increasing generality and you apply these rules when you remove and add entries to the table then the final table, and every intermediate table, is guaranteed to be functionally equivalent to the starting table.

(Note that if the starting table is ordered but non-orthogonal, following these rules will still guarantee a functionally equivalent table, but it may not be as small as it could be if you removed the non-orthogonal entries.)
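
To make the ordering point concrete, here is a tiny made-up example (4-bit keys, first matching entry wins, X bits are where the mask is 0); it is purely illustrative and not taken from the PR.

```python
# Two made-up entries in order of increasing generality.
entries = [
    (0b1000, 0b1111, "north"),  # fully specified: matches only 1000
    (0b1000, 0b1100, "south"),  # more general: matches 1000-1011
]


def route(key):
    # First matching entry wins, as in the router hardware.
    for entry_key, mask, route_taken in entries:
        if key & mask == entry_key:
            return route_taken


assert route(0b1000) == "north"  # swapping the entries would give "south"
assert route(0b1001) == "south"
```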

# for where there are common Xs in the second table.
common_xs = get_common_xs(entries_b)
try:
    return all(get_first_match(entry.key) == entry.route for entry in

@mossblaser (Member) commented on the diff:

As much as I like this style, this algorithm would certainly be clearer as a for-loop over the keys with the above function simply being inlined as a nested loop.
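
A sketch of the suggested for-loop form, assuming the entries of the first table have already been expanded so that each (key, mask, route) triple matches exactly one key; it is not the real implementation.

```python
# Sketch of the suggested nested-loop form. Assumes entries_a has already
# been expanded (no Xs), so each entry matches exactly one key.
def table_is_subset_of(entries_a, entries_b):
    """True iff every key matched by the first table is routed the same
    way by the second table (first matching entry wins)."""
    for key_a, _mask_a, route_a in entries_a:
        for key_b, mask_b, route_b in entries_b:
            if key_a & mask_b == key_b:
                if route_b != route_a:
                    return False  # Matched, but routed differently
                break
        else:
            # No entry in the second table matches this key at all
            return False
    return True
```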

@mundya (Member Author) commented Jan 4, 2016

Not a big change in SC&MP: https://bitbucket.org/spinnaker-low-level-software/spinnaker_tools/branch/include-rtr_free-in-cmd_info

I'll compile and try to get Rig to play along shortly.

@mundya (Member Author) commented Jan 5, 2016

Would it just be neater to allow minimise to accept a SystemInfo or a dict or an int or None for target_length?

@mossblaser (Member) commented:

> Would it just be neater to allow minimise to accept a SystemInfo or a dict or an int or None for target_length?

Yes and no. Certainly would be neat but the SystemInfo construct is definitely not something I want leaking everywhere... How strong are your feelings on this matter?

@mundya (Member Author) commented Jan 5, 2016

> How strong are your feelings on this matter?

Not very!

 - Moves routing table constructs into a package
 - Adds tools for working with routing tables
 - Adds implementation of ordered covering routing table minimisation
   algorithm.

Includes work and suggestions by @mossblaser
@mundya force-pushed the routing-table-tools branch 2 times, most recently from 618ce40 to 4f0435e (January 7, 2016 10:09)
Adds largest_free_mc_block to the ChipInfo (and thus SystemInfo) tuple. To
enable backward compatibility ChipInfo now has default argument values.

Also adds build_target_lengths function which builds a dictionary of target
routing table lengths from a SystemInfo object.

This commit includes a new build of SC&MP built from commit b8cdcc2. This
commit also fixes a preliminary test implementation and also changes the
location of the routing table entry count within the arg1 bits.
Also adds an optional argument to build_and_minimise_routing_tables which
specifies which algorithm to use.
mundya and others added 2 commits January 12, 2016 16:49
 - Add `opposite` property to Routes
 - Add traversal to RoutingTree
 - Add routing table minimisation to place and route wrapper

Includes work by @mossblaser
mossblaser added a commit that referenced this pull request Jan 13, 2016
@mossblaser merged commit 80f35dc into master on Jan 13, 2016
@mundya (Member Author) commented Jan 13, 2016

Woohoo!

@mossblaser deleted the routing-table-tools branch on January 13, 2016 at 09:39