Routing table tools #211
Conversation
Force-pushed from 83376dc to f120051
    merge = get_best_merge(routing_table, aliases)

    # If there is no merge then stop
    if merge.goodness < 0:
This should be `if merge.goodness <= 0:`
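To illustrate why the boundary matters, here is a minimal sketch of the greedy loop this condition guards. The `Merge` shape and `minimise` are hypothetical stand-ins, assuming `goodness` counts the entries a merge would save; a merge with `goodness == 0` removes nothing, so the loop must stop at `<= 0` rather than `< 0`.

```python
from collections import namedtuple

# Hypothetical stand-in for the real merge object: `goodness` is assumed to
# be the number of entries the merge would remove; `apply` performs it.
Merge = namedtuple("Merge", "goodness apply")


def minimise(routing_table, get_best_merge):
    """Greedily apply the best available merge until none is worthwhile."""
    while True:
        merge = get_best_merge(routing_table)
        # Stop at <= 0, not < 0: a zero-gain merge saves no entries and
        # `< 0` could loop forever applying pointless merges.
        if merge.goodness <= 0:
            break
        routing_table = merge.apply(routing_table)
    return routing_table
```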
@mossblaser I think this is ready for review and would benefit from your thoughts. Phase 2 of routing table minimisation may come along in a later PR.
Scrap that, for the moment I'm going to argue that the likelihood that using default entries will have any serious benefit when also using minimisation is slim. I'll re-address this later if necessary.
Some stats for anyone following this:
These are just some routing tables that I had lying around; I'll go hunting for a larger dataset when this floats back to the top of my priority queue. That said, this appears to minimise routing tables somewhat effectively.
I'd certainly be interested in seeing some "ball-park" numbers as to what you lose from not using default routes. I vaguely remember the Southampton guys mentioning this sort of thing too...
I think this would be worth a look for more up-to-date Nengo things, since you've changed a lot of late...
Yes indeed! I particularly like the performance numbers when you just say "stop at 1024". On this note, we should really work out what this number is by asking the machine... It does seem to me, though, that there is little incentive to do so since "sharing" a routing table is such a liability... On this note, would it be worth including this tool as part of the P&R wrapper?
@@ -277,6 +277,10 @@ def build_routing_tables(routes, net_keys, omit_default_routes=True):
    If a routing tree has a terminating vertex whose route is set to None,
    that vertex is ignored.

    .. note::
        :py:func:`~.build_and_minimise_routing_tables` can be used to get
"can be used to minimise routing tables produced by this function."
I'd include an example too (especially if you need to turn off default-route omission!)
I am, of course, an idiot who should read more carefully...
I have no numbers, but as this algorithm generates entries with large numbers of Xs my gut feeling is that even if a routing entry which could be default routed wasn't merged with anything else you probably wouldn't be able to remove it from the table anyway.
Again, no numbers, but this is sufficiently effective that the larger circular convolution models actually fit on the machine.
I'd agree that sharing routing tables is a Bad Idea (:tm:) unless you can guarantee that the entries which you have to share with are higher priority than the entries you want to minimise, and that the unminimised table (including the other entries) is orthogonal. +1 for asking the machine how many entries are available! Hard-coding 1023 makes me feel very unhappy.
I'm happy to do that provided we have a flag to turn it off and document thoroughly... but of course these are things you'd do anyway ;)
Already got a Nengo prototype running? Super; I'd call that better than numbers ;).
How should this information propagate? Via the P&R infrastructure (e.g. constraints) or via a side channel (e.g. directly passing in a system-info or a raw dictionary)?
        could not be produced.
    """
    # Build the tables and then minimise them
    tables = {
Due to the CPU-bound nature of the problem and the trivial parallelism available here, it may be worth just dropping in https://docs.python.org/2/library/multiprocessing.html#using-a-pool-of-workers as a quick and easy speedup.
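A minimal sketch of what that suggestion might look like: since each chip's table is minimised independently, the per-chip work can be fanned out over a `multiprocessing.Pool`. The `minimise` function here is a toy stand-in (it just deduplicates and sorts), not the real algorithm; the worker must be a module-level function so it can be pickled.

```python
from multiprocessing import Pool


def minimise(table):
    # Toy stand-in for the real minimisation: deduplicate entries.
    return sorted(set(table))


def _minimise_one(args):
    # Worker: minimise a single chip's table. Must be defined at module
    # level so multiprocessing can pickle it.
    chip, table = args
    return chip, minimise(table)


def minimise_tables_parallel(tables, processes=None):
    """Minimise each chip's routing table in a pool of worker processes."""
    # Each chip's table is independent, so this is trivially parallel.
    with Pool(processes) as pool:
        return dict(pool.map(_minimise_one, tables.items()))
```

Note that under the `spawn` start method the calling code needs the usual `if __name__ == "__main__":` guard.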
I think it feels somewhat similar to the SDRAM constraint, so I guess via the P&R infrastructure... But I have no strong preference. I was secretly hoping you'd push the API for this into shape ;)
"Ordered Covering"
==================

The algorithm implemented here, "Ordered Covering", provides the following
If we're likely to have multiple algorithms available for minimisation (just as we have them available for P&R) it may be a good idea to rename this file to reflect the name of the algorithm and then provide an alias (as you have done) in the top level. Will the different algorithms be sufficiently overlapping in compatibility that this would be a reasonable idea?
Yes, this is probably a reasonable idea.
And is done.
Yeah, the other difficulty is that no vertex is realistically going to consume routing table entries, so putting this as a resource seems slightly awkward. Doing it via constraints also seems a good idea, but only if this module were a part of the P&R module, and I'm still unsure whether this belongs in P&R or separately... Doing it via a dictionary seems like by far the cleanest solution when you look at the function in isolation, though... I'll continue to think about this...
==================

The algorithm implemented here, "Ordered Covering", provides the following
rules:
If I understand correctly, these rules are written from the perspective of something generating routing table entries sequentially(?) rather than as a property of a complete routing table. This may be worth stating explicitly.
I'm not sure I'd use the term "generating", but if the table you receive is in order of increasing generality and you apply these rules when you remove and add entries to the table then the final table, and every intermediate table, is guaranteed to be functionally equivalent to the starting table.
(Note that if the starting table is ordered but non-orthogonal, then following these rules will still guarantee a functionally equivalent table, but it may not be as small as it could be if you removed the non-orthogonal entries.)
    # for where there are common Xs in the second table.
    common_xs = get_common_xs(entries_b)
    try:
        return all(get_first_match(entry.key) == entry.route for entry in
As much as I like this style, this algorithm would certainly be clearer as a for-loop over the keys, with the above function simply being inlined as a nested loop.
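For the record, a hedged sketch of the for-loop form being suggested, with the "first match" lookup inlined as a nested loop. The `Entry` shape and `tables_equivalent` name are illustrative, not Rig's actual API; entries are assumed to carry `key`, `mask`, and `route`, with earlier entries taking priority.

```python
from collections import namedtuple

# Hypothetical routing table entry: key/mask pair plus the route taken.
Entry = namedtuple("Entry", "key mask route")


def tables_equivalent(entries_a, entries_b):
    """Check every entry of `entries_a` is routed identically by `entries_b`.

    The nested loop makes the "first matching entry wins" semantics explicit.
    """
    for entry in entries_a:
        for candidate in entries_b:
            if entry.key & candidate.mask == candidate.key:
                # First match decides the route; it must agree.
                if candidate.route != entry.route:
                    return False
                break
        else:
            # No entry in entries_b matched this key at all.
            return False
    return True
```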
Not a big change in SC&MP: https://bitbucket.org/spinnaker-low-level-software/spinnaker_tools/branch/include-rtr_free-in-cmd_info I'll compile and try to get Rig to play along shortly.
Would it just be neater to allow …?
Yes and no. Certainly would be neat, but the SystemInfo construct is definitely not something I want leaking everywhere... How strong are your feelings on this matter?
Not very!
- Moves routing table constructs into a package
- Adds tools for working with routing tables
- Adds implementation of ordered covering routing table minimisation algorithm.

Includes work and suggestions by @mossblaser
Force-pushed from 618ce40 to 4f0435e
Adds largest_free_mc_block to the ChipInfo (and thus SystemInfo) tuple. To enable backward compatibility, ChipInfo now has default argument values. Also adds a build_target_lengths function which builds a dictionary of target routing table lengths from a SystemInfo object. This commit includes a new build of SC&MP built from commit b8cdcc2, fixes a preliminary test implementation, and changes the location of the routing table entry count within the arg1 bits.
Also adds an optional argument to build_and_minimise_routing_tables which specifies which algorithm to use.
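Since build_target_lengths is described as "building a dictionary of target routing table lengths from a SystemInfo object", a minimal sketch of that shape, assuming SystemInfo maps chip coordinates to per-chip info carrying the new largest_free_mc_block field (the ChipInfo stand-in here is simplified to just that field):

```python
from collections import namedtuple

# Simplified stand-in for Rig's ChipInfo, keeping only the field used here.
ChipInfo = namedtuple("ChipInfo", "largest_free_mc_block")


def build_target_lengths(system_info):
    """Map each chip's (x, y) coordinate to its target table length.

    Reads the limit back from the machine rather than hard-coding 1023/1024.
    """
    return {xy: info.largest_free_mc_block
            for xy, info in system_info.items()}
```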
Force-pushed from 4f0435e to 920ae2d
- Add `opposite` property to Routes
- Add traversal to RoutingTree
- Add routing table minimisation to place and route wrapper

Includes work by @mossblaser
Improve minimisation API
Woohoo!
Adds some tools for working with routing tables.
- `X`s in routing table entries
- `routing_table.utils` / `place_and_route.utils`, which converts from SystemInfo to a table-availability lookup