[Parallel intervals] Frontend implementation #227
jdahm wants to merge 86 commits into GridTools:master from jdahm:regions-gdp-frontend
Conversation
```python
parallel_intervals.append(
    [
        gt_ir.utils.parse_interval_node(
            axis, name, loc=gt_ir.Location.from_ast_node(call_node)
        )
        for axis, name in zip(axes_slices, axis_names)
    ]
)
```
This can now use the same function for parsing parallel intervals as for the sequential axis!
egparedes left a comment
I just reviewed the changes since the last time I checked and they look fine to me. Anyway, I consider @tehrengruber the main reviewer here and I think he should take another look at this PR (ideally this week) to give a final round of feedback or approve it.
The parsing of the `with` statements is too complicated for my taste, but we can tackle that when the new frontend lands at some point. The syntax should be changed slightly so that region slices are always 2D, not 1D; the rest are only minor things.
```python
assert len(annotations) == 6


class TestParallelIntervals:
```
Your GDP mentions specifying multiple regions at once, e.g. `parallel(region[I[0], :], region[I[-1], :])`. I couldn't find a test for this, neither here nor in any of the other PRs.
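To make the requested test concrete, here is a minimal sketch of what such a test might assert, using hypothetical stand-ins for `region` and `parallel` (these are NOT the real gt4py API, just illustration of the multiple-regions call shape):

```python
# Hypothetical stand-ins, not the project's actual objects.
class _Region:
    def __getitem__(self, key):
        # Record the raw subscript, e.g. (0, slice(None, None, None)).
        return ("region", key)

region = _Region()

def parallel(*regions):
    # Accept any number of region specifications at once.
    return list(regions)

# A test for the GDP example would check that both specs survive parsing.
specs = parallel(region[0, :], region[-1, :])
```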
```diff
 @staticmethod
-def single_index_slice(node: ast.Expr):
+def slice_from_value(node: ast.Expr) -> ast.Slice:
```
If I get this right, this is responsible for transforming something like `I[0]`, e.g. as part of `region[I[0], :]`, into `I[0]:I[0]+1`. A region is a two-dimensional entity, while `region[I[0], :]` suggests it is one-dimensional. I believe what you are doing here should actually be reflected directly in the syntax, i.e. `region[I[0]:I[0]+1, :]`.
I see your point, @tehrengruber, but I'm not so sure about your proposal. The region specification here is still 2D, even if one of the axes only contains one element, and although your proposal is slightly more correct from a mathematical point of view, it's also more verbose and therefore harder to read. Personally, I'd accept the current succinct specification for these cases, since changing the implementation here also involves changing the syntax in the GDP, but I might be wrong...
I pondered the syntax a little bit more and I'm OK with it now. My reasoning here is/was two-fold. First, the syntax adds additional complexity to the backend, which I would like to avoid until it is overhauled at some point. I still believe this is a valid point, but it is a matter of taste. Secondly, I understood the syntax to (symbolically) mean that `region[...]` returns the slices of the domain to run on, in which case it should not change its dimension. However, one can also see it as returning the indices of the domain to run on, where the dimension doesn't matter. I'm not in favor of this perspective, but it's a valid one, and the verbosity is also a good point.
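For reference, the single-index-to-slice rewrite discussed above can be sketched at the AST level. This is a hedged illustration of the semantics, not the PR's actual implementation; the function name is borrowed from the diff, and `ast.unparse` requires Python 3.9+:

```python
import ast

def slice_from_value(node: ast.expr) -> ast.Slice:
    # Rewrite a single index expression `expr` into the one-element slice
    # `expr : expr + 1`, mirroring how `region[I[0], :]` can be read as
    # `region[I[0]:I[0]+1, :]`.
    upper = ast.BinOp(left=node, op=ast.Add(), right=ast.Constant(value=1))
    return ast.Slice(lower=node, upper=upper, step=None)

expr = ast.parse("I[0]", mode="eval").body
sliced = slice_from_value(expr)
# ast.unparse(sliced) renders the one-element slice "I[0]:I[0] + 1"
```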
```diff
-elif isinstance(node.value, int):
-    offset = node.value
+if isinstance(value, int):
+    level = gt_ir.LevelMarker.END if value < 0 else gt_ir.LevelMarker.START
```
Nitpicking:

```suggestion
level = gt_ir.LevelMarker.START if value >= 0 else gt_ir.LevelMarker.END
```
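The sign convention behind this one-liner can be shown in isolation. The `LevelMarker` enum below is a stand-in for illustration, not `gt_ir.LevelMarker` itself:

```python
from enum import Enum

class LevelMarker(Enum):
    # Stand-in for gt_ir.LevelMarker, for illustration only.
    START = "start"
    END = "end"

def level_for(value: int) -> LevelMarker:
    # Non-negative offsets anchor at the start of the axis; negative ones
    # count backwards from the end, like Python sequence indexing.
    return LevelMarker.START if value >= 0 else LevelMarker.END
```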
```python
level = gt_ir.LevelMarker.START if value.index >= 0 else gt_ir.LevelMarker.END
offset = value.index + value.offset
elif value is None:
    LARGE_NUM = np.iinfo(np.int32).max
```
I would actually use `math.inf` or something symbolic here. It's clearer while debugging, and you use the value symbolically anyway. Otherwise, the correct data type is `np.uintc`.
```suggestion
LARGE_NUM = np.iinfo(np.uintc).max
```
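The sentinel trade-off the reviewer describes can be seen directly; this sketch assumes the usual 32-bit width for `np.int32` (the `np.uintc` width is platform-dependent, typically also 32 bits):

```python
import math
import numpy as np

# The integer sentinel tops out at a finite value...
int32_max = np.iinfo(np.int32).max  # 2**31 - 1

# ...while math.inf compares larger than every finite number, so it can
# never collide with a real index; that is why a symbolic bound is easier
# to reason about while debugging.
assert math.inf > int32_max
```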
```python
field = func(field)


module = f"TestParallelIntervals_func_and_externals_{id_version}"
externals = {"ext": I[0] - np.iinfo(np.int32).max}
```
```suggestion
externals = {"ext": I[0] - np.iinfo(np.uintc).max}
```
```python
    level=left.level, offset=bin_op(left.offset, right), loc=self.loc
)
elif isinstance(left, numbers.Number) and isinstance(right, numbers.Number):
    return bin_op(left, right)
```
If no condition applies, an exception should be raised.
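A hedged sketch of the suggested pattern, using a hypothetical `fold_bin_op` helper (not the PR's code): when neither operand pattern matches, raise instead of silently falling through and returning `None`:

```python
import numbers
import operator

def fold_bin_op(bin_op, left, right):
    # Only the plain-number case is sketched here; the PR also handles
    # axis-offset operands before reaching this branch.
    if isinstance(left, numbers.Number) and isinstance(right, numbers.Number):
        return bin_op(left, right)
    # Explicit failure instead of an implicit `return None`.
    raise TypeError(f"cannot combine {left!r} and {right!r} with {bin_op}")

fold_bin_op(operator.add, 1, 2)  # returns 3
```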
```python
ast.copy_location(assignment, node)
return self.visit_Assign(assignment)


def _unroll_computations(self, node: ast.With):
```
```suggestion
def _flatten_computations(self, node: ast.With):
```
I see the similarity with loop unrolling, but I would call this flattening.
```python
parallel_intervals = []
for node in call_node.args:
    if isinstance(node.slice, ast.ExtSlice):
```
It would greatly increase readability if you could add a comment for each of the cases with a syntax example.
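As an aside on these cases: `ast.ExtSlice` only exists on Python 3.8 and earlier; from 3.9 on, a tuple subscript like `r[0, :]` parses to a plain `ast.Tuple` of element nodes. A small sketch of what one of the syntax cases looks like on 3.9+:

```python
import ast

# `r[0, :]` (cf. `region[I[0], :]`) parses to an ast.Subscript whose .slice
# is an ast.Tuple containing one ast.Constant and one ast.Slice. On 3.8 the
# same source produced ast.ExtSlice wrapping ast.Index/ast.Slice instead.
sub = ast.parse("r[0, :]", mode="eval").body
elements = sub.slice.elts  # [Constant(0), Slice(None, None, None)]
```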
```python
COMPUTATION = 2


class IRMaker(ast.NodeVisitor):
```
With the addition of the parallel intervals, it's very hard to understand how `with` statements are parsed. Creating an additional canonicalization from Python AST -> Python AST that transforms them into a flat representation would make sense. However, since the frontend is to be refactored anyway at some point, I'm fine with this for now.
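A rough sketch of the suggested canonicalization pass (assumed design, not the PR's code): an `ast.NodeTransformer` that merges nested `with` statements into a single flat `with` carrying all context-manager items, so later visitors only ever see the flat form:

```python
import ast

class FlattenWith(ast.NodeTransformer):
    def visit_With(self, node: ast.With) -> ast.With:
        self.generic_visit(node)  # flatten inner `with` blocks first
        # Absorb a body that consists of exactly one nested `with`.
        while len(node.body) == 1 and isinstance(node.body[0], ast.With):
            inner = node.body[0]
            node.items.extend(inner.items)
            node.body = inner.body
        return node

tree = FlattenWith().visit(ast.parse("with a:\n    with b:\n        x = 1\n"))
flat = tree.body[0]  # one `with` holding both context-manager items
```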
```python
# Parse computation specification, i.e. `withItems` nodes
iteration_order = None
interval = None
parallel_intervals = [None]
```
For computations without a parallel interval specification there should still be parallel intervals, but from -inf to inf, right?
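A hedged sketch of that expectation (illustrative only, not the PR's representation): a computation with no explicit parallel interval defaults to one spanning the whole axis, here modeled as a half-open `(-inf, +inf)` interval:

```python
import math

def default_parallel_interval():
    # Assumed default: cover the entire axis when nothing is specified.
    return (-math.inf, math.inf)

def contains(interval, index):
    # Half-open membership test: lo <= index < hi.
    lo, hi = interval
    return lo <= index < hi

full = default_parallel_interval()
```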
Description
Frontend implementation for parallel intervals.
Changes:
- `ComputationBlock`s get an `extend` parameter that defaults to `False`. If this is `True`, the parallel interval extends infinitely far.

Second PR in the parallel intervals series.
Blocked by:
To do:
Requirements
Before submitting this PR, please make sure:
- new features
Additionally, if this PR contains code authored by new contributors:
- signed by the employer if needed, provided to ETH Zurich
- version of the AUTHORS.rst file included in the PR