Use new CV capabilities via arraycontext #355
Conversation
Thanks so much for taking this on, this is great! Two thoughts from a quick scroll below.
mirgecom/fluid.py
Outdated
@@ -313,6 +312,30 @@ def join_conserved(dim, mass, energy, momentum,
     return result


 def create_conserved(dim, mass, energy, momentum, species_mass=None):
Why is this needed? Couldn't we just call the ConservedVars constructor?
Can we overload the constructor and retain the shape and structure validation?
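One way to do what's asked above is to fold the shape/structure validation into the dataclass constructor itself via `__post_init__`, so no separate `create_conserved` helper is needed. The sketch below is purely illustrative and assumes plain NumPy arrays; the class name, fields, and checks are simplified stand-ins, not the actual mirgecom `ConservedVars` API.

```python
from dataclasses import dataclass, field
import numpy as np

# Hypothetical, simplified stand-in for a conserved-variables container.
# Validation runs at construction time, so every code path that builds
# one of these gets the same shape/structure checks for free.
@dataclass(frozen=True)
class ConservedVarsSketch:
    mass: np.ndarray
    energy: np.ndarray
    momentum: np.ndarray  # expected shape: (dim,) + mass.shape
    species_mass: np.ndarray = field(default_factory=lambda: np.empty((0,)))

    def __post_init__(self):
        # Shape/structure validation folded into the constructor.
        if self.momentum.ndim < 1:
            raise ValueError("momentum must carry a leading 'dim' axis")
        if self.momentum.shape[1:] != self.mass.shape:
            raise ValueError("momentum components must match mass shape")

    @property
    def dim(self):
        return self.momentum.shape[0]
```

With this, `ConservedVarsSketch(mass=np.ones(4), energy=np.ones(4), momentum=np.ones((2, 4)))` succeeds (and reports `dim == 2`), while a momentum array of the wrong shape raises at construction rather than failing later inside an operator.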
I hope this helps a little. (d012fba)
And: I'm super supportive of trying to get this in ASAP, to avoid a huge conflict nightmare.
mirgecom/flux.py
Outdated
return flux_avg.join() @ normal - 0.5*lam*(cv_tpair.ext.join()
                                           - cv_tpair.int.join())
These joins are there because:
return flux_avg @ normal - 0.5*lam*(cv_tpair.ext - cv_tpair.int)
gave this error:
E ValueError: matmul: Input operand 0 does not have enough dimensions (has 0, gufunc core with signature (n?,k),(k,m?)->(n?,m?) requires 1)
and after using the "join" of flux_avg:
return flux_avg.join() @ normal - 0.5*lam*(cv_tpair.ext - cv_tpair.int)
it gave this error:
E TypeError: unsupported operand type(s) for *: 'DOFArray' and 'ConservedVars'
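The ValueError above is generic NumPy behavior, independent of mirgecom: `@` (matmul) requires each operand to have at least one dimension, so a zero-dimensional array on the left fails exactly as reported. A minimal reproduction with plain NumPy arrays (the variable names here are illustrative):

```python
import numpy as np

normal = np.array([0.0, 1.0])
scalar_flux = np.array(2.0)  # 0-d array: matmul rejects it

try:
    scalar_flux @ normal
except ValueError as exc:
    # "matmul: Input operand 0 does not have enough dimensions ..."
    print(type(exc).__name__)

# With a proper 2-d "joined" array, the contraction works:
flux = np.array([[1.0, 2.0],
                 [3.0, 4.0]])
print(flux @ normal)  # each row dotted with normal -> [2. 4.]
```

This is why calling `.join()` on the container (producing a real ndarray with a leading component axis) makes the `@ normal` contraction go through, while the container itself does not.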
Great question! inducer/arraycontext#12
As of inducer/arraycontext#15, this should now be doable. Note that you may need to tweak some of the arguments to with_container_arithmetic. See the tests.
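For readers unfamiliar with what with_container_arithmetic provides: it generates elementwise arithmetic methods over a container's fields, with scalar broadcasting. The hand-written sketch below mimics that behavior in plain Python so the idea is self-contained; the real decorator in arraycontext generates these methods (and many more) automatically, and the class and field names here are made up for illustration.

```python
from dataclasses import dataclass, fields
import numpy as np

# Build a binary operator that maps over the container's fields,
# broadcasting a scalar (or array) against every field.
def _binary_op(op):
    def method(self, other):
        if isinstance(other, type(self)):
            return type(self)(*(op(getattr(self, f.name), getattr(other, f.name))
                                for f in fields(self)))
        return type(self)(*(op(getattr(self, f.name), other)
                            for f in fields(self)))
    return method

@dataclass(frozen=True)
class FluidState:  # hypothetical container, stands in for ConservedVars
    mass: np.ndarray
    energy: np.ndarray

    __add__ = _binary_op(lambda a, b: a + b)
    __sub__ = _binary_op(lambda a, b: a - b)
    __mul__ = _binary_op(lambda a, b: a * b)
    __rmul__ = __mul__
```

With methods like these in place, expressions such as `cv_tpair.ext - cv_tpair.int` or `0.5*lam*(...)` work directly on the container without calling `.join()` first, which is what the arraycontext change enables.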
(bc09408)
Ok, I've spent some time rolling these changes all the way up the Y1 tower, and everything only improves with it. I still regret this PR is so giant, but the changes are mostly trivial, and where they are not trivial they improve the code. Hopefully not a terribly painful review.
Thanks, this looks great! A few notes below, otherwise good to go. FWIW, this was pretty painless to review.
Narrowly avoided a gitastrophe.
It's regrettable this is such a giant haul, but it turned out to be genuinely hard to do incrementally.