Use `broadcast_to` instead of `broadcast_like` #288
Conversation
Hmm - more tests are failing than I anticipated. Running … The failures don't actually call …
It looks like they're failing because those unit tests make unnecessarily strong assumptions and aren't isolated/self-contained enough. If you could fix those issues as you address this PR's issue, you would be doing significantly more for this project than the change requested in the issue itself.
Aside from strong test assumptions, there are some optimization errors that are probably worth addressing first; those may actually involve bugs. For instance, the first optimization error implies a problem with the type generated by the newly introduced …
(force-pushed from 6262bd0 to a00ee11)
@brandonwillard the latest commit pymc-devs/pytensor@a00ee11 gets rid of all … However, I suspect I'm doing something ugly by adding the …
Can't we simply cast the result of `broadcast_to` to the desired dtype (e.g. using `TensorVariable.astype`)? Adding a `dtype` argument to the `Op` doesn't add much in the way of functionality, and it causes the `Op` to deviate from the underlying `numpy.broadcast_to` that it models/wraps.
(force-pushed from 8953168 to 24d2dd1)
Sure, that actually makes it easier. I assumed we wanted to mimic the … The previous two commits revert the … Also, do we want to remove the …?
@brandonwillard I think I've fixed all tests but two. I'm confused about the remaining failures - I've attached the output of …
That's probably referring to the stack traces carried by each variable in their …
Those look like brittle test conditions. Instead of coming up with a direct check for something specific and relevant in the transformed graph, many tests will simply … In almost all cases, these kinds of …
I'll try running these tests locally within the next few days and see if I can spot any genuine issues (i.e. ones that aren't due to brittle/overly-restrictive tests).
Thanks for helping @brandonwillard! Sorry for being so slow-moving on this PR - something has popped up in the real world (it's not a lack of interest in finishing this work). I've loosened the test conditions on … I'll wait for the test suite to finish, but I'd also appreciate a quick triage of whether there are other real test failures - otherwise I'll assume that everything that fails is a flaky test.
I only commented so that you knew I hadn't forgotten about this PR; there's absolutely no rush, though!
@brandonwillard https://github.com/pymc-devs/aesara/pull/288/checks?check_run_id=1884844321 |
Yes, that does look like a real issue. My first impression was that it had to do with in-place operations. The new … Unfortunately, the evaluated graph doesn't contain any … Also, with …
`InconsistencyError('Attempting to destroy indestructible variables: [TensorConstant{(1,) of 0.0}]')` Actually, those aren't the exact problem. We probably need to find out how the …
All right, I believe that the source of this issue is the change from … In this case, a …
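The in-place hazard has a direct NumPy analogue: `numpy.broadcast_to` returns a read-only view whose elements all alias the same memory, so an in-place update that would "destroy" the broadcasted constant is rejected:

```python
import numpy as np

x = np.zeros(1)               # analogous to TensorConstant{(1,) of 0.0}
b = np.broadcast_to(x, (3,))  # read-only view: every element aliases x[0]

print(b.flags.writeable)  # False

try:
    b += 1  # attempt an in-place update of the broadcasted view
except ValueError:
    print("in-place update rejected")
```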
(force-pushed from 450e24a to 725c933)
Now that I think about it, we could probably use a simple optimization that removes useless …
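Assuming the useless case in question is a broadcast to a shape the input already has, such a rewrite is essentially no-op elimination. A minimal sketch with a hypothetical helper (not Aesara's rewrite API):

```python
import numpy as np

def remove_useless_broadcast(x, shape):
    # Hypothetical local rewrite: broadcasting an array to its own shape
    # changes nothing, so return the input unchanged and drop the node.
    if x.shape == tuple(shape):
        return x
    return np.broadcast_to(x, shape)

a = np.ones((2, 3))
assert remove_useless_broadcast(a, (2, 3)) is a            # useless: removed
assert remove_useless_broadcast(a, (4, 2, 3)).shape == (4, 2, 3)
```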
Looks like there are a few more brittle tests (e.g. …).
Also, some of these optimizations might be having trouble because … In the case of …
(force-pushed from 3307a5d to 4211281)
Just rebased this PR.
A lot of these tests test for strict equality of the toposorted graph. We should only test that the toposorted graph _contains_ expected nodes/ops.
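The brittle-versus-robust pattern can be sketched generically (the op names and the list-of-names graph are stand-ins for the `Apply` nodes a real toposort would return):

```python
# Stand-in for a toposorted graph: a list of op names.
topo = ["Shape", "BroadcastTo", "Elemwise{add}"]

# Brittle: pins the exact length and position of every node, so any
# unrelated optimization change breaks the test.
assert len(topo) == 3 and topo[1] == "BroadcastTo"

# Robust: only checks that the expected op appears somewhere in the graph.
assert any(name == "BroadcastTo" for name in topo)
```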
This change allows `get_scalar_constant_value` to "dig through" `BroadcastTo` `Op`s.
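What "digging through" means can be illustrated with a toy graph of nested dicts (a purely hypothetical structure, not Aesara's): if a node merely broadcasts a constant, the scalar value is recoverable from its input.

```python
def get_scalar_constant(node):
    # Unwrap broadcast-style wrapper nodes until we reach a leaf.
    while node["op"] == "BroadcastTo":
        node = node["inputs"][0]
    if node["op"] == "Constant":
        return node["value"]
    raise ValueError("graph does not reduce to a scalar constant")

g = {"op": "BroadcastTo",
     "inputs": [{"op": "Constant", "value": 0.0}]}
print(get_scalar_constant(g))  # 0.0
```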
This change removes an assertion on `len(topo)` and loosens the strict node-type requirement on the last node in `topo`.
Closes #159. WIP.
Two questions:
1. Changing `broadcast_like` to `broadcast_to` in `aesara/tensor/math_opt.py` gives me a lot of `BadOptimization` and `AssertionError`s, which I'm not knowledgeable enough to debug - can someone help me understand what's happening?
2. I've also changed `broadcast_like` to `broadcast_to` in `aesara/tensor/basic_opt.py` - is this something we want to do?