Pytensor 2.35 Compatibility Fixes #597
base: main
Conversation
Need to bump the pymc dep. Also, we should be able to take out that Windows warning filter.
(force-pushed 2bab2c5 to 3753483)
PreliZ also needs to update u_u
I don't know why the warning is so verbose; it should only be emitted the first time.
Anyway, for PreliZ: can you filter the warning, or is it actually failing?
(force-pushed 7dbfadc to 2999be1)
Trivial issues are resolved. Looks like we have some new and exciting errors cropping up; I'd appreciate help digging into those.
I can check the marginal model stuff.
  subgraph_batch_dim_connection(inp, [invalid_out])

- out = (inp[:, :, None, None] + pt.zeros((2, 3))) @ pt.ones((2, 3))
+ out = (inp[:, :, None, None] + pt.zeros((2, 3))) @ pt.ones((3, 2))
static shape check now reveals this error in the test
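For context, a small numpy sketch of the rule the static shape check enforces: the left operand broadcasts to a trailing shape of (2, 3), and matmul needs the inner dimensions to match, so (2, 3) @ (2, 3) fails while (2, 3) @ (3, 2) works:

```python
import numpy as np

a = np.zeros((2, 3))
try:
    a @ np.ones((2, 3))  # inner dimensions 3 and 2 do not match
except ValueError as err:
    print(err)
print((a @ np.ones((3, 2))).shape)  # (2, 2): inner dimensions line up
```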
  test_point = {"emission_1": test_value_emission1, "emission_2": test_value_emission2}
  res_logp, dummy_logp = logp_fn(test_point)
- assert res_logp.shape == ((1, 3) if batch_chain else ())
+ assert res_logp.shape == ((3, 1) if batch_chain else ())
The "first dependent RV" that gets the full logp changed with our new toposort algo
  # Test initial_point
  ips = make_initial_point_expression(
-     free_rvs=marginal_m.free_RVs,
+     free_rvs=[marginal_m["sigma"], marginal_m["dep"], marginal_m["sub_dep"]],
The order of free_RVs changed for a similar reason
  # Test initial_point
  ips = make_initial_point_expression(
-     free_rvs=marginal_m.free_RVs,
+     free_rvs=[marginal_m["x"], marginal_m["y"]],
same
  ips = make_initial_point_expression(
      # Test that order does not matter
-     free_rvs=marginal_m.free_RVs[::-1],
+     free_rvs=[marginal_m["y"], marginal_m["x"]],
same
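The common thread in these fixes is to stop depending on the order of `free_RVs` and pass variables looked up by name instead; a toy sketch of why that is stable (this is not the test's `marginal_m` model):

```python
import pymc as pm

with pm.Model() as m:
    x = pm.Normal("x")
    y = pm.Normal("y", mu=x)

# m.free_RVs ordering is an implementation detail of the toposort,
# but name lookup is stable, so tests can pin an explicit order:
assert m["x"] is x and m["y"] is y
explicit_order = [m["y"], m["x"]]
```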
The divide-by-zero warning will be fixed by pymc-devs/pytensor#1681. For now we can filter it in the CI.
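One hedged way to do that filtering at the test level with pytest, assuming the warning carries numpy's standard divide-by-zero message:

```python
import numpy as np
import pytest

# numpy's RuntimeWarning message starts with "divide by zero encountered"
@pytest.mark.filterwarnings("ignore:divide by zero encountered")
def test_noisy_division():
    assert np.isinf(np.float64(1.0) / np.float64(0.0))
```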
No idea why one of the Windows jobs fails to install dependencies while the other Windows jobs are fine?? CC @maresb
The JAX failure seems to be because the jax QR dispatch is returning a tuple instead of a single variable:

jax.scipy.linalg.qr(jax.numpy.eye(3), mode="r")  # (JitTracer<float64[3,3]>,)

We changed from using …
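A sketch of the kind of unwrap the dispatch would need; the thread doesn't show the actual fix, so the tuple guard below is an assumption:

```python
import jax.numpy as jnp
from jax.scipy.linalg import qr

out = qr(jnp.eye(3), mode="r")
# Under this hypothesis, mode="r" yields a 1-tuple rather than a bare
# array, so the dispatch must unpack before handing the result on:
r = out[0] if isinstance(out, tuple) else out
print(r.shape)  # (3, 3)
```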
I pushed a change to use a fixture in the …
Yeah, there's a lot of room for improvement in those test files. Thanks for cleaning up a bit.
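For readers following along, the generic pytest fixture pattern being referred to looks roughly like this (names are illustrative, not the actual test code):

```python
import numpy as np
import pytest

@pytest.fixture
def rng():
    # Build shared setup once per test instead of repeating it inline.
    return np.random.default_rng(seed=1234)

def test_uses_shared_rng(rng):
    assert 0.0 <= rng.uniform() < 1.0
```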
(force-pushed d2156ea to 8de112c)
Yes, that's all that's left.
@fonnesbeck @aphc14 the pathfinder …
There's probably no benefit in having it. It might be safe to remove this concurrent option in pathfinder, and the …
For the purposes of this PR, do you object to the test being removed?
For the warning, let's just filter it for now and open an issue. It needs more investigation, but the changes are not directly caused by this package.
(force-pushed 56b3b91 to 0be0a33)
(force-pushed 0be0a33 to 34b2e9a)
CI is still failing?
Re: Pathfinder doing multithreading, that was never safe to begin with. PyTensor functions are not thread-safe; at the very least you need to copy them for each thread. I'm not sure what it's doing with multiprocessing, but that may also be done unsafely, given the CI is still hanging in that parametrization. This could be related to the OpenMP warnings, if we were seeing those on Ubuntu as well?
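A minimal sketch of the per-thread copying described above, assuming `Function.copy(share_memory=False)` as carried over from the Theano/Aesara API:

```python
import threading

import pytensor
import pytensor.tensor as pt

x = pt.dvector("x")
fn = pytensor.function([x], (x ** 2).sum())

def worker(thread_fn):
    # Each thread gets its own copy: a shared Function holds mutable
    # input/output storage, so concurrent calls on one object race.
    thread_fn([1.0, 2.0, 3.0])

threads = [
    threading.Thread(target=worker, args=(fn.copy(share_memory=False),))
    for _ in range(4)
]
for t in threads:
    t.start()
for t in threads:
    t.join()
```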
We're not, but my guess is that the warning is caused by something happening on import from pathfinder. I am keeping the warning filter for now and skipping the pathfinder tests, pending an issue to clean that codebase up. |
Update imports following pytensor 2.35 to clear noisy warnings.