Conversation

@jessegrabowski (Member)

Update imports following pytensor 2.35 to clear noisy warnings.

@ricardoV94 (Member)

Need to bump the pymc dep. Also, we should be able to take out that Windows warning filter.

@jessegrabowski force-pushed the update-pytensor branch 2 times, most recently from 2bab2c5 to 3753483 (October 17, 2025 16:44)
@jessegrabowski (Member, Author)

Preliz also needs to update u_u

@ricardoV94 (Member)

I don't know why the warning is so verbose; it should only be emitted the first time.

@ricardoV94 (Member)

Anyway, can Preliz just filter the warning, or is it actually failing?

@jessegrabowski force-pushed the update-pytensor branch 5 times, most recently from 7dbfadc to 2999be1 (October 17, 2025 19:11)
@jessegrabowski (Member, Author)

Trivial issues are resolved. Looks like we have some new and exciting errors cropping up; I would appreciate help digging into those.

@ricardoV94 (Member)

I can check the marginal model stuff

```diff
  subgraph_batch_dim_connection(inp, [invalid_out])

- out = (inp[:, :, None, None] + pt.zeros((2, 3))) @ pt.ones((2, 3))
+ out = (inp[:, :, None, None] + pt.zeros((2, 3))) @ pt.ones((3, 2))
```

The static shape check now reveals this error in the test.

```diff
  test_point = {"emission_1": test_value_emission1, "emission_2": test_value_emission2}
  res_logp, dummy_logp = logp_fn(test_point)
- assert res_logp.shape == ((1, 3) if batch_chain else ())
+ assert res_logp.shape == ((3, 1) if batch_chain else ())
```

The "first dependent RV" that gets the full logp changed with our new toposort algo

```diff
  # Test initial_point
  ips = make_initial_point_expression(
-     free_rvs=marginal_m.free_RVs,
+     free_rvs=[marginal_m["sigma"], marginal_m["dep"], marginal_m["sub_dep"]],
```

The order of free_RVs changed for a similar reason.

```diff
  # Test initial_point
  ips = make_initial_point_expression(
-     free_rvs=marginal_m.free_RVs,
+     free_rvs=[marginal_m["x"], marginal_m["y"]],
```

same

```diff
  ips = make_initial_point_expression(
      # Test that order does not matter
-     free_rvs=marginal_m.free_RVs[::-1],
+     free_rvs=[marginal_m["y"], marginal_m["x"]],
```

same

@ricardoV94 (Member)

The divide-by-zero warning will be fixed by pymc-devs/pytensor#1681.

For now we can filter it in the CI
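
For reference, a minimal sketch of the kind of filter that could live in a conftest.py (or the pytest filterwarnings config) until the upstream fix lands; the message pattern and warning category here are assumptions, not the exact text PyTensor emits:

```python
import warnings

# Hypothetical CI-only filter: silence the divide-by-zero RuntimeWarning until
# pymc-devs/pytensor#1681 is released. Match the pattern to the actual warning text.
warnings.filterwarnings(
    "ignore",
    message=".*divide by zero.*",
    category=RuntimeWarning,
)
```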

@ricardoV94 (Member)

No idea why one of the Windows jobs fails to install dependencies while the other Windows jobs are fine?? CC @maresb

@ricardoV94 (Member)

The JAX failure seems to be because the jax QR dispatch is returning a tuple instead of a single variable

@ricardoV94 (Member) commented Oct 20, 2025

```python
jax.scipy.linalg.qr(jax.numpy.eye(3), mode="r")  # (JitTracer<float64[3,3]>,)
```

We changed from using jax.numpy.linalg.qr to jax.scipy.linalg.qr, so that's why. This will need a fix upstream in PyTensor.
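
For illustration, a small sketch of the mismatch, assuming the JAX wrappers follow the numpy/scipy conventions for mode="r": the numpy-style call returns a bare array, while the scipy-style call returns a length-1 tuple that the dispatch would need to unpack.

```python
import jax.numpy as jnp
import jax.scipy.linalg

a = jnp.eye(3)

# numpy-style QR with mode="r" yields a single array...
r_np = jnp.linalg.qr(a, mode="r")

# ...while the scipy-style wrapper yields a 1-tuple, mirroring scipy.linalg.qr,
# so code written against the numpy behaviour now sees a tuple instead.
(r_sp,) = jax.scipy.linalg.qr(a, mode="r")

assert r_np.shape == r_sp.shape == (3, 3)
```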

@ricardoV94 (Member)

I pushed a change to use a fixture in the test_kalman_filter file. It was compiling functions that AFAICT are not used in every test, and in particular not in the one I was trying to debug.
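
Roughly the pattern involved, as a generic sketch rather than the actual test_kalman_filter change: the expensive compilation moves into a module-scoped fixture, so it runs once and only for tests that request it.

```python
import pytest
import pytensor
import pytensor.tensor as pt


@pytest.fixture(scope="module")
def compiled_fn():
    # Stand-in for the expensive shared compilation; with scope="module" it is
    # built once per module, and only when some test actually asks for it.
    x = pt.dvector("x")
    return pytensor.function([x], (x**2).sum())


def test_uses_compiled_fn(compiled_fn):
    # Tests that don't take the fixture never pay the compilation cost.
    assert compiled_fn([1.0, 2.0]) == 5.0
```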

@jessegrabowski (Member, Author)

Yeah, there's a lot of room for improvement in those test files. Thanks for cleaning up a bit.

@jessegrabowski (Member, Author)

> Hey, sorry, I've been slammed with other stuff. Are you still stuck on the OpenMP error on Windows?

Yes, that's all that's left

@jessegrabowski (Member, Author)

@fonnesbeck @aphc14 the pathfinder test_concurrent_results[thread] test is running for 6 hours and then being shut down -- see here. I cannot reproduce the behavior locally. I'm on a Mac, though; the CI runner is ubuntu-latest.

@aphc14 (Contributor) commented Oct 27, 2025

There's probably no benefit to having concurrent="thread" for pathfinder anyway, since the algorithm isn't I/O bound.

It might be safe to remove this concurrent option in pathfinder, along with the test_concurrent_results[thread] test. @fonnesbeck, would you be opposed to having the redundant concurrent="thread" feature removed?

@jessegrabowski (Member, Author)

> There's probably no benefit to having concurrent="thread" for pathfinder anyway, since the algorithm isn't I/O bound.

For the purposes of this PR, do you object to the test being removed?

@ricardoV94 (Member)

For the warning, let's just filter it for now and open an issue. It needs more investigation, but the changes are not directly caused by this package.

@ricardoV94 (Member)

CI is still failing?

@ricardoV94 (Member) commented Oct 28, 2025

Re: Pathfinder doing multithreading, that was never safe to begin with. PyTensor functions are not thread-safe; at the very least you need to copy them for each thread. I'm not sure what it's doing with multiprocessing, but that may also be done unsafely, given the CI is still hanging in that parametrization.
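
As a minimal sketch of the per-thread copy described above (assuming pytensor's Function.copy(share_memory=False) API; the real pathfinder code would need the equivalent treatment):

```python
import threading

import numpy as np
import pytensor
import pytensor.tensor as pt

x = pt.dvector("x")
fn = pytensor.function([x], pt.exp(x).sum())

results = [None] * 4


def worker(idx):
    # A compiled pytensor Function mutates its internal storage on every call,
    # so each thread works on its own copy rather than sharing one instance.
    local_fn = fn.copy(share_memory=False)
    results[idx] = local_fn(np.arange(idx + 1, dtype="float64"))


threads = [threading.Thread(target=worker, args=(i,)) for i in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()
```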

This could be related to the OpenMP warnings, if we were seeing those on Ubuntu as well?

@jessegrabowski (Member, Author)

> This could be related to the OpenMP warnings, if we were seeing those on Ubuntu as well?

We're not, but my guess is that the warning is caused by something happening on import from pathfinder. I am keeping the warning filter for now and skipping the pathfinder tests, pending an issue to clean that codebase up.
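
For the record, the skip could look something like this hedged sketch (the marker placement and reason string are placeholders, not the actual change):

```python
import pytest


@pytest.mark.skip(reason="hangs in CI after the pytensor bump; see follow-up issue")
def test_concurrent_results():
    ...
```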
