
Deprecate nnet and signal modules #1188

Merged
merged 12 commits into main from deprecate-nn on Oct 17, 2022

Conversation

rlouf
Member

@rlouf rlouf commented Sep 15, 2022

In this PR we begin phasing the nnet and signal modules out of Aesara, following the discussion in #674.

  • Disable auto-importing from aesara.tensor
  • Remove aesara.tensor.nnet and aesara.tensor.signal tests
  • Add deprecation warnings in nnet module
  • Add deprecation warnings in signal module
  • Move Softmax out of aesara.tensor.nnet.basic -> aesara.tensor.math
  • Move LogSoftmax out of aesara.tensor.nnet.basic -> aesara.tensor.math
  • Move SoftmaxGrad out of aesara.tensor.nnet -> aesara.tensor.math
  • Move tests for Softmax, LogSoftmax, SoftmaxGrad
  • Move rewrites related to Softmax, LogSoftmax, SoftmaxGrad
  • Rename logsoftmax -> log_softmax
  • Remove nnet and signal from the documentation, and any mention of anything NN-related

Anything else we want to salvage?

@codecov

codecov bot commented Sep 16, 2022

Codecov Report

Merging #1188 (aed5986) into main (739bd49) will decrease coverage by 5.05%.
The diff coverage is 86.40%.

@@            Coverage Diff             @@
##             main    #1188      +/-   ##
==========================================
- Coverage   79.15%   74.09%   -5.06%     
==========================================
  Files         173      174       +1     
  Lines       48536    48605      +69     
  Branches    10322    10340      +18     
==========================================
- Hits        38417    36014    -2403     
- Misses       7628    10305    +2677     
+ Partials     2491     2286     -205     
Impacted Files Coverage Δ
aesara/tensor/signal/conv.py 0.00% <0.00%> (-75.68%) ⬇️
aesara/tensor/rewriting/special.py 77.77% <77.77%> (ø)
aesara/tensor/special.py 90.64% <90.64%> (ø)
aesara/link/jax/dispatch/elemwise.py 80.59% <100.00%> (ø)
aesara/link/numba/dispatch/elemwise.py 97.12% <100.00%> (ø)
aesara/scan/op.py 85.07% <100.00%> (-0.34%) ⬇️
aesara/tensor/nnet/basic.py 32.61% <100.00%> (-47.67%) ⬇️
aesara/tensor/signal/pool.py 28.30% <100.00%> (-40.97%) ⬇️
aesara/tensor/nnet/neighbours.py 0.00% <0.00%> (-91.27%) ⬇️
aesara/tensor/nnet/conv3d2d.py 0.00% <0.00%> (-81.16%) ⬇️
... and 25 more

@twiecki
Contributor

twiecki commented Sep 16, 2022

I would vote to also keep conv2d around, as convolutions are super common in all kinds of areas. I'd actually love to have a conv1d also, as there were too many cases where we had to roll our own.

@brandonwillard
Member

Anything else we want to salvage?

Looks good to me.

@rlouf
Member Author

rlouf commented Sep 26, 2022

Anything else we want to salvage?

Looks good to me.

Just to be clear, do you mean that we only keep Softmax, LogSoftmax (which should be expressed as an Aesara graph + rewrites instead but that's another PR) and SoftmaxGrad and remove everything else?
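As an aside, the numerically stable identity that a graph-plus-rewrites log-softmax would target is `log_softmax(x) = x - logsumexp(x)`; a NumPy sketch of the math (not Aesara code):

```python
import numpy as np


def log_softmax(x, axis=-1):
    # Subtract the max before exponentiating so exp() cannot overflow:
    # log_softmax(x) = (x - m) - log(sum(exp(x - m))), with m = max(x).
    m = np.max(x, axis=axis, keepdims=True)
    shifted = x - m
    return shifted - np.log(np.sum(np.exp(shifted), axis=axis, keepdims=True))


print(log_softmax(np.array([1.0, 2.0, 3.0])))
```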

@ricardoV94
Contributor

Agree with @twiecki to keep the conv2d Op until we find some place to move it to. Everything else seems very specialized, easy to recreate.

@rlouf
Member Author

rlouf commented Sep 26, 2022

We have the deprecation period to move it somewhere else, and it's better to raise a warning while we're considering moving it so downstream callers are not surprised the day it disappears.

(In this case we'll also need to update the warning when it is moved elsewhere)

@rlouf
Member Author

rlouf commented Sep 29, 2022

I have moved Softmax, LogSoftmax and SoftmaxGrad to aesara.tensor.math, but it may be better to create a new aesara.tensor.special module for these functions to have a familiar import path.

@rlouf
Member Author

rlouf commented Sep 29, 2022

Alright, went through the checklist; I will take another good look (especially at the documentation) tomorrow and move softmax-related Ops and functions to aesara.tensor.special. Then it should be good to merge once tests pass here.

Also, I indicated rel-2.9.0 as the deadline after which everything will be deleted. Does that sound right?

@rlouf rlouf force-pushed the deprecate-nn branch 2 times, most recently from 3c091da to aff3ddd on September 29, 2022 at 21:32
@brandonwillard
Member

Also, I indicated rel-2.9.0 as the deadline after which everything will be deleted. Does that sound right?

Yeah, that sounds fine.

@twiecki
Contributor

twiecki commented Sep 30, 2022

So it seems like conv2d is being dropped?

@rlouf
Member Author

rlouf commented Sep 30, 2022

I'd actually love to have a conv1d also, as there were too many cases where we had to roll our own.

We can open an issue to implement the equivalent of np.convolve. That's in scope.
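For downstream users who need a 1-d convolution in the meantime, `np.convolve` covers the basic case; note that it flips the kernel (true convolution), unlike the cross-correlation most NN libraries call "convolution":

```python
import numpy as np

x = np.array([1.0, 2.0, 3.0, 4.0])
k = np.array([1.0, 0.0, -1.0])

# mode="valid" keeps only the positions where the kernel fully
# overlaps the input (len(x) - len(k) + 1 outputs).
print(np.convolve(x, k, mode="valid"))  # [2. 2.]
```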

As for conv_2d, the question is more: why should we keep it in Aesara?

@rlouf
Member Author

rlouf commented Sep 30, 2022

Opened an issue for 1d convolution: #1223

@twiecki
Contributor

twiecki commented Sep 30, 2022

conv_2d the question is more: why should we keep it in Aesara?

Don't have a good reason other than that it's useful to some PyMC models. The alternative is to add this to pymc/aesaraf.py, which is a fine solution too.

@twiecki twiecki closed this Sep 30, 2022
@twiecki twiecki reopened this Sep 30, 2022
@rlouf rlouf force-pushed the deprecate-nn branch 2 times, most recently from 2b385e0 to 01f121c on September 30, 2022 at 18:09
@brandonwillard
Member

conv_2d the question is more: why should we keep it in Aesara?

Don't have a good reason other than that it's useful to some PyMC models. The alternative is to add this to pymc/aesaraf.py, which is a fine solution too.

I believe the original idea was to create a separate project and move all this DL-specific code there. We can always do that. We can also keep conv2d for the time being, especially if removing it will break things in PyMC.

@rlouf rlouf force-pushed the deprecate-nn branch 2 times, most recently from 3420c24 to 799ba2f on September 30, 2022 at 20:14
@rlouf
Member Author

rlouf commented Sep 30, 2022

I believe the original idea was to create a separate project and move all this DL-specific code there.

Yeah that would be the ideal. You will still be able to import conv_2d after this PR is merged, and we can use the deprecation period to find a solution that works for everyone.

@rlouf rlouf force-pushed the deprecate-nn branch 2 times, most recently from b526917 to 56f2639 on September 30, 2022 at 21:09
Member

@brandonwillard brandonwillard left a comment

Is there anything left on this one? If not, we can probably merge this.

@brandonwillard
Member

I am currently excluding all the tests related to aesara.tensor.nnet and aesara.tensor.signal. I can either include the tests related to conv2d individually, or use pytest.mark.skip on the others. I think refactoring the nnet and signal test suite is out of scope for this PR, and even for Aesara now that they're getting deprecated.

What is your preferred solution?

We can include them individually, if that's not too involved.
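For reference, the skip-based alternative mentioned above is a one-liner per test module; a sketch, assuming a deprecated nnet test file:

```python
import pytest

# Hypothetical: placed at module level in a deprecated test file, this
# `pytestmark` skips every test the module contains.
pytestmark = pytest.mark.skip(reason="aesara.tensor.nnet is deprecated")
```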

@rlouf
Member Author

rlouf commented Oct 12, 2022

@brandonwillard I included the conv2d tests individually, I think we can merge this if the tests pass.

@twiecki I could not find where conv2d is used in PyMC. Would you mind opening an issue there regarding the deprecation of this Op in Aesara so you can start porting the code?

@rlouf
Member Author

rlouf commented Oct 13, 2022

Codecov warnings are about tests in nnet and signal; I think this can be reviewed one last time and merged.

@@ -3440,7 +3440,7 @@ def profile_printer(
)


-@op_debug_information.register(Scan)
+@op_debug_information.register(Scan)  # type: ignore[has-type]
Member

This looks like it's in response to #1221.
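For context, `op_debug_information` follows the `functools.singledispatch` registration pattern that the annotated diff line uses; a self-contained sketch, where `Scan` is a stand-in class rather than Aesara's actual Op:

```python
from functools import singledispatch


class Scan:
    # Stand-in for Aesara's Scan Op, just to demonstrate the dispatch.
    pass


@singledispatch
def op_debug_information(op, node):
    # Default: no extra debug information for unknown Op types.
    return {}


@op_debug_information.register(Scan)  # the line annotated in the diff
def _(op, node):
    return {"scan": True}


print(op_debug_information(Scan(), None))  # {'scan': True}
```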

Labels
important, refactor (This issue involves refactoring)
Projects
None yet
Development

Successfully merging this pull request may close these issues.

Move Softmax and related Ops out of nnet module
Remove/rename aesara.tensor.nnet sub-package
4 participants