
Equivalence of conv implementations #1

Closed
ToucheSir opened this issue Aug 26, 2021 · 1 comment

Comments


ToucheSir commented Aug 26, 2021

Thanks for the blog post! If you wouldn't mind, I have a couple of comments/suggestions:

  1. For PyTorch-like convolutions, the library to use is NNlib for an interface like torch.nn.functional, or Flux for one like torch.nn. These should be tuned for batched, multithreaded CPU + GPU workloads. I would be surprised if an implementation using PyTorch's (I)FFT functionality could beat FFTW.jl, because the latter wraps an optimized C library!
  2. The name "Julia" wasn't meant to be anthropomorphized. See https://stackoverflow.com/a/29292465 and the FAQ for more. So I think the language creators agree with you :)
@riveSunder (Owner) commented

Thanks for reading and for the tips and links. I've used Zygote in the past and I'd like to get more involved with the FluxML ecosystem, in particular for experiments with neural cellular automata. I've updated my performance tests and I'm going to be writing a follow-up in the next few days.

What's new:

  • I wrote a NumPy implementation that is much closer to what my Julia implementation is doing, namely using FFTs for convolutions. Unsurprisingly, Julia is quite a bit faster, especially considering ...
  • I noticed and fixed an issue in my Julia implementation that was performing two FFT convolutions for each update (and only using one of them). Fixing this makes the Julia implementation much faster, and it is now faster than the PyTorch implementation for small grid dimensions.
  • I also upgraded the PyTorch version I am using to 1.9.0 from 1.5.1, so CARLE is faster now too.
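For reference, here is a minimal NumPy sketch of the kind of equivalence check being discussed: a circular convolution computed via the convolution theorem (one FFT/IFFT pair per update, as in the fixed implementation above) cross-checked against a naive direct version. Function names and grid/kernel sizes are my own illustrations, not taken from CARLE or the blog post.

```python
import numpy as np

def conv2d_fft(grid, kernel):
    """Circular 2D convolution via the convolution theorem:
    pad the kernel to the grid size, multiply pointwise in
    frequency space, then take a single inverse FFT."""
    kh, kw = kernel.shape
    padded = np.zeros_like(grid, dtype=float)
    padded[:kh, :kw] = kernel
    # Roll so the kernel is centered; keeps output aligned with
    # the direct version below.
    padded = np.roll(padded, (-(kh // 2), -(kw // 2)), axis=(0, 1))
    return np.real(np.fft.ifft2(np.fft.fft2(grid) * np.fft.fft2(padded)))

def conv2d_direct(grid, kernel):
    """Naive O(h*w*kh*kw) circular convolution, for cross-checking
    the FFT version on small grids."""
    h, w = grid.shape
    kh, kw = kernel.shape
    out = np.zeros((h, w))
    for i in range(h):
        for j in range(w):
            for di in range(kh):
                for dj in range(kw):
                    out[i, j] += kernel[di, dj] * grid[
                        (i - di + kh // 2) % h, (j - dj + kw // 2) % w
                    ]
    return out

rng = np.random.default_rng(0)
grid = rng.random((16, 16))
kernel = rng.random((3, 3))
assert np.allclose(conv2d_fft(grid, kernel), conv2d_direct(grid, kernel))
```

Note that the FFT version computes a single forward/inverse pair per update; accidentally running two of these per step (and discarding one) roughly doubles the cost, which is consistent with the speedup from the fix described above.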

I'm not too worried about the name. I thought it was a little off-putting at first and now I'm happy to make fun of myself for overthinking it :)
