Understanding ReLU example #168

Closed · JinraeKim opened this issue Dec 7, 2021 · 1 comment
@JinraeKim (Contributor)

Hi, developers!
Thank you for this great package, as always. I'm really interested in DiffOpt.jl, but it's a bit hard to understand, so I'm glad to see more examples.

My observations are:

  1. The ReLU example realises a ReLU layer as the solution of a (convex) optimisation problem.
  2. The rrule in the example corresponds to the usual backpropagation rule for ReLU, as typically seen in ML packages (sketched below).

I wonder if I understood the ReLU example correctly.
Thanks!
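To make the two observations concrete, here is a minimal, self-contained sketch (not the code from the actual example): a ReLU written as a small quadratic program in JuMP, plus a ChainRulesCore.rrule whose pullback is the familiar ReLU backpropagation rule. The function name `relu_qp` and the choice of OSQP as the QP solver are mine, for illustration only; the real example instead obtains the sensitivities by differentiating through the optimisation problem with DiffOpt.

```julia
using JuMP, OSQP                 # OSQP chosen here only as an example QP solver
import ChainRulesCore

# Observation 1: relu(x) is the solution of the convex program
#   min_y  ||y - x||^2   s.t.  y >= 0
function relu_qp(x::AbstractVector)
    model = Model(OSQP.Optimizer)
    set_silent(model)
    n = length(x)
    @variable(model, y[1:n] >= 0)
    @objective(model, Min, sum((y[i] - x[i])^2 for i in 1:n))
    optimize!(model)
    return value.(y)
end

# Observation 2: the reverse rule is the usual ReLU backpropagation rule:
# the incoming cotangent is passed through wherever the input is positive.
function ChainRulesCore.rrule(::typeof(relu_qp), x::AbstractVector)
    y = relu_qp(x)
    relu_qp_pullback(ȳ) = (ChainRulesCore.NoTangent(), ȳ .* (x .> 0))
    return y, relu_qp_pullback
end
```

With this rrule registered, something like `Zygote.gradient(x -> sum(relu_qp(x)), x)` should return the expected 0/1 mask, which is what I mean by "the usual backpropagation rule".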

@joaquimg (Member) commented Dec 7, 2021

You are correct!
Feel free to suggest improvements through PRs.
