Is your feature request related to a problem? Please describe.
Currently, only the activation functions defined in nn/utils.py can be used, which limits the customizability of activation functions.
Discussion
We can provide the flexibility to accept both Activation and nn.Module objects as inputs for activation functions (following the format like this). Additionally, we can add an activation attribute to the neural network, similar to what PyTorch Geometric does (link). I am open to any suggestions.
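As a hedged sketch of the proposal above, a resolver could accept either an alias string (as the current Activation enum supports) or an already-constructed nn.Module and always return an nn.Module. The names `resolve_activation` and `_ALIASES` are illustrative, not Chemprop's actual API, and the alias table here is a small example subset:

```python
from typing import Union

from torch import nn

# Illustrative alias table; the real set would mirror nn/utils.py.
_ALIASES = {
    "relu": nn.ReLU,
    "leakyrelu": nn.LeakyReLU,
    "tanh": nn.Tanh,
    "elu": nn.ELU,
}

def resolve_activation(activation: Union[str, nn.Module]) -> nn.Module:
    """Return an nn.Module, constructing it from an alias string if needed."""
    if isinstance(activation, nn.Module):
        # Already constructed: use as-is, so users can pass custom modules
        # (e.g. nn.LeakyReLU(0.2)) without touching any registry.
        return activation
    return _ALIASES[activation.lower()]()
```

Usage would then be symmetric: `resolve_activation("relu")` for the existing string path and `resolve_activation(nn.SiLU())` for a fully customized module.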
I don't like the design pattern of specifying both the initializer function (or an alias for it) and its parameters as inputs to a separate class. At that point, you're using composition without actually designing around it, so it's an all-around loss. For example, compare the following two design patterns using the Bar class:
# (1) pass an initializer (or an alias) plus its parameters,
#     and let Foo construct its own Bar
class Foo:
    def __init__(self, bar_cls, bar_kwargs):
        self.bar = bar_cls(**bar_kwargs)

# (2) pass the constructed Bar object directly
class Foo:
    def __init__(self, bar):
        self.bar = bar
I think (2) is significantly clearer about what's going on, as it doesn't require users to cross-reference any functions or initializers to see what's happening. All of this is to say that I'm fine with moving away from the usage of ActivationType inside core module code in favor of explicitly passing activation: nn.Module. We can move get_activation_function into the CLI, as there's really no need to restrict the activation types used inside our core modules. This was a reimplementation of v1 logic, but I don't agree with it (even if I chose to write it).
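To make the split concrete, here is a minimal sketch of that division of labor: the core module takes an nn.Module directly, and the string-to-module translation lives only at the CLI boundary. `FFNBlock`, `build_from_cli`, and the alias table are hypothetical names for illustration, not Chemprop's actual classes:

```python
import torch
from torch import nn

class FFNBlock(nn.Module):
    """Core module: accepts any nn.Module as its activation, no registry lookup."""

    def __init__(self, in_dim: int, out_dim: int, activation: nn.Module):
        super().__init__()
        self.linear = nn.Linear(in_dim, out_dim)
        # Assigning an nn.Module attribute registers it as a submodule,
        # so it moves with .to(device) and appears in state_dict().
        self.activation = activation

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.activation(self.linear(x))

def build_from_cli(act_name: str) -> FFNBlock:
    # The alias-to-module mapping is confined to the CLI layer;
    # core code never sees the string.
    cli_activations = {"relu": nn.ReLU, "gelu": nn.GELU}
    return FFNBlock(8, 4, cli_activations[act_name]())
```

Library users bypass `build_from_cli` entirely and write `FFNBlock(8, 4, nn.SiLU())`, which is where the customizability gain comes from.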
Thanks for sharing your thoughts. I agree that explicitly passing the activation function into the MessagePassing and Predictor blocks is a clear way to improve both customizability and code readability. Do you disagree with this idea because you think passing a callable function into a neural network builder as an attribute is not good practice?