[stall]Layer auto detect input size #688
Conversation
This pull request introduces 1 alert when merging dd0dc99 into e4082c6 - view on LGTM.com.
changes: …
For the last point, it will break backward compatibility. In V4, we can update the API completely.
This pull request introduces 1 alert when merging 2b16b37 into e4082c6 - view on LGTM.com.
This pull request introduces 2 alerts when merging cd289d6 into e4082c6 - view on LGTM.com.
ok thanks. Updated:
This pull request introduces 2 alerts and fixes 1 when merging cd289d6 into 536f7e4 - view on LGTM.com.
This pull request introduces 2 alerts when merging cd289d6 into db1846d - view on LGTM.com.
This pull request introduces 3 alerts when merging e538fef into db1846d - view on LGTM.com.
This pull request introduces 3 alerts when merging 4a98351 into db1846d - view on LGTM.com.
return {"W": self.W, "b": self.b} | ||
else: | ||
return {"W": self.W} | ||
|
||
def set_params_initializer(self, **initializers): |
pass initializers as args of __init__.
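For illustration, a minimal sketch of what passing initializers through __init__ could look like; the bare Layer base class, the NumPy initializer callables, and the keyword names here are hypothetical placeholders, not the actual SINGA API:

import numpy as np

class Layer:
    pass

def zeros(shape):
    return np.zeros(shape, dtype=np.float32)

def gaussian(shape):
    return np.random.normal(0.0, 0.1, size=shape).astype(np.float32)

class Linear(Layer):
    def __init__(self, num_output, bias=True,
                 W_initializer=gaussian, b_initializer=zeros):
        # store the initializers now; create W/b lazily once the
        # input size has been auto-detected on the first forward call
        self.num_output = num_output
        self.bias = bias
        self.W_initializer = W_initializer
        self.b_initializer = b_initializer

    def initialize(self, num_input):
        # called once the input size is known
        self.W = self.W_initializer((num_input, self.num_output))
        if self.bias:
            self.b = self.b_initializer((self.num_output,))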
The following APIs should be backward compatible. Please test.

class Linear(Layer):
    def __init__(self, num_output, *args, bias=True, **kwargs):
        # the following block is for backward compatibility:
        # old code would call Linear(2, 3) or Linear(2, 3, False)
        if len(args) > 0:
            num_input = num_output  # old-style first positional arg
            num_output = args[0]
        if len(args) > 1:
            bias = args[1]
        self.num_output = num_output
        self.bias = bias
class Conv2d(Layer):
    def __init__(self,
                 out_channels,
                 kernel_size,
                 *args,
                 stride=1,
                 padding=0,
                 dilation=1,
                 group=1,
                 bias=True,
                 pad_mode="NOTSET",
                 **kwargs):
        # the old code created the layer like Conv2d(8, 16, 3) or
        # Conv2d(8, 16, 3, stride=1); the following block is for
        # backward compatibility
        if len(args) > 0:
            in_channels = out_channels  # old-style first positional arg
            out_channels = kernel_size
            kernel_size = args[0]
        if len(args) > 1:
            stride = args[1]
        if len(args) > 2:
            padding = args[2]
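For reference, these are the call styles the shim is meant to accept, taken from the comments above (assuming the two classes defined in the snippet):

# new style: input size is auto-detected on the first forward call
fc_new = Linear(3)
# old styles: Linear(num_input, num_output) and Linear(num_input, num_output, bias)
fc_old = Linear(2, 3)
fc_nobias = Linear(2, 3, False)

# new style: Conv2d(out_channels, kernel_size, ...)
conv_new = Conv2d(16, 3, stride=1)
# old style: Conv2d(in_channels, out_channels, kernel_size, ...)
conv_old = Conv2d(8, 16, 3, stride=1)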
Updated the Linear constructor, tested ok.
This pull request introduces 3 alerts when merging 3d835a0 into db1846d - view on LGTM.com.
Hi @dcslin, can I use this PR right now? Which operations can I use now? I need to let soonx support the new autograd API.
I guess you are building the model? Then by convention we use …
I suggest using layer.ReLU() to avoid mixing operators and layers when constructing the model.
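A short sketch of that convention, building the network from layer objects only; the module paths and the Model base class are assumptions based on SINGA's layer/model split, not verified against this PR:

from singa import layer, model

class MLP(model.Model):
    def __init__(self):
        super().__init__()
        # layers throughout; no autograd operators mixed in
        self.fc1 = layer.Linear(64)  # input size auto-detected (this PR)
        self.relu = layer.ReLU()
        self.fc2 = layer.Linear(10)

    def forward(self, x):
        y = self.fc1(x)
        y = self.relu(y)
        return self.fc2(y)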
Some of the work is merged into #697.