
ConvLayer Filters #54

Closed
LinuxIsCool opened this issue Feb 10, 2018 · 7 comments

Comments

@LinuxIsCool

Figure 2 in Paper (Attached image):
Shouldn't the convolutional filters be 3-dimensional? I mean, in the original convolution, how do we go from 3 feature maps to 2 feature maps? I believe this would make sense if the filter had dimensions 2x1x3 (the same as described, but with an additional depth of 2). The second convolution would then be 2x48, to get the 20 11x1 feature maps.

net_config.json:
In ConvLayer, I don't understand how {"filter_shape":[1,2],"filter_number":3} corresponds to the filters outlined in the paper, as described in my question above. (Excuse my unfamiliarity with tflearn, but the parameters to conv2d() are not well explained in the documentation.)

[Attached image: Figure 2 from the paper]

@dexhunter
Collaborator

dexhunter commented Feb 11, 2018

Shouldn't the convolutional filters be 3-dimensional?

It is always 3-dimensional if you count the filter number in.

with an additional depth of 2

The depth is actually the filter number.

I took a screenshot from cs231n, maybe this will help a bit:
[Screenshot from cs231n illustrating convolution filter dimensions]

edit:
However, I think there is a discrepancy between the code and the paper on how many feature maps the first ConvLayer produces (2 in the paper but 3 in the code). @ZhengyaoJiang (version upgrade)

@ZhengyaoJiang
Owner

Shouldn't the convolutional filters be 3-dimensional? I mean, in the original convolution, how do we go from 3 feature maps to 2 feature maps? I believe this would make sense if the filter had dimensions 2x1x3 (the same as described, but with an additional depth of 2).

Yes, it is actually 3-dimensional.
However, it's a convention that one of the dimensions is the same as the number of input channels (features), so it can be omitted when using conv2d.
If we were using conv3d, it would be necessary to specify that extra dimension.
You can refer to tensorflow.layers for example:
https://www.tensorflow.org/api_docs/python/tf/layers/conv2d
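To make the omitted-dimension convention concrete, here is a small NumPy sketch. The HWIO weight layout shown matches TensorFlow's conv2d convention; the sizes follow this repo's net_config.json, and the variable names are illustrative:

```python
import numpy as np

# Sketch of the conv2d weight-shape convention: the input-channel
# dimension is implicit in the 2-D filter_shape you pass in.
in_channels = 3            # number of input feature maps (features)
filter_h, filter_w = 1, 2  # the "2-D" filter_shape from net_config.json
n_filters = 3              # filter_number, i.e. output feature maps

# The framework allocates weights with the channel dimension included,
# so each individual filter is really 1 x 2 x 3 -- three-dimensional.
weights = np.zeros((filter_h, filter_w, in_channels, n_filters))
print(weights[..., 0].shape)  # (1, 2, 3): one filter, with its depth
```

So specifying filter_shape [1, 2] with 3 input channels really allocates 1x2x3 filters; only the spatial part is written out.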

However I think there is a discrepancy between the code and paper on how many feature_maps does the first ConvLayer produces?

Number of feature maps is equal to the number of filters.
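To illustrate that the number of output feature maps equals the number of filters, here is a minimal NumPy sketch of a "valid" 2-D convolution. The input size (11 assets x 50 time steps x 3 features) is an assumption for illustration, loosely following the [1,2]-shape, 3-filter configuration from net_config.json:

```python
import numpy as np

def conv2d_valid(x, w):
    # Naive "valid" 2-D convolution (really cross-correlation, as in
    # most deep-learning frameworks).
    # x: (H, W, C_in) input; w: (fh, fw, C_in, C_out) filter bank.
    fh, fw, c_in, c_out = w.shape
    H, W, _ = x.shape
    out = np.zeros((H - fh + 1, W - fw + 1, c_out))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            patch = x[i:i + fh, j:j + fw, :]  # (fh, fw, C_in)
            # Each output channel is a full 3-D dot product with one filter,
            # so the output depth equals the number of filters.
            out[i, j, :] = np.tensordot(patch, w, axes=([0, 1, 2], [0, 1, 2]))
    return out

x = np.ones((11, 50, 3))   # hypothetical 11 x 50 input with 3 features
w = np.ones((1, 2, 3, 3))  # filter_shape [1, 2], 3 channels, 3 filters
y = conv2d_valid(x, w)
print(y.shape)  # (11, 49, 3) -- one feature map per filter
```

Changing the last axis of `w` (the filter number) is the only thing that changes the output depth.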

@LinuxIsCool
Author

Thank you, this clears things up. From my understanding it appears there is still a discrepancy from the paper in {"filter_shape":[1,2]}. The paper implies it should be {"filter_shape":[1,3]}, doesn't it?

@joostbr

joostbr commented Feb 17, 2018

You can freely play around with the filter size in the first ConvLayer; 2, 3, or 5 seem to be values giving reasonable results.

@ZhengyaoJiang
Owner

From my understanding it appears there is still a discrepancy from the paper in {"filter_shape":[1,2]}. The paper implies it should be {"filter_shape":[1,3]}, doesn't it?

Yes, as we mentioned in the README, the hyper-parameters differ from those listed in the article.
The new hyper-parameters were tuned by an automatic parameter-search algorithm, which saves much training time and preserves (or, over some time spans, improves) the final performance.

@LinuxIsCool
Author

Thank you for the information @ZhengyaoJiang. It is a great paper with well-written code; I really appreciate it. I am a CS master's student at Simon Fraser University, and I am basing my current projects on this paper.

@ZhengyaoJiang
Owner

@LinuxIsCool

Thanks for your compliment.
Hope your project goes well.
