
Input filter dimension for the first convolution layer. #2

Closed
wwang2 opened this issue Mar 26, 2021 · 5 comments

Comments


wwang2 commented Mar 26, 2021

Hi!
Thanks for this implementation!

I have a question about the conv1d layer setting in your implementation here:

self.conv1d1 = nn.Conv1d(120, 9, kernel_size=9)

where you specify an input filter size of 120. However, 120 is the length of the padded SMILES strings. The one-hot encoded SMILES string has a dimension of 120 x 35, so the input filter size should be 35 rather than 120, because you are convolving along the SMILES sequence.

This also means that you need to transpose your batched sequence data so the convolution operates along the right dimension of the sequence.

Just want to confirm this detail. If I am right, I can submit a PR to correct this after benchmarking.
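To make the point concrete, here is a minimal sketch of the proposed fix (the vocabulary size of 35 and sequence length of 120 come from this thread; the output channels and kernel size come from the quoted line; the variable names are illustrative, not from the repository):

```python
import torch
import torch.nn as nn

# nn.Conv1d expects input of shape (batch, channels, length), so the
# one-hot vocabulary dimension (35) is the channel dimension, not the
# padded sequence length (120).
conv = nn.Conv1d(in_channels=35, out_channels=9, kernel_size=9)

# A batch of one-hot encoded SMILES: (batch=4, seq_len=120, vocab=35).
x = torch.randn(4, 120, 35)

# Transpose to channels-first layout before convolving: (4, 35, 120).
x = x.transpose(1, 2)

y = conv(x)
print(tuple(y.shape))  # (4, 9, 112), since 120 - 9 + 1 = 112
```

With the original `nn.Conv1d(120, 9, kernel_size=9)`, the filters would slide across the 35-symbol vocabulary axis instead of along the SMILES sequence, which is why the channel count and the transpose both matter.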

wwang2 commented Mar 26, 2021

Update: I have been consistently getting lower loss after this fix.

@topazape (Owner)
Hi, @wwang2.

Thank you for your suggestion. Yes, you are correct. I misunderstood that line; please submit a PR.

Best regards.

wwang2 commented Mar 28, 2021

@topazape PR submitted

@topazape (Owner)

@wwang2
Thank you for your contribution!!

wwang2 commented Mar 29, 2021

No problem, it's my pleasure.
