week02_classification/seminar: how to handle the different lengths of title and description? #25
Hi!
The model we want you to build can process sequences of arbitrary length greater than the conv filter size, so the only reason is (1.). Instead of padding all data to one fixed length, we pad sequences when each batch is formed. The default network architecture is applicable to a batch of arbitrary length.
The only catch is that the input must be long enough to apply the convolutions. This should not be a problem if you use filter sizes of 2 or 3. For larger filters, please pad shorter sequences up to the filter size.
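The per-batch padding described above can be sketched roughly like this (a hypothetical helper, not the seminar's actual code; the function name and `min_len` parameter are assumptions). Each batch is padded only to the length of its longest sequence, and `min_len` can be set to the largest filter size to guarantee the input is long enough for the convolutions:

```python
import numpy as np

def pad_batch(token_ids, pad_value=0, min_len=1):
    """Pad a list of variable-length token-id lists into one int matrix.

    Sequences are padded to the longest length in THIS batch (but no
    shorter than min_len), not to a global dataset-wide maximum.
    """
    max_len = max(min_len, max(len(seq) for seq in token_ids))
    batch = np.full((len(token_ids), max_len), pad_value, dtype=np.int64)
    for i, seq in enumerate(token_ids):
        batch[i, :len(seq)] = seq
    return batch

# Three titles of lengths 3, 2 and 1 become a single (3, 3) matrix:
print(pad_batch([[3, 5, 7], [2, 4], [9]]).shape)  # (3, 3)
```

Passing, say, `min_len=5` when the largest filter width is 5 ensures even one-token titles produce a valid convolution input.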
Thank you very much for your detailed answer. "This should not be a problem if you use filter sizes of 2 or 3." Did you mean using a square filter in the conv layer? If we use a square filter, then I know how to solve the problem. However, I read the paper "A Sensitivity Analysis of (and Practitioners' Guide to) Convolutional Neural Networks for Sentence Classification", and in section 2.1 the authors say: "Because rows represent discrete symbols (namely, words), it is reasonable to use filters with widths equal to the dimensionality of the word vectors (i.e., d)." I think it is reasonable to use filters with widths equal to the dimensionality of the word vectors. Will it reduce the interpretability of the network if we use square filters? Thank you again for your answer!
In this particular case we use 1D convolutions, so our filters are not squares; they are just 1D "stripes". You can also consider this equivalent to using filters of dimension [vector_size x width], where vector size equals the length of the word embedding vectors and width is something around 2 or 3. For this task it doesn't make much sense to use a filter width equal to the vector size, because:
- the vector size can be large, several hundred;
- a job title may contain 2-20 tokens, which is much smaller than the vector size.
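The "1D stripe" picture above can be made concrete with a small numpy sketch (illustrative shapes only, not the seminar's code): each filter spans the full embedding dimension and slides along the time axis alone, so it works for any sequence of length >= width.

```python
import numpy as np

def conv1d_valid(embedded, filters):
    """'Valid' 1D convolution over an embedded sentence.

    embedded: (seq_len, emb_dim)   - one embedded sequence
    filters:  (n_filters, width, emb_dim) - stripes spanning emb_dim
    Returns feature maps of shape (seq_len - width + 1, n_filters).
    """
    seq_len, emb_dim = embedded.shape
    n_filters, width, _ = filters.shape
    out = np.empty((seq_len - width + 1, n_filters))
    for t in range(seq_len - width + 1):
        window = embedded[t:t + width]  # (width, emb_dim) slice of tokens
        # Dot each filter with the window over both its axes at once.
        out[t] = np.tensordot(filters, window, axes=([1, 2], [0, 1]))
    return out

emb = np.random.randn(10, 8)    # 10 tokens, embedding size 8
flt = np.random.randn(4, 3, 8)  # 4 filters of width 3
print(conv1d_valid(emb, flt).shape)  # (8, 4)
```

Note the filter already covers all 8 embedding dimensions; only its width (here 3) slides, which is why a width equal to the vector size would be redundant for short titles.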
Thank you for your nice answer; now I know what to do. The key is the 1D convolutions.
justheuristic <notifications@github.com> wrote on Sun, Dec 16, 2018, 5:08 PM:
… In this particular case we use 1D convolutions, so our filters are not squares, they're just 1d "stripes". You can also consider this to be equivalent to using filters of dimension [vector_size x width] where vector size equals length of word embedding vectors and width is something around 2 or 3. For this task it doesn't make much sense to use filter width equal to vector size because
- vector size can be large, several hundreds
- job title may contain 2-20 tokens, which is much smaller than vector size
I have a problem with the dimensions in the network architecture:
In our seminar, the 'title' and the 'description' always have different lengths. In the paper 'Convolutional Neural Networks for Sentence Classification' the author padded the sequences to the same dimensions, but the code in 'seminar.ipynb' doesn't pad all batches to the same length. Do I need to modify the code in the '.ipynb', or is there some way to handle the different dimensions?
I would be very grateful if I could get someone's help!