
week02_classification/seminar: How to solve the different length of title and description? #25

Closed
LB-Yu opened this issue Dec 14, 2018 · 4 comments

Comments


LB-Yu commented Dec 14, 2018

I have a problem with the dimensions in the network architecture:
In our seminar, the 'title' and the 'description' always have different lengths. In the paper 'Convolutional Neural Networks for Sentence Classification' the author pads the inputs to a common length, but the code in 'seminar.ipynb' doesn't pad all batches to the same length. Do I need to modify the code in the notebook, or is there some other way to handle the different lengths?
I would be very grateful for someone's help!


justheuristic commented Dec 14, 2018

Hi!

short:
We also pad sequences to the same length; this is done in the as_matrix function.

long:
Padding is usually done for one of two reasons:

  1. to put several strings into one TensorFlow tensor (matrix)
  2. to compensate for a model's inability to process sequences of different lengths.

The model we want you to build is able to process sequences of any length greater than the conv filter size, so the only relevant reason is (1).

Instead of padding all data to a fixed length, we pad sequences when the batch is formed (in the as_matrix function). For instance, if a batch contains three sentences with lengths [3, 7, 5], they will all be padded to length 7.
However, if the next batch has different sequence lengths, its sequences will be padded to the maximum length within that batch.
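For illustration, here is a minimal sketch of this kind of per-batch padding (a hypothetical pad_batch helper with an assumed PAD id, not the notebook's actual as_matrix):

```python
PAD_ID = 0  # assumed id of the padding token

def pad_batch(token_id_seqs, max_len=None):
    """Pad each list of token ids to a common length within the batch."""
    target_len = max_len if max_len is not None else max(len(s) for s in token_id_seqs)
    return [s[:target_len] + [PAD_ID] * (target_len - len(s)) for s in token_id_seqs]

# Lengths [3, 7, 5] -> every row is padded to length 7 within this batch.
batch = pad_batch([[1, 2, 3],
                   [4, 5, 6, 7, 8, 9, 10],
                   [11, 12, 13, 14, 15]])
```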

The default network architecture is applicable to a batch of arbitrary length (see the sketch after this list):

  • first you apply convolutions/pooling, which work on each patch of an arbitrarily long sequence
  • then you take whatever is left after the convolution and do global max/average pooling, computing the max/mean over time for each channel. Thus, if the convolution output shape was [batch_size, time, num_filters], it is reduced to [batch_size, num_filters] and becomes length-independent.
  • finally you apply dense layers to this fixed-size representation
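As an illustration, here is a minimal Keras-style sketch of such a length-independent model (vocabulary size, layer sizes, and number of classes are assumptions, not the seminar's actual architecture):

```python
import tensorflow as tf
from tensorflow.keras import layers

VOCAB_SIZE = 10000   # assumed vocabulary size
EMB_DIM = 64         # assumed embedding size
NUM_CLASSES = 4      # assumed number of target classes

model = tf.keras.Sequential([
    layers.Embedding(VOCAB_SIZE, EMB_DIM),                 # [batch, time] -> [batch, time, emb]
    layers.Conv1D(128, kernel_size=3, activation='relu'),  # works for any time >= 3
    layers.GlobalMaxPooling1D(),                            # [batch, time', 128] -> [batch, 128]
    layers.Dense(64, activation='relu'),
    layers.Dense(NUM_CLASSES, activation='softmax'),        # fixed-size input regardless of length
])
```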

The only catch is that the input must be long enough to apply the convolutions. This should not be a problem if you use filter sizes of 2 or 3. For larger filters, please pad shorter sequences, either by setting as_matrix(..., max_length=your_max_length) or by manually adding dots (. . . . .) to shorter job titles.


LB-Yu commented Dec 15, 2018

Thank you very much for your detailed answer.

"This should not be a problem if you use filter sizes of 2 or 3."-Did you mean use square filter of the conv layer? If use a square filter then I know how to solve the problem.

However, I read the paper A Sensitivity Analysis of (and Practitioners' Guide to) Convolutional Neural Networks for Sentence Classification, and in section 2.1 the author says: "Because rows represent discrete symbols (namely, words), it is reasonable to use filters with widths equal to the dimensionality of the word vectors (i.e., d)." I think it is reasonable to use filters with widths equal to the dimensionality of the word vectors. Would using square filters reduce the interpretability of the network?

Thank you again for your answer!

@justheuristic (Contributor)

In this particular case we use 1D convolutions, so our filters are not squares; they are just 1D "stripes".

You can also consider this to be equivalent to using filters of dimension [vector_size x width], where vector_size equals the length of the word embedding vectors and width is around 2 or 3.
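A small sketch of that equivalence (batch size, sequence length, and embedding size are assumptions for illustration):

```python
import tensorflow as tf

EMB_DIM = 64
x = tf.random.normal([8, 20, EMB_DIM])              # [batch, time, emb]

# 1D "stripe": a kernel of width 3 sliding over the time axis.
conv1d = tf.keras.layers.Conv1D(32, kernel_size=3)
print(conv1d(x).shape)                              # (8, 18, 32)

# The same receptive field written as a 2D filter of size [3, EMB_DIM]
# applied to the sequence treated as a [time, emb, 1] "image".
conv2d = tf.keras.layers.Conv2D(32, kernel_size=(3, EMB_DIM))
print(conv2d(x[..., tf.newaxis]).shape)             # (8, 18, 1, 32)
```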

For this task it doesn't make much sense to use a filter width equal to the vector size because

  • the vector size can be large, several hundred dimensions
  • a job title may contain 2-20 tokens, which is much smaller than the vector size


LB-Yu commented Dec 17, 2018 via email
