
Problem building the model in week02_classification/seminar #26

Closed

ChenjieXu opened this issue Dec 26, 2018 · 1 comment

@ChenjieXu

I'm stuck building the model in week 2. Here is my code:

```python
import keras
import keras.layers as L
from keras.layers import Embedding, Conv1D, MaxPooling1D, Dense, Concatenate

def build_model(n_tokens=len(tokens), n_cat_features=len(categorical_vectorizer.vocabulary_), hid_size=64):
    l_title = L.Input(shape=[None], name="Title")
    l_descr = L.Input(shape=[None], name="FullDescription")
    l_categ = L.Input(shape=[None], name="Categorical")

    # Build your monster!

    # Title
    x_t = Embedding(input_dim=n_tokens, output_dim=5, name="Title_Embedding")(l_title)
    x_t = Conv1D(filters=5, kernel_size=5, activation='relu')(x_t)
    x_t = MaxPooling1D(5)(x_t)
    x_t = Dense(1)(x_t)

    # FullDescription
    x_d = Embedding(input_dim=n_tokens, output_dim=5, name="FullDescription_Embedding")(l_descr)
    x_d = Conv1D(filters=5, kernel_size=5, activation='relu')(x_d)
    x_d = MaxPooling1D(5)(x_d)
    x_d = Dense(1)(x_d)

    # Categorical
    x_c = Embedding(input_dim=n_cat_features, output_dim=5, name="Categorical_Embedding")(l_categ)
    x_c = Dense(1)(x_c)

    # Concatenate the three branches
    concat = Concatenate()([x_t, x_d, x_c])
    output_layer = Dense(1)(concat)

    model = keras.models.Model(inputs=[l_title, l_descr, l_categ], outputs=[output_layer])
    model.compile('adam', 'mean_squared_error', metrics=['mean_absolute_error'])
    return model
```

I ran into the following problem:

```
Expected the last dense layer to have 3 dimensions, but got array with shape (100, 1)
```

The reason I changed the shape of the categorical input layer to None is that I was not able to concatenate a layer with a defined shape with the other two layers of undefined shape in the last step.

I chose an embedding layer for the categorical encoder because of the error "The shape of the input to "Flatten" is not fully defined (got (None, 1)). Make sure to pass a complete "input_shape" or "batch_input_shape" argument to the first layer in your model.", so I used an embedding layer to keep all three branches at the same number of dimensions.

Could you please help me with these problems? Thank you in advance.
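Tracing the shapes of a single branch shows where the extra axis comes from (a minimal sketch with toy sizes, assuming tensorflow.keras; the vocabulary size 1000 is illustrative):

```python
from tensorflow import keras
L = keras.layers

inp = L.Input(shape=[None])                                    # [batch, time]
x = L.Embedding(input_dim=1000, output_dim=5)(inp)             # [batch, time, 5]
x = L.Conv1D(filters=5, kernel_size=5, activation='relu')(x)   # [batch, time-4, 5]
x = L.MaxPooling1D(5)(x)                                       # [batch, (time-4)//5, 5] -- still rank 3
x = L.Dense(1)(x)                                              # [batch, (time-4)//5, 1] -- not [batch, 1]
print(keras.Model(inp, x).output_shape)                        # (None, None, 1)
```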

@justheuristic
Contributor

Hi!
The issue here is that both the title and description branches need GlobalMaxPooling1D, not just MaxPooling1D.

Global max pooling pools over the entire time axis, so the resulting tensor will be 2-dimensional: [batch, units]. You can then safely concatenate over the "units" axis.
I'd also recommend using more than one unit in the dense layers.
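For example (a minimal sketch, assuming tensorflow.keras; the sizes are illustrative):

```python
from tensorflow import keras
L = keras.layers

inp = L.Input(shape=[None])                                    # [batch, time]
x = L.Embedding(input_dim=1000, output_dim=5)(inp)             # [batch, time, 5]
x = L.Conv1D(filters=5, kernel_size=5, activation='relu')(x)   # [batch, time-4, 5]
x = L.GlobalMaxPooling1D()(x)                                  # max over the whole time axis -> [batch, 5]
print(keras.Model(inp, x).output_shape)                        # (None, 5)
```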

If it still doesn't work after switching to global max pooling, please ping me again in this very issue.
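For reference, here is one way the fixed model could look. This is a minimal sketch, not the official solution: it assumes tensorflow.keras, the `tokens` and `categorical_vectorizer` objects from the seminar notebook, and that the categorical features arrive as a fixed-length vector of size n_cat_features:

```python
from tensorflow import keras
L = keras.layers

def build_model(n_tokens=len(tokens),
                n_cat_features=len(categorical_vectorizer.vocabulary_),
                hid_size=64):
    l_title = L.Input(shape=[None], name="Title")
    l_descr = L.Input(shape=[None], name="FullDescription")
    l_categ = L.Input(shape=[n_cat_features], name="Categorical")

    # Title branch: embed -> convolve -> pool over the entire time axis
    x_t = L.Embedding(n_tokens, hid_size, name="Title_Embedding")(l_title)
    x_t = L.Conv1D(hid_size, kernel_size=3, activation='relu')(x_t)
    x_t = L.GlobalMaxPooling1D()(x_t)                    # [batch, hid_size]

    # Description branch: same structure
    x_d = L.Embedding(n_tokens, hid_size, name="FullDescription_Embedding")(l_descr)
    x_d = L.Conv1D(hid_size, kernel_size=3, activation='relu')(x_d)
    x_d = L.GlobalMaxPooling1D()(x_d)                    # [batch, hid_size]

    # Categorical branch: already a fixed-size vector, so a dense layer suffices
    x_c = L.Dense(hid_size, activation='relu')(l_categ)  # [batch, hid_size]

    # All branches are now 2-dimensional, so they concatenate over the units axis
    x = L.Concatenate()([x_t, x_d, x_c])
    output_layer = L.Dense(1)(x)

    model = keras.Model(inputs=[l_title, l_descr, l_categ], outputs=[output_layer])
    model.compile('adam', 'mean_squared_error', metrics=['mean_absolute_error'])
    return model
```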
