If I run the code directly, everything works correctly.
One day, I deleted these lines:

#conv10 = Conv2D(2, (1, 1), activation='relu', padding='same')(conv9)
#conv10 = core.Reshape((2,patch_height*patch_width))(conv10)
#conv10 = core.Permute((2,1))(conv10)
....
....
#patches_masks_train = masks_Unet(patches_masks_train) #reduce memory consumption

And when I train the model, I see:

1265s - loss: 14.6464 - acc: 0.2041 - val_loss: 14.2844 - val_acc: 0.2732
Epoch 2/150
Epoch 00001: val_acc did not improve
1425s - loss: 14.1214 - acc: 0.2283 - val_loss: 14.2647 - val_acc: 0.2608
...

Why must I reshape the mask?
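For context, here is a minimal sketch of what those deleted lines do, assuming Keras 2 with channels_first data; the patch size of 48x48 is an example value, and the single conv9 layer is only a stand-in for the full U-Net body:

from keras.layers import Input, Conv2D, Reshape, Permute, Activation
from keras.models import Model

patch_height, patch_width = 48, 48  # example patch size

inputs = Input(shape=(1, patch_height, patch_width))
# ... the real U-Net layers go here; this single conv is only a stand-in
conv9 = Conv2D(32, (3, 3), activation='relu', padding='same',
               data_format='channels_first')(inputs)

# 1x1 conv: two per-pixel class scores, shape (2, H, W)
conv10 = Conv2D(2, (1, 1), activation='relu', padding='same',
                data_format='channels_first')(conv9)
# flatten the spatial grid: shape (2, H*W)
conv10 = Reshape((2, patch_height * patch_width))(conv10)
# move the class axis last: shape (H*W, 2)
conv10 = Permute((2, 1))(conv10)
# softmax over the last axis gives a 2-class distribution per pixel
outputs = Activation('softmax')(conv10)

model = Model(inputs=inputs, outputs=outputs)
model.compile(optimizer='sgd', loss='categorical_crossentropy',
              metrics=['accuracy'])

Without the Reshape/Permute pair, the class axis is no longer the last one, so the softmax and the loss are computed over the wrong dimension; a loss stuck around 14 with near-chance accuracy, as in the log above, is a plausible symptom of that mismatch.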
I found that "categorical_crossentropy" needs targets of shape (nb_samples, nb_classes).
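In other words, since the network's per-patch output is (H*W, 2), the training masks must be brought into the same layout, which is presumably what the deleted masks_Unet call does. A minimal sketch of that idea, where the function name masks_to_categorical and the (N, 1, H, W) binary-mask layout are assumptions, not the repo's actual code:

import numpy as np

def masks_to_categorical(masks):
    # masks: binary array of shape (N, 1, H, W) with values in {0, 1}
    n, _, h, w = masks.shape
    flat = masks.reshape(n, h * w)           # (N, H*W)
    targets = np.empty((n, h * w, 2), dtype=np.float32)
    targets[..., 0] = 1 - flat               # class 0: background
    targets[..., 1] = flat                   # class 1: vessel/foreground
    return targets                           # (N, H*W, 2), matches the model output

With this layout, each of the H*W pixel positions is treated as one sample with a 2-way class distribution, which is exactly the (nb_samples, nb_classes) shape categorical_crossentropy expects; skipping the reshape on either the model output or the masks breaks that correspondence.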