Fine-tuning pre-trained VGG16 not possible since `add` method is not defined for `Model` class?
#4040
Comments
You should recover the output you want to build on top of, and use it to instantiate a new model. If you want to use an intermediate layer, you can do:

```python
initial_model = VGG16(weights="imagenet", include_top=False)
last = initial_model.output  # the output of the pre-trained base, not model.output
x = Flatten()(last)
x = Dense(1024, activation='relu')(x)
preds = Dense(200, activation='softmax')(x)
model = Model(initial_model.input, preds)
```

This is detailed in the docs, too: https://keras.io/applications/#fine-tune-inceptionv3-on-a-new-set-of-classes |
I am trying to get accuracy from my predicted results from the VGG16 model. However, the decode_predictions() function only returns a tuple containing the ID and label, not the accuracy. Is there any way for decode_predictions() to return the accuracy as well? # Get label for the images in directory |
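For what it's worth, `decode_predictions()` actually returns a third element in each tuple: the class score (softmax probability). A minimal sketch, using a hand-built prediction array in place of a real `model.predict(...)` output:

```python
import numpy as np
from keras.applications.vgg16 import decode_predictions

# Stand-in for model.predict(...): one sample, 1000 ImageNet class scores.
preds = np.zeros((1, 1000), dtype="float32")
preds[0, 282] = 0.9  # arbitrary class indices for illustration
preds[0, 281] = 0.1

# Each entry is a (class_id, class_name, score) tuple, sorted by score.
decoded = decode_predictions(preds, top=2)[0]
for class_id, label, score in decoded:
    print(class_id, label, score)
```

The `score` element is the per-class probability, which is what the comment above is asking for (per-image confidence, as opposed to dataset-level accuracy, which you would compute yourself against ground-truth labels).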
With Keras 2.0.2 this does not work. Trying with AveragePooling2D()(x) there are no more errors. # this is the model we will train: model = Model(input=base_model.input, output=x). Can anyone help me? |
Same problem here, it is not able to recover the size of the base_model. |
@ptisseur @riccardosamperna Did you guys figure it out? |
@ptisseur My code works. Add a prediction layer before "model = Model(...)". If it still does not work, try renaming the prediction layer. |
I solved it by creating "new_model = Sequential()" and then copying all the layers from "applications.VGG16(...)" into this new model.
|
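A hedged sketch of the layer-copying workaround described above (the exact original code is not shown in the thread); `weights=None` is only to avoid the ImageNet download, and the head sizes (200 classes) are illustrative:

```python
from keras.applications.vgg16 import VGG16
from keras.layers import Dense, Flatten
from keras.models import Sequential

# weights=None keeps this sketch lightweight; use weights="imagenet" in practice.
vgg = VGG16(weights=None, include_top=False, input_shape=(224, 224, 3))

# Copy the VGG16 layers into a Sequential model, skipping the InputLayer,
# so that .add() becomes available for appending a new head.
new_model = Sequential()
for layer in vgg.layers[1:]:
    new_model.add(layer)

# Now a new classification head can be added with .add().
new_model.add(Flatten())
new_model.add(Dense(200, activation="softmax"))  # illustrative class count
new_model.build((None, 224, 224, 3))
```

This sidesteps the original problem (`Model` has no `.add`) by rebuilding the network as a `Sequential`, which works here because VGG16 is a purely linear stack of layers.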
@edoven |
EDIT: Looking at the actual VGG model, I realised the tutorial might be wrong. The tutorial suggests we freeze all layers up to 25, but that would freeze all the layers in VGG16! If we print the summary of VGG16, the first 15 layers are the convolutional blocks that we want to freeze, and the block that we want to unfreeze seems to begin at layer 15. @edoven Thanks for the help on creating a new model out of the VGG model! I believe your code is slightly incorrect with respect to the original tutorial, which instructs that the first 25 (though I think it should be 15) layers be frozen, but not all of them; the last layers comprise the final convolutional block and are to be fine-tuned along with the top model. Forgive (and correct) me if I'm wrong, but I think your code should be changed from this:
to this:
|
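A minimal sketch of the freezing scheme the comment argues for, freezing the first 15 layers (the input layer plus conv blocks 1-4) and leaving the last convolutional block trainable; `weights=None` is only to avoid the download here:

```python
from keras.applications.vgg16 import VGG16

# weights=None keeps this sketch lightweight; use weights="imagenet" in practice.
base = VGG16(weights=None, include_top=False)

# Freeze the first 15 layers (through block4_pool); block5 stays trainable,
# so only the last convolutional block is fine-tuned with the new top model.
for layer in base.layers[:15]:
    layer.trainable = False

for layer in base.layers:
    print(layer.name, layer.trainable)
```

Freezing all 25+ indices would indeed leave nothing in VGG16 to fine-tune, which is the bug the comment points out.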
@fchollet Thank you very much for sharing your code, you are a great man. And @edoven, excellent contribution, it works for me at least. Thank you very much for your help! Here is my version of the code, for Spanish-speaking people.
|
I am trying to fine-tune the pre-trained VGG16 network from keras.applications.VGG16. I'm doing the standard approach that @fchollet detailed in his blog post.
My code is as follows:
The `FCHeadNet` class simply defines a `Sequential` model. However, when I try to add `head` to the `model` I receive the following error message:

Inspecting the `vgg16.py` source I see that `VGG16` is defined as a `Model` versus a `Sequential`, thus there is no `.add` method. My question is therefore: How do I fine-tune the pre-trained VGG16 class? Or is this simply not possible and I need to define VGG16 by hand and load the weights manually?