Fine-tuning pre-trained VGG16 not possible since `add` method is not defined for `Model` class? #4040
My code is as follows:
```python
# load the VGG16 network, ensuring the head FC layer sets are
# left off
print("[INFO] loading VGG16...")
model = VGG16(weights="imagenet", include_top=False)

# loop over the layers in VGG (until the final CONV block) and
# freeze the layers -- we will only be fine-tuning the final CONV
# block along with our dense FC layers
for layer in model.layers[:15]:
    layer.trainable = False

# load the FCHeadNet and add it to the convolutional base
print("[INFO] loading head...")
head = FCHeadNet.build((512 * 7 * 7,), 17, dropout=True)
head.load_weights(args["head"])
model.add(head)
```
This fails with:

```
File "finetune.py", line 30, in
    model.add(head)
AttributeError: 'Model' object has no attribute 'add'
```
How do I fine-tune the pre-trained VGG16 class? Or is this simply not possible and I need to define VGG16 by hand and load the weights manually?
You should recover the output you want to build on top of, and use it to instantiate a new model.
If you want to use an intermediate layer, you can use
```python
initial_model = VGG16(weights="imagenet", include_top=False)
last = initial_model.output

x = Flatten()(last)
x = Dense(1024, activation='relu')(x)
preds = Dense(200, activation='softmax')(x)

model = Model(initial_model.input, preds)
```
This is detailed in the docs, too: https://keras.io/applications/#fine-tune-inceptionv3-on-a-new-set-of-classes
I am trying to get accuracy from my predicted results from the VGG16 model. However, the decode_predictions() function only returns a tuple containing ID and Label and not accuracy. Is there any way for decode_predictions() to return accuracy as well?
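For what it's worth, `decode_predictions()` in `keras.applications` does return a score as the third element of each `(class_id, class_name, score)` tuple, but that score is a per-image confidence, not accuracy. Accuracy is a dataset-level metric computed by comparing each image's top prediction with its ground-truth label. A minimal numpy sketch (the `probs`/`labels` arrays here are made-up toy data):

```python
import numpy as np

def top1_accuracy(probs, labels):
    """probs: (n_images, n_classes) softmax outputs; labels: (n_images,) true class ids."""
    # take the highest-scoring class per image and average the matches
    predicted = np.argmax(probs, axis=1)
    return float(np.mean(predicted == labels))

# toy example: 3 "images", 4 classes
probs = np.array([[0.1, 0.7, 0.1, 0.1],
                  [0.6, 0.2, 0.1, 0.1],
                  [0.2, 0.2, 0.5, 0.1]])
labels = np.array([1, 0, 3])
print(top1_accuracy(probs, labels))  # 2 of 3 top-1 predictions match
```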
# Get label for the images in directory
With Keras 2.0.2 this does not work.
With `AveragePooling2D()(x)` there are no more errors.
```python
# this is the model we will train
#model = Model(input=base_model.input, output=x)
```
Can anyone help me?
I solved it by creating `new_model = Sequential()` and then copying all the layers from `applications.VGG16(..)` into this new model.
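A minimal sketch of that layer-copying approach (using `weights=None` here only so the sketch builds without downloading the ImageNet weights; the `input_shape` and head sizes are illustrative, not from the original post):

```python
from tensorflow.keras.applications import VGG16
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Flatten, Dense

# build the headless convolutional base (use weights="imagenet" for
# real fine-tuning; weights=None just avoids the download here)
vgg = VGG16(weights=None, include_top=False, input_shape=(224, 224, 3))

# copy the functional model's layers into a fresh Sequential model
new_model = Sequential()
for layer in vgg.layers:
    new_model.add(layer)

# freeze everything except the last convolutional block (block5)
for layer in new_model.layers:
    if not layer.name.startswith("block5"):
        layer.trainable = False

# bolt an illustrative classification head on top
new_model.add(Flatten())
new_model.add(Dense(256, activation="relu"))
new_model.add(Dense(17, activation="softmax"))
```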
EDIT: Actually, looking at the VGG model, I realised the tutorial might be wrong. The tutorial suggests we freeze all layers up to 25, but that would freeze all the layers in VGG16!
If we print the summary of VGG16, it looks like the first 15 layers are the convolutional blocks that we want to freeze, and the last block, which we want to unfreeze, begins at layer 15.
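A quick way to check those indices yourself (a sketch; `weights=None` just skips the ImageNet download, the layer structure is identical either way):

```python
from tensorflow.keras.applications import VGG16

# build the headless VGG16 and inspect its layer indices
model = VGG16(weights=None, include_top=False)

# print index and name of every layer to see where block5 starts
for i, layer in enumerate(model.layers):
    print(i, layer.name)

# model.layers[:15] covers the input layer plus blocks 1-4, so
# freezing it leaves block5 (starting at index 15) trainable
for layer in model.layers[:15]:
    layer.trainable = False
```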
@edoven Thanks for the help on creating a new model out of the VGG model!
I believe your code is slightly incorrect with respect to the original tutorial, which instructs that the first 25 (though I think it should be 15) layers be frozen, but not all of them; the last layers comprise the final convolutional block and are to be fine-tuned along with the top model.
Forgive (and correct) me if I'm wrong, but I think your code should be changed from this: