Are you sure `normalize=True` should be retained in the image preloader?
The original VGG16 authors also seem to use BGR images rather than RGB, and they applied only zero-mean centering, not std normalization. It all depends on how VGG16.tflearn was trained.
The example at https://github.com/tflearn/tflearn/blob/master/examples/images/vgg_network_finetuning.py shows how to fine-tune the pre-trained VGG16 model.
However, the trained model might have applied mean subtraction and std normalization, where the mean/std values were computed over the entire pre-training dataset.
If we need to fine-tune the model (or use it as-is for classification), shouldn't we apply exactly the same preprocessing to new images?
The fine-tuning example above applies normalization computed from the input images, but this will not match the statistics of the entire pre-training dataset.