I am a little confused.
If both phases use the same training data, and the fine_tune phase only trains the last few layers, then this step seems somewhat redundant with train_initialization.py.
Looking forward to your guidance.
As I understand it, if you unlocked all the layers and started from the base weights, you'd have to train for a VERY long time just to reach a decent accuracy, because the randomly initialized new layers would push large errors back through the network and damage the pretrained weights of the existing layers. So you first train with the base model locked, and then once you have good weights in your higher-level layers, you unlock layers and train the core model a bit more. This second step is what's called "fine-tuning" your model.
I don't claim to be an expert at it, but it's supposedly the correct way to squeeze a few more percent out of your model once you hit a wall. I learned this from two notable books, though the term seems to have LOTS of meanings when I google it. For me, transfer learning got the model to 90%, and fine-tuning brought it to 93%. Was that just because it was trained longer? I can't say empirically, but I trust the books.
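To make the two phases concrete, here is a minimal Keras-style sketch of the workflow I'm describing. This is not the repo's actual code: the MobileNetV2 backbone, the 10-class head, the dataset name `train_ds`, and the learning rates are all placeholder assumptions, and `train_initialization.py` / the fine_tune phase may differ in the details.

```python
# Two-phase transfer learning + fine-tuning sketch (assumptions noted above).
import tensorflow as tf
from tensorflow.keras import layers, models

# Hypothetical pretrained backbone standing in for whatever the repo uses.
base = tf.keras.applications.MobileNetV2(
    include_top=False, pooling="avg",
    input_shape=(224, 224, 3), weights="imagenet")

# Phase 1 (initialization / transfer learning): freeze the backbone and
# train only the newly added classification head.
base.trainable = False
model = models.Sequential([base, layers.Dense(10, activation="softmax")])
model.compile(optimizer=tf.keras.optimizers.Adam(1e-3),
              loss="sparse_categorical_crossentropy", metrics=["accuracy"])
# model.fit(train_ds, epochs=10)   # same training data in both phases

# Phase 2 (fine-tuning): unlock the backbone and keep training with a much
# lower learning rate so the pretrained weights are only nudged, not wrecked
# by large gradients coming from the freshly trained head.
base.trainable = True
model.compile(optimizer=tf.keras.optimizers.Adam(1e-5),
              loss="sparse_categorical_crossentropy", metrics=["accuracy"])
# model.fit(train_ds, epochs=5)
```

The key point is the much smaller learning rate in phase 2: because the head is already trained, unlocking the core layers at that point refines the pretrained weights instead of destroying them, which is why the two phases aren't redundant even on the same data.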