Fine Tuning or Training Certain Layers Exclusively
Fine-Tuning
Fine-tuning is the process of initializing a network with weights from a pretrained model and then continuing to train only specific sections of it, in order to improve results on a new task.
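In Caffe, fine-tuning usually starts by copying pretrained weights into a solver's net before training. A minimal pycaffe sketch, assuming hypothetical filenames `solver.prototxt` and `pretrained.caffemodel`:

```python
import caffe

caffe.set_mode_gpu()  # or caffe.set_mode_cpu()

# build the solver (and its train/test nets) from the solver definition
solver = caffe.SGDSolver('solver.prototxt')

# copy weights for every layer whose name matches the pretrained model;
# layers with new names keep their fresh initialization
solver.net.copy_from('pretrained.caffemodel')

solver.solve()  # train as usual
```

The command-line equivalent is `caffe train -solver solver.prototxt -weights pretrained.caffemodel`.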
Making Layers Not Learn
To stop a layer from learning further, set its param attributes in your prototxt. A layer has one param block per learnable blob; for most layers the first block corresponds to the weights and the second to the bias. The effective learning rate for each blob is the solver's base_lr multiplied by that blob's lr_mult, so setting lr_mult: 0 freezes the blob.
For example:
layer {
  name: "example"
  type: "Convolution"
  ...
  # first param block: the weights
  param {
    lr_mult: 0    # learning rate multiplier for the weights; 0 freezes them
    decay_mult: 1
  }
  # second param block: the bias
  param {
    lr_mult: 0    # learning rate multiplier for the bias; 0 freezes it
    decay_mult: 0
  }
}
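To train certain layers exclusively, apply the same idea in reverse: set lr_mult: 0 on every layer except the ones you want to update. One way to confirm that a frozen layer really stays fixed is to compare its weights before and after a few solver steps. A minimal pycaffe sketch, assuming the frozen layer is named "example" and reusing the hypothetical `solver.prototxt` and `pretrained.caffemodel` from above:

```python
import numpy as np
import caffe

solver = caffe.SGDSolver('solver.prototxt')
solver.net.copy_from('pretrained.caffemodel')

# snapshot the frozen layer's weight blob (params[name][0]) before training
before = solver.net.params['example'][0].data.copy()

solver.step(100)  # run 100 training iterations

# with lr_mult: 0 on both param blocks, the weights must be unchanged
assert np.array_equal(before, solver.net.params['example'][0].data)
```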