# Development
## Developing new layers

- Add a class declaration for your layer to `include/caffe/layers/your_layer.hpp` (a hypothetical header sketch follows this list).
  - Include an inline implementation of `type`, overriding the method `virtual inline const char* type() const { return "YourLayerName"; }` and replacing `YourLayerName` with your layer's name.
  - Implement the `{*}Blobs()` methods to specify blob number requirements; see `include/caffe/layer.hpp` for the inline `{*}Blobs()` methods that enforce strict top and bottom Blob counts.
  - Omit the `*_gpu` declarations if you'll only be implementing CPU code.
- Implement your layer in `src/caffe/layers/your_layer.cpp` (a matching implementation sketch follows this list):
  - (optional) `LayerSetUp` for one-time initialization: reading parameters, fixed-size allocations, etc.
  - `Reshape` for computing the sizes of top blobs, allocating buffers, and any other work that depends on the shapes of bottom blobs.
  - `Forward_cpu` for the function your layer computes.
  - `Backward_cpu` for its gradient (optional -- a layer can be forward-only).
- (Optional) Implement the GPU versions `Forward_gpu` and `Backward_gpu` in `src/caffe/layers/your_layer.cu` (see the GPU sketch below).
- If needed, declare parameters in `src/caffe/proto/caffe.proto`, using (and then incrementing) the "next available layer-specific ID" declared in a comment above `message LayerParameter` (a sketch follows the list).
- Instantiate and register your layer in your cpp file with the macros provided in `layer_factory.hpp`. Assuming you have a new layer `MyAwesomeLayer`, you can do so with the following two lines:

  ```cpp
  INSTANTIATE_CLASS(MyAwesomeLayer);
  REGISTER_LAYER_CLASS(MyAwesome);
  ```

  Note that you should put the registration code in your own cpp file, so your implementation of the layer is self-contained.
- Optionally, you can also register a Creator if your layer has multiple engines. For an example of how to define a creator function and register it, see `GetConvolutionLayer` in `caffe/layer_factory.cpp`.
- Write tests in `test/test_your_layer.cpp`. Use `test/test_gradient_check_util.hpp` to check that your Forward and Backward implementations are in numerical agreement (a test sketch follows the list).
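To make the header step concrete, here is a minimal sketch of a hypothetical `MyAwesomeLayer` that simply doubles its input. The layer name, file names, and the doubling operation are invented for illustration; the overridden methods (`type`, the `{*}Blobs()` methods, `Reshape`, `Forward_*`, `Backward_*`) are the ones Caffe's `Layer` base class defines. The declaration would live in `include/caffe/layers/my_awesome_layer.hpp`:

```cpp
#ifndef CAFFE_MY_AWESOME_LAYER_HPP_
#define CAFFE_MY_AWESOME_LAYER_HPP_

#include <vector>

#include "caffe/blob.hpp"
#include "caffe/layer.hpp"
#include "caffe/proto/caffe.pb.h"

namespace caffe {

// Hypothetical example layer: doubles every input value.
template <typename Dtype>
class MyAwesomeLayer : public Layer<Dtype> {
 public:
  explicit MyAwesomeLayer(const LayerParameter& param)
      : Layer<Dtype>(param) {}
  virtual void Reshape(const vector<Blob<Dtype>*>& bottom,
      const vector<Blob<Dtype>*>& top);

  // The type string used to select this layer in prototxt files.
  virtual inline const char* type() const { return "MyAwesome"; }
  // Blob count requirements: exactly one bottom and one top blob.
  virtual inline int ExactNumBottomBlobs() const { return 1; }
  virtual inline int ExactNumTopBlobs() const { return 1; }

 protected:
  virtual void Forward_cpu(const vector<Blob<Dtype>*>& bottom,
      const vector<Blob<Dtype>*>& top);
  virtual void Backward_cpu(const vector<Blob<Dtype>*>& top,
      const vector<bool>& propagate_down,
      const vector<Blob<Dtype>*>& bottom);
  // Omit these two if you only implement CPU code; they are declared
  // here because a GPU sketch is given below.
  virtual void Forward_gpu(const vector<Blob<Dtype>*>& bottom,
      const vector<Blob<Dtype>*>& top);
  virtual void Backward_gpu(const vector<Blob<Dtype>*>& top,
      const vector<bool>& propagate_down,
      const vector<Blob<Dtype>*>& bottom);
};

}  // namespace caffe

#endif  // CAFFE_MY_AWESOME_LAYER_HPP_
```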
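The matching implementation in `src/caffe/layers/my_awesome_layer.cpp` (still the same hypothetical example) fills in `Reshape`, `Forward_cpu`, and `Backward_cpu`, and ends with the registration macros described above:

```cpp
#include <vector>

#include "caffe/layers/my_awesome_layer.hpp"

namespace caffe {

// LayerSetUp is omitted: this example needs no one-time initialization.

template <typename Dtype>
void MyAwesomeLayer<Dtype>::Reshape(const vector<Blob<Dtype>*>& bottom,
    const vector<Blob<Dtype>*>& top) {
  // The output has exactly the same shape as the input.
  top[0]->ReshapeLike(*bottom[0]);
}

template <typename Dtype>
void MyAwesomeLayer<Dtype>::Forward_cpu(const vector<Blob<Dtype>*>& bottom,
    const vector<Blob<Dtype>*>& top) {
  const Dtype* bottom_data = bottom[0]->cpu_data();
  Dtype* top_data = top[0]->mutable_cpu_data();
  for (int i = 0; i < bottom[0]->count(); ++i) {
    top_data[i] = bottom_data[i] * Dtype(2);  // the invented "awesome" op
  }
}

template <typename Dtype>
void MyAwesomeLayer<Dtype>::Backward_cpu(const vector<Blob<Dtype>*>& top,
    const vector<bool>& propagate_down,
    const vector<Blob<Dtype>*>& bottom) {
  if (propagate_down[0]) {
    const Dtype* top_diff = top[0]->cpu_diff();
    Dtype* bottom_diff = bottom[0]->mutable_cpu_diff();
    for (int i = 0; i < bottom[0]->count(); ++i) {
      bottom_diff[i] = top_diff[i] * Dtype(2);  // d(2x)/dx = 2
    }
  }
}

#ifdef CPU_ONLY
STUB_GPU(MyAwesomeLayer);  // stubs out the *_gpu methods in CPU-only builds
#endif

INSTANTIATE_CLASS(MyAwesomeLayer);
REGISTER_LAYER_CLASS(MyAwesome);

}  // namespace caffe
```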
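For the optional GPU step, a sketch of `src/caffe/layers/my_awesome_layer.cu` could reuse Caffe's `caffe_gpu_scale` helper from `caffe/util/math_functions.hpp` rather than writing a custom kernel; this assumes the `*_gpu` declarations shown in the header above:

```cpp
#include <vector>

#include "caffe/layers/my_awesome_layer.hpp"
#include "caffe/util/math_functions.hpp"

namespace caffe {

template <typename Dtype>
void MyAwesomeLayer<Dtype>::Forward_gpu(const vector<Blob<Dtype>*>& bottom,
    const vector<Blob<Dtype>*>& top) {
  // top = 2 * bottom, computed on the GPU by a cuBLAS-backed helper.
  caffe_gpu_scale(bottom[0]->count(), Dtype(2),
      bottom[0]->gpu_data(), top[0]->mutable_gpu_data());
}

template <typename Dtype>
void MyAwesomeLayer<Dtype>::Backward_gpu(const vector<Blob<Dtype>*>& top,
    const vector<bool>& propagate_down,
    const vector<Blob<Dtype>*>& bottom) {
  if (propagate_down[0]) {
    caffe_gpu_scale(top[0]->count(), Dtype(2),
        top[0]->gpu_diff(), bottom[0]->mutable_gpu_diff());
  }
}

INSTANTIATE_LAYER_GPU_FUNCS(MyAwesomeLayer);

}  // namespace caffe
```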
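If the hypothetical layer took a parameter (say, the constant it multiplies by), the parameter message would be declared in `src/caffe/proto/caffe.proto` along these lines. The field ID `147` is purely illustrative; the real one must be the "next available layer-specific ID" from the comment above `message LayerParameter`:

```protobuf
// Inside message LayerParameter (illustrative ID only -- use the next
// available layer-specific ID and then increment the comment):
optional MyAwesomeParameter my_awesome_param = 147;

// At the top level of caffe.proto:
message MyAwesomeParameter {
  // Hypothetical coefficient, read once in LayerSetUp via
  // this->layer_param_.my_awesome_param().coeff().
  optional float coeff = 1 [default = 2.0];
}
```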
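Finally, a gradient-check test for the same hypothetical layer, following the fixture pattern used throughout `src/caffe/test/`, might look like this in `src/caffe/test/test_my_awesome_layer.cpp`. `GradientChecker` perturbs each input numerically and compares the finite-difference estimate against what `Backward` computes:

```cpp
#include <vector>

#include "gtest/gtest.h"

#include "caffe/blob.hpp"
#include "caffe/filler.hpp"
#include "caffe/layers/my_awesome_layer.hpp"

#include "caffe/test/test_caffe_main.hpp"
#include "caffe/test/test_gradient_check_util.hpp"

namespace caffe {

template <typename TypeParam>
class MyAwesomeLayerTest : public MultiDeviceTest<TypeParam> {
  typedef typename TypeParam::Dtype Dtype;

 protected:
  MyAwesomeLayerTest()
      : blob_bottom_(new Blob<Dtype>(2, 3, 4, 5)),
        blob_top_(new Blob<Dtype>()) {
    // Fill the input with Gaussian noise so the check is non-trivial.
    FillerParameter filler_param;
    GaussianFiller<Dtype> filler(filler_param);
    filler.Fill(blob_bottom_);
    blob_bottom_vec_.push_back(blob_bottom_);
    blob_top_vec_.push_back(blob_top_);
  }
  virtual ~MyAwesomeLayerTest() {
    delete blob_bottom_;
    delete blob_top_;
  }

  Blob<Dtype>* const blob_bottom_;
  Blob<Dtype>* const blob_top_;
  vector<Blob<Dtype>*> blob_bottom_vec_;
  vector<Blob<Dtype>*> blob_top_vec_;
};

TYPED_TEST_CASE(MyAwesomeLayerTest, TestDtypesAndDevices);

TYPED_TEST(MyAwesomeLayerTest, TestGradient) {
  typedef typename TypeParam::Dtype Dtype;
  LayerParameter layer_param;
  MyAwesomeLayer<Dtype> layer(layer_param);
  // Compare the analytic gradient from Backward against a numeric
  // finite-difference estimate of the Forward function.
  GradientChecker<Dtype> checker(1e-2, 1e-3);
  checker.CheckGradientExhaustive(&layer, this->blob_bottom_vec_,
      this->blob_top_vec_);
}

}  // namespace caffe
```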
## Forward-Only Layers
If you want to write a layer that you will only ever include in a test net, you do not have to code the backward pass. For example, you might want a layer that measures performance metrics at test time that haven't already been implemented.
Doing this is very simple. You can write an inline implementation of `Backward_cpu` (or `Backward_gpu`) together with the definition of your layer in `include/caffe/layers/your_layer.hpp` that looks like:
```cpp
virtual void Backward_cpu(const vector<Blob<Dtype>*>& top,
    const vector<bool>& propagate_down,
    const vector<Blob<Dtype>*>& bottom) {
  NOT_IMPLEMENTED;
}
```
The `NOT_IMPLEMENTED` macro (defined in `common.hpp`) logs a fatal "Not Implemented Yet" error, aborting the program if the method is ever called. For examples, look at the accuracy layer (`accuracy_layer.hpp`) and threshold layer (`threshold_layer.hpp`) definitions.