Spatial Transformer Layer #3114
I am working on an initial version of this with an affine transformation and a bilinear sampling kernel. In my initial design, the grid generator and resampler form a single layer, and the localization net can be built separately for flexibility. |
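The two pieces described above can be sketched in NumPy. This is a hypothetical illustration, not the actual Caffe layer: `affine_grid` and `bilinear_sample` are names invented here, and the localization net that predicts theta is assumed to live elsewhere.

```python
import numpy as np

def affine_grid(theta, out_h, out_w):
    """Map a 2x3 affine matrix to sampling coordinates in [-1, 1]."""
    ys, xs = np.meshgrid(np.linspace(-1, 1, out_h),
                         np.linspace(-1, 1, out_w), indexing="ij")
    ones = np.ones_like(xs)
    coords = np.stack([xs.ravel(), ys.ravel(), ones.ravel()])  # (3, H*W)
    grid = theta @ coords                                      # (2, H*W)
    return grid.reshape(2, out_h, out_w)

def bilinear_sample(img, grid):
    """Sample a 2-D image at the (possibly fractional) grid locations."""
    h, w = img.shape
    # Convert normalized coordinates back to pixel indices.
    x = (grid[0] + 1) * (w - 1) / 2
    y = (grid[1] + 1) * (h - 1) / 2
    x0 = np.clip(np.floor(x).astype(int), 0, w - 2)
    y0 = np.clip(np.floor(y).astype(int), 0, h - 2)
    dx, dy = x - x0, y - y0
    # Weighted sum of the four surrounding pixels.
    return (img[y0, x0] * (1 - dx) * (1 - dy)
            + img[y0, x0 + 1] * dx * (1 - dy)
            + img[y0 + 1, x0] * (1 - dx) * dy
            + img[y0 + 1, x0 + 1] * dx * dy)

# The identity transform should reproduce the input image.
theta = np.array([[1.0, 0.0, 0.0],
                  [0.0, 1.0, 0.0]])
img = np.arange(16, dtype=float).reshape(4, 4)
out = bilinear_sample(img, affine_grid(theta, 4, 4))
```

Keeping the grid generator and sampler together, as the comment proposes, means a single layer can take theta from any upstream localization net.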
I am also interested in the Spatial Transformer Layer (SPL). Is it possible to embed an SPL in AlexNet? |
@ducha-aiki Thank you so much. I will take a look at his code. Did you try his transformer layer? Because my OS is Linux, I cannot simply compile his Caffe. |
@kevinlin311tw Not yet. I suppose you could just copy-paste transform_layer.cpp/cu, the entry from caffe.proto, and the one from the header into your build. |
@sergeyk Are you guys planning to include a spatial transformer network in Caffe? |
With @XiaoxiaoGuo's implementation, is it possible to perturb the transformation parameters (theta) randomly during training, similar to how the dropout layer turns some neurons off at random? If so, you could generate spatial perturbations of the data during the learning phase. This might be interesting for some people, including myself. |
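The idea above can be sketched as a small wrapper that jitters theta with zero-mean noise at training time only, mirroring how dropout is active only during training. This is a hypothetical illustration; `perturb_theta` and its `scale` parameter are names chosen here, not part of any of the linked implementations.

```python
import numpy as np

rng = np.random.default_rng(0)

def perturb_theta(theta, scale=0.05, train=True):
    """Add zero-mean Gaussian noise to the affine parameters while training."""
    if not train:
        # Deterministic at test time, just like dropout.
        return theta
    return theta + rng.normal(0.0, scale, size=theta.shape)

theta = np.array([[1.0, 0.0, 0.0],
                  [0.0, 1.0, 0.0]])
noisy = perturb_theta(theta)               # slightly perturbed transform
clean = perturb_theta(theta, train=False)  # unchanged at test time
```

Each noisy theta yields a slightly different sampling grid, so the downstream sampler effectively sees a spatially augmented version of the input on every pass.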
Here is another implementation including complete examples. https://github.com/daerduoCarey/SpatialTransformerLayer |
Here is a ready-to-compile caffe, including the implementation by @daerduoCarey: |
@matthieudelaro I want to build the spatial transformer layer with py-faster-rcnn. Can you list the steps for me? |
@matthieudelaro Have you successfully added the SpatialTransformerLayer with py-faster-rcnn? |
@matthieudelaro @whuhxb Have you successfully added the SpatialTransformerLayer with py-faster-rcnn? |
Hi, I'm now trying STN for image recognition, but have not yet applied it with py-faster-rcnn. How about you? |
Any resource on spatial transformer networks in py-faster-rcnn is appreciated. Thanks |
This layer seems to help with fine-grained localization. Link to the paper by Max Jaderberg et al.:
http://arxiv.org/abs/1506.02025
Torch implementation is here
https://github.com/qassemoquab/stnbhwd
Theano/Lasagne implementation/doc is here
https://lasagne.readthedocs.org/en/latest/modules/layers/special.html#lasagne.layers.TransformerLayer