
VLAD Parameters #19

Closed
mpkuse opened this issue Dec 3, 2018 · 7 comments

mpkuse commented Dec 3, 2018

Hi,
I am trying to look at the VLAD parameters, i.e. w, b and c (the cluster centroids etc.), as defined in the NetVLAD paper.

I downloaded the best model:

load('vd16_pitts30k_conv5_3_vlad_preL2_intra_white.mat')

display( sprintf( '# layers = %d', size(net.layers,2) ) )

for i = 1:size(net.layers, 2)
    mat = cell2mat(net.layers(i));

    % print type and name for every layer that loaded as a struct
    if isfield(mat, 'name')
        ty = getfield(mat, 'type');
        name = getfield(mat, 'name');
        display(sprintf('%d: %s : %s', i, ty, name));
    end
end

Running this script gave:

# layers = 35
1: conv : conv1_1
2: relu : relu1_1
3: conv : conv1_2
4: pool : pool1
5: relu : relu1_2
6: conv : conv2_1
7: relu : relu2_1
8: conv : conv2_2
9: pool : pool2
10: relu : relu2_2
11: conv : conv3_1
12: relu : relu3_1
13: conv : conv3_2
14: relu : relu3_2
15: conv : conv3_3
16: pool : pool3
17: relu : relu3_3
18: conv : conv4_1
19: relu : relu4_1
20: conv : conv4_2
21: relu : relu4_2
22: conv : conv4_3
23: pool : pool4
24: relu : relu4_3
25: conv : conv5_1
26: relu : relu5_1
27: conv : conv5_2
28: relu : relu5_2
29: conv : conv5_3
30: normalize : preL2
32: normalize : vlad:intranorm
34: conv : WPCA

So I tried to look at layers 30 and 32, but I fail to see the learned weights there; I do see the weights for the other layers though:

>> cell2mat( net.layers(30) )

  struct with fields:

        type: 'normalize'
        name: 'preL2'
       param: [1024 1.0000e-12 1 0.5000]
    precious: 0

>> cell2mat( net.layers(32) )

  struct with fields:

        type: 'normalize'
        name: 'vlad:intranorm'
       param: [1024 1.0000e-12 1 0.5000]
    precious: 0

Am I missing something?

Relja (Owner) commented Dec 3, 2018

Yes - NetVLAD is not a standard layer, so it is not included in MatConvNet by default. It is a custom layer implemented as a class, so you are missing one of these two things (or both):

  1. You need to download the NetVLAD code and make sure it is on the Matlab path. Otherwise Matlab won't know how to load the objects from the mat file and will leave the custom layers empty.

  2. Because these layers are implemented as classes, they are not structs but objects, so your code won't work: I'm not sure what cell2mat will do, and isfield only works for structs and returns false otherwise (https://www.mathworks.com/help/matlab/ref/isfield.html); use isprop for objects instead (https://www.mathworks.com/help/matlab/ref/isprop.html), as in the sketch after this list. As far as I know, all MatConvNet layers have .name, so you don't need to check for that; you probably added the check because you found some empty layers due to (1).
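
A minimal sketch of such a listing loop, assuming the NetVLAD code is on the path so that the custom layers actually load (isprop for object layers, isfield for struct layers; brace indexing replaces cell2mat):

% List every layer, whether it loaded as a plain struct or as a custom class object.
for i = 1:numel(net.layers)
    layer = net.layers{i};   % brace indexing avoids cell2mat entirely
    if isstruct(layer) && isfield(layer, 'name')
        fprintf('%d: %s : %s\n', i, layer.type, layer.name);
    elseif isobject(layer) && isprop(layer, 'name')
        fprintf('%d: %s : %s\n', i, class(layer), layer.name);
    else
        fprintf('%d: <empty - layer class probably not on the path>\n', i);
    end
end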

Relja closed this as completed Dec 3, 2018
mpkuse (Author) commented Dec 3, 2018

So, where are the weights for the netvlad layer stored? I wish to access those.

Relja (Owner) commented Dec 3, 2018

If you download the NetVLAD code and make sure it is on the path, loading the network from the mat file will work instead of producing empty layers. Then you can examine the loaded NetVLAD layer and access its weights, e.g. net.layers{31}.weights.
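
For illustration, a short sketch of that access pattern (the index 31 comes from the layer listing above, where vlad:core sits between preL2 at 30 and vlad:intranorm at 32; the exact contents of .weights depend on the layer implementation):

% Assumes the NetVLAD Matlab code is on the path so the custom layer class loads.
load('vd16_pitts30k_conv5_3_vlad_preL2_intra_white.mat');   % loads `net`

vladLayer = net.layers{31};   % the NetVLAD core layer
W = vladLayer.weights;        % cell array of learned parameter arrays
for k = 1:numel(W)
    fprintf('weights{%d}: %s\n', k, mat2str(size(W{k})));
end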

mpkuse (Author) commented Dec 6, 2018

I ran your demo code, computeRepresentation.m, and attempted to look at the weights. Under vlad:core I see only 2 weight arrays, both of dimensions (1, 1, 512, 64), but I was expecting to see 3 sets of weights for the NetVLAD layer, as mentioned in your NetVLAD paper.

Am I correct in assuming that the bias is not trained in this case?

Relja (Owner) commented Dec 6, 2018

Yes - the layerVLAD.m code is without bias, while layerVLADv2.m is with bias (we mentioned in the arXiv appendix that we fix the bias, but it seems we accidentally dropped that from the v3 version of the paper). For the setting in the paper there is not much difference between the two. The reason is that the input features are L2-normalized, in which case you can do the assignment with a simple scalar product and don't need a bias: take the assignment equation (2), expand it and assume |x_i| = 1 and |c_k| = 1, and all bias terms cancel out (see the expansion below). If the features were not L2-normalized, the two are not equivalent and layerVLADv2 should probably be used (but this is just theory, I don't know whether it matters in practice).
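
Spelling out that cancellation (a sketch in the paper's notation, where equation (2) soft-assigns descriptor x_i to cluster k via a softmax over -\alpha \|x_i - c_k\|^2, with \alpha the assignment sharpness):

\begin{aligned}
-\alpha \lVert x_i - c_k \rVert^2
  &= -\alpha \left( \lVert x_i \rVert^2 - 2\, c_k^{\top} x_i + \lVert c_k \rVert^2 \right) \\
  &= 2\alpha\, c_k^{\top} x_i - 2\alpha
     \qquad \text{if } \lVert x_i \rVert = \lVert c_k \rVert = 1 .
\end{aligned}

The constant -2\alpha is shared by every cluster and cancels in the softmax, so the assignment reduces to a softmax over the scalar products w_k^{\top} x_i with w_k = 2\alpha c_k and no bias term b_k.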

mpkuse (Author) commented Dec 7, 2018

OK, I get your point. What about batch normalization? Is this network fine-tuned from the ImageNet VGG without batch-norm updates?

Relja (Owner) commented Dec 7, 2018

As you can see from your list of layers, there are no batch-norm layers in the original VGG network.
