feature dimension #3
The "eltwise6" layer is treated as the feature layer for face images. The "slice6" and "eltwise6" layers together act as an activation function, similar to ReLU or sigmoid.
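A minimal NumPy sketch (my own illustration, not the repository's code) of what the slice + element-wise max pair computes, i.e. the Max-Feature-Map activation that makes these two layers behave like an activation function:

```python
import numpy as np

def mfm(x):
    """Max-Feature-Map: slice the feature axis in half and take the
    element-wise maximum, halving the feature dimension."""
    assert x.shape[-1] % 2 == 0
    half = x.shape[-1] // 2
    return np.maximum(x[..., :half], x[..., half:])

# A 512-dim fully-connected output becomes a 256-dim feature.
feat = mfm(np.random.randn(1, 512))
print(feat.shape)  # (1, 256)
```

This also explains the dimension halving discussed later in the thread: a 512-dim input yields a 256-dim feature.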
Another question: why should images in the CASIA database be resized to 144x144, while faces in the LFW database are resized to 128x128? If we do it this way, the network inputs will have different sizes. Is that reasonable or necessary?
As shown in "transform_param", crop_size is set to 128, which means the CNN inputs are randomly cropped from 144x144 to 128x128 during training. This data augmentation trick is widely used in the ILSVRC competition. Therefore, the real input size of the CNN is 128x128, and test images should be resized to 128x128.
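A short NumPy sketch (my own illustration, not Caffe's internals) of the random-crop augmentation that `transform_param { crop_size: 128 }` performs on a 144x144 training image:

```python
import numpy as np

def random_crop(image, crop_size=128):
    """Randomly crop a crop_size x crop_size patch from an image,
    mimicking Caffe's crop_size behavior during training."""
    h, w = image.shape[:2]
    assert h >= crop_size and w >= crop_size
    top = np.random.randint(0, h - crop_size + 1)
    left = np.random.randint(0, w - crop_size + 1)
    return image[top:top + crop_size, left:left + crop_size]

img = np.zeros((144, 144, 3))
print(random_crop(img).shape)  # (128, 128, 3)
```

At test time Caffe instead takes a deterministic center crop, which is why resizing test images directly to 128x128 also works.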
Hi, I'm so sorry to disturb you again. I want to know whether a training sample is a pair of two faces or a single face.
Hello, I'm sorry to disturb you again. I have three problems to consult you about. Here is the data layer definition in question:

```
layers {
  name: "data"
  type: DATA
  top: "data"
  top: "label"
  data_param {
  }
  transform_param {
  }
  include: { phase: TRAIN }
}
```
I am sorry for taking so long to reply.
If you have any questions, feel free to contact me by email at alfredxiangwu@gmail.com.
Hi, given that the feature layer is 'eltwise6', I have trouble getting your code to work. This layer's shape has only 2 dimensions, so some lines will crash, for example: features_shape = (len(image_list), shp[1], shp[2], shp[3]). Should I remove shp[2] and shp[3]? I thought this could be the solution, but I'm not sure whether the results would be correct.
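For what it's worth, a fully-connected layer's blob is indeed 2-D, (batch, dim), so dropping shp[2] and shp[3] is the natural fix. A minimal sketch with hypothetical shapes and filenames (not the repository's actual extraction script):

```python
import numpy as np

# Hypothetical: shape of a 2-D feature blob such as 'eltwise6',
# e.g. what net.blobs['eltwise6'].data.shape would return in Caffe.
shp = (1, 256)
image_list = ['a.jpg', 'b.jpg']  # placeholder file names

# Only shp[1] (the feature dimension) is meaningful for a 2-D blob.
features_shape = (len(image_list), shp[1])
features = np.empty(features_shape, dtype=np.float32)
print(features.shape)  # (2, 256)
```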
Hi, why are the features stored in the MATLAB matrix 256-dimensional, while the output of the fc1 layer in the network is 512? I cannot figure out which layer's output is used as the final feature.