
about the accuracy with pure segmentation branch and no further finetuning #8

Open
mati1994 opened this issue Aug 18, 2018 · 7 comments

mati1994 commented Aug 18, 2018

We've trained the model with the provided code without the segmentation branch and obtained results similar to those in your paper. The paper mentions that running the whole network with the segmentation branch, using the parameters trained without it, yields an increase in mAP without any further finetuning. However, we did not observe this in our experiments, with models trained on several image sizes, and we could not find the reason. Are there any key points to watch for when doing this? Could you kindly share anything we may have overlooked? Sincerely, thank you.

MahdiKalayeh (Collaborator) commented

Can you please point out where exactly in the paper you are referring to? Is it the weight sharing (Table 5) or the effect of segmentation (Table 4)?

mati1994 commented Aug 19, 2018

I'm referring to Table 4. There, comparing the results of Inception-V3 and SPReID-combined, the mAP increase is seen with the same baseline parameters and different use of the segmentation branch. Besides, I think Table 5 shows that, both with the segmentation branch, a further increase comes from finetuning: mAP rises from 78.66 for SPReID-w/fg to 80.68 for SPReID-w/fg-ft. Is my understanding right?

MahdiKalayeh (Collaborator) commented

In Table 4, "Inception-V3" is the baseline model, which solely uses the person re-id backbone (ref. Fig 1) and global average pooling, while the SPReID variations use Human Semantic Parsing in addition to global average pooling, so as to pool from human body parts as well (ref. page 4, first column, second paragraph). In Table 4, wo/fg means that we discard the foreground mask. So Table 4 shows how much you can expect to gain by adding semantic pooling versus simple global average pooling.

In Table 5, we separate two cases depending on whether the features that go through global and semantic pooling share the re-id backbone or not. If they share it, the model looks like Fig 1; if they don't, there is one re-id backbone whose features are used exclusively for global average pooling and another whose features are used solely for semantic pooling. The rationale behind this experiment was that if you connect a global-average-pooling classifier to the backbone, training may harm the localization cues in the activation maps, since global pooling is agnostic to where activations occur, and if it does, that can in turn hurt semantic pooling. We wanted to study this. We added the finetuned cases to Table 5 just to show how the results change after finetuning. I hope I've answered your question.
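For readers unfamiliar with the two pooling paths being contrasted here, a minimal sketch of global average pooling versus semantic pooling with part probability maps. This is not the authors' code; the function name, tensor shapes, and normalization are illustrative assumptions:

```python
import torch
import torch.nn.functional as F

def global_and_semantic_pooling(feats, parts):
    """feats: re-id backbone activations, shape (B, C, H, W).
    parts: human-parsing probability maps, shape (B, P, h, w),
           one channel per body part (hypothetical layout)."""
    B, C, H, W = feats.shape

    # Global average pooling: ignores where activations occur.
    global_desc = feats.mean(dim=(2, 3))  # (B, C)

    # Semantic pooling: weight activations by each part's probability
    # map, producing one C-dim descriptor per body part.
    parts = F.interpolate(parts, size=(H, W), mode='bilinear',
                          align_corners=False)
    weights = parts / parts.sum(dim=(2, 3), keepdim=True).clamp_min(1e-6)
    part_desc = torch.einsum('bchw,bphw->bpc', feats, weights)  # (B, P, C)

    return global_desc, part_desc
```

The sketch makes the contrast concrete: the global descriptor averages over all spatial locations, while each part descriptor only aggregates activations where the parser assigns mass to that body part, which is why damaged localization cues would hurt the semantic path.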

mati1994 commented Aug 19, 2018

Thank you for your detailed reply; I get the point.
Then I want to make sure: in Table 4, are the parameters of the baseline branch the same for Inception-V3 and SPReID-w/fg (i.e., trained on the baseline and directly imported into SPReID)? Or could there be further training on SPReID-w/fg, or could the imported models for Inception-V3 and SPReID-w/fg be trained with different strategies?

MahdiKalayeh commented Aug 19, 2018

No, we don't pre-train the re-id backbone and then import it into SPReID. It is trained through SPReID, initialized with ImageNet weights. The semantic segmentation (lower) stream, however, is pre-trained on LIP and frozen while training SPReID.
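In training-code terms, that setup might look like the following sketch. The modules here are stand-ins (the parsing network is a placeholder for a LIP-pretrained parser, not the actual SPReID implementation):

```python
import torch
import torchvision

# Stand-in models for illustration only.
reid_backbone = torchvision.models.inception_v3(
    weights='IMAGENET1K_V1')  # trained through SPReID from ImageNet init
parsing_net = torchvision.models.segmentation.fcn_resnet50(
    weights=None)  # placeholder for the LIP-pretrained parsing stream

# Freeze the semantic-parsing (lower) stream: its weights stay fixed
# while the re-id backbone is updated by the SPReID training losses.
for p in parsing_net.parameters():
    p.requires_grad = False
parsing_net.eval()

# Only the re-id backbone receives gradient updates.
optimizer = torch.optim.SGD(reid_backbone.parameters(),
                            lr=0.01, momentum=0.9)
```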

mati1994 commented

Oh, I get it. It's nice of you to give such a detailed reply, and it's of great help to me. Thank you!

MahdiKalayeh self-assigned this Aug 20, 2018
MahdiKalayeh added the "question" label Aug 20, 2018
ajwl-pmli commented

Hello, do you know how to compute rank-1 and mAP? Could you give me the code that computes rank-1 and mAP? My mail is pku1401210454@163.com. Thank you very much! @mati1994 @MahdiKalayeh
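For reference, a minimal sketch of how rank-1 accuracy and mAP are commonly computed in re-id evaluation from a query-by-gallery distance matrix (e.g. Euclidean distances between query and gallery features). This is a simplified single-shot version; standard protocols such as Market-1501 additionally filter out same-camera gallery matches:

```python
import numpy as np

def evaluate_rank1_map(dist, q_ids, g_ids):
    """dist: (num_query, num_gallery) distance matrix.
    q_ids, g_ids: integer identity labels for queries and gallery."""
    num_q = dist.shape[0]
    rank1, aps = 0.0, []
    for i in range(num_q):
        order = np.argsort(dist[i])            # gallery sorted by distance
        matches = (g_ids[order] == q_ids[i])   # boolean relevance vector
        if not matches.any():
            continue                           # query with no gallery match
        rank1 += float(matches[0])             # top-ranked item is correct?
        # Average precision over the ranked list.
        hits = np.cumsum(matches)
        precision = hits / (np.arange(len(matches)) + 1)
        aps.append((precision * matches).sum() / matches.sum())
    return rank1 / num_q, float(np.mean(aps))
```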
