
Some thoughts on the paper. #20

Closed · VyBui opened this issue Jul 3, 2020 · 7 comments

Comments


VyBui commented Jul 3, 2020

Hi @PeikeLi,

Great work on the paper and the code.

I have finally got the training code running on my custom dataset, and I am now waiting for the model to converge at 150 epochs.

I reviewed the paper again and have some questions about this passage:

> Starting from a model trained on inaccurate annotations as initialization, we design a cyclically learning scheduler to infer more reliable pseudo masks by iteratively aggregating the current learned model with the former optimal one in an online manner. Besides, those corrected labels can in turn to boost the model performance, simultaneously. In this way, the self-correction mechanism will enable the model or labels to mutually promote its counterpart, leading to a more robust model and accurate label masks as training goes on.
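To check my understanding, I read that aggregation step as a running average of model weights across self-correction cycles, with the averaged model then used to regenerate the pseudo masks. A minimal sketch of that reading (my own variable names, not the repo's exact code):

```python
# Sketch of the cyclical model aggregation as I understand it from the
# quoted passage; the actual SCHP implementation may differ in detail.
import torch

def aggregate_weights(avg_model, cur_model, cycle):
    """Update avg_model in place as a running mean over cycles 0..cycle."""
    with torch.no_grad():
        for p_avg, p_cur in zip(avg_model.parameters(), cur_model.parameters()):
            # new_avg = (cycle * old_avg + current) / (cycle + 1)
            p_avg.mul_(cycle / (cycle + 1.0)).add_(p_cur, alpha=1.0 / (cycle + 1.0))

# After each cycle, pseudo masks would be re-inferred with avg_model and
# used as the training targets for the next cycle.
```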

My questions:

  1. Why did you choose 150 epochs?
  2. If I increase the dataset size, will the model get better? There are two ways I could grow my dataset:
     • Label more data.
     • Run inference on a new dataset that has the same distribution as my current one.
  3. Have you had any experience working with a much larger dataset than LIP?

Regards,

@GoGoDuck912 (Owner)

  1. The number of epochs was chosen as 150 to make a fair comparison with CE2P.

  2. Usually, it will.

  3. Due to computational limitations, I currently have no experience with a much larger dataset. However, LIP is already a large dataset, with 30,000+ training samples.


VyBui commented Jul 4, 2020

@PeikeLi I found detectron2 in your codebase; I suppose it is for multi-human parsing. Is there any documentation or tutorial on how to use it with SCHP?

@rkhilnani9

@VyBui Did you face any challenges while training it on a custom dataset? I have my own images and annotated segmentation masks; do I need to change anything? I understand I should organize them according to the structure of the LIP dataset, but are there any other caveats?

Would appreciate a response. Thanks!


VyBui commented Jul 5, 2020

@rkhilnani9
I have written up the necessary steps for training on a custom dataset here: #14.

Here are the challenges I faced while training SCHP on a custom dataset:

  1. The PyTorch version has to be 1.2. I have not tried 1.3 or 1.4, but it definitely does not work with PyTorch 1.5.
  2. Remember to set num_classes to the number of classes in your segmentation masks.
  3. There was a bug in the training code, but I have seen the author fix it.
  4. Set the GPU setting to the number of GPUs on your local machine.
  5. Make sure you follow the LIP dataset convention (see the layout sketch below) and you will be fine.
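For reference, the LIP-style layout I followed looks roughly like this (directory and file names as in the repo's LIP setup; double-check against the dataset loading code):

```
data/LIP/
├── train_images/          # RGB training images
├── train_segmentations/   # per-pixel class-index label masks (.png)
├── val_images/
├── val_segmentations/
├── train_id.txt           # one image ID per line, without extension
└── val_id.txt
```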

@VyBui
Copy link
Author

VyBui commented Jul 14, 2020

closed

VyBui closed this as completed on Jul 14, 2020
@Julymycin

Hello, have you found out how to handle the multi-human parsing task with SCHP?

@Julymycin

> Hello, have you found out how to handle the multi-human parsing task with SCHP?

Well, I think it is a step-by-step process (single-person instance segmentation first, then human parsing on each instance, then converting and identifying the results), right?
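Concretely, that two-stage pipeline could look something like the sketch below. detectron2's Mask R-CNN finds the person instances; `run_schp` is a hypothetical helper wrapping single-person SCHP inference, not a function provided by this repo:

```python
# Two-stage multi-human parsing sketch: detect persons with detectron2,
# then run single-person parsing (e.g. SCHP) on each cropped instance.
import cv2
from detectron2 import model_zoo
from detectron2.config import get_cfg
from detectron2.engine import DefaultPredictor

cfg = get_cfg()
cfg.merge_from_file(model_zoo.get_config_file(
    "COCO-InstanceSegmentation/mask_rcnn_R_50_FPN_3x.yaml"))
cfg.MODEL.WEIGHTS = model_zoo.get_checkpoint_url(
    "COCO-InstanceSegmentation/mask_rcnn_R_50_FPN_3x.yaml")
cfg.MODEL.ROI_HEADS.SCORE_THRESH_TEST = 0.5
predictor = DefaultPredictor(cfg)

image = cv2.imread("group_photo.jpg")            # BGR, as detectron2 expects
instances = predictor(image)["instances"].to("cpu")
person = instances[instances.pred_classes == 0]  # COCO class 0 = "person"

for x1, y1, x2, y2 in person.pred_boxes.tensor.int().tolist():
    crop = image[y1:y2, x1:x2]
    parsing = run_schp(crop)  # hypothetical: single-person SCHP inference
    # Paste `parsing` back into a full-size label canvas at (x1, y1),
    # tagging pixels with the instance index to keep identities separate.
```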
