
regarding use this framework for my semantic segmentation work #9

Open
surfreta opened this issue Feb 15, 2017 · 9 comments

Comments

@surfreta

surfreta commented Feb 15, 2017

Hi,

I have several questions regarding using this library.

  1. If the dataset under study is from a totally different domain than the typical benchmark sets, such as PASCAL VOC,
    what is the right pipeline for using your framework? Can I still use the pre-trained model (weights) and re-train the model on my dataset?

  2. The problem I am studying has a limited number of images, and each image is large, i.e., 4096 × 4096 pixels. The masked area
    is about 5%–10% of each image. I have been thinking of generating a large number of training samples from these large images, each training image
    being 128 × 128. In other words, building a model based on 128 × 128 inputs.

During the testing stage, I would run sub-frame prediction (each sub-frame being 128 × 128) over the test image and stitch the predicted masks together.
Is this the right approach?

Besides, are there any suggestions on generating such a training set?
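For reference, the tile-and-stitch scheme described above can be sketched as follows. This is a minimal NumPy sketch, not part of the library: `predict_patch` is a stand-in for whatever trained model is used, and non-overlapping tiles with reflect padding are assumptions.

```python
import numpy as np

def stitch_predictions(image, predict_patch, tile=128):
    """Run a patch-wise predictor over a large 2-D image and stitch
    the predicted masks back together (non-overlapping tiles)."""
    h, w = image.shape[:2]
    # Pad so the image divides evenly into tile-sized blocks.
    pad_h = (-h) % tile
    pad_w = (-w) % tile
    padded = np.pad(image, ((0, pad_h), (0, pad_w)), mode="reflect")
    out = np.zeros(padded.shape[:2], dtype=np.int64)
    for y in range(0, padded.shape[0], tile):
        for x in range(0, padded.shape[1], tile):
            out[y:y + tile, x:x + tile] = predict_patch(
                padded[y:y + tile, x:x + tile])
    return out[:h, :w]  # crop the padding back off
```

In practice, overlapping tiles whose predictions are averaged (or whose borders are discarded) often reduce seam artifacts at tile boundaries, since fully-convolutional networks are least reliable near patch edges.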

@MrChristo59

I'm also very interested in any advice on generating a new dataset to train on.

@warmspringwinds
Owner

@surfreta @MrChristo59

I will upload an example of usage on a different dataset that I worked on recently.

A small number of images is usually a problem.

Reusing pretrained weights won't make it worse, I think.
At least, all of the works that I have seen so far use pretrained weights.

Let me know if it helps.

@MrChristo59

Don't know if I understood it right, but will you upload an example of how to re-train the model with a new dataset?
If I'm right, that will be awesome!

@warmspringwinds
Owner

@MrChristo59 , yeah, that is what I meant :)

@MrChristo59

Looking forward to it.

Just a little question to be sure I'm right.
To create a dataset for segmentation training, you need an image and another one with the mask of what you want to learn. I guess the color of the mask will define the class it refers to.
Am I right? If yes, is there any advice on the proper way to create this mask (border size, color, etc.)?
Thanks
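For what it's worth, PASCAL-VOC-style annotations do encode each class as a fixed color, and training code typically consumes a single-channel mask of class indices. A minimal sketch of that conversion follows; the palette here is a made-up example (only black/background is the real VOC convention), not the library's actual lookup table:

```python
import numpy as np

# Hypothetical color -> class-index table. PASCAL VOC uses a fixed
# palette of this kind, with (0, 0, 0) reserved for background.
COLOR_TO_CLASS = {
    (0, 0, 0): 0,        # background
    (128, 0, 0): 1,      # example class 1
    (0, 128, 0): 2,      # example class 2
}

def rgb_mask_to_class_indices(mask_rgb):
    """Convert an H x W x 3 color mask into an H x W array of class ids."""
    h, w, _ = mask_rgb.shape
    out = np.zeros((h, w), dtype=np.int64)
    for color, idx in COLOR_TO_CLASS.items():
        matches = np.all(mask_rgb == np.array(color, dtype=mask_rgb.dtype),
                         axis=-1)
        out[matches] = idx
    return out
```

Note that VOC also marks ambiguous object borders with a special "void" value (255) that is excluded from the loss, so keeping every mask pixel exactly one of the palette colors (no anti-aliased edges) avoids stray, unintended classes.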

@MrChristo59

Hey Daniil,
Did you release the example yet? Don't know if it's on your blog or on the git.

@deepk91

deepk91 commented Dec 21, 2017

Hey @warmspringwinds, did you upload any example of training a new dataset with your scripts? I am trying to train on a new dataset with a small number of images (around 250), but I am facing an OutOfBound error, as listed in the issues. Could you help resolve this problem?

@warmspringwinds
Owner

warmspringwinds commented Dec 21, 2017 via email

@deepk91

deepk91 commented Dec 21, 2017

Thank you @warmspringwinds for this suggestion. I want to use the FCN32s model for segmentation, initialized with VGG16. After going through some of your files, what I understood is that the pascal_voc.py script in the dataset code makes use of the PASCAL 2012 and Berkeley PASCAL datasets, which you mention in this repository as well. I can substitute the root path to my dataset, and it works similarly for generating tfrecords by using the get_annotation_pairs methods in utils/pascal_voc.py. What I could not understand is: where is the explicit example of using a different dataset? I am sorry, I am just new to deep learning with CNNs.
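Before any tfrecords are written, the PASCAL helpers essentially build a list of (image, annotation) filename pairs. For a custom dataset, a minimal stand-alone equivalent might look like the following sketch; the directory layout, extensions, and function name are assumptions for illustration, not the repository's actual API:

```python
from pathlib import Path

def get_image_annotation_pairs(images_dir, annotations_dir,
                               image_ext=".jpg", annotation_ext=".png"):
    """Pair each image with its annotation mask by filename stem,
    mirroring what the PASCAL helpers do before writing tfrecords."""
    images_dir = Path(images_dir)
    annotations_dir = Path(annotations_dir)
    pairs = []
    for img in sorted(images_dir.glob("*" + image_ext)):
        ann = annotations_dir / (img.stem + annotation_ext)
        if ann.exists():  # silently skip images without a mask
            pairs.append((str(img), str(ann)))
    return pairs
```

The resulting list can then be fed to whichever tfrecord-writing routine the repository provides, in place of the PASCAL-specific pair lists.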
