Regarding using this framework for my semantic segmentation work #9
I'm also very interested in any advice on generating a new dataset to be trained.
I will upload an example of usage for a different dataset that I have done recently. A small number of images is usually a problem; reusing pretrained weights won't make it worse, I think. Let me know if it helps.
I don't know if I understood it right, but will you upload an example of how to re-train the model with a new dataset?
@MrChristo59, yeah, that is what I meant :)
Looking forward to it. Just a little question to be sure I'm right. |
Hey Dannill, |
Hey @warmspringwinds, did you upload any example of training on a new dataset with your scripts? I am trying to train on a new dataset with a small number of images (around 250), but I am facing the OutOfBound error listed in the issues. Could you help resolve this problem?
Hi,
I would recommend trying this out, because it has an example of applying it
to a different dataset:
https://github.com/warmspringwinds/pytorch-segmentation-detection
Thank you @warmspringwinds for this suggestion. I want to use the FCN-32s model for segmentation, initialized from VGG-16. After going through some of your files, what I understood is that the script pascal_voc.py in the dataset module makes use of the PASCAL VOC 2012 and Berkeley PASCAL datasets, which you mention in this repository as well. I can substitute the root path to my dataset, and it works similarly, generating tfrecords via the get_annotation_pairs method in utils/pascal_voc.py. What I could not understand is where the explicit example of using a different dataset is. I am sorry, I am just new to deep learning with CNNs.
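For a custom dataset, the first step is usually just building (image, annotation) path pairs before converting them to tfrecords. Below is a minimal, hypothetical sketch of such a helper; the function name echoes the one mentioned above, but the directory layout and file extensions are assumptions, not the repo's actual API:

```python
import os

def get_annotation_pairs(images_dir, annotations_dir,
                         image_ext=".jpg", mask_ext=".png"):
    """Pair each image with the annotation mask sharing its base name.

    Hypothetical helper: assumes images and masks live in two flat
    directories and are matched by file stem (e.g. a.jpg <-> a.png).
    """
    pairs = []
    for name in sorted(os.listdir(images_dir)):
        stem, ext = os.path.splitext(name)
        if ext != image_ext:
            continue
        mask_path = os.path.join(annotations_dir, stem + mask_ext)
        if os.path.exists(mask_path):
            pairs.append((os.path.join(images_dir, name), mask_path))
    return pairs
```

A list like this can then be fed into the repo's tfrecord-writing utilities in place of the PASCAL VOC file lists.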
Hi,
I have several questions regarding using this library.
If the dataset under study is from a totally different domain than typical benchmark sets such as PASCAL VOC, what would be the right pipeline for using your framework? Can I still use the pre-trained model (weights) and re-train it on my dataset?
The problem I am studying has a limited number of images, each of large size, i.e., 4096 x 4096 pixels. The masked area is about 5%-10% of each image. I have been thinking of generating a large number of training samples from these big images, each training image being 128 x 128; in other words, building a model on 128 x 128 crops.
During the testing stage, I would run sub-frame prediction (each sub-frame being 128 x 128) over the test image and stitch the predicted masks together.
Is this the right approach?
Besides, are there any suggestions on generating that training set?
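The tile-and-stitch scheme described above can be sketched in a few lines of NumPy. This is only an illustration under assumptions: `predict_patch` is a placeholder standing in for the real segmentation model, the image dimensions are assumed to divide evenly by the patch size, and a 512 x 512 array stands in for a 4096 x 4096 image:

```python
import numpy as np

PATCH = 128  # side length of each sub-frame

def predict_patch(patch):
    # Placeholder for the real model: thresholds each patch at its
    # mean intensity to produce a binary mask of the same shape.
    return (patch > patch.mean()).astype(np.uint8)

def tile_predict_stitch(image, patch_size=PATCH):
    """Predict a mask per patch and stitch the results back together."""
    h, w = image.shape
    assert h % patch_size == 0 and w % patch_size == 0
    mask = np.zeros((h, w), dtype=np.uint8)
    for y in range(0, h, patch_size):
        for x in range(0, w, patch_size):
            patch = image[y:y + patch_size, x:x + patch_size]
            mask[y:y + patch_size, x:x + patch_size] = predict_patch(patch)
    return mask

# Small stand-in for one 4096 x 4096 image.
image = np.random.rand(512, 512)
mask = tile_predict_stitch(image)
```

In practice, predicting overlapping patches and averaging (or keeping only the center of each prediction) reduces seam artifacts at patch borders, since a fully convolutional model has less context at the edges of each crop.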