Retraining with Personal Data #14
Comments
Hi, unfortunately, due to a collaboration with a company, the labeling tool that we use is proprietary, but here is one you can give a shot: https://bitbucket.org/ueacomputervision/image-labelling-tool I will close this since it is not related to the framework, and to avoid spamming everybody, but feel free to email me with non-framework-related questions. |
Thanks Andres,
I am trying to retrain the network on my own data. Do you have a guide for
the format and structure the data and labels should be stored in?
For example, the RGB folders, the train/test split, and the format of the
labeled segmentation images?
Thanks
…On Wed, Apr 25, 2018 at 10:49 AM, Andres Milioto wrote:
Closed #14.
|
Hi, I put a toy example on our server of the resulting dataset you get from pre-processing Cityscapes (in this case, only one image) with the Cityscapes parser included in the dataset folder's aux scripts: http://ipb.uni-bonn.de/html/projects/bonnet/datasets/cityscapes_toy.tar.gz By looking at that parser script, along with this dataset and the data.yaml corresponding to Cityscapes, you should be able to better understand how the data format works! |
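Under the hood, a parser like the Cityscapes one typically remaps each RGB label color to a single-channel class-ID image. Here is a minimal sketch of that remapping; the color map and class IDs below are made up for illustration, and the real mapping should come from the data.yaml and parser script in the repository:

```python
import numpy as np

# Hypothetical color map: RGB label color -> class ID.
# The real mapping lives in the cityscapes data.yaml.
COLOR_MAP = {
    (128, 64, 128): 0,  # e.g. road
    (70, 70, 70): 1,    # e.g. building
    (0, 0, 0): 2,       # e.g. void / unlabeled
}

def color_mask_to_ids(mask):
    """Convert an (H, W, 3) RGB label mask to an (H, W) class-ID image."""
    ids = np.zeros(mask.shape[:2], dtype=np.uint8)
    for color, cls in COLOR_MAP.items():
        # Select pixels whose 3 channels all match this label color.
        ids[np.all(mask == np.array(color, dtype=np.uint8), axis=-1)] = cls
    return ids

# Tiny 1x2 example: one "road" pixel and one "building" pixel.
mask = np.array([[[128, 64, 128], [70, 70, 70]]], dtype=np.uint8)
print(color_mask_to_ids(mask).tolist())  # [[0, 1]]
```

In practice you would run this over every annotation image and save the result as a grayscale PNG, so that each pixel value is directly the class index the network trains on.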
Hi Andres,
I am just wondering: for training the CWC model, do I have to convert my
segmented label images into mono-chrome (single-channel) images with the
dataset's aux scripts, or can I just load my data with colored masks and put
the color map in the data.yaml file?
Thanks for your kind support.
Regards
|
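On the data.yaml question, the general idea is that the file associates each class index with a name and a label color, so the framework can relate colored masks to class IDs. The fragment below is a purely hypothetical sketch of such a mapping; the actual key names, classes, and layout should be copied from the cityscapes data.yaml shipped in the repository, not from here:

```yaml
# Hypothetical layout -- copy the real structure from the
# cityscapes data.yaml included in the repository.
labels:
  0: "road"
  1: "building"
  2: "void"
color_map:
  0: [128, 64, 128]   # road
  1: [70, 70, 70]     # building
  2: [0, 0, 0]        # void / unlabeled
```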
I am trying to retrain with my own dataset. In dataset/aux_script, it mentions: "Use the output format extracted from the BAG that uses images and color labels created by Philipp's label creator."
Is this a tool I should use first for my annotation?