I have the angles for each pixel as a .npy file, to be used as ground truth as mentioned in the paper.
My question is: is this used directly in the network, or does the frame field need to be calculated and fed into the network? If so, what should the format be?
I would appreciate any kind of help. Thank you.
The angles fed into the network are generated during the training-data generation process, which is implemented as data transforms (e.g. https://github.com/Lydorn/Polygonization-by-Frame-Field-Learning/blob/master/dataset_folds.py). Your data should be located in the "raw" folder; after preprocessing, the training data end up in the "processed" folder. You just need to adjust the transforms to your own data.
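For intuition, the conversion from per-pixel angles to frame-field coefficients looks roughly like the following. This is only a minimal sketch, assuming the angles are boundary tangent directions in radians and that the second field direction is perpendicular to the first; the repository's transforms compute these targets for you, so the function name and channel layout below are illustrative assumptions, not the actual implementation:

```python
import numpy as np

def angles_to_frame_field(theta_u, theta_v=None):
    """Sketch: turn per-pixel angle maps into frame-field coefficients c0, c2.

    Uses the factorisation f(z) = (z^2 - u^2)(z^2 - v^2) = z^4 + c2*z^2 + c0
    with u = exp(i*theta_u), v = exp(i*theta_v), so c0 = u^2 * v^2 and
    c2 = -(u^2 + v^2). The channel layout is an assumption, not the repo's.
    """
    if theta_v is None:
        # Assume the second field direction is perpendicular to the first.
        theta_v = theta_u + np.pi / 2
    u = np.exp(1j * theta_u)
    v = np.exp(1j * theta_v)
    c0 = (u ** 2) * (v ** 2)
    c2 = -(u ** 2 + v ** 2)
    # Stack real/imaginary parts into a real-valued (4, H, W) array.
    return np.stack([c0.real, c0.imag, c2.real, c2.imag], axis=0)

# Example with the per-pixel angle .npy mentioned above (path is a placeholder).
theta = np.load("angles.npy")          # shape (H, W), radians assumed
frame_field = angles_to_frame_field(theta)
print(frame_field.shape)               # (4, H, W)
```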
@patriksabol Thank you for your input. I now understand that only the images and binary masks are needed as input, not the angles, since those are computed while running the network.
I was able to run the network and got the training data as '.pt' files in the processed folder. Although training stopped with an error halfway through, I still got half of the data into the processed folder.
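In case it helps anyone else, a processed sample can be sanity-checked with something like the snippet below. The file name is just a placeholder and the exact keys depend on the transforms, so this is only a sketch:

```python
import torch

# Placeholder path; point this at one of the files in your "processed" folder.
sample = torch.load("processed/sample_0000.pt")

# Processed samples are typically dicts of tensors; listing keys and shapes is
# a quick check that the data transforms ran as expected.
if isinstance(sample, dict):
    for key, value in sample.items():
        info = tuple(value.shape) if torch.is_tensor(value) else type(value)
        print(key, info)
else:
    print(type(sample))
```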