
Wrong segmentation #5

Closed · ZhengdiYu opened this issue May 10, 2022 · 5 comments

Comments

@ZhengdiYu

Hi, outputs/segms/train/Capture9/0129_indextip/cam410062/image32412.png: this segmentation contains the value 33, which is not expected. Do you have any ideas?
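For reference, a quick way to confirm which label values a mask actually contains (a minimal sketch, assuming the segmentation is stored as a single-channel or indexed PNG, which may not match the repo's exact format):

```python
import numpy as np
from PIL import Image

# Path taken from the message above; adjust to your local outputs directory.
mask = np.array(Image.open(
    "outputs/segms/train/Capture9/0129_indextip/cam410062/image32412.png"
))
print("dtype:", mask.dtype, "shape:", mask.shape)
print("unique labels:", np.unique(mask))  # an unexpected 33 would show up here
```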

@ZhengdiYu (Author)

[screenshot of the segmentation attached]

ZhengdiYu reopened this May 10, 2022
@zc-alexfan (Owner) commented May 11, 2022 via email

Hi, I have not noticed this before. Maybe you can check whether 33 appears on the original texture map?

Alex

@ZhengdiYu (Author)

Weird. The training process always ran normally before.

But yesterday it suddenly started throwing an error when computing the loss, like:

block: [0,0,0], thread: [16,0,0] Assertion t >= 0 && t < n_classes failed.
RuntimeError: cuda runtime error (59) : device-side assert triggered

Sometimes the error is the assertion from the swapr_lr_label() function, and sometimes it is the RuntimeError above.

Do you still have the segmentation I mentioned above? It would be great if you could check this frame.
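For what it's worth, that particular CUDA assert usually means a target label is outside [0, n_classes) when the cross-entropy loss is computed. A minimal sketch of a CPU-side check that surfaces the offending values before the device-side assert (the function and tensor names here are illustrative, not the repo's actual code):

```python
import torch
import torch.nn.functional as F

def checked_ce_loss(logits, target):
    """Cross-entropy with an explicit label-range check.

    logits: (B, n_classes, H, W) raw scores
    target: (B, H, W) integer (long) class labels
    """
    n_classes = logits.shape[1]
    bad = (target < 0) | (target >= n_classes)
    if bad.any():
        # Report the out-of-range labels instead of triggering a device-side assert.
        raise ValueError(
            f"Labels out of range [0, {n_classes}): "
            f"{torch.unique(target[bad]).tolist()}"
        )
    return F.cross_entropy(logits, target)
```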

@ZhengdiYu (Author) commented May 11, 2022

Also, one small question: what tol value did you use in your experiments? (in def segm_iou(pred, target, n_classes, tol, background_cls=0):)

@zc-alexfan (Owner)

Hmm, sorry, I had not noticed this problem before. I would suggest saving the rendered image that contains 33 and visualizing what falls under label 33.

I use tol = 20. This is just to avoid numerical problems when computing the IoU.
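For context, a hedged sketch of how such a tolerance can be used in a per-class IoU (this assumes tol skips classes whose union covers fewer than tol pixels, which avoids dividing by near-zero unions; it is not necessarily the repo's actual segm_iou implementation):

```python
import numpy as np

def segm_iou_sketch(pred, target, n_classes, tol, background_cls=0):
    """Mean IoU over foreground classes, skipping near-empty classes.

    pred, target: integer label maps of the same shape.
    tol: classes whose union covers fewer than tol pixels are ignored.
    """
    ious = []
    for c in range(n_classes):
        if c == background_cls:
            continue
        p, t = (pred == c), (target == c)
        union = np.logical_or(p, t).sum()
        if union < tol:
            continue  # too few pixels to give a meaningful ratio
        inter = np.logical_and(p, t).sum()
        ious.append(inter / union)
    return float(np.mean(ious)) if ious else float("nan")
```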
