
An issue on implementation of SemanticConnectivityLoss #1798

Closed
rose-jinyang opened this issue Feb 24, 2022 · 21 comments
Labels
bug Something isn't working

@rose-jinyang

Hi @LutaoChu and @dreamer121121
In the forward function of the SemanticConnectivityLoss implementation (https://github.com/PaddlePaddle/PaddleSeg/blob/release/2.4/paddleseg/models/losses/semantic_connectivity_loss.py), is the logits parameter a probability map, or a logit map without softmax applied?
BTW:

[screenshot of the function implementation]

There is an issue with the above function implementation:
pred_conn can contain labels greater than pred_num_conn, because pred_num_conn is limited to be smaller than 10.

[screenshot of the pred_num_conn limit]

This causes an error in the line below:

pred_conn = F.one_hot(pred_conn.long(), pred_num_conn)

How can we fix this issue?

@rose-jinyang rose-jinyang added the bug Something isn't working label Feb 24, 2022
@LutaoChu
Contributor

Hi, logits is the logit map without softmax applied.

Regarding the issue above, it looks like there is indeed a problem; we need to slice pred_conn.
But it is very strange that I did not find any errors during actual training. Can you provide your error message?
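
For illustration, a minimal sketch of one possible guard, with hypothetical stand-ins for the loss internals (the random tensor replaces real connected-component labels; this is one reading of "slice pred_conn", not the eventual fix):

import paddle
import paddle.nn.functional as F

# Hypothetical stand-ins: pred_num_conn is capped in the loss, so
# connected-component labels can exceed it.
pred_num_conn = 10
pred_conn = paddle.randint(0, 15, (64, 64))

# Map out-of-range labels to background (0) so every value is a valid
# class index for one_hot.
pred_conn = paddle.where(pred_conn >= pred_num_conn,
                         paddle.zeros_like(pred_conn), pred_conn)
pred_conn_oh = F.one_hot(pred_conn, pred_num_conn)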

@LutaoChu
Contributor

I tested this with Paddle. Even if values in pred_conn exceed pred_num_conn, F.one_hot can still be computed.

@rose-jinyang
Author

In fact, I am using this loss in PyTorch.
In PyTorch, F.one_hot does not handle this case.
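
For reference, a minimal reproduction of the PyTorch behavior described here:

import torch
import torch.nn.functional as F

a = torch.randint(0, 10, (4, 4))
# Fails whenever a value in `a` is >= num_classes:
# RuntimeError: Class values must be smaller than num_classes.
b = F.one_hot(a, num_classes=5)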

@rose-jinyang
Author

When using Paddle, how are values in pred_conn that exceed pred_num_conn processed?

@LutaoChu
Contributor

In fact, I am using this loss in PyTorch. In PyTorch, F.one_hot does not handle this case.

You can try it as follows:

[screenshot of the suggested modification]
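
(The screenshot is not preserved. As a sketch of the kind of workaround being suggested, assuming the goal is to mimic Paddle's tolerant behavior in PyTorch: encode with a depth that covers every label, then slice the class dimension, so out-of-range labels become all-zero codes.)

import torch
import torch.nn.functional as F

a = torch.randint(0, 10, (4, 4))
num_classes = 5

# Encode with enough depth for the largest label, then keep only the
# first num_classes channels; labels >= num_classes end up all zeros.
depth = max(int(a.max()) + 1, num_classes)
b = F.one_hot(a, num_classes=depth)[..., :num_classes]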

@LutaoChu
Contributor

When using Paddle, how are values in pred_conn that exceed pred_num_conn processed?

Exceeding values are ignored; the corresponding one-hot codes returned are all zeros.

import paddle
import paddle.nn.functional as F

a = paddle.randint(0, 10, (4, 4))  # labels in 0..9, some of them >= 5
b = F.one_hot(a, 5)                # depth 5, smaller than the largest label
print(a)
print(b)

Output

Tensor(shape=[4, 4], dtype=int64, place=CUDAPlace(0), stop_gradient=True,
       [[7, 2, 3, 3],
        [8, 6, 2, 6],
        [8, 1, 7, 4],
        [4, 1, 8, 6]])
Tensor(shape=[4, 4, 5], dtype=float32, place=CUDAPlace(0), stop_gradient=True,
       [[[0., 0., 0., 0., 0.],
         [0., 0., 1., 0., 0.],
         [0., 0., 0., 1., 0.],
         [0., 0., 0., 1., 0.]],

        [[0., 0., 0., 0., 0.],
         [0., 0., 0., 0., 0.],
         [0., 0., 1., 0., 0.],
         [0., 0., 0., 0., 0.]],

        [[0., 0., 0., 0., 0.],
         [0., 1., 0., 0., 0.],
         [0., 0., 0., 0., 0.],
         [0., 0., 0., 0., 1.]],

        [[0., 0., 0., 0., 1.],
         [0., 1., 0., 0., 0.],
         [0., 0., 0., 0., 0.],
         [0., 0., 0., 0., 0.]]])

@rose-jinyang
Author

I've got the issue below with the above code.

[screenshot of the error]

@rose-jinyang
Author

rose-jinyang commented Feb 25, 2022

What is your paddlepaddle version? Mine is paddlepaddle 2.2.2.

@LutaoChu
Contributor

LutaoChu commented Mar 1, 2022

Mine is 2.2.1.
Please give your running command and I'll try to reproduce the problem.

@rose-jinyang
Author

rose-jinyang commented Mar 1, 2022

I've installed paddle 2.2.1, but the issue still appears.

[screenshot of the error]

@LutaoChu
Contributor

LutaoChu commented Mar 1, 2022

I have no problem here.
What is your running command?

@rose-jinyang
Author

import paddle
import paddle.nn.functional as F

a = paddle.randint(0, 10, (4, 4))
b = F.one_hot(a, 5)
print(a)
print(b)

This is the script.

@LutaoChu
Contributor

LutaoChu commented Mar 2, 2022

So strange, I can run it:

[screenshot of the successful run]

@rose-jinyang
Author

The only difference is that I ran it on CPU, without a GPU.

@LutaoChu
Contributor

LutaoChu commented Mar 7, 2022

I have the same problem on CPU. This is an OP bug in Paddle. You can submit the issue here: https://github.com/PaddlePaddle/Paddle

@LutaoChu
Contributor

LutaoChu commented Mar 7, 2022

In fact, I am using this loss in PyTorch. In PyTorch, F.one_hot does not handle this case.

You can try it as follows: [screenshot of the suggested modification]

If you only have a CPU, you can modify it according to this.
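
A sketch of what such a modification could look like in Paddle on CPU, using the same slicing idea as the PyTorch sketch above (an illustration, not the merged fix):

import paddle
import paddle.nn.functional as F

a = paddle.randint(0, 10, (4, 4))

# Labels lie in 0..9, so a depth of 10 covers every value; keeping only
# the first 5 channels afterwards turns out-of-range labels into all-zero
# codes, matching the GPU output shown earlier in the thread.
b = F.one_hot(a, 10)[:, :, :5]
print(b)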

@rose-jinyang
Author

rose-jinyang commented Mar 7, 2022

One of the Paddle developers confirmed this issue on GPU.
I think your loss implementation should be updated to take this into account.

[screenshot of the developer's reply]

@rose-jinyang
Author

Could you provide an implementation of this loss in PyTorch? Some issues appear when I run the version I implemented myself in PyTorch.

@LutaoChu
Contributor

LutaoChu commented Mar 9, 2022

In fact, I think that letting input elements be greater than the number of classes is a feature, not a bug.

I will try the PyTorch loss implementation.

@rose-jinyang
Author

Hi @LutaoChu
Did you implement the PyTorch version of this loss?

@LutaoChu
Contributor

Hi, I fixed this bug on CPU with Paddle. You can use it after the PR is merged.
