SOC problem #6
Hi, thanks for your attention! For your questions: Q2: the prediction of dp is just the boundary, which is not the same as in your paper.
Thanks a lot for your reply. I have found the reason for Q2: maybe my dilate/erode kernel size is too small, which produces the wrong result. Could you please tell me the size of your dilate/erode kernel?
Q1: Could you please tell me the size of your dilate/erode kernel?
In fact, m_d in Figure 2 of the paper is a good example; please set your parameters according to it. Q2: Is it still based on a single image? Is there any relation to temporal information?
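For anyone else stuck on the kernel-size question: `get_dilate_erode_mask` is not shown anywhere in this thread, so below is a minimal numpy sketch of one common way to build the boundary band m_d. The function name, the square structuring element, and the defaults `kernel_size=15` and `threshold=0.5` are illustrative assumptions, not values confirmed by the authors:

```python
import numpy as np

def get_dilate_erode_mask(matte, kernel_size=15, threshold=0.5):
    """Boundary band m_d: dilate(fg) - erode(fg) with a square kernel.

    Returns a float mask that is 1 in the transition region around the
    matte's foreground edge and 0 elsewhere. Kernel size and threshold
    here are illustrative guesses, not the authors' values.
    """
    fg = (matte > threshold).astype(np.uint8)
    pad = kernel_size // 2
    padded = np.pad(fg, pad, mode="edge")
    h, w = fg.shape
    band = np.zeros((h, w), dtype=np.float32)
    for i in range(h):
        for j in range(w):
            win = padded[i:i + kernel_size, j:j + kernel_size]
            # max over the window = dilation, min over the window = erosion
            band[i, j] = float(win.max() - win.min())
    return band
```

In practice `cv2.dilate`/`cv2.erode` with a square kernel compute the same thing much faster; enlarging the kernel widens the unknown band, which may be what the "kernel too small" comment above refers to.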
Thank you very much. It helps me a lot.
@TsykunovDmitriy |
Thanks for the answer. I wrote below the pseudocode for the implementation of equations 7 and 8 that I use in my training pipeline. Tell me where I'm wrong.

pred_semantic, pred_detail, pred_matte = model(image)
pred_semantic_fz, pred_detail_fz, pred_matte_fz = model_freeze(image)

de_mask = get_dilate_erode_mask(pred_matte.numpy())
seg_mask = get_segmentation_mask(pred_matte.numpy())

# equation 7
Ls = 0.5 * (((pred_semantic - seg_mask) ** 2).sqrt()).mean()
Ld = ((pred_detail - pred_matte).abs() * de_mask).sum() / de_mask.sum()  # same weighting as in training
Lcons = Ls + Ld

# equation 8
Ldd = ((pred_detail - pred_detail_fz).abs() * de_mask).sum() / de_mask.sum()

loss = Lcons + Ldd
model.update_weights(loss)  # i.e. loss.backward() + optimizer step

If possible, then answer a few more questions:
Q1: How much data did you use for fine-tuning?
Q2: How many epochs?
Q3: What does "simultaneously" mean in the paper?
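Similarly, `get_segmentation_mask` in the pseudocode above is never defined in the thread. Below is a minimal numpy sketch under the assumption that the semantic pseudo-label is a downsampled, Gaussian-blurred copy of the matte; the `scale=16`, `sigma=1.0`, and `radius=2` defaults are illustrative guesses, not the authors' values:

```python
import numpy as np

def _gaussian_kernel1d(sigma, radius):
    # Normalized 1D Gaussian kernel of length 2*radius + 1.
    x = np.arange(-radius, radius + 1, dtype=np.float64)
    k = np.exp(-0.5 * (x / sigma) ** 2)
    return k / k.sum()

def get_segmentation_mask(matte, scale=16, sigma=1.0, radius=2):
    """Coarse semantic target: blur(downsample(matte)).

    Downsampling is plain strided subsampling here; the blur is a
    separable Gaussian with edge padding. All defaults are assumptions.
    """
    small = matte[::scale, ::scale].astype(np.float64)
    k = _gaussian_kernel1d(sigma, radius)
    blur = lambda v: np.convolve(np.pad(v, radius, mode="edge"), k, mode="valid")
    tmp = np.apply_along_axis(blur, 1, small)   # blur along rows
    return np.apply_along_axis(blur, 0, tmp)    # blur along columns
```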
Q1: How much data did you use for fine-tuning? Q2: How many epochs? Q3: What does "simultaneously" mean in the paper? However, you should split Ls into two terms as:
The gradients should go back from both branches at the same time (you should make sure the gradient can be back-propagated through […]). Besides, could you visualize a sample with […]?
Thanks a lot for the answers.
I did some experiments. Unfortunately, the result has not improved. My guess is that I have incorrectly formulated equation 5 from the paper. Could you please comment?
@TsykunovDmitriy Maybe you can try the following code for calculating […]:
In this way, you do not need the […]. This is our old implementation. It can also work well in our case.
Thanks for your advice! Unfortunately, I did not get satisfactory results. I think this is due to the small amount of data at the model-training stage. Perhaps my model does not generalize well enough. I think you can close the discussion.
@TsykunovDmitriy |
Hi all, our main code for SOC adaptation is available now.
Thanks for your sharing, nice work!
Here is a question about the SOC in your paper:
the self-supervised stage is used on the new-domain datasets, so are the new/target datasets the ones we will test on later?
And another question: when I try to train MODNet, the prediction of dp is just the boundary, which is not the same as in your paper.