A doubt about the SOC process #62
Hi, thanks for your attention. The key idea behind SOC is …
As mentioned in the paper, the SOC strategy can boost the results. So may I ask: is the dataset used for SOC training and testing a sequence of frames from a video? If not, could you give some advice on which type of data is suitable for SOC? @ZHKKKe
@yarkable
Got it! Thanks.
@ZHKKKe One more question: when I train with the SOC strategy on my local CPU, the loss is normal, but when I switch to multi-GPU training the loss becomes NaN. Could you give me some advice? I am using a Keras version of the framework.
Hello, regarding your question: Q2: The SOC training loss is normal on a local CPU but becomes NaN with multi-GPU training; can you give me some advice?
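The reply above appears truncated. A common first thing to try when a loss that is stable on CPU turns NaN under multi-GPU training is to cap the gradient norm before the update step (in Keras this is the optimizer's `clipnorm` / `clipvalue` argument). A minimal numpy sketch of the underlying operation, with illustrative values, not the repo's actual settings:

```python
import numpy as np

def clip_by_global_norm(grads, clip_norm):
    """Scale a list of gradient arrays so their combined (global) L2 norm
    does not exceed clip_norm; one exploding batch on one replica then
    cannot produce an inf/NaN parameter update."""
    global_norm = np.sqrt(sum(np.sum(g ** 2) for g in grads))
    if global_norm > clip_norm:
        scale = clip_norm / global_norm
        grads = [g * scale for g in grads]
    return grads, global_norm

# Two gradient tensors whose global norm is sqrt(9 + 16 + 0 + 144) = 13:
grads = [np.array([3.0, 4.0]), np.array([0.0, 12.0])]
clipped, norm = clip_by_global_norm(grads, clip_norm=1.0)
print(norm)  # 13.0
print(np.sqrt(sum(np.sum(g ** 2) for g in clipped)))  # 1.0
```

In tf.keras the same effect comes from e.g. `tf.keras.optimizers.SGD(learning_rate=..., clipnorm=1.0)`; also worth checking are the per-replica batch size (batch-norm statistics shrink per GPU) and whether the learning rate was scaled up for the larger effective batch.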
@ZHKKKe
@xafha I think it is not a value that is exactly 0 or 1, but a value between 0 and 1, because that gives a smoother target.
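The point about soft targets can be illustrated: downsampling a hard 0/1 matte by averaging produces values strictly between 0 and 1 near object boundaries. A numpy sketch (the `scale=16` factor and plain average pooling are assumptions for illustration; the repo additionally applies a Gaussian blur, as step 6 of the snippet below suggests):

```python
import numpy as np

def soft_semantic_target(matte, scale=16):
    """Average-pool a [0, 1] matte by `scale` so the coarse semantic
    target keeps soft fractional values where foreground and background
    mix, instead of hard 0/1 labels."""
    h, w = matte.shape
    h, w = h - h % scale, w - w % scale  # crop to a multiple of scale
    pooled = matte[:h, :w].reshape(h // scale, scale, w // scale, scale)
    return pooled.mean(axis=(1, 3))

# A hard 0/1 matte whose edge crosses a pooling block yields 0.5 there:
matte = np.zeros((32, 32), dtype=np.float32)
matte[:, 8:] = 1.0
print(soft_semantic_target(matte, scale=16))  # [[0.5 1.] [0.5 1.]]
```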
@xafha
```python
# 1. bgr2rgb
# 2. TODO: interpolation method + padding value, 512x512
image = self._aspect_preserving_resize(image, cv2.INTER_LINEAR, (127, 127, 127))
# 3. TODO: augmentation
image, matte = self._random_flip(image, matte)
# 4. normalize
image = image.astype(np.float32) / 255.0
# 5. trimap
trimap = self.gen_trimap(matte)
# 6. gaussian blur semantic
semantic = cv2.resize(matte, target_size, interpolation=cv2.INTER_LINEAR)
# 7. detail
boundaries = (trimap < 0.5) + (trimap > 0.5)

# 1. calculate the semantic loss (16x)
loss = tf.square(gt_semantic - pred_semantic)
# 2. calculate the detail loss
# 3. calculate the matte loss
```
@xafha
@amirgoren Q2: Can you tell me what size of dataset you used during SOC? (And for how many epochs?)
Hey, did you ever solve the problem of the loss becoming NaN?
Hello
How are you?
Thanks for contributing to the MODNet project.
I have a question about the SOC process.
I know that the SOC process is to refine alpha matte by using the predicted segmentation on unlabeled samples.
If so, I think this is based on the supposition that the predicted segmentation results, even on unlabeled (unseen) samples, are always good, while the alpha mattes are relatively NOT good.
Then what is the guarantee that the segmentation result on unlabeled(unseen) images by MODNet is always good?
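For reference, the consistency mechanism being asked about can be sketched: on an unlabeled image, the coarse semantic prediction and the downsampled fine matte prediction are pushed to agree, so their gap becomes a self-supervised loss. A numpy sketch under assumed shapes (the 16x factor and plain average pooling are illustrative stand-ins, not the repo's exact downsampling):

```python
import numpy as np

def soc_consistency(pred_semantic, pred_matte, scale=16):
    """Self-supervised consistency for one unlabeled image: L1 gap
    between the coarse semantic prediction and the average-pooled
    fine matte prediction."""
    h, w = pred_matte.shape
    h, w = h - h % scale, w - w % scale  # crop to a multiple of scale
    pooled = pred_matte[:h, :w].reshape(h // scale, scale, w // scale, scale)
    downsampled = pooled.mean(axis=(1, 3))
    return np.mean(np.abs(downsampled - pred_semantic))

# If the matte branch predicts all-foreground but the semantic branch
# predicts 0.75, the inconsistency (and thus the loss) is 0.25:
loss = soc_consistency(np.array([[0.75]]), np.ones((16, 16)))
print(loss)  # 0.25
```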