Thanks for your nice work! I wonder how to train the self-teaching network; it seems you didn't release the training code. According to Eq. (14), a group of predictions is needed, so will it take a lot of GPU memory during training?
Besides, I didn't find the drop+self-teaching result in the paper; I wonder how that variant performs.
Hi @zhangmaoxiansheng,
for now, we are not planning to release the training code. You can easily reimplement it on your own by extending monodepth2.
To train with self-teaching, you need to load a pre-trained network to compute the distilled labels. Alternatively, you can pre-compute them offline and load them as pseudo ground truth. In my code, I was able to compute them on-the-fly on a single GPU without memory issues.
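For reference, a minimal sketch of what the on-the-fly version could look like when extending monodepth2. The names (`student`, `teacher`, the two-tensor student output) are illustrative assumptions, not the actual repo API; the loss is a Laplacian negative log-likelihood in the spirit of Eq. (14):

```python
import torch

def self_teaching_step(student, teacher, images, optimizer):
    """One illustrative training step (hypothetical API, not the repo's).

    `teacher` is a frozen pre-trained depth network producing the distilled
    labels; `student` is assumed to predict a depth mean and a per-pixel
    log-uncertainty map.
    """
    # Teacher forward pass only: no graph is built, so the extra memory
    # cost is a single inference pass.
    with torch.no_grad():
        pseudo_depth = teacher(images)

    mu, log_sigma = student(images)  # student outputs (mean, log sigma)

    # Laplacian negative log-likelihood:
    # |mu - d*| / sigma + log sigma, averaged over all pixels.
    loss = (torch.abs(mu - pseudo_depth) * torch.exp(-log_sigma)
            + log_sigma).mean()

    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```

Since the teacher runs under `torch.no_grad()`, only one set of distilled labels is held in memory at a time, which is why a single GPU suffices.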
About drop+self, it is missing from the paper because dropout, taken alone, performs poorly.
Anyway, you can find below the performance of drop+self with M supervision: