Hi there,
Thanks for your code!
I ran the code and only got 48.8 mIoU, which is 1 mIoU lower than the result reported in the paper.
I found that the code randomly selects part of the source dataset at each round before conducting image selection. The `pool_prop` option in ccm_config.yml determines this proportion, which is 20% by default. Then around 1k images are selected from that remaining 20%.
Does this random selection process affect the performance? Is it enough to train on only 1k source images during each epoch? And what is the best setting for these hyper-parameters?
There are some fluctuations in performance, so I encourage you to perform multiple runs.
The main contribution of our paper is showing how to avoid negative transfer by selectively searching for positive source samples. The experimental results also show that our approach achieves competitive results with only about 1k source images.
The `pool_prop` option is designed to limit the pool size for source selection. We found that randomly sampling 20% of the source images is already enough for our selection criterion to find positive samples. You can of course disable this random subsampling, but that means enumerating the whole source set (24,996 images).
For more details on the motivation behind our source selection, I encourage you to refer to our paper.
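To make the two-stage process concrete, here is a minimal sketch of what the selection loop does each round. The function and parameter names (`select_sources`, `score_fn`) are illustrative, not the actual identifiers in the repo, and the scoring criterion is abstracted away:

```python
import random

def select_sources(source_paths, pool_prop=0.2, num_select=1000, score_fn=None):
    """Sketch of per-round source selection (hypothetical names).

    Stage 1: randomly keep `pool_prop` of the full source set as the
             candidate pool (controlled by pool_prop in ccm_config.yml).
    Stage 2: rank the pool with a selection score and keep the top
             `num_select` images for training in this round.
    """
    pool_size = int(len(source_paths) * pool_prop)
    pool = random.sample(source_paths, pool_size)
    if score_fn is None:
        # Without a scoring criterion, just truncate the random pool.
        return pool[:num_select]
    ranked = sorted(pool, key=score_fn, reverse=True)
    return ranked[:num_select]

# Example: 24,996 source images, 20% pool, ~1k selected per round.
all_sources = [f"img_{i:05d}.png" for i in range(24996)]
selected = select_sources(all_sources, pool_prop=0.2, num_select=1000)
```

Setting `pool_prop=1.0` in this sketch corresponds to disabling the random subsampling and scoring the entire source set each round, at a proportionally higher cost.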