Congratulations on your publication! I am reading your code and paper, and I have a question about the sampling policy.
In the paper, you mention M = 2 and N = 750: two seed clusters are chosen, and the nearest 750 clusters of each seed are selected before CR, for a total of 1500 clusters.
However, in train_gcn.py line 146, `for batch in range(cls_num):` appears to loop over all clusters, and for each of them a total of 1300 + 200 = 1500 clusters is sampled before CR. At every training step, the features from these clusters are used to construct the affinity graph after SR.
Did I miss something?
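For concreteness, here is a minimal sketch of the two sampling behaviors as I understand them. Everything here is hypothetical: the function names, the use of Euclidean distance between cluster centers, and the split of the 1500 into 1300 nearest plus 200 random clusters are my assumptions for illustration, not the repository's actual API.

```python
import numpy as np

def sample_paper_style(centers, M=2, N=750, rng=None):
    """Paper description as I read it: pick M seed clusters, then take
    each seed's N nearest clusters before CR, giving M * N clusters."""
    rng = rng or np.random.default_rng(0)
    seeds = rng.choice(len(centers), size=M, replace=False)
    selected = []
    for s in seeds:
        # Nearest N clusters by distance between cluster centers
        # (the seed itself is included, since its distance is zero).
        d = np.linalg.norm(centers - centers[s], axis=1)
        selected.append(np.argsort(d)[:N])
    return np.concatenate(selected)  # M * N cluster indices in total

def sample_code_style(cls_num, near=1300, rand=200, rng=None):
    """train_gcn.py as I read it: loop over *every* cluster, and for each
    one sample near + rand = 1500 clusters before CR."""
    rng = rng or np.random.default_rng(0)
    batches = []
    for batch in range(cls_num):  # all clusters are looped, not just M seeds
        nearest = np.arange(near)  # placeholder for the 1300 nearest clusters
        random_extra = rng.choice(cls_num, size=rand, replace=False)
        batches.append(np.concatenate([nearest, random_extra]))
    return batches  # one 1500-cluster sample per cluster
```

The difference I am asking about: the first function selects M * N = 1500 clusters once, while the second produces a separate 1500-cluster sample for every one of the `cls_num` clusters.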
RealNewNoob changed the title from "About sampling strategy" to "About sampling strategy and clustering setting" on Jun 20, 2021.