5 changes: 3 additions & 2 deletions references/classification/sampler.py
@@ -15,7 +15,7 @@ class RASampler(torch.utils.data.Sampler):
    https://github.com/facebookresearch/deit/blob/main/samplers.py
    """

-    def __init__(self, dataset, num_replicas=None, rank=None, shuffle=True):
+    def __init__(self, dataset, num_replicas=None, rank=None, shuffle=True, seed=0):
datumbox (Contributor):
I agree this might be useful, but it's also a paradigm we don't use in TorchVision. Don't get me wrong, this approach is popular with other libraries; it's just that setting seeds in constructors is something we don't typically do.

@fmassa thoughts?

Author (Contributor):
@datumbox The RA sampler is adapted from torch.utils.data.DistributedSampler, which also has a seed parameter. The default value of seed is 0, so it does not affect the behavior of the RA sampler unless users explicitly set a seed.
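For context, both samplers seed their shuffle generator with seed + epoch. A self-contained sketch of that scheme (epoch_permutation is a hypothetical helper, for illustration only; the exact DistributedSampler internals may differ across PyTorch versions):

```python
import torch

def epoch_permutation(dataset_len, seed, epoch):
    # Hypothetical helper mirroring the seed + epoch scheme the samplers share.
    g = torch.Generator()
    g.manual_seed(seed + epoch)
    return torch.randperm(dataset_len, generator=g).tolist()

# With the default seed=0 this reduces to manual_seed(epoch), i.e. the old
# behavior, so the change is behavior-preserving unless a seed is passed.
g = torch.Generator()
g.manual_seed(3)  # old path: epoch only
old_indices = torch.randperm(10, generator=g).tolist()
assert epoch_permutation(10, seed=0, epoch=3) == old_indices
```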

datumbox (Contributor):
Fair enough. Since DistributedSampler supports it, we can also have it.

        if num_replicas is None:
            if not dist.is_available():
                raise RuntimeError("Requires distributed package to be available!")
@@ -32,11 +32,12 @@ def __init__(self, dataset, num_replicas=None, rank=None, shuffle=True):
        self.total_size = self.num_samples * self.num_replicas
        self.num_selected_samples = int(math.floor(len(self.dataset) // 256 * 256 / self.num_replicas))
        self.shuffle = shuffle
+        self.seed = seed

    def __iter__(self):
        # Deterministically shuffle based on epoch
        g = torch.Generator()
-        g.manual_seed(self.epoch)
+        g.manual_seed(self.seed + self.epoch)
        if self.shuffle:
            indices = torch.randperm(len(self.dataset), generator=g).tolist()
        else:
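With the new parameter, usage in a training loop looks roughly like this. A minimal sketch: it assumes a process group launched via torchrun (which supplies the rank/world-size environment variables), uses a toy dataset in place of the reference script's real data loading, and relies on RASampler keeping DistributedSampler's set_epoch method, as the torchvision copy does:

```python
import torch
import torch.distributed as dist
from sampler import RASampler  # references/classification/sampler.py

# Toy stand-in for the dataset the reference script actually builds.
dataset = torch.utils.data.TensorDataset(torch.randn(1024, 8))

dist.init_process_group("gloo")  # torchrun provides the required env vars
train_sampler = RASampler(dataset, shuffle=True, seed=42)
loader = torch.utils.data.DataLoader(dataset, batch_size=32, sampler=train_sampler)

for epoch in range(2):
    train_sampler.set_epoch(epoch)  # generator is seeded with seed + epoch
    for (batch,) in loader:
        pass  # training step goes here
```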
2 changes: 1 addition & 1 deletion references/classification/train.py
@@ -9,7 +9,7 @@
import torchvision
import transforms
import utils
-from references.classification.sampler import RASampler
+from sampler import RASampler
datumbox (Contributor), Dec 8, 2021:

Good catch. If this PR is not merged, could you please bring this in separately?
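A note on why the shorter import is the right one, assuming the usual launch layout for these scripts: the reference trainers are run from inside references/classification, which puts that directory on sys.path, so sibling modules resolve as top-level imports, matching the neighbouring import transforms and import utils lines.

```python
# Sketch of the assumed launch layout, e.g.:
#   cd references/classification
#   torchrun --nproc_per_node=8 train.py ...
# Running train.py from this directory makes sibling modules importable
# as top-level names, just like `import transforms` and `import utils`.
from sampler import RASampler

# By contrast, `from references.classification.sampler import RASampler`
# would only resolve with the repository root on sys.path, which the
# reference scripts do not assume.
```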

from torch import nn
from torch.utils.data.dataloader import default_collate
from torchvision.transforms.functional import InterpolationMode