AttributeError: '_FakeLoader' object has no attribute 'pin_memory_device' #3655
Comments
@geg00 I'm making a PR tonight that fixes this along with some user warnings.

```python
def __init__(self, dataset=None, bs=None, num_workers=0, pin_memory=False, timeout=0, batch_size=None,
             shuffle=False, drop_last=False, indexed=None, n=None, device=None, persistent_workers=False,
             pin_memory_device='', **kwargs):
    ...
    self.fake_l = _FakeLoader(self, pin_memory, num_workers, timeout, persistent_workers=persistent_workers,
                              pin_memory_device=pin_memory_device)
```

The empty string is torch's default for `pin_memory_device`. I also updated DistributedDL, since that also does a `_FakeLoader` initialization.
I still see the same error on fastai@2.7.4 with pytorch@1.12.0+cu116. Any idea?

@jim-king-2000 What is the error message/trace you are getting? Can you provide a minimal example? It passed the unit tests, so I need to know which scenario doesn't work.
This is the sample:

```python
from fastai.vision.all import *

path = untar_data(URLs.PETS)/'images'

def is_cat(x): return x[0].isupper()

dls = ImageDataLoaders.from_name_func(
    path, get_image_files(path), valid_pct=0.2, seed=42,
    label_func=is_cat, item_tfms=Resize(224))

learn = vision_learner(dls, resnet34, metrics=error_rate)
learn.fine_tune(1)
```

And this is the error:
And the env:
And the CUDA env:

```python
import torch
print(torch.__version__)
print(torch.version.cuda)
print(torch.backends.cudnn.version())
print(torch.cuda.is_available())
```
And the OS is Ubuntu 22.04.
It works when I switch to PyTorch@1.11.0+cu113. Then a new error occurs:

It is resolved when I change the batch size to 3. However, I think fastai should make the batch size adaptive.
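fastai doesn't adapt the batch size automatically, but a hedged sketch of the retry-with-a-smaller-batch idea looks like this (`fake_train` is a hypothetical stand-in for the real training call, which would be `learn.fine_tune` here):

```python
# Sketch: retry training with a halved batch size whenever the run hits
# a CUDA out-of-memory error (torch surfaces these as RuntimeError).
def fit_with_fallback(train_one_epoch, bs=64, min_bs=1):
    while bs >= min_bs:
        try:
            return train_one_epoch(bs)
        except RuntimeError as e:
            if 'out of memory' not in str(e): raise  # unrelated error: re-raise
            bs //= 2                                 # halve the batch and retry

    raise RuntimeError('batch size fell below minimum')

def fake_train(bs):
    # Hypothetical trainer: pretend any batch above 4 exhausts GPU memory.
    if bs > 4: raise RuntimeError('CUDA out of memory')
    return bs

print(fit_with_fallback(fake_train, bs=64))  # prints 4
```

In a real run, halving also changes the effective learning-rate behaviour, which is one reason fastai may prefer to leave the choice to the user.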
@jim-king-2000 This is not the same error. #3704 would be the correct place to post this. Please post your hardware info there as well (use the original post in #3704 as an example for the hardware overview). Ref: #3704 (comment)
Sure. |
Describe the bug
AttributeError: '_FakeLoader' object has no attribute 'pin_memory_device'
When I execute `dls.show_batch(max_n=6)`.
To Reproduce
Steps to reproduce the behavior:
Install the torch nightly (`Successfully installed torch-1.12.0.dev20220511`), then execute the DataBlock section:
```python
dls = DataBlock(
    blocks=(ImageBlock, CategoryBlock),
    get_items=get_image_files,
    splitter=RandomSplitter(valid_pct=0.2, seed=42),
    get_y=parent_label,
    item_tfms=[Resize(192, method='squish')]
).dataloaders(path)
```
Then execute:

```python
dls.show_batch(max_n=6)
```

This raises:
```
AttributeError                            Traceback (most recent call last)
Untitled-1.ipynb Cell 9' in <cell line: 1>()
----> 1 dls.show_batch(max_n=6)

File ~/opt/anaconda3/lib/python3.9/site-packages/fastai/data/core.py:102, in TfmdDL.show_batch(self, b, max_n, ctxs, show, unique, **kwargs)
    100 old_get_idxs = self.get_idxs
    101 self.get_idxs = lambda: Inf.zeros
--> 102 if b is None: b = self.one_batch()
    103 if not show: return self._pre_show_batch(b, max_n=max_n)
    104 show_batch(*self._pre_show_batch(b, max_n=max_n), ctxs=ctxs, max_n=max_n, **kwargs)

File ~/opt/anaconda3/lib/python3.9/site-packages/fastai/data/load.py:170, in DataLoader.one_batch(self)
    168 def one_batch(self):
    169     if self.n is not None and len(self)==0: raise ValueError(f'This DataLoader does not contain any batches')
--> 170     with self.fake_l.no_multiproc(): res = first(self)
    171     if hasattr(self, 'it'): delattr(self, 'it')
    172     return res

File ~/opt/anaconda3/lib/python3.9/site-packages/fastcore/basics.py:621, in first(x, f, negate, **kwargs)
    619 x = iter(x)
    620 if f: x = filter_ex(x, f=f, negate=negate, gen=True, **kwargs)
--> 621 return next(x, None)

File ~/opt/anaconda3/lib/python3.9/site-packages/fastai/data/load.py:125, in DataLoader.__iter__(self)
    123 self.before_iter()
    124 self.__idxs=self.get_idxs() # called in context of main process (not workers/subprocesses)
--> 125 for b in _loaders[self.fake_l.num_workers==0](self.fake_l):
    126     # pin_memory causes tuples to be converted to lists, so convert them back to tuples
    127     if self.pin_memory and type(b) == list: b = tuple(b)
    128     if self.device is not None: b = to_device(b, self.device)

File ~/opt/anaconda3/lib/python3.9/site-packages/torch/utils/data/dataloader.py:590, in _SingleProcessDataLoaderIter.__init__(self, loader)
    589 def __init__(self, loader):
--> 590     super(_SingleProcessDataLoaderIter, self).__init__(loader)
    591     assert self._timeout == 0
    592     assert self._num_workers == 0

File ~/opt/anaconda3/lib/python3.9/site-packages/torch/utils/data/dataloader.py:521, in _BaseDataLoaderIter.__init__(self, loader)
    517 self._prefetch_factor = loader.prefetch_factor
    518 # for other backends, pin_memory_device need to set. if not set
    519 # default behaviour is CUDA device. if pin_memory_device is selected
    520 # and pin_memory is not set, the default behaviour false.
--> 521 if (len(loader.pin_memory_device) == 0):
    522     self._pin_memory = loader.pin_memory and torch.cuda.is_available()
    523     self._pin_memory_device = None

AttributeError: '_FakeLoader' object has no attribute 'pin_memory_device'
```
Expected behavior
To show the selected images.
Additional context
I'm using the new version of torch-1.12.0.dev20220511 to take advantage of the M1 Metal option.