AttributeError: 'TSRobustScale' object has no attribute '_setup' #283

Closed
bob-mcrae opened this issue Nov 27, 2021 · 18 comments
Labels
under review Waiting for clarification, confirmation, etc

Comments

@bob-mcrae

bob-mcrae commented Nov 27, 2021

Sorry Ignacio; I had what I thought was a bug in my code, but it looks like it's associated with the 0.2.23 tsai package; I confirmed it is not present in 0.2.20. From what I have read, this sort of error is commonly associated with mutual imports (e.g., two modules importing from each other).

from tsai.inference import load_learner

model_path = f'/gdrive/MyDrive/***/LVEFCLASS-16735.pkl' 
model      = load_learner(model_path)
print(type(model))
model.export(f'/gdrive/MyDrive/***/LVEFCLASS-16735_.pkl')
<class 'fastai.learner.Learner'>
---------------------------------------------------------------------------
AttributeError                            Traceback (most recent call last)
<ipython-input-11-a9c25badaf5d> in <module>()
      4 model      = load_learner(model_path)
      5 print(type(model))
----> 6 model.export(f'/gdrive/MyDrive/Semler/QF+/Sensor Data Files/Cached_Data/models/InceptionTime/LVEFCLASS-16735_.pkl')

12 frames
/usr/local/lib/python3.7/dist-packages/tsai/data/preprocessing.py in setups(self, dl)
    327 
    328     def setups(self, dl: DataLoader):
--> 329         if self._setup:
    330             if not self.use_single_batch:
    331                 o = dl.dataset.__getitem__([slice(None)])[0]

AttributeError: 'TSRobustScale' object has no attribute '_setup'
oguiza pushed a commit that referenced this issue Nov 29, 2021
@oguiza
Contributor

oguiza commented Nov 29, 2021

Hi @bob-mcrae,
I have fixed an inconsistency between TSStandardize, TSNormalize, etc., and I believe the issue is fixed now. But it'd be good if you could confirm it.

@bob-mcrae
Author

Hi Ignacio,
I tested 0.2.24 this morning and am seeing an issue that may be associated with my test case. I wonder whether learner.dls.bs should be set to 1 if it's 0.

from tsai.inference import load_learner
learner = load_learner('/gdrive/MyDrive/***/LVEFCLASS-17420.pkl')
print(learner.dls.bs)
learner.export('/gdrive/MyDrive/***/LVEFCLASS-17420_.pkl')
0
---------------------------------------------------------------------------
ZeroDivisionError                         Traceback (most recent call last)
<ipython-input-6-3fbc79b68a80> in <module>()
      2 learner = load_learner('/gdrive/MyDrive/Semler/QF+/Sensor Data Files/Cached_Data/models/InceptionTime/LVEFCLASS-17420.pkl')
      3 print(learner.dls.bs)
----> 4 learner.export('/gdrive/MyDrive/Semler/QF+/Sensor Data Files/Cached_Data/models/InceptionTime/LVEFCLASS-17420_.pkl')

14 frames
/usr/local/lib/python3.7/dist-packages/fastai/data/load.py in __len__(self)
     92         if self.n is None: raise TypeError
     93         if self.bs is None: return self.n
---> 94         return self.n//self.bs + (0 if self.drop_last or self.n%self.bs==0 else 1)
     95 
     96     def get_idxs(self):

ZeroDivisionError: integer division or modulo by zero

@oguiza
Contributor

oguiza commented Nov 29, 2021

Hi @bob-mcrae,
I'm not sure what your use case is, but you don't usually export a model that has already been exported. The purpose of exporting a model is inference, and the exported model contains no data (that's why the batch size is 0).
If you need to reload the model because you plan to continue training it, for example, you should use learn.save and learn.load instead of learn.export and load_learner.
So what you are seeing now is not an issue. It's the expected behavior.
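For reference, this is the usual pattern (a minimal sketch; the paths and names are illustrative, not from your code):

# Inference workflow: export once after training, reload later with load_learner
learn.export('model.pkl')
from tsai.inference import load_learner
learn = load_learner('model.pkl')

# Continued-training workflow: save/load keeps the model and optimizer state,
# and expects a Learner built with the same DataLoaders
learn.save('checkpoint')          # writes models/checkpoint.pth by default
learn = learn.load('checkpoint')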

@bob-mcrae
Author

Sorry, that was a minimal test case, which was likely not realistic. Here is a realistic use and test case, where I am seeing the same error. Is this a user error? Should I be setting the batch size before calling get_X_preds?

filename      = 'DA 02.25.21.2.csv'
df            = load_file(DATA_DIR + filename)
df            = add_channels(df) # adds some channels to the raw data
learner       = load_learner('/gdrive/MyDrive/***/LVEFCLASS-17420.pkl')
x_data        = prep_4_model(df, learner) # sliced numpy array in the right orientation
learner.get_X_preds(x_data)

---------------------------------------------------------------------------
ZeroDivisionError                         Traceback (most recent call last)
<ipython-input-27-a8827c82b975> in <module>()
     13 learner = load_learner('/gdrive/MyDrive/Semler/QF+/Sensor Data Files/Cached_Data/models/InceptionTime/LVEFCLASS-17420.pkl')
     14 x_data     = prep_4_model(df, learner)
---> 15 learner.get_X_preds(x_data)
     16 

13 frames
/usr/local/lib/python3.7/dist-packages/fastai/data/load.py in __len__(self)
     92         if self.n is None: raise TypeError
     93         if self.bs is None: return self.n
---> 94         return self.n//self.bs + (0 if self.drop_last or self.n%self.bs==0 else 1)
     95 
     96     def get_idxs(self):

ZeroDivisionError: integer division or modulo by zero

@bob-mcrae
Author

I did find that setting the batch size is a work-around. Is that typical for inference?

learner.dls[1].bs = 1

@oguiza
Contributor

oguiza commented Nov 29, 2021

learn.get_X_preds sets the batch size internally so that you don't need to worry about it during inference. The default value is 64, but you can override it by passing a different bs value.
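For example (illustrative only; x_data stands for whatever array you pass in):

probs, targets, preds = learner.get_X_preds(x_data)          # uses the default bs of 64
probs, targets, preds = learner.get_X_preds(x_data, bs=32)   # explicit batch size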

@oguiza oguiza added the under review Waiting for clarification, confirmation, etc label Nov 29, 2021
@bob-mcrae
Author

@oguiza Understood, but with the use case I have, and without setting the batch size, the issue is blocking.

@oguiza
Contributor

oguiza commented Nov 29, 2021

I don't know what to do with this. In my opinion, this is not an issue. The code is working as expected. You may want to log an issue with the fastai library though.

@bob-mcrae
Author

bob-mcrae commented Nov 29, 2021

@oguiza I agree it looks like the issue is with fastai, but the entry point is a tsai method. Knowing that there is a bug in fastai, I wonder whether it would make sense to set the batch size within the get_X_preds function. Something like this (small change on line 6: [(bs is None or bs==0)]):

def get_X_preds(self: Learner, X, y=None, bs=64, with_input=False, with_decoded=True, with_loss=False):
    if with_loss and y is None:
        print('cannot find loss as y=None')
        with_loss = False
    dl = self.dls.valid.new_dl(X, y=y)
    dl.bs = self.dls.bs if (bs is None or bs==0) else bs
    output = list(self.get_preds(dl=dl, with_input=with_input, with_decoded=with_decoded, with_loss=with_loss, reorder=False))
    if with_decoded and hasattr(self.dls.tls[-1], "tfms") and hasattr(self.dls.tls[-1].tfms, "decodes"):
        output[2 + with_input] = self.dls.tls[-1].tfms.decode(output[2 + with_input])
    return tuple(output)

@oguiza
Contributor

oguiza commented Dec 1, 2021

Hi @bob-mcrae,
I have a couple of questions:

  1. Could you please briefly describe the use case for re-exporting an already exported model?
  2. Have you checked whether the change you requested fixes the issue? You can paste this code into your notebook and test it:
@patch
def get_X_preds(self: Learner, X, y=None, bs=64, with_input=False, with_decoded=True, with_loss=False):
    if with_loss and y is None:
        print('cannot find loss as y=None')
        with_loss = False
    dl = self.dls.valid.new_dl(X, y=y)
    dl.bs = self.dls.bs if (bs is None or bs==0) else bs
    output = list(self.get_preds(dl=dl, with_input=with_input, with_decoded=with_decoded, with_loss=with_loss, reorder=False))
    if with_decoded and hasattr(self.dls.tls[-1], "tfms") and hasattr(self.dls.tls[-1].tfms, "decodes"):
        output[2 + with_input] = self.dls.tls[-1].tfms.decode(output[2 + with_input])
    return tuple(output)

@bob-mcrae
Author

@oguiza

  1. To answer your first question: as you can see from my test case below, I am not re-exporting the model (that was a minimal, fictitious case I used to show the issue previously).

  2. I am testing the proposed change now, but am seeing a different issue on 0.2.24. I will install 0.2.23 to see if I can replicate the batch size issue. In the meantime, here is the test case and the new issue I am seeing. If you find the test case valid, I'll enter this as a new issue; please let me know.

# load the model
learner       = load_learner(DATA_DIR + '//Cached_Data//models//InceptionTime//LVEFCLASS-16322.pkl')

# get the data
filename      = 'DA 02.25.21.2.csv'
df            = load_file(DATA_DIR + filename)
df            = add_channels(df)
x_data        = prep_4_model(df, learner)

# inference
learner.get_X_preds(x_data)
  File "C:\Users\rober\AppData\Local\Programs\Python\Python37\lib\site-packages\tsai\inference.py", line 18, in get_X_preds
    dl = self.dls.valid.new_dl(X, y=y)
  File "C:\Users\rober\AppData\Local\Programs\Python\Python37\lib\site-packages\tsai\data\core.py", line 468, in new_dl
    new_dloader = self.new(self.dataset.add_dataset(X, y=y))
  File "C:\Users\rober\AppData\Local\Programs\Python\Python37\lib\site-packages\tsai\data\core.py", line 455, in new
    res = super().new(dataset, cls, **kwargs)
  File "C:\Users\rober\AppData\Local\Programs\Python\Python37\lib\site-packages\fastai\data\core.py", line 63, in new
    res = super().new(dataset, cls, do_setup=False, **kwargs)
  File "C:\Users\rober\AppData\Local\Programs\Python\Python37\lib\site-packages\fastai\data\load.py", line 128, in new
    return cls(**merge(cur_kwargs, kwargs))
  File "C:\Users\rober\AppData\Local\Programs\Python\Python37\lib\site-packages\tsai\data\core.py", line 446, in __init__
    super().__init__(dataset, bs=bs, shuffle=shuffle, drop_last=drop_last, num_workers=num_workers, **kwargs)
  File "C:\Users\rober\AppData\Local\Programs\Python\Python37\lib\site-packages\fastai\data\core.py", line 48, in __init__
    kwargs[nm].setup(self)
  File "C:\Users\rober\AppData\Local\Programs\Python\Python37\lib\site-packages\fastcore\transform.py", line 192, in setup
    for t in tfms: self.add(t,items, train_setup)
  File "C:\Users\rober\AppData\Local\Programs\Python\Python37\lib\site-packages\fastcore\transform.py", line 196, in add
    for t in ts: t.setup(items, train_setup)
  File "C:\Users\rober\AppData\Local\Programs\Python\Python37\lib\site-packages\fastcore\transform.py", line 79, in setup
    return self.setups(getattr(items, 'train', items) if train_setup else items)
  File "C:\Users\rober\AppData\Local\Programs\Python\Python37\lib\site-packages\fastcore\dispatch.py", line 118, in __call__
    return f(*args, **kwargs)
  File "C:\Users\rober\AppData\Local\Programs\Python\Python37\lib\site-packages\tsai\data\preprocessing.py", line 341, in setups
    if not self.use_single_batch:
AttributeError: 'TSRobustScale' object has no attribute 'use_single_batch'

@bob-mcrae
Author

@oguiza

I'm sorry I can't be more helpful in troubleshooting at this point, but here is what I am finding with the last three versions using the test case outlined above. For now, I cannot properly test my proposed solution.

# learner.get_X_preds(x_data): (tensor([[0.9776, 0.0224]]), None, tensor([0]))    tsai version: 0.2.20
# AttributeError: 'TSRobustScale' object has no attribute '_setup'                tsai version: 0.2.23
# AttributeError: 'TSRobustScale' object has no attribute 'use_single_batch'      tsai version: 0.2.24

@williamsdoug
Contributor

Hi @bob-mcrae. Have you tried reproducing your problem using Google Colab? When I had an earlier problem, @oguiza had me reproduce it in Colab to help isolate any local, system-specific issues. Also, I see you are running on Windows; it may not make any difference, but I switched from Windows to Linux earlier when I discovered that fastai at the time was less robust on Windows.

Your error message AttributeError: 'TSRobustScale' object has no attribute 'use_single_batch' is curious since use_single_batch is always set during the __init__() method of the transform in the current main branch. My uninformed opinion is that there may be some form of package version skew issue here.

see line 333 in https://github.com/timeseriesAI/tsai/blob/92d222be8e5cba433c26b429b565bccde177de72/tsai/data/preprocessing.py
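
One quick thing to check (a trivial sketch, assuming both packages expose __version__) is whether the versions in the environment doing inference match the ones used for training:

import tsai, fastai
print(tsai.__version__, fastai.__version__)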

@oguiza
Contributor

oguiza commented Dec 3, 2021

Your error message AttributeError: 'TSRobustScale' object has no attribute 'use_single_batch' is curious since use_single_batch is always set during the __init__() method of the transform in the current main branch. My uninformed opinion is that there may be some form of package version skew issue here.

I agree with this. use_single_batch was recently added to TSRobustScale to align it to TSStandardize. You may have trained a model using a previous version, but can't use it now.

There's a red warning in fastai documentation that says this:
"Warning: load_learner requires all your custom code to be in the exact same place as when exporting your Learner (the main script, or the module you imported it from)."

@bob-mcrae
Author

@williamsdoug: Thank you for your suggestion. After getting back to this, I uninstalled tsai and its dependencies on my local PC and was able to get 0.2.23 working as expected (no more 'TSRobustScale' object has no attribute '_setup' exception). I was also able to do the same on Colab. As an aside, I was testing on my PC because I knew I had to make a modification to the source and did not know I could do that in Colab. I was very pleasantly surprised that I could make changes to the source in Colab and test the changes! Thank you for helping me learn that!

So, @oguiza, I think we can close this issue, which was associated with version 0.2.23; it was likely an issue with some dependencies on my PC. I was then able to resume my testing of the ZeroDivisionError (#296) and have a working solution. It is currently a hack, but it does work. If you like, I can create a pull request and make this change, with your feedback on perhaps a better implementation. Also, should we continue this discussion in issue #296 instead of this one?

ZeroDivisionError work-around:

def get_X_preds(self: Learner, X, y=None, bs=64, with_input=False, with_decoded=True, with_loss=False):
    for dls_ in self.dls:
        if dls_.bs == 0: dls_.bs = bs  # work-around: replace the 0 batch size stored in the exported learner
        print(f'dls.bs: {dls_.bs}')
    if with_loss and y is None:
        print("cannot find loss as y=None")
        with_loss = False
    dl = self.dls.valid.new_dl(X, y=y)
    setattr(dl, "bs", bs)
    output = list(self.get_preds(dl=dl, with_input=with_input, with_decoded=with_decoded, with_loss=with_loss, reorder=False))
    if with_decoded and hasattr(self.dls.tls[-1], "tfms") and hasattr(self.dls.tls[-1].tfms, "decodes"):
        output[2 + with_input] = self.dls.tls[-1].tfms.decode(output[2 + with_input])
    return tuple(output)
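
With that in place, inference runs again and returns the same kind of tuple 0.2.20 produced, e.g.:

probs, _, preds = learner.get_X_preds(x_data)  # (probabilities, None since y was not supplied, decoded predictions)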

@oguiza
Contributor

oguiza commented Dec 4, 2021

This has already been fixed in the repo. I will create a pip release soon to fix the issue in tsai 0.2.23.

@bob-mcrae
Author

Thanks @oguiza

@oguiza
Contributor

oguiza commented Dec 16, 2021

Hi @bob-mcrae,
I've just updated tsai in pip. You can now use tsai 0.2.24.
I'll close this issue now, but please reopen it if necessary.

@oguiza oguiza closed this as completed Dec 16, 2021