AttributeError: 'TSRobustScale' object has no attribute '_setup' #283
Comments
Hi Ignacio,

Sorry, that was a minimal test case, which was likely not realistic. Here is a realistic use and test case where I am seeing the same error. Is this a user error? Should I be setting the batch size before calling it?
I did find that setting the batch size is a work-around. Is that typical for inference?
@oguiza Understood, but with the use case I have, and without setting the batch size, the issue is blocking.
I don't know what to do with this. In my opinion, this is not an issue: the code is working as expected. You may want to log an issue with the fastai library, though.
@oguiza I agree it looks like the issue is with fastai, but the entry point is a tsai method. Knowing that there is a bug in fastai, I wonder whether it would make sense to set the batch size within the `get_X_preds` function. Something like this (a small change on line 6, adding `(bs is None or bs==0)`):
```python
@patch
def get_X_preds(self: Learner, X, y=None, bs=64, with_input=False, with_decoded=True, with_loss=False):
    if with_loss and y is None:
        print('cannot find loss as y=None')
        with_loss = False
    dl = self.dls.valid.new_dl(X, y=y)
    dl.bs = self.dls.bs if (bs is None or bs == 0) else bs
    output = list(self.get_preds(dl=dl, with_input=with_input, with_decoded=with_decoded, with_loss=with_loss, reorder=False))
    if with_decoded and hasattr(self.dls.tls[-1], "tfms") and hasattr(self.dls.tls[-1].tfms, "decodes"):
        output[2 + with_input] = self.dls.tls[-1].tfms.decode(output[2 + with_input])
    return tuple(output)
```
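The fallback in that one-line change can be exercised in isolation. The sketch below uses hypothetical helpers (`resolve_bs`, `n_batches` are my illustrations, not tsai or fastai APIs); it also shows why an unset batch size of 0 is exactly where a `ZeroDivisionError` would surface when counting batches:

```python
import math

def resolve_bs(requested_bs, default_bs):
    # Fall back to the DataLoaders' default when bs is unset (None or 0),
    # mirroring the proposed `bs is None or bs == 0` check.
    return default_bs if (requested_bs is None or requested_bs == 0) else requested_bs

def n_batches(n_samples, bs):
    # With bs == 0 this division is where a ZeroDivisionError would surface.
    return math.ceil(n_samples / bs)

print(n_batches(1000, resolve_bs(None, 64)))  # → 16 (falls back to 64)
print(n_batches(1000, resolve_bs(0, 64)))     # → 16 (falls back to 64)
print(n_batches(1000, resolve_bs(128, 64)))   # → 8  (explicit bs wins)
```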
I'm sorry I can't be more helpful in troubleshooting at this point, but here is what I am finding with the last three versions, using the test case outlined above. For now, I cannot properly test my proposed solution.
Hi @bob-mcrae. Have you tried reproducing your problem in Google Colab? When I had an earlier problem, @oguiza had me reproduce it in Colab to help isolate any local, system-specific issues. Also, I see you are running on Windows; it may not make any difference, but I switched from Windows to Linux a while back, when I discovered that fastai at the time was less robust on Windows. Your error message points to line 333 in https://github.com/timeseriesAI/tsai/blob/92d222be8e5cba433c26b429b565bccde177de72/tsai/data/preprocessing.py
I agree with this. use_single_batch was recently added to TSRobustScale to align it with TSStandardize. You may have trained a model using a previous version, but you can't use it now. There's a red warning in the fastai documentation that says this:
@williamsdoug: Thank you for your suggestion. After getting back to this, I uninstalled tsai and its dependencies on my local PC and was able to get 0.2.23 working as expected (no more errors). So, @oguiza, I think we can close this issue, which was associated with version 0.2.23; it was likely an issue with some dependencies on my PC.

Now I was able to resume my testing of the ZeroDivisionError (#296) and have a working solution. The solution is currently a hack, but it does work. If you like, I can create a pull request and make this change, with your feedback on perhaps a better implementation. Also, should we resume this discussion in issue #296 instead of this one? ZeroDivisionError work-around:
This has already been fixed in the repo. I will create a pip release soon to fix the issue in tsai 0.2.23.
Thanks @oguiza |
Sorry, Ignacio; I had what I thought was a bug in my code, but it looks like it's associated with the 0.2.23 tsai package; I confirmed it is not present in 0.2.20. From what I have read, this sort of error is commonly associated with circular imports (e.g., two modules importing from each other).
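For reference, that circular-import failure mode can be reproduced in a few lines (the module names `mod_a`/`mod_b` are made up for this sketch). Python leaves a partially initialized module in `sys.modules` while its top-level code runs, so the second module sees an incomplete first module and attribute access raises `AttributeError`:

```python
import os
import sys
import tempfile

# Write two modules that import each other.
tmp = tempfile.mkdtemp()
with open(os.path.join(tmp, "mod_a.py"), "w") as f:
    f.write("import mod_b\ndef helper():\n    return 'a'\n")
with open(os.path.join(tmp, "mod_b.py"), "w") as f:
    f.write("import mod_a\nvalue = mod_a.helper()\n")

sys.path.insert(0, tmp)
try:
    # mod_a starts executing, imports mod_b, which imports the still-incomplete
    # mod_a (helper is not defined yet) and calls mod_a.helper().
    import mod_a
except AttributeError as e:
    print(e)  # e.g. "partially initialized module 'mod_a' has no attribute 'helper'"
```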