
Not able to pickle dump tadgan pipeline #200

Closed
dyuliu opened this issue Feb 24, 2021 · 1 comment · Fixed by #201

dyuliu (Contributor) commented Feb 24, 2021

  • Orion version: v0.1.5
  • Python version: 3.6
  • Operating System: macOS 11.2.1

(1)
I tried to pickle-dump the tadgan pipeline instance created here:
https://github.com/signals-dev/Orion/blob/23f2bccb057572cb244631ca79ba3b623c6080f0/orion/analysis.py#L30

Then this error from TensorFlow (v1.14) came up:
NotImplementedError: numpy() is only available when eager execution is enabled.
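
A minimal sketch of the failing call, assuming the pipeline is loaded as an mlblocks MLPipeline (as in analysis.py); the pipeline name and file path below are illustrative:

```python
import pickle

from mlblocks import MLPipeline

# Illustrative repro: load and fit the tadgan pipeline, then try to pickle it.
pipeline = MLPipeline('tadgan')
# pipeline.fit(train_data)

with open('tadgan_pipeline.pkl', 'wb') as f:
    pickle.dump(pipeline, f)
# -> NotImplementedError: numpy() is only available when eager execution is enabled.
```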

(2)
For other (non-tadgan) pipelines, pickling works fine so far.

sarahmish (Collaborator) commented Feb 24, 2021

Based on the keras adapter in MLPrimitives, we would need to specify the __reduce__ method for the object. However, this is a low-level operation, and setting __getstate__ and __setstate__ is the recommended approach.
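
A rough sketch of that approach, following the pattern used by the MLPrimitives keras adapter; the attribute names (encoder, generator, critic_x, critic_z) are illustrative and not necessarily the exact TadGAN internals:

```python
import tempfile

import keras


class TadGAN:
    ...

    def __getstate__(self):
        # Swap the unpicklable Keras models for their serialized (HDF5) bytes.
        state = self.__dict__.copy()
        for name in ('encoder', 'generator', 'critic_x', 'critic_z'):
            model = state.pop(name, None)
            if model is not None:
                with tempfile.NamedTemporaryFile(suffix='.hdf5') as fd:
                    keras.models.save_model(model, fd.name, overwrite=True)
                    state[name + '_bytes'] = fd.read()

        return state

    def __setstate__(self, state):
        # Rebuild the Keras models from the serialized bytes.
        for name in ('encoder', 'generator', 'critic_x', 'critic_z'):
            data = state.pop(name + '_bytes', None)
            if data is not None:
                with tempfile.NamedTemporaryFile(suffix='.hdf5') as fd:
                    fd.write(data)
                    fd.flush()
                    state[name] = keras.models.load_model(fd.name)

        self.__dict__.update(state)
```

Note that the load_model calls for the critics would still hit hurdle (2) below unless custom_objects is passed.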

Three hurdles to consider:
(1) how to save the optimizer TadGAN.optimizer.
(2) the interpolation layer RandomWeightedAverage is not recognized and raises ValueError: Unknown layer: RandomWeightedAverage when the pipeline is loaded again.
(3) similarly, the custom loss functions are not recognized.

To solve this, we can specify custom_objects in keras.load_model; however, the architecture as currently written overrides the gradient_penalty loss for critic_x and critic_z.
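
For reference, a rough sketch of what passing custom_objects could look like; the import path and loss name below are assumptions, not the exact Orion internals:

```python
from keras.models import load_model

# Assumed import location and loss name -- illustrative only.
from orion.primitives.tadgan import RandomWeightedAverage, wasserstein_loss

critic_x = load_model(
    'critic_x.hdf5',
    custom_objects={
        'RandomWeightedAverage': RandomWeightedAverage,
        'wasserstein_loss': wasserstein_loss,
    },
)
```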

The current workaround is to use the TadGAN pipeline for predictions only. In the new version of TadGAN (#161), I will make sure that continuing to train the pipeline is a supported feature.

@sarahmish sarahmish self-assigned this Mar 2, 2021
@sarahmish sarahmish added the bug Something isn't working label Mar 2, 2021
@sarahmish sarahmish added this to the 0.1.6 milestone Mar 3, 2021