
Fix for mrDMD division by zero error during reconstruction #122

Closed
rfmiotto opened this issue Mar 2, 2020 · 2 comments
@rfmiotto
Contributor

rfmiotto commented Mar 2, 2020

Dear maintainers,

I am using the PyDMD package to analyze data from my research in fluid dynamics. In particular, with the multi-resolution DMD (mrDMD), I was getting a zero-division error when reconstructing the flow field. This issue has been reported by other users (see #81, for example).

[screenshot: ZeroDivisionError traceback raised during the mrDMD reconstruction]

Debugging

I noticed that the problem was coming from the exception handling in the fit method. That is, in the current implementation, if the number of resolution levels (a user-defined parameter) is large, the fit method raises an error. To work around this, a try-except approach was used, and the eigenvalues, DMD modes, etc. were set to zero whenever the exception was raised. These null values were causing the problem in the reconstruction.
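For concreteness, here is a purely illustrative sketch (not the actual PyDMD code path) of how a bin whose quantities were zero-filled can surface as a ZeroDivisionError once the reconstruction touches it:

import numpy as np

# Purely illustrative, not the PyDMD internals: pretend this bin's fit raised
# and its DMD quantities were silently replaced by zeros.
eigs = np.zeros(4)

try:
    # A plain Python division by a scalar derived from the zero-filled bin
    # reproduces the kind of error shown in the traceback above.
    scaling = 1.0 / float(np.abs(eigs).sum())
except ZeroDivisionError as err:
    print('ZeroDivisionError:', err)  # float division by zero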

I solved this problem with a small refactor of the fit method, and I would like to open a pull request with the solution.

Moreover, I noticed that at the end of the fit method the dmd_time and original_time dictionaries were being overwritten. It seemed to me that this is because the mrDMD cannot handle predictions yet (as mentioned in #63), so dmd_time was forced to be equal to original_time.

Finally, a MemoryError exception was added to handle the reconstruction of solutions built from large matrices with many resolution levels.

Possible future contributions

In my research, I am interested in isolating a given physical phenomenon that occurs over a specific time interval. Since the dynamics have different spatiotemporal scales, the mrDMD came in handy. I had to implement additional methods in the MrDMD class in order to extract the modes related to the isolated temporal phenomenon. I believe this might help other users, so I would be glad to contribute it to the main source. Unit tests for these new methods will also come along.

Please let me know what you think of it.

Best regards,
Renato Miotto

@ndem0
Member

ndem0 commented Mar 3, 2020

Dear @rfmiotto, thank you for the effort! We'll check the Merge Request as soon as possible.

Regarding your doubts:

  • the dictionary initialization is performed for all the DMD classes, only to initialize the attributes (nothing related to the MrDMD prediction).
  • if I remember correctly, the try:...except lines were added in order to avoid errors that occur when the step variable is too large to fit in a single window. Better (and cleaner) solutions are very welcome!

@rfmiotto
Contributor Author

rfmiotto commented Mar 3, 2020

Dear @ndem0,

Thank you for clarifying.

But is it then fair to say that the mrDMD cannot handle predictions yet, since dmd_time is just a copy of original_time?


The solution that I propose in this PR is to limit the maximum level such that the bins at the topmost level are composed of at least 4 snapshots.

# Redefine `max_level` if it is too big: cap it so that each bin at the
# topmost level still contains at least 4 snapshots.
lvl_threshold = int(np.log(self._snapshots.shape[1] / 4.) / np.log(2.))
if self.max_level > lvl_threshold:
    self.max_level = lvl_threshold
    print('Too many levels. Redefining `max_level` to {}'.format(self.max_level))
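For concreteness, a quick (hypothetical) check of the threshold formula with 100 snapshots: the cap evaluates to 4, and every bin at level 4 still holds at least 4 snapshots, whereas level 5 would not:

import numpy as np

n_snapshots = 100                              # hypothetical number of snapshots
lvl_threshold = int(np.log(n_snapshots / 4.) / np.log(2.))
print(lvl_threshold)                           # 4
print(n_snapshots // 2**lvl_threshold)         # 6  snapshots per bin (>= 4)
print(n_snapshots // 2**(lvl_threshold + 1))   # 3  too few, hence the cap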

Also, I ensured that the minimum admissible step is 1, since a slice step cannot be zero:
step = max(1, int(np.floor(old_div(n_samples, nyq))))

The reason for doing that is the following (and please correct me if I am wrong):

In the original code (i.e., before this proposed commit), as we go to higher resolution levels, n_samples decreases to the point where n_samples/nyq becomes < 1. At that point the step variable becomes zero, causing a ValueError to be raised in Xsub = Xraw[:, ::step]. The code then falls into the exception block and sets the DMD variables (modes, eigenvalues, etc.) of that bin to zero.

This approach works around the zero slice step in Xsub = Xraw[:, ::step], but when one tries to reconstruct the dynamics, these zero-valued bins are included in the computation and cause the division by zero error.
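To make this concrete, here is a small standalone sketch with made-up numbers (256 snapshots and a hypothetical nyq of 8): the step reaches zero at the deepest levels, and the max(1, ...) guard keeps the slice valid:

import numpy as np

n_snapshots, nyq = 256, 8                    # made-up values for illustration
for level in range(7):
    n_samples = n_snapshots // 2**level      # samples available in one bin at this level
    step = int(np.floor(n_samples / nyq))    # original formula: hits 0 once n_samples < nyq
    print(level, n_samples, step, max(1, step))

Xraw = np.random.rand(3, 4)                  # a bin at the deepest level
try:
    Xraw[:, ::0]                             # a zero step is an invalid slice
except ValueError as err:
    print(err)                               # "slice step cannot be zero"
print(Xraw[:, ::max(1, 0)].shape)            # (3, 4) with the proposed guard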

To my understanding, the try/except blocks were added to solve this step problem. However, I did not quite understand what you meant by the try/except being related to the step being too large to fit in a window. For example, the code below gives no errors:

import numpy as np

a = np.arange(10)

print(a[::7])   # Out: [0 7]
print(a[::25])  # Out: [0]

Please let me know if you agree with that or if my conclusions are wrong.

Thank you and best regards,
Renato Miotto
