When using the analytic solver, fits sometimes crash with an underflow exception as follows:
This seems to happen if the pdf from the analytic solver comes to include numbers smaller than the smallest positive normal floating point number, so that when `solution.pdf_corr()` (or `pdf_err()`) divides the pdf by `dt`, the result also falls in this "subnormal" range. This is a hard error to reproduce outside of fitting, but I was able to reproduce it with the following parameter values:
I think this means underflow already occurred in computing the analytic solution, and the only reason an exception is raised here is that the loss function calls the pdf functions under `with np.errstate(all='raise'):`. So an alternative fix would be to allow the underflow here, too. I actually tried to see if I could reduce underflow in `analytic_ddm_linbound()` by taking logs and such, but it didn't seem to make much difference to the output, didn't solve this error, and caused a performance regression; that said, fixing numerical issues is not my area of expertise.
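For what it's worth, the underflow itself can be reproduced in isolation with plain NumPy. This is just a sketch, not PyDDM's actual code path, and the pdf value here is made up to be subnormal:

```python
import numpy as np

pdf = np.array([5e-315])  # made-up subnormal pdf value, standing in for solver output
dt = 0.01

# Under the loss function's errstate, dividing a subnormal by dt raises,
# because the result (5e-313) is still subnormal.
try:
    with np.errstate(all='raise'):
        pdf / dt
except FloatingPointError:
    print("underflow raised")

# The "allow the underflow" alternative: ignore underflow locally, so the
# result is kept as a subnormal (or flushed to zero) instead of crashing.
with np.errstate(under='ignore'):
    out = pdf / dt
```
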
More generally, I seem to get a lot of renormalization warnings when using the analytic solver on basically the same model I was solving numerically, just with a switch from exponentially to linearly collapsing bounds, so I don't know whether that means the analytic solver is simply more numerically unstable in general? This persisted after reducing `dt`, so I think it's due to extreme parameter values.

Also, the renormalization warnings fire for totals > 1.01, but while chasing the error above I noticed that the pdf can sum to less than 1 for two reasons: (1) RTs longer than `T_dur`, and (2) very fast RTs shorter than the resolution of `dt`. Of course, the former is rightly captured by `pdf_undec`. The latter seems like a potential issue, but given that it will likely only happen for extreme parameter values during fitting, which will probably give poor fits anyway, maybe it's not a major concern?
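To illustrate the accounting with synthetic RTs (not PyDDM output; the distribution here is just an exponential for demonstration), every trial falls into one of three buckets, and only the middle one can be represented by the discretized pdf:

```python
import numpy as np

# Synthetic response times standing in for model output (illustrative only).
rng = np.random.default_rng(0)
rt = rng.exponential(scale=0.3, size=100_000)

dt, T_dur = 0.01, 2.0
mass_undec  = np.mean(rt > T_dur)                  # rightly goes to pdf_undec
mass_fast   = np.mean(rt < dt)                     # faster than the dt grid can resolve
mass_in_pdf = np.mean((rt >= dt) & (rt <= T_dur))  # what the pdf can represent

# The three buckets partition all trials, so the pdf alone sums to < 1
# whenever mass_undec or mass_fast is nonzero.
print(mass_in_pdf + mass_fast + mass_undec)  # ≈ 1.0
```
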