The return of the NaN #72
Changing to

    Na2_bl = 3.0789
    Na2_mol = pyscf.M(
        atom='Na %.5f 0.0 0.0; Na %.5f 0.0 0.0' % (-Na2_bl / 2, Na2_bl / 2),
        basis='6-31G',
    )

and: [attached image]
although I have now noticed that it also occurs in: [attached image]

And it is not just the gradients which are NaN, it is also the loss itself! [attached image]
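(Not from the thread, but relevant to localizing this kind of failure: JAX has a built-in NaN-debugging flag that makes the first NaN-producing primitive raise an error instead of letting the NaN propagate into the loss and gradients.)

```python
import jax
import jax.numpy as jnp

# With this flag set, JAX checks op outputs in eager mode and raises
# FloatingPointError at the first primitive that produces a NaN.
jax.config.update("jax_debug_nans", True)

def first_nan_demo():
    try:
        # 0/0 produces a NaN, so this raises under jax_debug_nans
        _ = jnp.array(0.0) / jnp.array(0.0)
    except FloatingPointError:
        return "NaN detected"
    return "no NaN"
```

Running the training loop with this flag enabled should point at the exact operation (clip, sqrt, eigh, ...) where the NaN is first born.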
@jackbaker1001 I can't replicate the issue with the above molecule in the 4th basic example script. The only change I made was including the molecule definition above in the [referenced file].
If you made clipping changes which resolved the NaNs here, @PabloAMC, can you push them to the branch so I can test on my end?
@jackbaker1001 please test the new branch I created: https://github.com/XanaduAI/GradDFT/tree/72b-the-return-of-the-nan. The only change is lowering the clip constant to 1e-25. I tested a few extra basis sets, and they seem to work fine.
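(For context on why the clip constant matters: many functional terms involve sqrt/log/power expressions whose gradients blow up as the density approaches zero, so the density is clipped away from zero first. A minimal sketch, where the constant name and functions are illustrative rather than GradDFT's actual code:)

```python
import jax
import jax.numpy as jnp

CLIP_CST = 1e-25  # illustrative clip constant, mirroring the lowered value

def unsafe_term(rho):
    # d/d(rho) sqrt(rho) = 1 / (2 sqrt(rho)) -> inf as rho -> 0
    return jnp.sum(jnp.sqrt(rho))

def clipped_term(rho):
    # clipping away from zero keeps the gradient finite:
    # below CLIP_CST the clip has zero derivative, so no inf appears
    return jnp.sum(jnp.sqrt(jnp.clip(rho, CLIP_CST)))

rho = jnp.array([0.0, 1e-3, 1.0])
g_unsafe = jax.grad(unsafe_term)(rho)    # contains inf at rho = 0
g_clipped = jax.grad(clipped_term)(rho)  # finite everywhere
```

The trade-off is that the clip constant must be small enough not to bias the energy, which is presumably why lowering it to 1e-25 was tried.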
@PabloAMC I still get NaNs on this branch running the self-consistent training in the notebook. Can you tell me if this happens in your local install, please?
Did lowering the clipping constant help? I can't replicate the issue on my end with the current one. |
I've noticed that for a few basis sets, NaN gradients are appearing again when training with the DIIS SCF loops, but not with the linear mixing loops. I think this is likely because degenerate eigenvectors/eigenvalues are being encountered in the calls to the jnp.linalg.eigh routine. We can try switching these out for our custom safe_eigh, which will hopefully fix the bug.
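(For reference: the degeneracy problem arises because the derivative of jnp.linalg.eigh contains 1 / (w_j - w_i) terms, which are infinite when eigenvalues coincide and can turn into NaNs under reverse-mode AD. One common workaround is to break exact degeneracies with a tiny deterministic diagonal shift; the sketch below is an illustration of that idea, not GradDFT's actual safe_eigh implementation.)

```python
import jax
import jax.numpy as jnp

def safe_eigh(a, eps=1e-6):
    """Break exact eigenvalue degeneracies before calling jnp.linalg.eigh.

    A tiny, index-dependent diagonal shift separates coincident eigenvalues
    so the 1 / (w_j - w_i) gap terms in the derivative stay finite.
    Illustrative only; eps trades a small eigenvalue bias for AD stability.
    """
    n = a.shape[-1]
    shift = eps * jnp.arange(n, dtype=a.dtype)
    return jnp.linalg.eigh(a + jnp.diag(shift))

def smallest_eigval(a):
    w, _ = safe_eigh(a)
    return w[0]

# The identity matrix has a fully degenerate spectrum, a case where plain
# jnp.linalg.eigh gradients can go NaN; the shifted version stays finite.
g = jax.grad(smallest_eigval)(jnp.eye(3))
```

A perturbation like this changes the eigenvalues by at most `eps * (n - 1)`, so it needs to be kept well below chemical accuracy for the energies involved.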