use findif for ccsd(t) conv gradients most of time #2943
Conversation
I wonder if the syntax ``qc_module="ccenergy"`` is confusing, since that's the name of the CC energy code, but here it signals that gradients should be computed analytically. I'm not sure what I'd prefer, but I don't think this will be obvious to most users.
I agree there's not a great name for the cc suite. The only alternative I've seen is ...
OK, then let's go ahead.
@@ -242,6 +242,7 @@ def extract_modules(winnowed):
("cisd", None): (", ci\ *n*", "Arbitrary-order *n* through DETCI is inefficient byproduct of CI", ["fnocc"]),
("zapt2", None): (", zapt\ *n*", "Arbitrary-order *n* through DETCI is inefficient byproduct of CI", None),
("mp4", None): (", mp\ *n*", "Arbitrary-order *n* through DETCI is inefficient byproduct of CI", ["fnocc"]),
("ccsd(t)", "CCENERGY"): ("FN", "Analytic gradients for conventional all-electron RHF/UHF computations can be requested through |globals__qc_module|\ ``=ccenergy``, but their scaling is best suited to small molecules.", None),
When I see the word "scaling" in this context I think of CPU time, not RAM requirements. If the main problem with the current implementation is the amount of RAM required, I think something along the lines of "but the amount of RAM required scales steeply with system size" would get that point across better.
Memory is what the output files show running out of. But if I recall old conversations correctly, it's N^9 scaling, not the expected N^8.
I don't think it's N^8 or N^9. It should be N^7, but the prefactor is terrible in my implementation.
eek, yes, N^8 instead of N^7 is what I meant to write. Good to know that the issue is actually prefactor.
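To make the prefactor-vs-scaling distinction above concrete, here is a toy cost model (illustrative only; the constants are invented, not measured from ccenergy): a large prefactor on a formally better-scaling N^7 algorithm can make it slower than an N^8 one until the system is quite large.

```python
def cost_n7(n, prefactor=100.0):
    """Hypothetical cost of an N^7 algorithm with a large prefactor."""
    return prefactor * n**7

def cost_n8(n, prefactor=1.0):
    """Hypothetical cost of an N^8 algorithm with a small prefactor."""
    return prefactor * n**8

# With these made-up prefactors the crossover is at n = 100:
# below it, the "better-scaling" N^7 code is actually the slower one.
for n in (10, 100, 1000):
    print(n, cost_n7(n), cost_n8(n))
```

This is why observed timings alone can suggest a higher formal scaling than the algorithm actually has.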
If anyone wants to clarify the message, the document_capabilities.py file is the single source. Any edits can get propagated to the tables in a future docs build.
Description
@lothian, the primary files to look at are procedures/proc.py, cc.rst, preview_capabilities_ccenergy.rst, and (for example) cc13b/input.dat
User API & Changelog headlines
set qc_module ccenergy
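For illustration, a minimal Psi4 input requesting the analytic code path might look like the sketch below. The geometry, basis, and molecule name are placeholders, not taken from this PR; the cc13b/input.dat test case mentioned below is the authoritative example.

```
# Hypothetical minimal input; geometry and basis are placeholders.
molecule h2o {
O
H 1 0.96
H 1 0.96 2 104.5
}

set basis cc-pvdz
set reference rhf
set qc_module ccenergy   # route conventional CCSD(T) to the analytic-gradient code

gradient('ccsd(t)')      # without qc_module set, finite difference (findif) is used
```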
Dev notes & details
qc_module=ccenergy must be set explicitly.

Checklist
closes #2913
Status