-
Notifications
You must be signed in to change notification settings - Fork 23
New issue
Have a question about this project? Sign up for a free GitHub account to open an issue and contact its maintainers and the community.
By clicking “Sign up for GitHub”, you agree to our terms of service and privacy statement. We’ll occasionally send you account related emails.
Already on GitHub? Sign in to your account
DMRG-SCF error with pyscf+block2 #7
Comments
Thanks for finding this issue. Unfortunately I cannot reproduce this error so this may be environment dependent. Probably you can try whether adding However, even if you can make a single DMRG calculation run this way, the CASSCF iteration will not work because the output format of To do DMRGSCF with
Notice that the second line is different from the This |
Many thanks to your kind reply. I successfully run the DMRGSCF with your script. Is it true that currently I cannot perform a large DMRGSCF with Pyscf/block2? |
Thanks for the feedback. You can leave this issue open and I will let you know when a more efficient At the meantime, I am also interested in understanding why there is a
Is this the same error as Also, with |
You are exactly right. Indeed I made a mistake that I install both block2 and block2-mpi. It seems that I have always used block2. After adjustment I still meet with problems with my previous script. I don't know well on how mkl works. I can tell you some test results. (I) for script:
I just install block2, pip list is as follows:
It calls (II) Then I
then run the scripts like
In this case, block2 does not work. It terminates with
At this time, |
Thanks for your detailed response. So now there are two separate problems (I) and (II). For (I), this is using The reason for the MKL error is that, the non-mpi For (II), this is likely due to the wrong openmpi version. The |
Sorry for my late reply.
and did not terminate. I don't know whether it applied openmpi correctly. However, the size of tmp files seems to change which makes me believe that something is gong on. By the way, if I want to mannually install block2 with cmake 3.17.0. After
I get errors with lots of
Could you give me some guide on it? Thank you for your patience. |
Thanks for your test and try. Yes manually install an openmpi 4.0.6 is a good way. So now there are two new problems. (1) Since now you have at least two versions of mpi installed in your system, you need to be careful to check whether (2) The error is because your C++ compiler version (gcc/g++) is too old. We need roughly at least g++ 7.3.0. When you run cmake, it will print the compiler version. You can get a more recent g++ from conda (or you may use |
So I just follow the simple version of your guide,
and directly went
The
It could run normally under |
(1) The 'std::bad_alloc' error may indicate that the memory is not enough. The default memory is 2GB. When you run You can use
To change the memory for each instance to 1 GB. (2) If you think (1) is not the problem, you can attach your (3) The block2main code is updated now to version 0.4.12. So now if you (4) For optimal performance for larger systems, you should (a) set |
Thanks a lot! dmrg.conf.txt (2)It's nice for me to know these updates. After solving the problem I would like to try some practical examples as soon as possible with block2. |
Thanks for attaching the files. I can see the problem now. In your attached The reason for the error should be the problem in The solution to the The source of all these problems is the fact that you have multiple mpi versions installed in your system. When this happens, we need to make sure that (1) |
Thanks for your great guide! (1)Now I can run it. In fact I found I didn't have a Thanks again for guiding me so much! |
It is great that this is eventually solved. Since the issue that we discussed here may also help other potential users, I updated the The NEVPT2 implemented in |
@liquidmoons Hi again, recently I did some tests on using block2 as the DMRGSCF solver. I found a problem related to the orbital reordering after restarting which can sometimes cause the energy increase during the DMRGSCF. It does not affect the standalone DMRG calculation. To fix this you can update to the most recent version (v0.4.14) of block2 if you use pip ( I also added a page in block2 documentation for DMRGSCF (including an example of state average of mixed spin states): https://block2.readthedocs.io/en/latest/user/dmrg-scf.html Finally, the README page of |
Thanks for your reminder. And here is an additional question: |
The two solvers will not run simultaneously. Maybe I should change |
Hi All,
I am willing to use block2 as a FCI solver in DMRG-SCF scheme with Pyscf 2.0.1. Both packages had been installed via pip and can run some examples.
To build the connection between Pyscf and block2, I manually added a script settings.py, as:
import os
from pyscf import lib
So now I can correctly do 'from pyscf import dmrgscf'. I tried to run an example:
But it met with error termination. The error start from:
Intel MKL ERROR: Parameter 5 was incorrect on entry to DGEMM .Intel MKL ERROR: Parameter 5 was incorrect on entry to DGEMM .
If I remove dmrgscf.settings.MPIPREFIX = 'mpirun -n 3'. It also terminates with error.
If I use
then I can run with 'block2main dmrg.conf' normally. Can you give any clues?
Thanks in advance!
yunshu
The text was updated successfully, but these errors were encountered: