GlobalVector() applied to an eigenvector #59
Comments
Hello, Mikhail,

I'm curious in what way this fails, but more to the point, I do see a couple of things I would change. First, if you are using MPI_COMM_SELF, then every process performs its own separate eigenvalue solve, so each process already has global eigenvectors. This means that the GlobalVector() call is not actually necessary: that call allocates a new vector and populates it with data from all processors, so that each processor ends up with a copy of the full vector. Also, since this call allocates a new Vector, technically you should free y after copying it to x (assuming you don't simply eliminate y, since it doesn't seem to be needed).

Did you try the direct assignment sketched below?

I hope this is useful,
Mark
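A sketch of the suggested alternative, using the same names (modes, lobpcg, i) as in the snippet under discussion; it copies the eigenvector directly, so there is no temporary vector to free:

```cpp
Vector x;
modes.GetColumnReference(i, x);   // x aliases column i of 'modes'
x = lobpcg->GetEigenvector(i);    // copy eigenvector i into that column
```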
Hello Mark,

Thank you for your reply. I use MPI parallelism on a coarser scale, i.e., I need to solve many eigenproblems, each one within its own subdomain. The subdomains, nevertheless, can consist of many cells, so the matrices can be quite large and sparse. Therefore, I'd like to use LOBPCG to solve the eigenproblem. The issue is that this solver is implemented for the parallel setup only. So, even though I'd like to solve each eigenproblem with one thread, i.e., sequentially, I have to pretend I solve it in parallel to use LOBPCG.

You're right about the deallocation of the vector y; I had it in the code some time ago :) And, finally, thank you for the suggestion on how to fix the issue - that works great!

Cheers,
Mikhail

P.S. It would still be helpful to know why calling GlobalVector() fails. From what I understand, it's not prohibited to create a full vector even when a single process already holds the whole vector. It's clearly not necessary, and not efficient, but it shouldn't be harmful. What I saw, though, were segfaults and memory-corruption messages, so I'm curious why.
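A minimal sketch of this "pretend parallel" setup, following the solver calls in ex11p.cpp with the communicator swapped to MPI_COMM_SELF (A and M are assumed to be the stiffness and mass HypreParMatrix objects built per subdomain, and nev the number of modes, as in that example):

```cpp
// Each process builds its matrices on MPI_COMM_SELF and runs an
// independent LOBPCG solve, mirroring ex11p.cpp.
HypreBoomerAMG *amg = new HypreBoomerAMG(*A);
amg->SetPrintLevel(0);

HypreLOBPCG *lobpcg = new HypreLOBPCG(MPI_COMM_SELF);
lobpcg->SetNumModes(nev);
lobpcg->SetPreconditioner(*amg);
lobpcg->SetMaxIter(200);
lobpcg->SetTol(1e-8);
lobpcg->SetPrintLevel(1);
lobpcg->SetMassMatrix(*M);
lobpcg->SetOperator(*A);
lobpcg->Solve();
```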
Hello Mark,

In fact, it doesn't - I checked that there were no segfaults or other nasty stuff, but this approach doesn't actually fill the matrix:

```cpp
Vector x;
modes.GetColumnReference(i, x);
x = lobpcg->GetEigenvector(i);
```

The reason is because ...

Any suggestions?

Thanks,
Mikhail
Hi, Mikhail,

I found an error in how the eigenvectors were being allocated. In the file "linalg/hypre.cpp", the line "part[1]++;" in the HypreLOBPCG::SetOperator() method should be removed. I've made the change in our local version, but I'll need to verify a few things before changing the GitHub version.

Thanks for pointing this out. Have a nice day,
Mark
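For reference, the offending line in isolation (only this single statement is quoted in the thread; the rest of the method body is omitted here):

```cpp
// linalg/hypre.cpp, HypreLOBPCG::SetOperator():
// per the comment above, this increment caused the error in how the
// eigenvectors were being allocated, and it is the line to remove.
part[1]++;
```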
Hi Mark,

Now it works fine. Thank you for fixing that!

Cheers,
Mikhail
Hi Mikhail,

Can we mark this issue as resolved?

Cheers,
Mark
Yes, of course. I just thought you'd close it after updating the GitHub repository, but it is certainly resolved.

Cheers,
Mikhail
Dear All,
I'm trying to compute the eigenvectors of the problem -Delta u = lambda u with homogeneous Dirichlet boundary conditions, following example 11 (ex11p.cpp). There are only two differences from the example; the relevant one here is that I use MPI_COMM_SELF instead of MPI_COMM_WORLD, so that each process solves its own eigenproblem.
There are no problems with MPI_COMM_SELF as such, but when I retrieve the eigenvectors, something goes wrong. Here is my addition to ex11p.cpp, right after the eigenvalue problem is solved:
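A sketch of the addition, reconstructed from the replies above (modes is assumed to be a DenseMatrix with one eigenvector per column):

```cpp
for (int i = 0; i < nev; i++)
{
   Vector x;
   modes.GetColumnReference(i, x);  // x aliases column i of 'modes'
   // GlobalVector() allocates a new serial Vector holding a copy of
   // the full parallel vector on every process.
   Vector *y = lobpcg->GetEigenvector(i).GlobalVector();
   x = *y;  // copy the gathered eigenvector into the column
   // (note: y is never freed here, as pointed out in the first reply)
}
```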
This doesn't work for me, however. I suspect there is something wrong in GlobalVector(), which is why I titled the issue that way. But, of course, it may be that I'm doing something wrong.
Any advice is highly appreciated.
Thanks,
Mikhail