
GlobalVector() applied to an eigenvector #59

Closed
martemyev opened this issue May 9, 2016 · 7 comments

@martemyev
Contributor

martemyev commented May 9, 2016

Dear All,

I'm trying to compute the eigenvectors of the problem -Delta u = lambda u with homogeneous Dirichlet boundary conditions, following example 11 (ex11p.cpp). There are only 2 differences:

  1. I compute everything on a single process, so instead of MPI_COMM_WORLD I use MPI_COMM_SELF.
  2. I also need to put the eigenvectors all together in a dense matrix, so that each column of the matrix is an eigenvector.

There are no problems with MPI_COMM_SELF, but when I retrieve the eigenvectors, something goes wrong. Here is my addition to ex11p.cpp, right after the eigenvalue problem is solved:

DenseMatrix modes(fespace->GetVSize(), nev);
for (int i = 0; i < nev; ++i) {
  const Vector *y = lobpcg->GetEigenvector(i).GlobalVector();
  Vector x;
  modes.GetColumnReference(i, x); // x aliases column i of modes
  x = *y;
}

This doesn't work for me, however. I suspect the problem is in GlobalVector(), which is why I titled the issue that way. But, of course, it may be that I'm doing something wrong.

Any advice is highly appreciated.

Thanks,
Mikhail

@tzanio tzanio added the usage label May 9, 2016
@mlstowell
Member

Hello, Mikhail,

I'm curious in what way this fails, but, more to the point, I do see a couple of things I would change.

First, if you are using MPI_COMM_SELF, then every process is performing its own separate eigenvalue solve, so each one already has global eigenvectors. This means that the GlobalVector() call is not actually necessary. That call allocates a new vector and populates it with data from all processors, so that each processor has a copy of the full vector. Also, since it allocates a new Vector, technically you should free y after copying it to x (assuming you don't simply eliminate y, since it doesn't seem to be needed).

Did you try:

Vector x;
modes.GetColumnReference(i, x);
x = lobpcg->GetEigenvector(i);
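
A minimal sketch of the full loop with this suggestion applied (assuming the ex11p.cpp context from the original post; on MPI_COMM_SELF each process already owns the entire eigenvector, so neither GlobalVector() nor a matching delete is needed):

DenseMatrix modes(fespace->GetVSize(), nev);
for (int i = 0; i < nev; ++i) {
  Vector x;
  modes.GetColumnReference(i, x);  // x aliases column i of modes
  x = lobpcg->GetEigenvector(i);   // copy the eigenvector into the column
}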

I hope this is useful,
Mark

@martemyev
Contributor Author

martemyev commented May 10, 2016

Hello Mark,

Thank you for your reply. I use MPI parallelism on a coarser scale, i.e. I need to solve many eigenproblems - each one within its own subdomain. The subdomains can nevertheless consist of many cells, so the matrices can be quite large and sparse. Therefore, I'd like to use LOBPCG to solve each eigenproblem. The issue is that this solver is implemented only for the parallel setting. So, even though I'd like to solve each eigenproblem on a single process, i.e. sequentially, I have to pretend I solve it in parallel in order to use LOBPCG.

You're right about the deallocation of the vector y, I had it in the code some time ago :)

And, finally, thank you for the suggestion on how to fix the issue - that works great!

Cheers,
Mikhail

P.S. But it would be helpful to know why calling GlobalVector() fails. From what I understand it's not prohibited to create a full vector even though only one thread had the whole vector. I mean, it's clearly not necessary, and not efficient, but shouldn't be harmful. But what I saw was segfaults and memory corruption messages, so I'm curious why.

@martemyev
Contributor Author

Hello Mark,

> And, finally, thank you for the suggestion on how to fix the issue - that works great!

In fact, it doesn't - I checked that there are no segfaults or other nasty things, but this approach doesn't actually fill the matrix:

Vector x;
modes.GetColumnReference(i, x);
x = lobpcg->GetEigenvector(i);

The reason is that lobpcg->GetEigenvector(i) returns a vector of size x.Size() + 1, i.e. one bigger than x, so when x is assigned from lobpcg->GetEigenvector(i), it is reallocated and no longer connected to the modes matrix.
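
A minimal sketch illustrating this mechanism (my reading of MFEM's Vector::operator=, which resizes the destination when the sizes differ):

#include "mfem.hpp"
using namespace mfem;

int main()
{
   DenseMatrix modes(4, 2);          // 4 rows, 2 columns
   Vector x;
   modes.GetColumnReference(0, x);   // x points directly at column 0's data
   Vector bigger(5);                 // one entry longer than the column
   bigger = 1.0;
   x = bigger;                       // size mismatch: x reallocates and
                                     // detaches from modes, so the matrix
                                     // is never written
   return 0;
}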

Any suggestions?

Thanks,
Mikhail

@mlstowell
Member

Hi, Mikhail,

I found an error in how the eigenvectors were being allocated. In the file "linalg/hypre.cpp", the line "part[1]++;" in the HypreLOBPCG::SetOperator() method should be removed.
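
If it helps to see why this produced vectors that were one entry too long, here is my gloss on the hypre convention (a sketch, not quoted from the source; n stands for the true global size):

// In hypre, a row partition stores half-open ownership ranges.
// In the one-process case:
HYPRE_Int part[2] = { 0, n };   // this rank owns rows [0, n): local length n
// The stray "part[1]++;" effectively turned this into { 0, n + 1 }, so the
// eigenvectors were allocated with n + 1 entries - one more than the true
// problem size, which matches the size mismatch reported above.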

I've made the change in our local version but I'll need to verify a few things before changing the GitHub version.

Thanks for pointing this out. Have a nice day,
Mark

@martemyev
Contributor Author

Hi Mark,

Now it works fine. Thank you for fixing that!

Cheers,
Mikhail

@tzanio
Member

tzanio commented May 25, 2016

Hi Mikhail,

Can we mark this issue as resolved?

Cheers,
Tzanio

@martemyev
Contributor Author

Yes, of course. I just thought you'd close it after updating the GitHub repository, but it is certainly resolved.

Cheers,
Mikhail

@tzanio tzanio closed this as completed May 25, 2016
@tzanio tzanio added the linalg label Feb 21, 2017