
Unscalable ASM? #21

Closed
amneetb opened this issue Jul 13, 2015 · 13 comments

Comments

@amneetb
Contributor

amneetb commented Jul 13, 2015

Ref:

TBOX_ASSERT(num_subdomains > 0);

This seems like an unscalable approach. All processors must own at least one patch on a particular level.
This is not guaranteed in SAMRAI(?).

@knepley Is there a workaround in PETSc?

@amneetb changed the title from "Unscalable ASM" to "Unscalable ASM?" on Jul 13, 2015
@knepley
Contributor

knepley commented Jul 13, 2015


When would you have processors with no work? I could see it if you are talking about some level of multigrid where we want to neck down the process set.

Thanks,

 Matt



@amneetb
Contributor Author

amneetb commented Jul 13, 2015

Yes, this is for MG preconditioning. Basically, we are forming level solvers as MG smoothers. In SAMRAI, some processors can be devoid of patches on some levels, which will trigger this error.

@boyceg
Contributor

boyceg commented Jul 13, 2015


To be clear: this only happens on coarser levels.

Matt, can we add in "dummy" subdomains with empty index sets in this case? Or do we need to make sub-communicators that include only MPI ranks with active DOFs?

@knepley
Contributor

knepley commented Jul 13, 2015


Ideally one day we want subcommunicators, but that is currently not easy to make work. Thus I think empty index sets are the best. I assume PETSc is choking on empties? I can fix that. It should not.

Thanks,

Matt


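For context, the sub-communicator alternative mentioned above would look roughly like the following sketch (hypothetical, not the IBAMR code; the has_local_dofs flag is assumed to be computed from the SAMRAI level):

    #include <mpi.h>

    /* Sketch: split off a sub-communicator containing only the ranks that own
     * DOFs on a given level. Inactive ranks pass MPI_UNDEFINED as the color,
     * get MPI_COMM_NULL back, and would skip the level solver entirely. */
    static MPI_Comm make_active_comm(MPI_Comm comm, int has_local_dofs)
    {
        int      rank;
        MPI_Comm active_comm;
        MPI_Comm_rank(comm, &rank);
        MPI_Comm_split(comm,
                       has_local_dofs ? 0 : MPI_UNDEFINED, /* color */
                       rank,                               /* key: keep rank order */
                       &active_comm);
        return active_comm; /* MPI_COMM_NULL on ranks without DOFs */
    }

As Matt says, plumbing such a communicator through the solver hierarchy is the hard part, which is why the thread pursues empty index sets instead.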

@boyceg
Contributor

boyceg commented Jul 13, 2015


I don't think that Amneet has sent in empty IS'es; he just noticed that PETSc chokes if there are no subdomains.

@amneetb, can you set up the code to pass "dummy" empty IS'es for cases in which there are no local DOFs?

@amneetb
Contributor Author

amneetb commented Jul 13, 2015

Sure.

@amneetb
Contributor Author

amneetb commented Jul 13, 2015

Adding code like:


        int num_subdomains = static_cast<int>(d_overlap_is.size());
        if (num_subdomains == 0)
        {
            // Create "dummy" index sets on ranks that own no patches. Note
            // that these ISes are typeless: ISCreate() does not set a type.
            d_overlap_is.resize(1);
            ierr = ISCreate(PETSC_COMM_SELF, &d_overlap_is[0]);
            IBTK_CHKERRQ(ierr);

            d_nonoverlap_is.resize(1);
            ierr = ISCreate(PETSC_COMM_SELF, &d_nonoverlap_is[0]);
            IBTK_CHKERRQ(ierr);
        }
        // num_subdomains is still 0 here on patchless ranks.
        ierr = PCASMSetLocalSubdomains(ksp_pc, num_subdomains, &d_overlap_is[0], &d_nonoverlap_is[0]);
        IBTK_CHKERRQ(ierr);


Gives PETSc error:

[4]PETSC ERROR: Argument out of range
[4]PETSC ERROR: Each process must have 1 or more blocks, n = 0
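Worth noting: num_subdomains is computed before the dummy index sets are created and is never updated, so a patchless rank still passes n = 0 to PCASMSetLocalSubdomains, and that is exactly the n the error message reports. A hypothetical sketch of the count fix (the later comments show the typeless ISes then fail elsewhere):

    int num_subdomains = static_cast<int>(d_overlap_is.size());
    if (num_subdomains == 0)
    {
        // Report one (empty) block instead of zero so the "1 or more blocks"
        // check in PCASMSetLocalSubdomains passes.
        num_subdomains = 1;
        d_overlap_is.resize(1);
        ierr = ISCreate(PETSC_COMM_SELF, &d_overlap_is[0]);
        IBTK_CHKERRQ(ierr);
        d_nonoverlap_is.resize(1);
        ierr = ISCreate(PETSC_COMM_SELF, &d_nonoverlap_is[0]);
        IBTK_CHKERRQ(ierr);
    }
    ierr = PCASMSetLocalSubdomains(ksp_pc, num_subdomains, &d_overlap_is[0], &d_nonoverlap_is[0]);
    IBTK_CHKERRQ(ierr);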

@knepley
Contributor

knepley commented Jul 13, 2015


This is definitely a silly requirement that I can eliminate. However, can you give it one block with an empty IS to see if that works?

Thanks,

Matt



@amneetb
Contributor Author

amneetb commented Jul 14, 2015

Runs but gives a seg fault...

Program received signal EXC_BAD_ACCESS, Could not access memory.
Reason: KERN_INVALID_ADDRESS at address: 0x0000000000000000
0x0000000000000000 in ?? ()
(gdb) where
#0 0x0000000000000000 in ?? ()
#1 0x000000010d83cc9a in ISSort (is=0x7f8fca203660) at index.c:831
#2 0x000000010e5cf3a8 in PCSetUp_ASM (pc=0x7f8fca931a60) at asm.c:257
#3 0x000000010e65e1d5 in PCSetUp (pc=0x7f8fca931a60) at precon.c:982
#4 0x000000010e75d5bd in KSPSetUp (ksp=0x7f8fca1ee260) at itfunc.c:332
#5 0x000000010e75fac9 in KSPSolve (ksp=0x7f8fca1ee260, b=0x7f8fca1b0a60, x=0x7f8fcb1aee60) at itfunc.c:546
#6 0x0000000109ecc88e in IBTK::PETScLevelSolver::solveSystem () at PETScLevelSolver.cpp:168

@knepley
Contributor

knepley commented Jul 14, 2015

Okay, I will fix it.

Thanks,

 Matt


@knepley
Contributor

knepley commented Jul 26, 2015

I do not see the same error, and I am not sure what is different here. Can you take a look at https://bitbucket.org/petsc/petsc/branch/knepley/test-asm-empty-subdomain? This is my branch with KSP ex6, which uses both the default and by-hand versions of ASM with an empty subdomain and does not crash.

@amneetb
Contributor Author

amneetb commented Jul 29, 2015

@knepley OK; the example helps. So we changed the way the empty IS is created. Before, it was

IS is_empty;
ISCreate(PETSC_COMM_SELF, &is_empty);

Now, following your example, it is changed to

IS is_empty;
ISCreateGeneral(PETSC_COMM_SELF, 0, NULL, PETSC_OWN_POINTER, &is_empty);

This made the code in IBAMR work. You probably want to make even the first call succeed in the ASM code for the case of an empty subdomain(?).
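For reference, a minimal self-contained sketch of the working pattern (hypothetical, not the IBAMR code): every rank hands PCASMSetLocalSubdomains at least one block, using a typed zero-length IS on ranks with no local DOFs. The same empty IS is reused for both the overlapping and non-overlapping lists purely for brevity:

    #include <petscksp.h>

    int main(int argc, char **argv)
    {
        KSP            ksp;
        PC             pc;
        IS             is_empty;
        PetscErrorCode ierr;

        ierr = PetscInitialize(&argc, &argv, NULL, NULL); if (ierr) return ierr;

        /* A typed, zero-length IS: usable by ISSort() etc., unlike a bare ISCreate(). */
        ierr = ISCreateGeneral(PETSC_COMM_SELF, 0, NULL, PETSC_OWN_POINTER, &is_empty); CHKERRQ(ierr);

        ierr = KSPCreate(PETSC_COMM_WORLD, &ksp); CHKERRQ(ierr);
        ierr = KSPGetPC(ksp, &pc); CHKERRQ(ierr);
        ierr = PCSetType(pc, PCASM); CHKERRQ(ierr);

        /* One block per rank; on a patchless rank both index sets are empty. */
        ierr = PCASMSetLocalSubdomains(pc, 1, &is_empty, &is_empty); CHKERRQ(ierr);

        ierr = ISDestroy(&is_empty); CHKERRQ(ierr);
        ierr = KSPDestroy(&ksp); CHKERRQ(ierr);
        ierr = PetscFinalize();
        return ierr;
    }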

@knepley
Contributor

knepley commented Jul 30, 2015


This case is slightly different. *Create() creates a PETSc object container, but until the type is set, it has no virtual table, so calls to any member function fail. I think we should require an actual object here.

Thanks,

 Matt


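To illustrate the distinction Matt describes: an IS from ISCreate() alone is a container without a type, so type-dispatched calls such as the ISSort() in the backtrace above go through an unset function table. A sketch of the two ways to end up with a usable empty IS (the ISSetType/ISGeneralSetIndices pair is assumed here to be equivalent to the one-call form):

    IS             is;
    PetscErrorCode ierr;

    /* Container only: no type yet, so operations like ISSort() crash. */
    ierr = ISCreate(PETSC_COMM_SELF, &is); CHKERRQ(ierr);

    /* Give it a concrete type and a zero-length index list... */
    ierr = ISSetType(is, ISGENERAL); CHKERRQ(ierr);
    ierr = ISGeneralSetIndices(is, 0, NULL, PETSC_OWN_POINTER); CHKERRQ(ierr);

    /* ...or do all of the above in one call, as IBAMR now does:
     * ISCreateGeneral(PETSC_COMM_SELF, 0, NULL, PETSC_OWN_POINTER, &is); */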

@amneetb closed this as completed on Aug 1, 2015