
Allow output using a coarser grid #38

Closed
ketch opened this issue Jun 21, 2011 · 3 comments

Comments

@ketch
Member

ketch commented Jun 21, 2011

Average q over blocks and then output, to speed up output and reduce file sizes in large runs.
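
For a sense of the operation being requested: averaging q over blocks is just a reshape-and-mean on each component. A minimal NumPy sketch, assuming a single 2D field and a block size that divides the grid dimensions evenly (the function name and the 2x2 blocks are illustrative):

import numpy as np

def block_average(q, bx=2, by=2):
    """Average a 2D array over bx-by-by blocks (shape must divide evenly)."""
    mx, my = q.shape
    return q.reshape(mx // bx, bx, my // by, by).mean(axis=(1, 3))

# Example: a 4x4 field averaged down to 2x2.
q = np.arange(16.0).reshape(4, 4)
print(block_average(q))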

@ghost ghost assigned ahmadia Jun 22, 2011
@ahmadia
Member

ahmadia commented Aug 2, 2011

Notes from Manuel:

I have been working on this. First, I found that the only interpolations available in PETSc are of zeroth and first order, as one can see in [1]. I guess neither is good for post-processing purposes, but either is fine for plotting, so I used zeroth order. This interpolation only allows a refinement factor of 2; see lines 410 and 411 of [2]. This is good enough for me, and if not, I can apply it a couple of times or more.
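
Applying the coarsening more than once is a short loop. A minimal sketch, assuming the modern petsc4py class names (PETSc.DMDA here; the 65x65 size and stencil_width are illustrative):

from petsc4py import PETSc

def coarsen_times(da, n):
    """Coarsen a DMDA n times; each pass halves the resolution."""
    for _ in range(n):
        da = da.coarsen()
    return da

# Example: create a small DMDA and coarsen it twice.
da = PETSc.DMDA().create(sizes=(65, 65), stencil_width=1)
print(coarsen_times(da, 2).getSizes())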

Anyway, the only real issue I found is that the number of processors used seems to determine whether this will work or not. Let Mx and My be the number of grid cells in x and y, respectively, on the coarse grid (this is PETSc notation), and let np be the number of processors. Then Mx*My/np has to be an integer for the code not to crash, at least in my implementation (shown below).
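
A quick guard for that constraint might look like this sketch (Mx and My are placeholder values; PETSc.COMM_WORLD.getSize() gives np):

from petsc4py import PETSc

nproc = PETSc.COMM_WORLD.getSize()  # np, the number of processes
Mx, My = 128, 64                    # coarse-grid cell counts (placeholders)

# Mx*My must be divisible by np, or the coarsening crashes.
if (Mx * My) % nproc != 0:
    raise RuntimeError("Mx*My/np is not an integer; coarsened output would fail")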

I can work with this in the meantime (so no rush on my part, Aron). Indeed, since I have already done several heavy simulations that I don't want to redo, I will just run a script that reads the fine solution, coarsens it, and writes it to a different file. I guess that even for the following simulations I will follow this approach, since it may be better to have both coarsened and non-coarsened outputs.

Implementation:

# Tell the fine DA how it relates to the coarse one (factor 2 per direction).
sol.state.q_da.setRefinementFactor(refine_x=2, refine_y=2)  # 2 is the default
qda_coarsen = sol.state.q_da.coarsen()
gqVec_coarsen = qda_coarsen.createGlobalVec()
# Zeroth-order (piecewise constant) interpolation; first order is the default.
qda_coarsen.setInterpolationType(interp_type=0)
# mat interpolates coarse -> fine, so its transpose restricts fine -> coarse;
# the scale vector turns that restriction into an average.
mat, scale = qda_coarsen.getInterpolation(sol.state.q_da)
mat.multTranspose(sol.state.gqVec, gqVec_coarsen)
gqVec_coarsen = scale * gqVec_coarsen  # pointwise scaling (petsc4py Vec * Vec)

Then I do gqVec_coarsen.getArray().reshape(...), create a coarsened grid (x and y must be coarsened as well), a coarsened state, and a coarsened solution, and write it out using write_petsc().
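
A rough sketch of those remaining steps, assuming a single process (in parallel, getArray() returns only the locally owned part); the Fortran reshape order and the num_eqn attribute name are assumptions to check against pyclaw's actual layout:

mx_c, my_c = qda_coarsen.getSizes()  # global coarse-grid sizes
num_eqn = sol.state.num_eqn          # dof per cell (assumed attribute name)

# DMDA global vectors store the dofs interleaved; assuming pyclaw's
# Fortran-ordered layout, the flat array reshapes as:
q_coarse = gqVec_coarsen.getArray().reshape((num_eqn, mx_c, my_c), order='F')

# From here: build x and y dimensions with mx_c and my_c cells, a coarsened
# grid, a state with state.q = q_coarse, wrap it in a solution, and write
# that out with write_petsc().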

References:
[1] http://www.mcs.anl.gov/petsc/petsc-2/snapshots/petsc-current/docs/manualpages/DA/DASetInterpolationType.html
[2] http://www.mcs.anl.gov/petsc/petsc-2/snapshots/petsc-current/src/dm/da/src/dainterp.c.html

@ahmadia
Member

ahmadia commented Sep 30, 2012

Code exists for doing this in user space, but I'm still not sure this is something we want to roll into pyclaw as anything besides a demonstration in apps/.

Should we add some of Manuel's code that does this into the apps/ directory, perhaps even as a test?

@ahmadia ahmadia closed this as completed Sep 30, 2012
@ketch
Member Author

ketch commented Oct 21, 2012

On Sun, Sep 30, 2012 at 4:10 AM, ahmadia notifications@github.com wrote:

Code exists for doing this in user space, but I'm still not sure this is something we want to roll into pyclaw as anything besides a demonstration in apps/.

Should we add some of Manuel's code that does this into the apps/ directory, perhaps even as a test?

Yes.


