
Local Numbering #4

Closed
araeli opened this issue Nov 16, 2017 · 4 comments

Comments

@araeli

araeli commented Nov 16, 2017

Hi all,

I have three (maybe trivial) questions about traversing my VolOctree in parallel. First of all:

  1. Calling mytree.getCellCount() I get the ghost octants in my count: for 2 processes and 16 octants it counts 12 cells on each process. How can I ask for the local cells only?

Suppose now that I am traversing the VolOctree structure named mytree in this way:

for (auto &cell : mytree.getCells()) {
    const long id = cell.getId();
    [...]
}

  2. How can I get the local index of the cell on the processor?
  3. Asking for information about the neighbourhood, how can I get the face neighbours? Is there an equivalent to _findCellEdgeNeighs and _findCellVertexNeighs?

Thank you for your help.

@andrea-iob
Member

  1. To get the number of internal or ghost cells you can use the functions getInternalCount() and getGhostCount(), respectively.

  2. Cells are stored in a container called PiercedVector. Internally, this container stores the cells in a vector, but there is no guarantee that the cells will be stored contiguously inside that vector. For example, if your patch contains the cells 0, 1, 4, 31, 55 (note: the ids of the cells are not necessarily consecutive), those cells may be stored internally in a different order and with some holes between them:

-------------------------------
| 0 | 1 | x | x | 4 | 55 | 31 | 
-------------------------------

You can ask for the raw index of a cell in the internal storage of the PiercedVector with the function getRawIndex(long id) of the cells' container (mypatch.getCells().getRawIndex(id)). However, if you want to attach some data to your patch, please use the PiercedStorage container: it mimics the internal layout of the cells' storage, so you don't have to worry about the internal structure of the container.

int nComponents = 2;
PiercedStorage<double> field(nComponents, &mypatch.getCells());

VolOctree::CellIterator cellBegin = mypatch.cellBegin();
VolOctree::CellIterator cellEnd   = mypatch.cellEnd();
for (auto itr = cellBegin; itr != cellEnd; ++itr) {
    long cellId = itr.getId();
    std::size_t cellRawId = itr.getRawIndex();
    field.rawAt(cellRawId)    = ...;   // Access to the first component
    field.rawAt(cellRawId, 0) = ...;   // Access to the 1st component (same as above)
    field.rawAt(cellRawId, 1) = ...;   // Access to the 2nd component
}
  3. To get face neighbours you can use one of the findCellFaceNeighs overloads (these functions are defined in the base class PatchKernel).

If you are asking for face neighbours in a performance-critical portion of your code, you can also ask the cells for this information directly, but beware that on border faces there will be a dummy cell id set to Cell::NULL_ID (you can recognize this special id because it is negative, while all regular cell ids are greater than or equal to zero):

for (Cell &cell : mypatch.getCells()) {
    const long *neighs = cell.getAdjacencies();
    int nNeighs = cell.getAdjacencyCount();
    for (int n = 0; n < nNeighs; ++n) {
        long neigh = neighs[n];
        if (neigh < 0) {
            continue;
        }

        <your_code>
    }
}
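For reference, points 1 and 3 can be combined into a short sketch. This is only a sketch built from the functions named in this thread: the header name bitpit_voloctree.hpp and the exact return type of findCellFaceNeighs are assumptions, not verified against a specific bitpit version, and the code requires an already built and partitioned patch.

```cpp
// Sketch only: relies on the bitpit library and an initialized patch.
#include <bitpit_voloctree.hpp>
#include <iostream>
#include <vector>

void inspectPatch(bitpit::VolOctree &mypatch)
{
    // Point 1: internal vs ghost counts. getCellCount() includes ghosts,
    // which is why 16 octants on 2 processes gave 12 cells per rank
    // (8 internal + 4 ghosts) in the question above.
    long nInternal = mypatch.getInternalCount(); // cells owned by this process
    long nGhost    = mypatch.getGhostCount();    // halo cells copied from neighbours
    std::cout << "internal: " << nInternal << "  ghost: " << nGhost
              << "  total: " << mypatch.getCellCount() << std::endl;

    // Point 3: face neighbours through the PatchKernel interface.
    for (const bitpit::Cell &cell : mypatch.getCells()) {
        long cellId = cell.getId();
        std::vector<long> faceNeighs = mypatch.findCellFaceNeighs(cellId);
        // Unlike cell.getAdjacencies(), the returned list should contain
        // only existing neighbours, so no Cell::NULL_ID filtering is needed.
    }
}
```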

@araeli
Author

araeli commented Nov 17, 2017

Thank you. I have a question about traversing the octants in this way.

Following your suggestions, I just printed the local index for 2 processors (uniform mesh, 2048 internalCount per processor):

VolOctree::CellIterator cellBegin = mytree.cellBegin();
VolOctree::CellIterator cellEnd   = mytree.cellEnd();
for (auto itr = cellBegin; itr != cellEnd; ++itr) {
    long cellId = itr.getId();
    PetscSynchronizedPrintf(MPI_COMM_WORLD, "\n %ld \t-\t %d", cellId, mytree.getRank());
    PetscSynchronizedFlush(MPI_COMM_WORLD, PETSC_STDOUT);
}

The count for each processor seems correct, but the loop does not stop at the internal octants that I would like to handle. The last "correct" print that I get is:
2047 - 0
4095 - 1

Then

4096 - 0
4096 - 1
and so on, until
4158 - 0
4158 - 1

However, the partition seems to be correct:
PABLO :: Initial Serial distribution :
PABLO :: Octants for proc 0 : 4096
PABLO :: Octants for proc 1 : 4096
PABLO ::
PABLO ::
PABLO :: Final Parallel partition :
PABLO :: Octants for proc 0 : 2048
PABLO :: Final Parallel partition :
PABLO :: Octants for proc 1 : 2048

What do these extra values represent? Why do they appear on both processors with the same index? I also tried to extract their information to better understand their presence: even if the global index is the same, the x, y coordinates don't match. As an example:
cellId    x          y          rank
4156      0.945312   0.507812   0
4156      0.945312   0.492188   1
4157      0.960938   0.507812   0
4157      0.960938   0.492188   1
4158      0.976562   0.507812   0
4158      0.976562   0.492188   1
4159      0.992188   0.507812   0
4159      0.992188   0.492188   1

(P.S. for the coordinates I just used a call to evalCellCentroid(cellId).)

Can you please give me some details on these points, so that I can avoid possible memory problems or overwriting in my code?

@marcocisternino
Member

Ciao Alice,
the cellBegin and cellEnd methods in VolOctree return the begin and end of all the cells stored in the process's portion of the patch (i.e., the local cells). In VolOctree, internals and ghosts are stored together in the same container. So, if you want to loop over internals only, you have to use the internalBegin and internalEnd methods (ghostBegin and ghostEnd for looping over ghosts).
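In other words, the fix for the loop above would look something like the following sketch (a minimal sketch assuming the same mytree patch as in the earlier snippets; only internalBegin/internalEnd are taken from this thread, the rest requires the bitpit library to compile):

```cpp
// Sketch only: loop restricted to internal (owned) cells, skipping ghosts.
for (auto itr = mytree.internalBegin(); itr != mytree.internalEnd(); ++itr) {
    long cellId = itr.getId();
    // cellId now refers to an octant owned by this rank, so the duplicated
    // ids 4096-4158 reported above (which were the ghosts) no longer appear.
    // ... process the owned cell ...
}
```

The ghost octants explain the observation in the previous comment: the same local id can appear on both ranks with different centroids, because each rank's ghosts are copies of cells owned by the other rank.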

@marcocisternino
Member

This issue has not seen any recent activity. I'm closing the issue, but feel free to re-open a closed issue if needed.
