59 changes: 41 additions & 18 deletions README.md
@@ -379,42 +379,65 @@ of processes used. You can easily check if the files are the same by running:

### Ex10. Post-processing the data in python

\attention
You need a version of %PDI built with the \ref pycall_plugin "Pycall plugin"
to do this exercise.

In this exercise, you will once again modify the YAML file only and use python
to post-process the data in situ before writing it to HDF5.
Here, you will write the square root of the raw data to HDF5 instead of the
data itself.

* Examine the YAML file and compile the code.

* Load the \ref pycall_plugin "Pycall plugin" and enable this plugin
when the `loop` event is triggered.

Some variables of the python script inside `ex10.yml` are not defined.
The `with` directive of this plugin lets you specify input variables (parameters)
to pass to Python as a set of "$-expressions".
These parameters can be given as multiple blocks.

* Add a `with` block with the missing parameter to let the Python code process
the data exposed in `main_field` for event `loop`.

* Use the `exec` keyword of the \ref pycall_plugin "Pycall plugin".
After the colon (":"), add a space and a vertical bar (" |").
Uncomment the python script.
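
Put together, the pycall section could look like the following sketch. This is only a guess at the layout: the parameter names (`iter_id`, `source_field`) are taken from the python script shown further down, and the exact structure must be checked against `ex10.yml`:
```yaml
pycall:
  on_event:
    loop:                                                  # run when the "loop" event is triggered
      with: { iter_id: $ii, source_field: $main_field }    # "$-expressions" passed as parameters
      exec: |
        import numpy as np
        if 0 < iter_id < 4:
          transformed_field = np.sqrt(source_field[1:-1,1:-1])
          pdi.expose('transformed_field', transformed_field, pdi.OUT)
```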

Notice that the Decl'HDF5 configuration was simplified: no memory selection is
applied, and the `when` condition disappeared because it is now handled in the python script:
```python
if 0 < iter_id < 4:
    transformed_field = np.sqrt(source_field[1:-1,1:-1])
    pdi.expose('transformed_field', transformed_field, pdi.OUT)
```
The last line of the python script exposes the transformed field to %PDI;
within this call, this data is known to %PDI.
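The transformation itself is plain NumPy: the slice `[1:-1,1:-1]` drops the one-cell ghost layer before taking the square root. A minimal standalone sketch (no %PDI involved; the 4x4 array and its values are made up for illustration):

```python
import numpy as np

# A small field with one ghost cell on each side; values chosen for easy square roots
source_field = np.full((4, 4), 9.0)
source_field[1:-1, 1:-1] = 4.0  # interior (non-ghost) cells

# The slice drops the ghost layer, so only the interior is transformed
transformed_field = np.sqrt(source_field[1:-1, 1:-1])

print(transformed_field.shape)  # (2, 2)
print(transformed_field)        # all values are 2.0
```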

* Modify the Decl'HDF5 configuration to write the new data `transformed_field`
exposed from Python.

\attention
The dataset name now has to be specified explicitly because it no longer matches
the %PDI variable name: you instead write a new variable exposed from python.
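
As a sketch, the matching Decl'HDF5 entry could look like the following. The file and dataset names here are illustrative assumptions; use the ones from `ex10.yml` and `ex10.h5dump`:
```yaml
decl_hdf5:
  file: ex10.h5
  write:
    transformed_field:       # the %PDI data exposed from python
      dataset: main_field    # explicit dataset name, differs from the data name
```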

You should be able to match the expected output described in `ex10.h5dump`.
You can easily check if the files are the same by running:
```bash
diff ex10.h5dump <(h5dump ex10*.h5)
```
To see your `h5` file in a readable format,
see the section [Comparison with the `h5dump` command](#h5comparison).

\warning
If you relaunch the executable, remember to delete your old `ex10.h5` file first;
otherwise the data will not be overwritten.

\attention
In a more realistic setup, one would typically not write much code in the YAML
file directly, but would instead call functions specified in a `.py` file on the side.

## What next?

53 changes: 32 additions & 21 deletions ex10.c
@@ -44,13 +44,32 @@ int pcoord[2];
/// the alpha coefficient used in the computation
double alpha;

double L = 1.0;
double source1[4] = {0.4, 0.4, 0.2, 100};
double source2[4] = {0.7, 0.8, 0.1, 200};

/** Initialize the data: all zero, except inside two circular heat sources
 * \param[out] dat the local data to initialize
 */
void init(double dat[dsize[0]][dsize[1]])
{
	for (int yy=0; yy<dsize[0]; ++yy) for (int xx=0; xx<dsize[1]; ++xx) dat[yy][xx] = 0;
	double dy = L / ((dsize[0]-2) * psize[0]);
	double dx = L / ((dsize[1]-2) * psize[1]);

	double cpos_x, cpos_y;
	for (int yy=0; yy<dsize[0]; ++yy) {
		cpos_y = (yy + pcoord[0]*(dsize[0]-2)) * dy - 0.5*dy;
		for (int xx=0; xx<dsize[1]; ++xx) {
			cpos_x = (xx + pcoord[1]*(dsize[1]-2)) * dx - 0.5*dx;
			if ((cpos_y-source1[0])*(cpos_y-source1[0]) + (cpos_x-source1[1])*(cpos_x-source1[1]) <= source1[2]*source1[2]) {
				dat[yy][xx] = source1[3];
			}
			if ((cpos_y-source2[0])*(cpos_y-source2[0]) + (cpos_x-source2[1])*(cpos_x-source2[1]) <= source2[2]*source2[2]) {
				dat[yy][xx] = source2[3];
			}
		}
	}
}

/** Compute the values at the next time-step based on the values at the current time-step
@@ -60,21 +79,15 @@ void init(double dat[dsize[0]][dsize[1]])
void iter(double cur[dsize[0]][dsize[1]], double next[dsize[0]][dsize[1]])
{
int xx, yy;
	for (yy=1; yy<dsize[0]-1; ++yy) {
		for (xx=1; xx<dsize[1]-1; ++xx) {
			next[yy][xx] = (1.-4.*alpha) * cur[yy][xx]
			             + alpha * ( cur[yy][xx-1]
			                       + cur[yy][xx+1]
			                       + cur[yy-1][xx]
			                       + cur[yy+1][xx]);
		}
	}
}

/** Exchanges ghost values with neighbours
@@ -87,7 +100,7 @@ void exchange(MPI_Comm cart_comm, double cur[dsize[0]][dsize[1]])
int rank_source, rank_dest;
static MPI_Datatype column, row;
static int initialized = 0;

if ( !initialized ) {
MPI_Type_vector(dsize[0]-2, 1, dsize[1], MPI_DOUBLE, &column);
MPI_Type_commit(&column);
@@ -104,8 +117,8 @@ void exchange(MPI_Comm cart_comm, double cur[dsize[0]][dsize[1]])

// send up
MPI_Cart_shift(cart_comm, 0, -1, &rank_source, &rank_dest);
MPI_Sendrecv(&cur[1][1], 1, row, rank_dest, 100, // send row after ghost
&cur[dsize[0]-1][1], 1, row, rank_source, 100, // receive last row (ghost)
cart_comm, &status);

// send to the right
@@ -116,7 +129,7 @@ void exchange(MPI_Comm cart_comm, double cur[dsize[0]][dsize[1]])

// send to the left
MPI_Cart_shift(cart_comm, 1, -1, &rank_source, &rank_dest);
MPI_Sendrecv(&cur[1][1], 1, column, rank_dest, 100, // send column after ghost
&cur[1][dsize[1]-1], 1, column, rank_source, 100, // receive last column (ghost)
cart_comm, &status);
}
@@ -162,7 +175,7 @@ int main( int argc, char* argv[] )
dsize[1] = global_size[1]/psize[1] + 2;

// create a 2D Cartesian MPI communicator & get our coordinate (rank) in it
int cart_period[2] = { 1, 1 };
MPI_Comm cart_comm; MPI_Cart_create(main_comm, 2, psize, cart_period, 1, &cart_comm);
MPI_Cart_coords(cart_comm, pcoord_1d, 2, pcoord);

@@ -178,11 +191,9 @@ int main( int argc, char* argv[] )
int ii=0;

// share useful configuration bits with PDI
PDI_expose("pcoord", pcoord, PDI_OUT);
PDI_expose("dsize", dsize, PDI_OUT);
PDI_expose("psize", psize, PDI_OUT);

// the main loop
for (; ii<10; ++ii) {
@@ -197,14 +208,14 @@

// exchange data with the neighbours
exchange(cart_comm, next);

// swap the current and next values
double (*tmp)[dsize[1]] = cur; cur = next; next = tmp;
}
// finally share the main field as well as the loop counter after the loop
PDI_multi_expose("finalization",
"ii", &ii, PDI_OUT,
"main_field", cur, PDI_OUT,
NULL);

// finalize PDI