cd mpi/diffusion
make diffusionf **or** make diffusionc
./diffusionf **or** ./diffusionc
Discretizing Derivatives
Done by finite differencing the discretized values
Implicitly or explicitly involves interpolating the data and taking the derivative of the interpolant
More accuracy - larger 'stencils'
Diffusion Equation
Simple 1d PDE
Each timestep, new data for T[i] requires old data for T[i+1], T[i], T[i-1]
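A minimal sketch of the explicit update this stencil implies; the names D, dt, dx and the array layout (T[1..n] physical points, T[0] and T[n+1] guard cells) are illustrative assumptions, not taken from the course code:

```c
/* Sketch: one explicit finite-difference step for dT/dt = D d2T/dx2.
   T[1..n] are the physical points; T[0] and T[n+1] are guard cells. */
void diffusion_step(int n, double D, double dt, double dx,
                    const double *T, double *Tnew)
{
    for (int i = 1; i <= n; i++)
        Tnew[i] = T[i] + D*dt/(dx*dx) * (T[i+1] - 2.0*T[i] + T[i-1]);
}
```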
Guardcells
How to deal with boundaries?
Because stencil juts out, need information on cells beyond those you are updating
Pad the domain with 'guard cells' so that the stencil works even for the first point in the domain
Fill guard cells with values such that the required boundary conditions are met
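A minimal serial sketch of that fill, assuming T[1..n] are the physical zones and T[0], T[n+1] the guard cells; the periodic/fixed switch and the values Tleft/Tright are illustrative choices:

```c
/* Sketch: fill guard cells so the stencil can be applied at i = 1 and i = n.
   Assumes T[1..n] physical zones, T[0] and T[n+1] guard cells. */
void fill_guardcells(int n, double *T, int periodic,
                     double Tleft, double Tright)
{
    if (periodic) {           /* periodic BCs: wrap around the domain */
        T[0]   = T[n];
        T[n+1] = T[1];
    } else {                  /* fixed-temperature BCs: one simple choice */
        T[0]   = Tleft;
        T[n+1] = Tright;
    }
}
```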
Domain Decomposition
A very common approach to parallelizing on distributed memory computers
Maintain locality: most of the data needed is local, so only surface data needs to be sent between processes
Implement the diffusion equation in MPI
Need one neighboring number per neighbor per timestep
Guardcells
Works for parallel decomposition!
Job 1 needs info on Job 2's 0th zone, Job 2 needs info on Job 1's last zone
Pad array with 'guard cells' and fill them with the info from the appropriate node by message passing or shared memory
Hydro code: needs guard cells 2 deep
Do computation
Guardcell exchange: each process has to do 2 sendrecvs (see the sketch after this list)
its rightmost cell with its neighbour's leftmost
its leftmost cell with its neighbour's rightmost
Everyone do right-filling first, then left-filling (say)
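A minimal sketch of that exchange with MPI_Sendrecv, assuming T[1..n] are this rank's physical cells, T[0] and T[n+1] its guard cells, and periodic neighbours; the function name and tags are illustrative:

```c
#include <mpi.h>

/* Sketch: guardcell exchange for a 1D decomposition with periodic wrap-around.
   T[1..n] are this rank's physical cells; T[0], T[n+1] are guard cells. */
void exchange_guardcells(int n, double *T, MPI_Comm comm)
{
    int rank, size;
    MPI_Comm_rank(comm, &rank);
    MPI_Comm_size(comm, &size);

    int left  = (rank - 1 + size) % size;   /* periodic neighbours */
    int right = (rank + 1) % size;

    /* send my rightmost physical cell right; receive my left guard cell from the left */
    MPI_Sendrecv(&T[n], 1, MPI_DOUBLE, right, 0,
                 &T[0], 1, MPI_DOUBLE, left,  0, comm, MPI_STATUS_IGNORE);

    /* send my leftmost physical cell left; receive my right guard cell from the right */
    MPI_Sendrecv(&T[1],   1, MPI_DOUBLE, left,  1,
                 &T[n+1], 1, MPI_DOUBLE, right, 1, comm, MPI_STATUS_IGNORE);
}
```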
For simplicity, start with periodic BCs
then (re-)implement fixed-temperature BCs: the temperatures in the first and last zones are held fixed
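One way to impose the fixed-temperature variant in the MPI version, sketched under the assumption that rank 0 owns the leftmost global zone and rank size-1 the rightmost (Tleft/Tright and the function name are illustrative):

```c
/* Sketch: hold the first and last global zones at fixed temperatures,
   applied after the guardcell exchange on each timestep. */
void apply_fixed_bcs(int n, double *T, int rank, int size,
                     double Tleft, double Tright)
{
    if (rank == 0)        T[1] = Tleft;   /* first global zone */
    if (rank == size - 1) T[n] = Tright;  /* last global zone  */
}
```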
Hands-on: MPI diffusion
cp diffusionf.f90 diffusionf-mpi.f90 or
cp diffusionc.c diffusionc-mpi.c
Make an MPI-ed version of the diffusion equation
(Build: make diffusionf-mpi or make diffusionc-mpi)
Test on 1..8 procs
add standard MPI calls: init, finalize, comm_size, comm_rank
Figure out how many points each PE is responsible for (~totpoints/size)
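A minimal sketch of those standard calls plus the per-rank point count; totpoints and the remainder handling are illustrative assumptions:

```c
#include <stdio.h>
#include <mpi.h>

int main(int argc, char **argv)
{
    int rank, size;
    const int totpoints = 1000;   /* illustrative global number of points */

    MPI_Init(&argc, &argv);
    MPI_Comm_size(MPI_COMM_WORLD, &size);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    /* ~totpoints/size points per PE; spread any remainder over the first ranks */
    int locpoints = totpoints / size + (rank < totpoints % size ? 1 : 0);
    printf("Rank %d of %d handles %d points\n", rank, size, locpoints);

    MPI_Finalize();
    return 0;
}
```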