PetscOpenMPSpawn

Initializes additional processes to be used as "worker" processes. This is not generally called by users; instead, use the option -openmp_spawn_size <n> to indicate that you wish to have n-1 new MPI processes spawned for each current process.

Synopsis

#include "petsc.h"   
PetscErrorCode  PetscOpenMPSpawn(PetscMPIInt nodesize)
Not Collective (could make collective on MPI_COMM_WORLD, generate one huge comm and then split it up)

Input Parameter

nodesize - size (number of processes) of each compute node that will share processors

Options Database

-openmp_spawn_size nodesize - spawn nodesize-1 worker processes for each current MPI process

Notes

This is only supported on systems with an MPI 2 implementation that includes the MPI_Comm_spawn() routine.

   Comparison of two approaches for OpenMP usage (MPI started with N processes)

   -openmp_spawn_size <n> requires MPI 2, results in n*N total processes with N directly used by application code
                                          and n-1 worker processes (used by PETSc) for each application node.
                          You MUST launch MPI so that only ONE MPI process is created for each hardware node.

   -openmp_merge_size <n> results in N total processes, N/n used by the application code and the rest worker processes
                           (used by PETSc).
                          You MUST launch MPI so that n MPI processes are created for each hardware node.

   petscmpirun -np 2 ./ex1 -openmp_spawn_size 3 gives 2 application nodes (and 4 PETSc worker nodes)
   petscmpirun -np 6 ./ex1 -openmp_merge_size 3 gives the SAME 2 application nodes and 4 PETSc worker nodes
      This is what one would use if each of the computer's hardware nodes had 3 CPUs.
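
   For concreteness, here is a minimal sketch of such an application (the name ex1 above is just a
   placeholder, not a distributed example). The application never calls PetscOpenMPSpawn()
   directly; the -openmp_spawn_size option is handled during PETSc startup.

/*  Launch with, e.g.,   petscmpirun -np 2 ./ex1 -openmp_spawn_size 3   */
#include "petsc.h"

int main(int argc,char **argv)
{
  PetscErrorCode ierr;

  ierr = PetscInitialize(&argc,&argv,(char*)0,(char*)0);CHKERRQ(ierr);
  /* application code runs here on the N directly used processes;
     the spawned worker processes are driven internally by PETSc   */
  ierr = PetscFinalize();
  return ierr;
}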

     These two approaches are intended to be used in conjunction with USER OpenMP code. The user will have 1 process per
  computer (hardware) node (where the computer node has p CPUs); the user's code will use threads to fully
  utilize all the CPUs on the node. The PETSc code will have p processes to fully use the compute node for
  PETSc calculations. The user THREADS and PETSc PROCESSES will NEVER run at the same time, so the p CPUs
  are always working on p tasks, never more than p.

   See PCOPENMP for a PETSc preconditioner that can use this functionality.
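
   A hedged illustration of that hand-off (SolveOnWorkers is a hypothetical helper; A, b, and x are
   assumed to have been assembled by the application, and the call signatures follow the PETSc
   release this page documents):

#include "petscksp.h"

PetscErrorCode SolveOnWorkers(Mat A,Vec b,Vec x)
{
  KSP            ksp;
  PC             pc;
  PetscErrorCode ierr;

  ierr = KSPCreate(PETSC_COMM_WORLD,&ksp);CHKERRQ(ierr);
  ierr = KSPSetOperators(ksp,A,A,SAME_NONZERO_PATTERN);CHKERRQ(ierr);
  ierr = KSPGetPC(ksp,&pc);CHKERRQ(ierr);
  ierr = PCSetType(pc,PCOPENMP);CHKERRQ(ierr);  /* the preconditioner runs on the PETSc worker processes   */
  ierr = KSPSetFromOptions(ksp);CHKERRQ(ierr);  /* the type could also be selected from the options database */
  ierr = KSPSolve(ksp,b,x);CHKERRQ(ierr);
  ierr = KSPDestroy(ksp);CHKERRQ(ierr);
  return 0;
}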


See Also

PetscFinalize(), PetscInitializeFortran(), PetscGetArgs(), PetscOpenMPFinalize(), PetscInitialize(), PetscOpenMPMerge()

Level: developer
Location: src/sys/objects/mpinit.c