Executing OpenFMO

Setup

  • Set the OMP_NUM_THREADS environment variable:

    # csh, tcsh:
    setenv OMP_NUM_THREADS (Number_of_Threads)
    
    # sh, bash:
    export OMP_NUM_THREADS=(Number_of_Threads)
    
  • Set the LIBRARY_PATH environment variable:

    # csh, tcsh: add to shell
    setenv LIBRARY_PATH $LD_LIBRARY_PATH
    
    # sh, bash: add to shell
    export LIBRARY_PATH=$LD_LIBRARY_PATH
    
  • Set the OFMOPATH environment variable to point to the directory storing the OpenFMO executables (ofmo-master, ofmo-worker, and ofmo-mserv) or the “skeleton-RHF” executable (skel-rhf), which is usually the directory where you compiled the OpenFMO programs (see How to Compile). You then have to pass this directory to OpenFMO through the -bindir option of ofmo-master. (See the details in Command Line Options.)

    # csh, tcsh:
    setenv OFMOPATH /OpenFMO/executables/install/directory
    
    # sh, bash:
    export OFMOPATH=/OpenFMO/executables/install/directory
    
  • Set the SCRDIR environment variable to point to the directory storing the temporary “scratch” files for the OpenFMO executables. You then have to pass this directory to OpenFMO through the -scrdir option of ofmo-master. (See the details in Command Line Options.)

    # csh, tcsh:
    setenv SCRDIR /Path/To/OpenFMO/Scratch/Files/Directory
    
    # sh, bash:
    export SCRDIR=/Path/To/OpenFMO/Scratch/Files/Directory
    
  • Prepare the input files. (See Input File Format)
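
Putting these settings together, a minimal sh/bash sketch of the Setup steps might look like the following; the paths and the thread count are placeholders that must be adjusted for your system:

# Example settings (placeholder values)
export OMP_NUM_THREADS=8                                  # OpenMP threads per process
export LIBRARY_PATH=${LD_LIBRARY_PATH}
export OFMOPATH=/OpenFMO/executables/install/directory    # where the OpenFMO executables were built
export SCRDIR=/Path/To/OpenFMO/Scratch/Files/Directory    # directory for temporary scratch files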

Command Line Options

Running the “skeleton-RHF” program skel-rhf with the help command-line argument -h prints its usage:

$ ${OFMOPATH}/skel-rhf -h
Usage: skel-rhf [-snvh][-B buffer] input [density]
  -B buf: # buffer size (MB, default: 0)
  -s: sync
  -n: dryrun
  -v: verbose
  -h: show this help
Options for GPGPU:
  -d ndev: # devices (default:0)

Similarly, running the OpenFMO program ofmo-master with the help command-line argument -h shows some of its command-line arguments:

$ ${OFMOPATH}/ofmo-master -h
Usage: ofmo-master [options] [input [InitDens]]
  -ng #: # groups
  -np #: # total MPI procs
  -B #: buffer size / proc (MB, default: 512)
  -v: verbose
  -h: show this help
 Options for GPGPU:
  -d #: # devices (default:0)

Note that OpenFMO should be invoked with the -ng and -np command-line arguments.

Table 1 lists the command-line arguments to ofmo-master. Note that the command-line arguments take priority over the corresponding values defined in the input file.

Table 1. Command Line Arguments to OpenFMO

  -h, --help
      Display the explanation of the command-line arguments.

  -np #, -nmaxprocs #  (#: positive integer)
      Total number of MPI processes, i.e. Master + Server + Worker MPI
      processes; 1 + 2 + Ng × P in the case of Figure 1.

  -ng #, -ngroup #  (#: positive integer)
      Number of Worker groups; Ng in the case of Figure 1.

  --niogroup #  (#: positive integer)
      Number of server groups (default = 1); 1 in the case of Figure 1.
      You can also set the niogroup variable through the $GDDI group in the
      input file.

  --nioprocs #  (#: positive integer)
      Size of each server group (default = 1); 2 in the case of Figure 1.
      You can also set the nioprocs variable through the $GDDI group in the
      input file.

  -B #, -buffer #  (#: zero or positive integer)
      Buffer size per process in MB (default = 512). You can also set the
      buffer size as nintic through the $INTGRL group in the input file.

  -bindir Path
      Path to the directory storing the OpenFMO executables (default =
      current directory). Use the OFMOPATH environment variable set in Setup.

  -scrdir Path
      Path to the directory storing the temporary “scratch” files (default =
      current directory). Use the SCRDIR environment variable set in Setup.

  -d 0|1
      Turn GPGPU off (0) or on (1) (default = 0).
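
To see how -np relates to the other options, note that the total MPI process count is the master process plus the server ranks (niogroup × nioprocs) plus the worker ranks. The following shell sketch uses the numbers from the example in Execution of OpenFMO below (one server group of one rank and 15 worker groups of two ranks each); the variable names are only for illustration:

# Illustration: computing the value to pass to -np
NIOGROUP=1    # --niogroup: number of server groups
NIOPROCS=1    # --nioprocs: ranks per server group
NG=15         # -ng: number of worker groups
P=2           # ranks per worker group (determined by how the remaining processes are divided)
NP=$(( 1 + NIOGROUP * NIOPROCS + NG * P ))   # 1 master + 1 server + 30 workers = 32
echo "-np ${NP} -ng ${NG}"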

Multi-thread Execution of “skeleton-RHF”

  1. First, set the OMP_NUM_THREADS environment variable (see Setup).

  2. Next, set the OFMOPATH environment variable (see Setup). Then, execute the “skeleton-RHF” program within a single cluster node:

    $ ${OFMOPATH}/skel-rhf Input_File_Name > Log_File_Name
    
  3. Execute the GPGPU-accelerated “skeleton-RHF” program within a single cluster node:

    $ ${OFMOPATH}/skel-rhf -d 1 Input_File_Name > Log_File_Name
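
As a concrete sketch of steps 2 and 3 (the input and log file names are hypothetical), a run on a 16-core node using all of its cores might look like this:

export OMP_NUM_THREADS=16
${OFMOPATH}/skel-rhf sample.inp > sample.log            # CPU-only run
${OFMOPATH}/skel-rhf -d 1 sample.inp > sample_gpu.log   # GPGPU-accelerated run (one device)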
    

Hybrid Execution of “skeleton-RHF”

  1. First, set the OMP_NUM_THREADS, OFMOPATH, and SCRDIR environment variables (see Setup).

  2. Execute the “skeleton-RHF” program with N MPI processes:

    $ mpiexec.hydra -np N ${OFMOPATH}/skel-rhf Input_File_Name > Log_File_Name
    
  3. To perform GPGPU-accelerated RHF/RKS calculations with N MPI processes:

    $ mpiexec.hydra -np N ${OFMOPATH}/skel-rhf -d 1 Input_File_Name > Log_File_Name
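
For example, a sketch of a two-node run (assuming nodes with 16 cores each, hypothetical file names, and an MPI installation whose Hydra launcher accepts -ppn to set the number of processes per node):

# 2 nodes x 4 MPI processes per node x 4 OpenMP threads = 32 cores in total
export OMP_NUM_THREADS=4
mpiexec.hydra -np 8 -ppn 4 ${OFMOPATH}/skel-rhf sample.inp > sample.log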
    

Execution of OpenFMO

Here, we demonstrate an example of executing OpenFMO on a cluster with GPU nodes; each node consists of two Intel Xeon E5-2680 CPUs (2.6 GHz, 8 cores each) and four NVIDIA Tesla M2090 (Fermi) GPUs.

If the GPU-accelerated FMO-RHF calculation is performed using 8 nodes (2 × 8 × 8 = 128 cores and 4 × 8 = 32 GPUs) with 1 data server of 1 rank and 15 worker groups of 2 ranks each (see Figure 1), you should run OpenFMO as follows:

  1. First, set the OFMOPATH and SCRDIR environment variables (see Setup).

  2. Set OMP_NUM_THREADS to 4 (= 128 cores / 32 MPI processes).

  3. Then, execute ofmo-master with the proper command-line arguments (see Command Line Options):

    $ mpiexec.hydra -np 1 ${OFMOPATH}/ofmo-master -np 32 -ng 15 -d 1 -bindir ${OFMOPATH} -scrdir ${SCRDIR} Input_File_Name > Log_File_Name
    

The master OpenMP thread of each MPI rank controls one GPU unit. Therefore, to bring out the GPUs’ maximum performance on your cluster, the total number of MPI processes should equal the number of available GPU units (32 in the above case).
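
As a small sketch of this bookkeeping, using the numbers from the example above:

# 8 nodes, 16 cores and 4 GPUs per node (values from the example above)
NODES=8; CORES_PER_NODE=16; GPUS_PER_NODE=4
NPROCS=$(( NODES * GPUS_PER_NODE ))                             # 32 MPI processes, one per GPU
echo "OMP_NUM_THREADS=$(( NODES * CORES_PER_NODE / NPROCS ))"   # prints OMP_NUM_THREADS=4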

PBS Job File

Next, we demonstrate an example of running OpenFMO on a PBS queuing system. To perform the same GPU-accelerated FMO-RHF calculation described above, you need to write a PBS job file. A minimal example, job.sh, is as follows:

#!/bin/bash
#PBS -j oe
#PBS -N JobName
OFMOPATH="/OpenFMO/executables/install/directory"
SCRDIR="/Path/To/OpenFMO/Scratch/Files/Directory"
export LIBRARY_PATH=${LD_LIBRARY_PATH}
cd ${PBS_O_WORKDIR}
export OMP_NUM_THREADS=4        # 4 threads per MPI process (128 cores / 32 processes)
opt=""
opt+=" -np 32"                  # total MPI processes: 1 master + 1 server + 15 x 2 workers
opt+=" -bindir ${OFMOPATH}"     # location of the OpenFMO executables
opt+=" -B 0"                    # buffer size per process (MB)
opt+=" -ng 15"                  # number of worker groups
opt+=" -scrdir ${SCRDIR}"       # scratch file directory
opt+=" -d 1"                    # enable GPGPU, as in the example above
date
set -x
mpiexec.hydra -np 1 -print-rank-map ${OFMOPATH}/ofmo-master $opt Input_File_Name
set +x
date

Then, submit the PBS job:

$ qsub job.sh
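
How compute resources are requested is site-specific and is therefore omitted from the job file above; depending on your PBS flavor, the request may be given as #PBS -l directives in job.sh or on the qsub command line. For example, with classic Torque-style syntax (an assumption about your site), the 8 nodes of 16 cores used above could be requested as:

$ qsub -l nodes=8:ppn=16 job.sh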