Compiling and Installing

Prerequisites

  • Linux/UNIX cluster machines
  • GNU C Compiler
  • Intel C Compiler
  • MPI library (default: Intel MPI Library) supporting the MPI_Comm_spawn functions
  • Intel MKL (Math Kernel Library)

In addition, GPU-accelerated OpenFMO requires:

  • NVIDIA graphics card (Fermi or Kepler microarchitecture) supporting double-precision floating-point operations
  • NVIDIA driver for the GPU
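
Before building, you may want to confirm that the tool chain is in place. The following checks are only a suggestion and assume that the compilers, an MPI compiler wrapper, and the NVIDIA driver are already on your PATH; the last command applies to the GPU build only:

$ gcc --version
$ icc --version
$ mpicc -show
$ nvidia-smi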

How to Get

The OpenFMO program is available through the repository hosted on GitHub.

To check out the latest OpenFMO sources:

$ git clone https://github.com/OpenFMO/OpenFMO.git OpenFMO

How to Compile

After checking out the OpenFMO sources, move to the top directory:

$ cd OpenFMO

The Makefile located in the top directory is used to build the OpenFMO executables. Typing the make command with the “help” target prints its usage:

$ make help
usage: make <target>

 original    build ofmo-master, ofmo-worker, ofmo-mserv.
 falanx      build ofmo-falanx.
 rhf         build ofmo-rhf.
 clean       remove build files.
 help        print this message.

In line with the printed explanation, the following command yields the “skeleton-RHF” executable, ofmo-rhf, in the top directory:

$ make rhf

The following command yields the three executables, ofmo-master, ofmo-worker, and ofmo-mserv, in the top directory; they run the FMO calculations with the master-worker execution model (see Figure 1):

$ make original
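
As an illustration only (the actual command-line options of ofmo-master are not covered here, and the single master process is an assumption), the master program is typically started through the MPI runtime and then spawns the other executables via MPI_Comm_spawn:

$ mpiexec -n 1 ./ofmo-master [options...]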

If it is difficult to run MPI_Comm_spawn on your system, you can use the Falanx programming middleware instead, as shown below.
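
In that case, build the corresponding executable, ofmo-falanx, with the falanx target listed in the help message above:

$ make falanx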

To build GPU-accelerated OpenFMO executables, one should modify the following lines of the Makefile:

xcCUDAA = KEPLER
xcCUDAA = FERMI
xcCUDA = 0

The default value of the xcCUDA variable is zero, which turns off nvcc compilation. To build the code with nvcc for the Fermi microarchitecture, the Makefile should be modified as follows:

#xcCUDAA = KEPLER
xcCUDAA = FERMI
#xcCUDA = 0

Similarly, for the Kepler microarchitecture, the Makefile should be modified as follows:

xcCUDAA = KEPLER
#xcCUDAA = FERMI
#xcCUDA = 0
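
After switching the Makefile to either GPU configuration, rebuild the executables; a clean rebuild with the clean target from the help message avoids mixing objects from a previous CPU-only build:

$ make clean
$ make original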

To obtain optimal performance on your system, you may adjust the dim2e[][] array in cuda/cuda-integ.cu.
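
As a generic starting point (not an official tuning procedure), you can locate the definition of the array before experimenting with its values:

$ grep -n dim2e cuda/cuda-integ.cu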