Compile Pflotran On Nersc




See this documentation for installation on a Linux machine.

Install Petsc

Download Petsc from Bitbucket, save it into the directory petsc_v3.11.3, and check out the latest version, 3.11.3.

git clone https://bitbucket.org/petsc/petsc petsc_v3.11.3
cd petsc_v3.11.3
git checkout v3.11.3

Set the current directory as PETSC_DIR and set PETSC_ARCH to any name; a subdirectory with that name will be created inside it, e.g. ./petsc_v3.11.3/cori_haswell_intel_19_0_3.

export PETSC_DIR=$PWD
export PETSC_ARCH=cori_haswell_intel_19_0_3

Choose compiler

  • on Haswell nodes, use craype-haswell. It should already be the default on Cori (check with module list); otherwise, load it:

    module load craype-haswell
  • on KNL nodes, use craype-mic-knl by swapping it in for the default compiler target:

    module swap craype-haswell craype-mic-knl
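The two cases above can be wrapped in a small helper that prints the right module command for a target; a minimal sketch, assuming craype-haswell is the login default as on Cori (the craype_target_cmd name is my own):

```shell
# Print the module command needed to reach the desired CPU target.
# Assumes craype-haswell is the default, as on Cori login nodes.
craype_target_cmd() {
  case "$1" in
    haswell) echo "module load craype-haswell" ;;
    knl)     echo "module swap craype-haswell craype-mic-knl" ;;
    *)       echo "unknown target: $1" >&2; return 1 ;;
  esac
}

# usage: eval "$(craype_target_cmd knl)"
```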

Load other modules

module load PrgEnv-intel
module swap cray-mpich cray-mpich        # yes, swap it
module load cray-hdf5-parallel
module load cmake
module load python # 2.7 or 3.7?
module unload darshan

configure petsc

  • use the system-installed hdf5 (recommended):
./configure --with-cc=cc --with-cxx=CC --with-fc=ftn --CFLAGS='-fast -no-ipo' --CXXFLAGS='-fast -no-ipo' --FFLAGS='-fast -no-ipo' --with-shared-libraries=0 --with-debugging=0 --with-clanguage=c --PETSC_ARCH=$PETSC_ARCH --download-parmetis=1 --download-metis=1 --with-hdf5=1 --with-c2html=0 --download-mumps=1 --download-scalapack=1 --with-clib-autodetect=0 --with-fortranlib-autodetect=0 --with-cxxlib-autodetect=0 --LIBS=-lstdc++
  • or use the following command to configure petsc (it downloads and builds hdf5 itself):
./configure --with-cc=cc --with-cxx=CC --with-fc=ftn --CFLAGS='-fast -no-ipo' --CXXFLAGS='-fast -no-ipo' --FFLAGS='-fast -no-ipo' --with-shared-libraries=0 --with-debugging=0 --with-clanguage=c --PETSC_ARCH=$PETSC_ARCH --download-parmetis=1 --download-metis=1 --download-hdf5=1 --with-c2html=0 --with-clib-autodetect=0 --with-fortranlib-autodetect=0 --with-cxxlib-autodetect=0 --LIBS=-lstdc++
  • Glenn’s modified version for the new Cori system:
./configure \
PETSC_ARCH=cori_intel_19_0_3 \
--with-clanguage=c \
--with-cc=cc --with-cxx=CC --with-fc=ftn \
COPTFLAGS='-g -O3 -fp-model fast' CXXOPTFLAGS='-g -O3 -fp-model fast' FOPTFLAGS='-g -O3 -fp-model fast' \
--download-hypre=1 --download-metis=1 --download-parmetis=1 --download-mumps=1 --download-scalapack=1 \
--with-hdf5=1 \
--with-debugging=0 \
  • example of the currently loaded modules on a haswell node:
Currently Loaded Modulefiles:
  1) modules/              14) job/2.2.4-
  2) nsg/1.2.0             15) dvs/2.12_2.2.151-
  3) altd/2.0              16) alps/6.6.57-
  4) intel/                17) rca/2.2.20-
  5) craype-network-aries  18) atp/2.1.3
  6) craype/2.6.2          19) PrgEnv-intel/6.0.5
  7) cray-libsci/19.06.1   20) craype-haswell
  8) udreg/2.3.2-          21) cray-mpich/7.7.10
  9) ugni/                 22) craype-hugepages2M
 10) pmi/5.0.14            23) cray-hdf5-parallel/
 11) dmapp/7.1.1-          24) cmake/3.14.4
 12) gni-headers/          25) python/2.7-anaconda-2019.07
 13) xpmem/2.2.20-
  • seeing the following message means the configuration was successful:
 Configure stage complete. Now build PETSc libraries with (gnumake build):
   make PETSC_DIR=/global/project/projectdirs/m1800/pin/petsc_v3.11.1 PETSC_ARCH=cori_haswell_intel_19_0_3 all
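A successful configure can also be detected programmatically, which is handy in a build script; a minimal sketch, assuming PETSc's standard configure.log in the source directory (the petsc_configured_ok helper name is my own):

```shell
# Return success if the PETSc configure stage completed, judging by the
# completion banner that configure writes to its log file.
petsc_configured_ok() {
  grep -q "Configure stage complete" "${1:-configure.log}"
}

# usage after running ./configure:
# petsc_configured_ok && make PETSC_DIR=$PETSC_DIR PETSC_ARCH=$PETSC_ARCH all
```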

compile Petsc

make all
# or
# make PETSC_DIR=/global/project/projectdirs/m1800/pin/petsc_v3.11.1 PETSC_ARCH=cori_haswell_intel_19_0_3 all
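After make finishes, you can sanity-check the build by looking for the static library (static because the configure lines above pass --with-shared-libraries=0); a sketch, with a helper name of my own choosing:

```shell
# Check that the PETSc build produced the static library under
# $PETSC_DIR/$PETSC_ARCH/lib, where a --with-shared-libraries=0 build puts it.
petsc_build_ok() {
  [ -f "$PETSC_DIR/$PETSC_ARCH/lib/libpetsc.a" ]
}

# usage:
# petsc_build_ok && echo "PETSc build found" || echo "libpetsc.a missing"
```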

download and compile pflotran

git clone https://bitbucket.org/pflotran/pflotran
cd pflotran/src/pflotran
make -j4 pflotran # compile with 4 parallel jobs; try -j8, -j16, ... if more cores are available

After compilation completes, a new executable named pflotran is generated in the current directory. You can also move it to another directory, e.g. pflotran/src/pflotran/bin/, and then add that directory to your PATH.
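The move-and-export step above could look like this; a sketch, run from pflotran/src/pflotran (the bin/ destination is my choice, any directory on PATH works):

```shell
# Move the freshly built executable into a bin/ subdirectory and put
# that directory on PATH so `pflotran` resolves from anywhere.
mkdir -p bin
mv pflotran bin/ 2>/dev/null || true   # no-op if already moved
export PATH="$PWD/bin:$PATH"
```

Add the export line to your shell startup file to make it permanent.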

fast compilation

Caution! This works only if a small change was made, because the compilation will not rebuild all the dependencies.

make pflotran fast=1

regression test

Do a regression test to see if pflotran is working.

export PFLOTRAN_EXE=/global/project/projectdirs/m1800/pin/pflotran/src/pflotran/pflotran
cd /global/project/projectdirs/m1800/pin/pflotran/regression_tests/default/543

Request an interactive node to run the regression test.

salloc -N 1 -C haswell -q interactive -t 01:00:00 -L SCRATCH 
srun -n 1 $PFLOTRAN_EXE -pflotranin <input_file> # this example needs only one core

Within seconds, the test model should finish, and the installation process is done!

Realization dependent

PFLOTRAN supports running multiple realizations at once. This requires a realization-dependent dataset.

  • run a single realization
srun -n 32 pflotran -realization_id 1
  • run all realizations
srun -n 128 pflotran -stochastic -num_realizations 4 -num_groups 4

Note: the number of realizations per group equals num_realizations/num_groups, and the number of cores per simulation equals the total number of cores divided by num_groups; in this case, 128/4 = 32 cores are used for each realization.
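The core accounting in the note can be spelled out with shell arithmetic, using the numbers from the srun line above:

```shell
# Core split for: srun -n 128 pflotran -stochastic -num_realizations 4 -num_groups 4
total_cores=128
num_realizations=4
num_groups=4
cores_per_group=$(( total_cores / num_groups ))             # cores per simulation
realizations_per_group=$(( num_realizations / num_groups ))
echo "$cores_per_group cores per group, $realizations_per_group realization(s) per group"
# -> 32 cores per group, 1 realization(s) per group
```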

update pflotran

  • make sure PETSC_DIR and PETSC_ARCH are in your environment
export PETSC_DIR=/global/project/projectdirs/m1800/pin/petsc_v3.11.1
export PETSC_ARCH=cori_haswell_intel_19_0_3
  • Pull the changes from the remote repo
git pull 
  • Recompile PFLOTRAN
cd pflotran/src/pflotran
make -j4 pflotran
# make pflotran fast=1

automatically update

Use the following bash script to automatically update PFLOTRAN to the latest commit on the master branch.

#!/bin/bash
export PETSC_DIR=/global/project/projectdirs/pflotran/petsc-v3.13
export PETSC_ARCH=cori_intel_O

module load craype-haswell
module load PrgEnv-intel
module swap cray-mpich cray-mpich        # yes, swap it
module load cray-hdf5-parallel
module load cmake
module load python # 2.7 or 3.7?
module unload darshan

echo "<<<<<<<<<<<<<<<<<<update pflotran repo<<<<<<<<<<<<<<<<<<"
git pull

echo "<<<<<<<<<<<<<<<<<<compile pflotran<<<<<<<<<<<<<<<<<<<<<<<"
cd ./src/pflotran
make -j8 pflotran