Getting Started

Note: The current CMake build system and the relevant instructions have mainly been tested on Linux-based systems, including the Windows Subsystem for Linux (WSL)


The following prerequisites and dependencies are required for building OPS. Building each of the backends is optional and depends on the hardware and/or capabilities you will be targeting.


CMake 3.18 or newer is required for using the CMake build system. If a sufficiently recent version is not installed or shipped by default, it can be installed using, for example, the following script.

# Assume that CMake is going to be installed at /usr/local/cmake
# sudo is not necessary for directories in user space.
cmake_dir=/usr/local/cmake
sudo mkdir $cmake_dir
# Run the CMake installer script for your version (downloaded from cmake.org)
sudo sh ./cmake-<version>-Linux-x86_64.sh --prefix=$cmake_dir --skip-license
sudo ln -s $cmake_dir/bin/cmake /usr/local/bin/cmake


Python2 is required by the OPS Python translator. The CMake build system will try to identify it automatically. However, this process can sometimes fail (e.g., if both Python2 and Python3 are installed). If this happens, the path to Python2 can be specified manually by using -DPython2_EXECUTABLE when invoking CMake.
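For example, if the automatic detection picks up the wrong interpreter, the path can be passed explicitly when configuring (the interpreter path below is illustrative, not a default):

```shell
# Point CMake at a specific Python 2 interpreter (example path)
cmake .. -DPython2_EXECUTABLE=/usr/bin/python2.7
```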


HDF5 is required for parts of the I/O functionality. The CMake build system uses the parallel version by default, even for sequential codes, and tries to identify the library automatically. If the automatic process fails, the path to the parallel HDF5 library can be specified by using -DHDF5_ROOT.
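If, for instance, a parallel HDF5 build lives in a non-standard location, it can be passed explicitly (the path below is illustrative):

```shell
# Tell CMake where the parallel HDF5 installation is (example path)
cmake .. -DHDF5_ROOT=/opt/hdf5-parallel
```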

CUDA Backend

The CUDA backend targets NVIDIA GPUs with a compute capability of 3.0 or greater. The CMake build system will detect the toolkit automatically; if detection fails, the build system will compile the library without CUDA support. In that case, -DCUDA_TOOLKIT_ROOT_DIR can be used to specify the toolkit path manually.
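For example, if the toolkit is installed in the usual location but not found automatically, it can be specified as follows (the path below is the common default; adjust for your system):

```shell
# Tell CMake where the CUDA toolkit is installed (example path)
cmake .. -DCUDA_TOOLKIT_ROOT_DIR=/usr/local/cuda
```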

HIP Backend

The HIP backend targets AMD GPUs and NVIDIA GPUs that are supported by HIP, either through its CUDA support or through the ROCm stack (tested with ROCm >= 3.9).

SYCL Backend

The SYCL backend is currently in development and only works without MPI. It has been tested with Intel oneAPI (>=2021.1), Intel’s public LLVM version, and hipSYCL (>=0.9.1). It runs on Intel CPUs and GPUs through Intel’s OpenCL and Level Zero, and on NVIDIA and AMD GPUs with both the LLVM fork and hipSYCL. hipSYCL’s OpenMP support covers most CPU architectures as well.

Tridiagonal Solver Backend

To use the tridiagonal solver OPS API in applications, and to build example applications such as adi, adi_burger and adi_burger_3D, the open-source tridiagonal solver (scalar) library needs to be cloned and built from the Tridsolver repository.

git clone https://github.com/OP-DSL/tridsolver.git

Details on building the scalar tridiagonal solver library can be found in the README file located in the appropriate subdirectory.

Obtaining OPS

The latest OPS source code can be obtained by cloning the OPS repository:

git clone https://github.com/OP-DSL/OPS.git

Build OPS

Using CMake

Build library and example applications together

Create a build directory, and run CMake (version 3.18 or newer)

mkdir build
cd build
# Please see below for CMake options
cmake ..
make # IEEE=1 enables IEEE flags in the compiler
make install # sudo is needed if a directory like /usr/local/ is chosen.

After installation, the library and the Python translator can be found in the directory specified by CMAKE_INSTALL_PREFIX, together with the executable files for applications in APP_INSTALL_DIR.

Build library and example applications separately

In this mode, the library is first built and installed as follows:

mkdir build
cd build
# Please see below for CMake options
cmake ..
make # IEEE=1 enables IEEE flags in the compiler
make install # sudo is needed if a system directory is chosen.

Then the application can be built as:

mkdir appbuild
cd appbuild
# Please see below for CMake options
cmake <path-to-application> -DOPS_INSTALL_DIR=<OPS installation directory>
make # IEEE=1 this option is important for applications to get accurate results

CMake options

  • -DCMAKE_BUILD_TYPE=Release - enable optimizations

  • -DBUILD_OPS_APPS=ON - build example applications (Library CMake only)

  • -DOPS_TEST=ON - enable the tests

  • -DCMAKE_INSTALL_PREFIX= - specify the installation directory for the library (/usr/local by default, Library CMake only)

  • -DAPP_INSTALL_DIR= - specify the installation directory for the applications ($HOME/OPS-APPS by default)

  • -DGPU_NUMBER= - specify the number of GPUs used in the tests

  • -DOPS_INSTALL_DIR= - specify where the OPS library is installed (Application CMake only)

  • -DOPS_VERBOSE_WARNING=ON - show verbose output during the build process
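Putting several of these options together, a typical library configuration might look like the following sketch (the install paths and GPU count are illustrative, not defaults):

```shell
# Example configuration of the OPS library build, run from the build directory
cmake .. -DCMAKE_BUILD_TYPE=Release \
         -DBUILD_OPS_APPS=ON \
         -DOPS_TEST=ON \
         -DCMAKE_INSTALL_PREFIX=$HOME/OPS-INSTALL \
         -DAPP_INSTALL_DIR=$HOME/OPS-APPS \
         -DGPU_NUMBER=1
```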

Using Makefiles

Set up the following environment variables:

  • OPS_COMPILER - compiler to be used (currently supports Intel, PGI and Cray compilers, but others can be easily incorporated by extending the Makefiles used in steps 2 and 3)

  • OPS_INSTALL_PATH - Installation directory of OPS/ops

  • CUDA_INSTALL_PATH - Installation directory of CUDA, usually /usr/local/cuda (to build CUDA libs and applications)

  • OPENCL_INSTALL_PATH - Installation directory of OpenCL, usually /usr/local/cuda for NVIDIA OpenCL implementation (to build OpenCL libs and applications)

  • MPI_INSTALL_PATH - Installation directory of MPI (to build MPI based distributed memory libs and applications)

  • HDF5_INSTALL_PATH - Installation directory of HDF5 (to support HDF5 based File I/O)

See the example scripts (e.g. source_intel, source_pgi_15.10, source_cray) under OPS/ops/scripts that set up the environment for building with various compilers (Intel, PGI, Cray).
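As a sketch, a minimal environment for building with the Intel compilers and CUDA might look like the following (all paths are illustrative and should be adapted to your system):

```shell
# Example environment set-up for an Intel + CUDA build (paths are examples)
export OPS_COMPILER=intel
export OPS_INSTALL_PATH=~/OPS/ops
export CUDA_INSTALL_PATH=/usr/local/cuda
export MPI_INSTALL_PATH=/usr/local/openmpi
export HDF5_INSTALL_PATH=/usr/local/hdf5
```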

Build back-end library

For the C/C++ back-end use the Makefile under OPS/ops/c (modify the Makefile if required). The libraries will be built in OPS/ops/c/lib

cd $OPS_INSTALL_PATH/c
make


For the Fortran back-end use the Makefile under OPS/ops/fortran (modify the Makefile if required). The libraries will be built in OPS/ops/fortran/lib

cd $OPS_INSTALL_PATH/fortran
make

Build example applications

For example, to build CloverLeaf_3D under OPS/apps/c/CloverLeaf_3D:

cd ../apps/c/CloverLeaf_3D/
make