A2 - Introduction to OpenJij core interface (C++ interface)


In this section, we introduce the core interface (C++ interface) of OpenJij. If you only want to use the core Python interface, you can skip this section.

The C++ interface is the lowest-layer API of OpenJij; the core Python interface calls it internally. It is useful in applications where we want to get the best performance out of OpenJij using C++ alone, without Python.

Run a problem

First, we clone the repository from GitHub.

$ git clone https://github.com/OpenJij/OpenJij
$ cd OpenJij

OpenJij is essentially a header-only library. Hence, we only need to add the src directory to the include path at compile time in order to use the C++ interface. We need to build the library with CMake only if we want to use the GPU algorithms; executing build_gcc.sh builds it automatically.
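
For example, compiling a single source file directly against the headers might look like the following (a sketch assuming a C++17 compiler and that the headers live under src; the flags actually used by the project are listed in project_template/Makefile):

$ g++ -std=c++17 -I ./src template.cpp -o tutorial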

The code that outputs exactly the same result as the Python interface in the previous section can be written as follows (the same code can be found in project_template/template.cpp).

#include <graph/all.hpp>
#include <system/all.hpp>
#include <updater/all.hpp>
#include <algorithm/all.hpp>
#include <result/all.hpp>
#include <utility/schedule_list.hpp>
#include <random>

#include <iostream>

using namespace openjij;

int main(void){

    // set interaction matrix. use Graph module
    // define Dense graph of problem size N=5
    constexpr std::size_t N = 5;
    auto dense = graph::Dense<double>(N);

    // set interaction
    for(std::size_t i=0; i<N; i++){
        for(std::size_t j=0; j<N; j++){
            dense.J(i, j) = (i == j) ? 0 : -1;
        }
    }

    // set longitudinal magnetic fields
    for(std::size_t i=0; i<N; i++){
        dense.h(i) = -1;
    }

    // define random number generator (Mersenne Twister)
    auto rand_engine = std::mt19937(0x1234);

    // create a System to compute from the Graph
    // here we use the system for an ordinary classical Monte Carlo calculation
    auto system = system::make_classical_ising(dense.gen_spin(rand_engine), dense);

    // set schedules of annealing. use Utility module
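    // make_classical_schedule_list(beta_min, beta_max, one_mc_step, num_call_updater)
    // anneals the inverse temperature beta from 0.1 up to 100 over 10 schedule points,
    // performing 10 Monte Carlo sweeps at each point
    // (argument names are taken from the OpenJij source; treat them as an assumption)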
    auto schedule_list = utility::make_classical_schedule_list(0.1, 100, 10, 10);

    // execute annealing. use Algorithm module
    // use simple SingleSpinFlip as an updating Monte Carlo step
    algorithm::Algorithm<updater::SingleSpinFlip>::run(system, rand_engine, schedule_list);

    // get result
    std::cout << "The result spins are [";
    for(auto&& elem : result::get_solution(system)){
        std::cout << elem << " ";
    }

    std::cout << "]" << std::endl;
}
First, we execute make in project_template, and then we execute ./tutorial. We can see that \([1, 1, 1, 1, 1]\) is output as the solution, just as before.
For more details on the compile options, see the Makefile in project_template. In particular, when we use the algorithms on GPUs, we can use build_gcc.sh to build them. Note that we need to link the CUDA libraries in that case.
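
Concretely, the build-and-run sequence looks like this (the output line follows the std::cout format in the code above):

$ cd project_template
$ make
$ ./tutorial
The result spins are [1 1 1 1 1 ]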

Comparing this with the sample script of the core Python interface, we can see that the C++ code is written in much the same way as the Python code.
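
As a quick extension, the energy of the obtained state can be evaluated on the graph object itself. This is a minimal sketch, assuming graph::Dense exposes a calc_energy member as in the OpenJij source tree; append it to main() above:

    // evaluate the energy of the obtained spins
    // (calc_energy is assumed to be available on graph::Dense)
    auto spins = result::get_solution(system);
    std::cout << "energy: " << dense.calc_energy(spins) << std::endl;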
