Ginkgo: The cb-gmres program

The CB-GMRES solver example.

Table of contents
Introduction
  About the example
The commented program
Results
Comments about programming and debugging
The plain program

Introduction

About the example

This example showcases the usage of the Ginkgo solver CB-GMRES (Compressed Basis GMRES). A small system is solved with two un-preconditioned CB-GMRES solvers:

  1. without compressing the Krylov basis: it uses double precision for both the matrix and the Krylov basis, and
  2. with compression of the Krylov basis: it uses double precision for the matrix and all arithmetic operations, while using single precision for the storage of the Krylov basis.

Both solves are timed and the residual norm of each solution is computed to show that both solutions are correct.
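The only configuration difference between the two solvers is the storage precision handed to the solver factory. The following is a minimal sketch of that switch, not part of the example itself: the helper name make_cb_gmres_factory and the simplified (iteration-only) stopping criterion are illustrative assumptions, while the storage_precision values keep and reduce1 are the ones used further below.

#include <memory>

#include <ginkgo/ginkgo.hpp>

// Sketch: build a CB-GMRES factory whose Krylov basis is either stored in
// full working precision (keep) or compressed by one precision level
// (reduce1, e.g. float storage when the working precision is double).
using cb_gmres = gko::solver::CbGmres<double>;

std::unique_ptr<cb_gmres::Factory> make_cb_gmres_factory(
    std::shared_ptr<const gko::Executor> exec, bool compress_basis)
{
    const auto storage =
        compress_basis ? gko::solver::cb_gmres::storage_precision::reduce1
                       : gko::solver::cb_gmres::storage_precision::keep;
    return cb_gmres::build()
        .with_criteria(gko::stop::Iteration::build().with_max_iters(1000u))
        .with_storage_precision(storage)
        .on(exec);
}

Generating a solver from either factory and applying it proceeds identically; only the internal storage of the Krylov basis changes.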

The commented program

Make sure all previous executor operations have finished before starting the time

exec->synchronize();

auto tic = std::chrono::steady_clock::now();

solver->apply(b, x_copy);

Make sure all computations are done before stopping the time

exec->synchronize();

auto tac = std::chrono::steady_clock::now();

duration += std::chrono::duration<double>(tac - tic).count();

}

Copy the solution back to x, so the caller has the result

x->copy_from(x_copy);

return duration / static_cast<double>(repeats);

}

int main(int argc, char* argv[])

{

Use some shortcuts. In Ginkgo, vectors are seen as a gko::matrix::Dense with one column/one row. The advantage of this concept is that using multiple vectors is now a natural extension: additional columns/rows are added as necessary.
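As a brief aside (a self-contained sketch, not part of this example; the function name and sizes are arbitrary): a single vector of length n is simply a Dense matrix of size n x 1, and a block of k vectors of length n is a Dense matrix of size n x k.

#include <ginkgo/ginkgo.hpp>

// Sketch: vectors and multi-vectors are both gko::matrix::Dense objects.
void dense_as_vectors()
{
    auto exec = gko::ReferenceExecutor::create();
    // One vector of length 5, stored as a 5x1 dense matrix.
    auto v = gko::matrix::Dense<double>::create(exec, gko::dim<2>{5, 1});
    // Three vectors of length 5 at once, stored as a 5x3 dense matrix.
    auto vs = gko::matrix::Dense<double>::create(exec, gko::dim<2>{5, 3});
    v->fill(1.0);
    vs->fill(0.0);
}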

using ValueType = double;

using RealValueType = gko::remove_complex<ValueType>;

using IndexType = int;

using vec = gko::matrix::Dense<ValueType>;

using real_vec = gko::matrix::Dense<RealValueType>;

The gko::matrix::Csr class is used here, but any other matrix class such as gko::matrix::Coo, gko::matrix::Hybrid, gko::matrix::Ell or gko::matrix::Sellp could also be used.

using mtx = gko::matrix::Csr<ValueType, IndexType>;

The gko::solver::CbGmres is used here, but any other solver class can also be used.

using cb_gmres = gko::solver::CbGmres<ValueType>;

Print the Ginkgo version information.

std::cout << gko::version_info::get() << std::endl;

if (argc == 2 && (std::string(argv[1]) == "--help")) {

std::cerr << "Usage: " << argv[0] << " [executor] " << std::endl;

std::exit(-1);

}

Map which generates the appropriate executor

const auto executor_string = argc >= 2 ? argv[1] : "reference";

std::map<std::string, std::function<std::shared_ptr<gko::Executor>()>>

exec_map{

{"cuda",

[] {

}},

{"hip",

[] {

}},

{"dpcpp",

[] {

}},

{"reference", [] { return gko::ReferenceExecutor::create(); }}};

executor where Ginkgo will perform the computation

const auto exec = exec_map.at(executor_string)();

Note: this matrix is copied from "SOURCE_DIR/matrices" instead of from the local directory. For details, see "examples/cb-gmres/CMakeLists.txt"

auto A = share(gko::read<mtx>(std::ifstream("data/A.mtx"), exec));

Create a uniform right-hand side with a norm2 of 1 on the host (norm2(b) == 1), followed by copying it to the actual executor (to make sure it also works for GPUs)

const auto A_size = A->get_size();

auto b_host = vec::create(exec->get_master(), gko::dim<2>{A_size[0], 1});

for (gko::size_type i = 0; i < A_size[0]; ++i) {

b_host->at(i, 0) =

ValueType{1} / std::sqrt(static_cast<ValueType>(A_size[0]));

}

auto b_norm = gko::initialize<real_vec>({0.0}, exec);

b_host->compute_norm2(b_norm);

auto b = clone(exec, b_host);

As an initial guess, use the right-hand side

auto x_keep = clone(b);

auto x_reduce = clone(x_keep);

const RealValueType reduction_factor{1e-6};

Generate two solver factories: _keep uses the same precision for the Krylov basis as the matrix, and _reduce uses one precision below it. If ValueType is double, then _reduce uses float as the Krylov basis storage type.

auto solver_gen_keep =

cb_gmres::build()

.with_criteria(gko::stop::Iteration::build().with_max_iters(1000u),

gko::stop::ResidualNorm<ValueType>::build()

.with_baseline(gko::stop::mode::rhs_norm)

.with_reduction_factor(reduction_factor))

.with_krylov_dim(100u)

.with_storage_precision(

gko::solver::cb_gmres::storage_precision::keep)

.on(exec);

auto solver_gen_reduce =

cb_gmres::build()

.with_criteria(gko::stop::Iteration::build().with_max_iters(1000u),

gko::stop::ResidualNorm<ValueType>::build()

.with_baseline(gko::stop::mode::rhs_norm)

.with_reduction_factor(reduction_factor))

.with_krylov_dim(100u)

.with_storage_precision(

gko::solver::cb_gmres::storage_precision::reduce1)

.on(exec);

Generate the actual solver from the factory and the matrix.

auto solver_keep = solver_gen_keep->generate(A);

auto solver_reduce = solver_gen_reduce->generate(A);

Solve the system with both solvers and measure the time for each.

auto time_keep =

measure_solve_time_in_s(exec, solver_keep.get(), b.get(), x_keep.get());

auto time_reduce = measure_solve_time_in_s(exec, solver_reduce.get(),

b.get(), x_reduce.get());

Make sure the output is in scientific notation for easier comparison

std::cout << std::scientific;

Note: The time might not be significantly different since the matrix is quite small

std::cout << "Solve time without compression: " << time_keep << " s\n"

<< "Solve time with compression: " << time_reduce << " s\n";

To check whether the solutions have actually converged, the residual norm of each solution is computed. one and neg_one are objects that represent the scalars 1 and -1, which allows for a uniform interface when computing on any device. To compute the residual, the (advanced) apply method is used.
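Concretely, the advanced apply A->apply(alpha, x, beta, y) updates y to alpha * A * x + beta * y; with alpha = 1, beta = -1, and y initially holding b, the result is A * x - b, whose norm is the residual norm. Below is a self-contained sketch of this pattern, not part of the example; the function name and the 2x2 data are made up for illustration.

#include <ginkgo/ginkgo.hpp>

// Sketch: compute the residual norm ||A*x - b|| via the advanced apply.
void residual_via_advanced_apply()
{
    using vec = gko::matrix::Dense<double>;
    auto exec = gko::ReferenceExecutor::create();
    auto A = gko::initialize<vec>({{2.0, 0.0}, {0.0, 2.0}}, exec);
    auto x = gko::initialize<vec>({1.0, 1.0}, exec);
    auto r = gko::initialize<vec>({2.0, 2.0}, exec);  // r starts as b
    auto one = gko::initialize<vec>({1.0}, exec);
    auto neg_one = gko::initialize<vec>({-1.0}, exec);
    auto norm = gko::initialize<vec>({0.0}, exec);
    // r = 1 * A * x + (-1) * r, i.e. A*x - b (exactly zero for this data).
    A->apply(one, x, neg_one, r);
    r->compute_norm2(norm);
}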

auto one = gko::initialize<vec>({1.0}, exec);

auto neg_one = gko::initialize<vec>({-1.0}, exec);

auto res_norm_keep = gko::initialize<real_vec>({0.0}, exec);

auto res_norm_reduce = gko::initialize<real_vec>({0.0}, exec);

auto tmp = clone(b);

tmp = Ax - tmp

A->apply(one, x_keep, neg_one, tmp);

tmp->compute_norm2(res_norm_keep);

std::cout << "\nResidual norm without compression:\n";

write(std::cout, res_norm_keep);

tmp->copy_from(b);

A->apply(one, x_reduce, neg_one, tmp);

tmp->compute_norm2(res_norm_reduce);

std::cout << "\nResidual norm with compression:\n";

write(std::cout, res_norm_reduce);

}

Results

The following is the expected result:

Solve time without compression: 1.842690e-04 s

Solve time with compression: 1.589936e-04 s

Residual norm without compression:

%%MatrixMarket matrix array real general

1 1

2.430544e-07

Residual norm with compression:

%%MatrixMarket matrix array real general

1 1

3.437257e-07
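Both residual norms are well below reduction_factor times the right-hand-side norm (which is 1 by construction), so both solvers converged to the requested tolerance. To check this programmatically rather than by eye, a small helper along the following lines could be used; residual_small_enough is a hypothetical addition, not part of the example.

#include <memory>

#include <ginkgo/ginkgo.hpp>

// Hypothetical helper: returns true if the 1x1 vector res_norm is below
// tol times the 1x1 vector b_norm. Both are cloned to the host so that
// at() can be used regardless of where they currently live.
bool residual_small_enough(std::shared_ptr<const gko::Executor> exec,
                           const gko::matrix::Dense<double>* res_norm,
                           const gko::matrix::Dense<double>* b_norm,
                           double tol)
{
    auto host = exec->get_master();
    auto res_host = gko::clone(host, res_norm);
    auto rhs_host = gko::clone(host, b_norm);
    return res_host->at(0, 0) < tol * rhs_host->at(0, 0);
}

For the run above, residual_small_enough(exec, res_norm_keep.get(), b_norm.get(), reduction_factor) would be expected to return true for both solvers.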

Comments about programming and debugging

The plain program

#include <chrono>
#include <cmath>
#include <fstream>
#include <iostream>
#include <map>
#include <string>

#include <ginkgo/ginkgo.hpp>

// Measures the time of solver->apply(b, x) in seconds, averaged over several
// repeats; the solution is written back to x.
double measure_solve_time_in_s(std::shared_ptr<const gko::Executor> exec,
                               gko::LinOp* solver, const gko::LinOp* b,
                               gko::LinOp* x)
{
    constexpr int repeats{5};
    double duration{0};
    auto x_copy = clone(x);
    for (int i = 0; i < repeats; ++i) {
        if (i != 0) {
            x_copy->copy_from(x);
        }
        exec->synchronize();
        auto tic = std::chrono::steady_clock::now();
        solver->apply(b, x_copy);
        exec->synchronize();
        auto tac = std::chrono::steady_clock::now();
        duration += std::chrono::duration<double>(tac - tic).count();
    }
    x->copy_from(x_copy);
    return duration / static_cast<double>(repeats);
}

int main(int argc, char* argv[])
{
    using ValueType = double;
    using RealValueType = gko::remove_complex<ValueType>;
    using IndexType = int;
    using vec = gko::matrix::Dense<ValueType>;
    using real_vec = gko::matrix::Dense<RealValueType>;
    using mtx = gko::matrix::Csr<ValueType, IndexType>;
    using cb_gmres = gko::solver::CbGmres<ValueType>;

    std::cout << gko::version_info::get() << std::endl;

    if (argc == 2 && (std::string(argv[1]) == "--help")) {
        std::cerr << "Usage: " << argv[0] << " [executor] " << std::endl;
        std::exit(-1);
    }

    // Map which generates the appropriate executor
    const auto executor_string = argc >= 2 ? argv[1] : "reference";
    std::map<std::string, std::function<std::shared_ptr<gko::Executor>()>>
        exec_map{
            {"cuda",
             [] {
                 return gko::CudaExecutor::create(0,
                                                  gko::OmpExecutor::create());
             }},
            {"hip",
             [] {
                 return gko::HipExecutor::create(0,
                                                 gko::OmpExecutor::create());
             }},
            {"dpcpp",
             [] {
                 return gko::DpcppExecutor::create(0,
                                                   gko::OmpExecutor::create());
             }},
            {"reference", [] { return gko::ReferenceExecutor::create(); }}};

    const auto exec = exec_map.at(executor_string)();

    auto A = share(gko::read<mtx>(std::ifstream("data/A.mtx"), exec));

    const auto A_size = A->get_size();
    auto b_host = vec::create(exec->get_master(), gko::dim<2>{A_size[0], 1});
    for (gko::size_type i = 0; i < A_size[0]; ++i) {
        b_host->at(i, 0) =
            ValueType{1} / std::sqrt(static_cast<ValueType>(A_size[0]));
    }
    auto b_norm = gko::initialize<real_vec>({0.0}, exec);
    b_host->compute_norm2(b_norm);
    auto b = clone(exec, b_host);

    auto x_keep = clone(b);
    auto x_reduce = clone(x_keep);

    const RealValueType reduction_factor{1e-6};

    auto solver_gen_keep =
        cb_gmres::build()
            .with_criteria(gko::stop::Iteration::build().with_max_iters(1000u),
                           gko::stop::ResidualNorm<ValueType>::build()
                               .with_baseline(gko::stop::mode::rhs_norm)
                               .with_reduction_factor(reduction_factor))
            .with_krylov_dim(100u)
            .with_storage_precision(
                gko::solver::cb_gmres::storage_precision::keep)
            .on(exec);

    auto solver_gen_reduce =
        cb_gmres::build()
            .with_criteria(gko::stop::Iteration::build().with_max_iters(1000u),
                           gko::stop::ResidualNorm<ValueType>::build()
                               .with_baseline(gko::stop::mode::rhs_norm)
                               .with_reduction_factor(reduction_factor))
            .with_krylov_dim(100u)
            .with_storage_precision(
                gko::solver::cb_gmres::storage_precision::reduce1)
            .on(exec);

    auto solver_keep = solver_gen_keep->generate(A);
    auto solver_reduce = solver_gen_reduce->generate(A);

    // Solve the system with both solvers and measure the time for each.
    auto time_keep =
        measure_solve_time_in_s(exec, solver_keep.get(), b.get(), x_keep.get());
    auto time_reduce = measure_solve_time_in_s(exec, solver_reduce.get(),
                                               b.get(), x_reduce.get());

    std::cout << std::scientific;
    std::cout << "Solve time without compression: " << time_keep << " s\n"
              << "Solve time with compression: " << time_reduce << " s\n";

    // Compute the residual norms to verify that both solutions are correct.
    auto one = gko::initialize<vec>({1.0}, exec);
    auto neg_one = gko::initialize<vec>({-1.0}, exec);
    auto res_norm_keep = gko::initialize<real_vec>({0.0}, exec);
    auto res_norm_reduce = gko::initialize<real_vec>({0.0}, exec);
    auto tmp = clone(b);

    A->apply(one, x_keep, neg_one, tmp);
    tmp->compute_norm2(res_norm_keep);
    std::cout << "\nResidual norm without compression:\n";
    write(std::cout, res_norm_keep);

    tmp->copy_from(b);
    A->apply(one, x_reduce, neg_one, tmp);
    tmp->compute_norm2(res_norm_reduce);
    std::cout << "\nResidual norm with compression:\n";
    write(std::cout, res_norm_reduce);
}