How to write a CFD library: The Conjugate Gradient class

When it comes to Computational Fluid Dynamics (CFD) applications, the Conjugate Gradient (CG) method is one of the most widely used algorithms for solving the linear system of equations \mathbf{Ax}=\mathbf{b}. It offers fast convergence, has low storage requirements, and is relatively easy to implement. In this week’s article, we’ll look at how we can implement this algorithm in our CFD library and use it later to solve our heat diffusion equation.

Download Resources

All code and resources developed in this article are available for download. If you encounter issues running any of the scripts, please refer to the instructions for running scripts downloaded from this website.

How we got here

Thus far, we have looked at the heat diffusion equation and discretised it in a previous article using the finite volume method. We looked at explicit and implicit time integration techniques and concluded that for equations that do not have a time derivative we need to use an implicit discretisation. The implicit discretisation requires a linear system of equations solver and we concluded that this is best handled by a library, which can be used over and over again for different discretised equations.

In last week’s article, then, we looked at how to flesh out a basic library structure to house the code for our linear system of equations solver. We also wrote a test code that implemented the discretised heat diffusion equation using our library, which we hadn’t written yet; this allowed us to design the interface of our library, i.e. how we want to call library functions, their names, their order, and so on. We’ll use that test to ensure the correctness of our library code.

In this week’s article, I want to start looking at some library code. We will start with the conjugate gradient algorithm itself, which we seek to implement in our library. The algorithm will not work yet, as we still have to define our matrix and vector classes. But what it does allow us to do is write code that shows us which functions we need to implement on the vector and matrix class side, especially the operators we want to overload. Looking back at our basic library structure, here are the files we are going to deal with this week:

root
├── build
├── linearAlgebraLib
│   ├── linearAlgebraLib.hpp
│   └── src
│       ├── vector.(hpp|cpp)
│       ├── sparseMatrixCSR.(hpp|cpp)
│       ├── linearAlgebraSolverBase.(hpp|cpp) // current article
│       └── conjugateGradient.(hpp|cpp)       // current article
├── main.cpp
└── buildAndRun.(sh|bat)

So with this preamble out of the way, let’s look at the conjugate gradient algorithm next.

The Conjugate Gradient algorithm

When it comes to iterative matrix solvers, the conjugate gradient algorithm is one of the most widely used in the field of computational fluid dynamics. It is found in many commercial and open-source CFD solvers, such as Siemens’ Star CCM+ and OpenFOAM, and is typically an essential building block for fast convergence.

The conjugate gradient algorithm seeks to minimise the following quadratic function:

f(\mathbf{x})=\frac{1}{2}\mathbf{x}^T\mathbf{A}\mathbf{x}-\mathbf{x}^T\mathbf{b}

It just so happens that the minimum of f(\mathbf{x}) corresponds to the solution of \mathbf{Ax}=\mathbf{b}, provided that \mathbf{A} is symmetric and positive-definite, which is the setting the conjugate gradient method is designed for. So, instead of solving our linear system of equations directly, we can transform our problem into one of finding the minimum of f(\mathbf{x}). This is now a geometric task: we pick any point on f(\mathbf{x}), calculate the gradient there, and take a step towards the minimum. We repeat this step and iteratively approach the minimum, at which point we have recovered the solution to \mathbf{Ax}=\mathbf{b}.
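
To see why the two problems are equivalent, we can take the gradient of f(\mathbf{x}); for a symmetric matrix \mathbf{A} it reads

\nabla f(\mathbf{x})=\mathbf{A}\mathbf{x}-\mathbf{b}

Setting the gradient to zero recovers exactly \mathbf{Ax}=\mathbf{b}, and the negative gradient, \mathbf{b}-\mathbf{Ax}, is nothing but the residual \mathbf{r} that we will meet again below.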

If you are interested in a full walk-through of the algorithm and how we arrive at its final form, I can only recommend the following two lectures on the conjugate gradient method (and the method of steepest descent, which is a simplified form of the algorithm). You’ll need to set some time aside, but it’ll be worth it:

In this article, we’ll be content with the final form of the conjugate gradient algorithm and will not dive too deep into the theory; after all, we want to see the algorithm in code, understand how it works, and provide a working library implementation.

Below is an outline for the conjugate gradient algorithm, taken from here. We’ll use this pseudo-code to implement our own version in our library.
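
In symbols, the standard (unpreconditioned) iteration can be sketched as follows, starting from an initial guess \mathbf{x}_0; this is the form we will mirror in code:

\mathbf{r}_0=\mathbf{b}-\mathbf{A}\mathbf{x}_0,\quad \mathbf{p}_0=\mathbf{r}_0,\quad k=0

repeat
    \alpha_k=\frac{\mathbf{r}_k^T\mathbf{r}_k}{\mathbf{p}_k^T\mathbf{A}\mathbf{p}_k}
    \mathbf{x}_{k+1}=\mathbf{x}_k+\alpha_k\mathbf{p}_k
    \mathbf{r}_{k+1}=\mathbf{r}_k-\alpha_k\mathbf{A}\mathbf{p}_k
    exit the loop if \|\mathbf{r}_{k+1}\| is sufficiently small
    \beta_k=\frac{\mathbf{r}_{k+1}^T\mathbf{r}_{k+1}}{\mathbf{r}_k^T\mathbf{r}_k}
    \mathbf{p}_{k+1}=\mathbf{r}_{k+1}+\beta_k\mathbf{p}_k
    k=k+1
end repeat

return \mathbf{x}_{k+1}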

Here, \mathbf{r}_0 is the starting residual, either computed from the initial solution or carried over from the previous iteration. Once the norm of the residual becomes smaller than a convergence threshold, we have found our solution.

\mathbf{b} is the right-hand side vector, containing boundary conditions and explicit contributions, while \mathbf{A} is the coefficient matrix, which depends both on the equation being solved and on the numerical schemes that are used. \mathbf{x}_0 is the solution vector (in our case the temperature), and the subscript indicates that this vector will be updated iteratively.

\mathbf{p} represents the search direction, i.e. the direction in which we march towards the minimum of f(\mathbf{x}), while k is simply the iteration count. The two scalars \alpha and \beta determine, respectively, how far we step along the current search direction and by how much the new search direction \mathbf{p} deviates from the pure residual direction \mathbf{r} by blending in the previous search direction. We iterate until convergence is reached, which is judged based on \mathbf{r}. Given that \mathbf{r} is the residual of the linear system, i.e. \mathbf{r}=\mathbf{b}-\mathbf{Ax}, convergence is reached once its norm is sufficiently small.

An abstract base class for our matrix solvers

We know which algorithm we want to implement, and we will limit ourselves to the conjugate gradient algorithm here. However, in theory, we may want to have more than just a single iterative matrix solver. Whenever we have an overarching type from which several more specialised types derive, we have a classic case for inheritance in C++ (or, more broadly, in object-oriented programming), and we should provide a base class.

Now, there is one caveat: Kent Beck introduced extreme programming (XP), a philosophy widely practised in software engineering (and something we ought to talk about in the future!). It features a variety of components, one of them being YAGNI (you aren’t gonna need it). The idea here is to not write a single line of code if it is not actually needed. So in our case, we said that we only wanted to implement the conjugate gradient algorithm. If we only have a single derived class, then inheritance becomes redundant and YAGNI tells us to just use a single class.

This example is, however, overly simplified; I still want to provide a base class to simulate the case of having more than a single derived class (which would typically be the case for a library). So don’t send in those angry memes just yet; there is a reason for it. In reality, we have all of these guiding mantras, which we have to break every now and then if we have a good reason to. That’s not my conclusion but taken directly from Uncle Bob’s book Clean Code: A Handbook of Agile Software Craftsmanship (and yes, everyone calls him Uncle Bob, that’s not just me being weird; it is another excellent book we ought to discuss in the future).

So then, let’s look at our abstract base class. The header file is shown below:

#pragma once

#include "linearAlgebraLib/src/sparseMatrixCSR.hpp"
#include "linearAlgebraLib/src/vector.hpp"

namespace linearAlgebraLib {

class LinearAlgebraSolverBase {
public:
  LinearAlgebraSolverBase(unsigned numberOfCells);
  virtual ~LinearAlgebraSolverBase() = default;

  void setCoefficientMatrix(const SparseMatrixCSR &matrix);
  void setRightHandSide(const Vector &rhs);
  
  virtual Vector solve(unsigned maxIterations, double convergenceThreshold) = 0;

protected:
  SparseMatrixCSR _coefficientMatrix;
  Vector _rightHandSide;
};

} // namespace linearAlgebraLib

It isn’t overly complicated! But let’s point out a few things here. First, we are now using a namespace, declared right after the includes at the top of the header. We haven’t talked about namespaces before, but suffice it to say that they are used to group classes into logical units and avoid having classes with the same name defined in several places. For example, there already is a vector class implemented in the standard template library (STL), and you know it better as std::vector. Since we want to provide our own vector class later, putting it in its own namespace avoids a naming conflict. A namespace is followed by the :: operator when we access its contents, which is why all STL containers carry the std:: prefix.
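
As a minimal illustration (this snippet is not part of the library code), the namespace is what later lets our own Vector live happily alongside the STL’s std::vector:

#include <vector>

#include "linearAlgebraLib/src/vector.hpp"

int main() {
  std::vector<double> stlVector(10, 0.0);   // the STL's dynamic array
  linearAlgebraLib::Vector ourVector(10);   // our own linear algebra vector
  return 0;
}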

The LinearAlgebraSolverBase class features only a few functions. We have the constructor and destructor, as we would expect. We require the numberOfCells argument in the constructor because we want to initialise our coefficient matrix and right-hand side vector during construction. The two functions setCoefficientMatrix() and setRightHandSide() should be self-explanatory and simply set \mathbf{A} and \mathbf{b}, respectively.

There is also one pure virtual function called solve(). If we were to implement different iterative algorithms to solve \mathbf{Ax}=\mathbf{b}, then all of these algorithms would have to derive from linearAlgebraLib::LinearAlgebraSolverBase and provide an implementation for the solve() function; that’s it. They would all receive the coefficient matrix and right-hand side vector from their parent/base class. The function takes two arguments: the maximum number of iterations and the convergence threshold.

We see, in fact, that these two entities are stored as protected members at the bottom of the LinearAlgebraSolverBase class. Their types are made available by the two includes at the top of the header, and we’ll look at these classes in turn in the next two articles.
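
Just to make the inheritance idea concrete, a second, purely hypothetical solver, say a Jacobi solver (not something we will implement in this series), would only need its own header deriving from the base class and an implementation of the solve() function; a sketch of such a header could look like this:

#pragma once

#include "linearAlgebraLib/src/linearAlgebraSolverBase.hpp"
#include "linearAlgebraLib/src/vector.hpp"

namespace linearAlgebraLib {

// hypothetical second solver, shown only to illustrate the inheritance structure
class Jacobi : public LinearAlgebraSolverBase {
public:
  Jacobi(unsigned numberOfCells);
  virtual Vector solve(unsigned maxIterations, double convergenceThreshold) final override;
};

} // namespace linearAlgebraLib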

For completeness, let’s also look at the source file of this class to see the implementations we can already provide:

#include "linearAlgebraLib/src/linearAlgebraSolverBase.hpp"
#include "linearAlgebraSolverBase.hpp"

namespace linearAlgebraLib {

LinearAlgebraSolverBase::LinearAlgebraSolverBase(unsigned numberOfCells) :
  _coefficientMatrix(numberOfCells, numberOfCells), _rightHandSide(numberOfCells) { }

void LinearAlgebraSolverBase::setCoefficientMatrix(const SparseMatrixCSR &matrix) {
  _coefficientMatrix = matrix;
}

void LinearAlgebraSolverBase::setRightHandSide(const Vector &rhs) {
  _rightHandSide = rhs;
}

} // namespace linearAlgebraLib

One thing to note is that we have to wrap everything in the namespace here again. The constructor simply initialises the coefficient matrix \mathbf{A} and right-hand side vector \mathbf{b} with the correct sizes, while the two setters store the matrix and vector that are passed in. There is not much more to this class; all the real work goes into the derived class, which we will look at next.

The derived Conjugate Gradient class

After defining the interface, we can simply derive from it and provide a concrete implementation using the conjugate gradient algorithm discussed above. So let’s look at the header file for the conjugate gradient method, shown below:

#pragma once

#include <iostream>

#include "linearAlgebraLib/src/linearAlgebraSolverBase.hpp"
#include "linearAlgebraLib/src/sparseMatrixCSR.hpp"
#include "linearAlgebraLib/src/vector.hpp"

namespace linearAlgebraLib {

class ConjugateGradient : public LinearAlgebraSolverBase {
public:
  ConjugateGradient(unsigned numberOfCells);
  virtual Vector solve(unsigned maxIterations, double convergenceThreshold) final override;
};

} // namespace linearAlgebraLib

We see that we inherit from the LinearAlgebraSolverBase class and again wrap everything into our linearAlgebraLib namespace. Apart from the constructor, we only provide an implementation for the solve() function. Remember that anything declared as a pure virtual function (indicated by the = 0 in the base class) needs to be implemented in the derived class; otherwise, the derived class itself remains abstract and we’ll get a compilation error as soon as we try to create an instance of it.
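
As a quick, purely hypothetical illustration of that last point, a derived class that forgets to override solve() remains abstract and cannot be instantiated:

#include "linearAlgebraLib/src/linearAlgebraSolverBase.hpp"

namespace linearAlgebraLib {

// hypothetical class that does NOT override the pure virtual solve() function
class IncompleteSolver : public LinearAlgebraSolverBase {
public:
  IncompleteSolver(unsigned numberOfCells) : LinearAlgebraSolverBase(numberOfCells) { }
};

} // namespace linearAlgebraLib

// somewhere in user code:
// linearAlgebraLib::IncompleteSolver solver(10);   // compilation error: the class is abstract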

The source file contains the actual algorithm implementation and is given below:

#include "linearAlgebraLib/src/conjugateGradient.hpp"

namespace linearAlgebraLib {

ConjugateGradient::ConjugateGradient(unsigned numberOfCells) : LinearAlgebraSolverBase(numberOfCells) { }

Vector ConjugateGradient::solve(unsigned maxIterations, double convergenceThreshold) {
  // working vectors; the trailing 0 and 1 correspond to k and k+1 in the pseudo-code
  Vector r0(_rightHandSide.size());
  Vector r1(_rightHandSide.size());
  Vector p0(_rightHandSide.size());
  Vector p1(_rightHandSide.size());
  Vector x(_rightHandSide.size());

  // short-hand references to the coefficient matrix and right-hand side from the base class
  auto &A = _coefficientMatrix;
  auto &b = _rightHandSide;

  double alpha = 0.0;
  double beta = 0.0;

  unsigned iteration = 0;

  // initial search direction and residual, computed from the initial solution x
  p1 = b - A * x;
  r1 = b - A * x;

  do {
    r0 = r1;
    p0 = p1;

    // step length along the current search direction
    alpha = (r0.transpose() * r0) / (p0.transpose() * (A * p0));
    // update the solution and the residual
    x = x + alpha * p0;
    r1 = r0 - alpha * A * p0;
    // blend the previous search direction into the new one
    beta = (r1.transpose() * r1) / (r0.transpose() * r0);
    p1 = r1 + beta * p0;

    ++iteration;
  } while (iteration < maxIterations && r1.getL2Norm() > convergenceThreshold);

  return x;
}

} // namespace linearAlgebraLib

Since I want to discuss this code alongside the pseudo-code, it may help to keep the outline shown earlier in view as we walk through the implementation.

We first define a few working vectors at the top of the solve() function, which we will use during the update. These have a trailing 0 and 1 and are equivalent to k and k+1 in the pseudo-code above. We then create the short-hand references A and b to the coefficient matrix and right-hand side vector, which we get from the base class. After initialising \alpha, \beta, the iteration counter, and the initial residual and search direction, the do-while loop carries out the conjugate gradient iteration as shown in the pseudo-code above between the repeat and end repeat statements.

We check convergence in the condition of the do-while loop, where we stop iterating either once the maximum allowable number of iterations has been reached or once the L2-norm of the residual has dropped below the convergence threshold. At that point, we return the solution x from our solve() function.

That’s all. While we still have to provide the implementations for the vector and the matrix class, I hope you have seen that implementing the conjugate gradient method itself is not that difficult. In fact, implementing the matrix class is probably the most difficult part of the entire process (at least if we want to solve this problem efficiently), but we are getting ahead of ourselves. We have the conjugate gradient algorithm implemented; next up, we’ll look at the vector class and see how we can provide all the operations required by the implementation shown above.
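
To round things off, here is a minimal usage sketch of the class as it stands. The solveSystem() helper is hypothetical, the iteration limit and tolerance are arbitrary choices, and I’m assuming the coefficient matrix and right-hand side have already been assembled (how to do that with our SparseMatrixCSR and Vector classes is the topic of the next two articles):

#include "linearAlgebraLib/src/conjugateGradient.hpp"
#include "linearAlgebraLib/src/sparseMatrixCSR.hpp"
#include "linearAlgebraLib/src/vector.hpp"

using namespace linearAlgebraLib;

// hypothetical helper: solve A * x = b for a system with numberOfCells unknowns
Vector solveSystem(const SparseMatrixCSR &A, const Vector &b, unsigned numberOfCells) {
  ConjugateGradient solver(numberOfCells);
  solver.setCoefficientMatrix(A);
  solver.setRightHandSide(b);

  // iterate at most 1000 times, or until the residual's L2-norm drops below 1e-8
  return solver.solve(1000, 1e-8);
}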

Summary

This article looked at how we can implement the conjugate gradient algorithm for our linear algebra CFD library. We looked at the base class and then the derived class structure. We implemented the pseudo-code for the conjugate gradient algorithm and saw a close resemblance between the pseudo-code and the C++ implementation. In next week’s article, we’ll see which operators we have to overload in order to achieve this easy-to-read syntax. We’ll then get to the matrix class and see which tricks to employ to get an efficient sparse matrix implementation to work.


Tom-Robin Teschner is a senior lecturer in computational fluid dynamics and course director for the MSc in computational fluid dynamics and the MSc in aerospace computational engineering at Cranfield University.