How to integrate external CFD libraries in your code

This article will explore the most common build tools you will come across when building CFD libraries from source: Make, Ninja, MSBuild, CMake, Meson, and Autotools. While some build systems are preferred over others, you may encounter any of these in the wild. We’ll adapt our previously developed linear algebra library and create different build scripts showing how each of these can be used in your project.

In the second part of this article, we look at how we can compile the CGNS library, a common exchange format for storing CFD grids/meshes and solution data (velocity, pressure, temperature, etc.). We look at two different routes: compiling it either with CMake and Ninja, which allows cross-platform compilation on Windows and UNIX, or with Autotools and Make, which will only work in a UNIX environment.

Any CFD developer needs to know how to work with third-party libraries if they are serious about their code and want it to scale. This can be a daunting task at first. However, by the end of this article, you should have all the necessary tools in your arsenal to build any third-party library from source, irrespective of your operating system. You’ll also know how to write different build scripts from scratch, which you can use in your own software projects.

Download Resources

All developed code and resources in this article are available for download. If you are encountering issues running any of the scripts, please refer to the instructions for running scripts downloaded from this website.

How libraries get built in the real world

In the previous few articles in this series, we concocted our very own build scripts that either compiled a static or dynamic library or consumed a header-only library. We then used that library in our main executable. If you are developing a library which is supposed to be used by other people (or even yourself in the future), this is pretty much a recipe for disaster, for two reasons:

  • No one has time to read and understand your build script to figure out how to compile your damn library.
  • There are already tools and scripts in widely accepted usage so you don’t have to reinvent the wheel.

Remember, programming is all about being DRY (Don’t Repeat Yourself), not WET (Write Everything Twice).

So, any serious library that wants to be consumed by other people has to provide some form of accepted build procedure that we can learn once and then apply to any number of libraries out there. This makes our life easier; once we have learned the steps in this article, we can compile and use pretty much any library in our code. And this is exactly what I want to cover today. We’ll also look at compiling a real CFD library which makes use of the different build tools mentioned in the next section, so let’s look at the contenders we have to know and work with.

As alluded to above, there is a range of accepted build systems that unify the build procedure and ensure that we can reproducibly build libraries, regardless of who is developing them. This means we need to know two things:

  • How to spot which build system is being used.
  • How to compile a library after we know which build system is used.

Identifying which build system is used is usually straightforward. Compiling libraries with them is also pretty much pain-free; we just have to remember a few steps. Below, I want to work through the two points listed above and give you an idea of how you would use these build systems. To get a feeling for how their build scripts work, we are going to transform our testHeaderOnlyLib.(sh|bat) file into a version that is understandable to each of them. We looked at the testHeaderOnlyLib.(sh|bat) file in our previous article, and it was the simplest of all of the build scripts, but as a reminder, here it is again:

#!/bin/bash
# testHeaderOnlyLib.sh

rm -rf build
mkdir -p build

# compile main function and link against header-only library
g++ -g -O0 -Wall -Wextra -I. -DHEADERONLYLIB -o build/headerOnlyLibExample main.cpp

# run
echo "Running header-only library example"
./build/headerOnlyLibExample

And, on Windows, we had a corresponding batch file:

REM testHeaderOnlyLib.bat
@echo off

if exist build rmdir /s /q build
mkdir build

REM compile main function and link against header-only library
cl /nologo /Zi /EHsc /Od /I. /DHEADERONLYLIB /Fe:build\headerOnlyLibExample.exe main.cpp

REM remove object files
del *.obj

REM run
echo Running header-only library example
build\headerOnlyLibExample.exe

As a reminder, we simply compile the main.cpp file in this case and link it against the header-only version of our library. This is the simplest case but will already lead to some rather long build scripts for some tools, hence we choose this example here. We also have to define the HEADERONLYLIB preprocessor macro on the command line to the compiler, so we’ll see how to do that as well with the different build tools. Let’s look at the different tools in the next sections.

Make

Make is probably the first proper build system we had. Released in 1976, it is still in wide use these days and I have a love/hate relationship with it. I love it because it is not opinionated. I can use it to build a C++ code, I can use it to clean up my directories (after a build, for example), I can compile my LaTeX documents and even instruct it to go on the internet, download some files, and then do something with those files. In essence, Make is an automation tool which lets you do a lot of things. I hate it because, in this day and age, it is not fit for purpose anymore when it comes to building software, especially cross-platform (Windows doesn’t work well with Make).

If a project uses Make, you can spot this through a Makefile in the top-level directory. If you were to execute make in that directory, it would always look for this file and then process the information within this file.

Since Make does not have any opinion about what we are trying to build (or rather, automate), we have to provide all the raw commands ourselves. We saw in the previous article that build scripts for Windows and UNIX looked quite different. So if we wanted to support both, we would essentially have to provide two different versions, be it through two different Makefiles or through if/else statements within one Makefile to distinguish between Windows and UNIX. This creates a mess, and traditionally speaking, Makefiles are developed to work only on UNIX platforms; Windows is largely excluded from their use.

There is only one library I have worked with in the past that provided a Makefile, and that is GLEW. It is used for 3D graphics applications (think showing your CAD geometry or mesh on screen, changing the camera location, and interacting with the model through face or element selections). I can’t recall working with any other library that provided pure Makefiles, but they are still extremely useful, and we will see why in a second.

If you want to compile a project that uses a Makefile, you simply call make on the command line and it will process the script. Makefiles usually list different tasks, and you can specify which task to execute. Think of the example I gave above, i.e. compiling and then cleaning up a directory after building: if the tasks were named build and clean, you could execute them with make build and make clean. If you want to know which tasks (more commonly referred to as targets) are available, you need to look into the Makefile and see which targets are defined. The general syntax is as follows (note that the command has to be indented with a tab character, not spaces):

target: prerequisites
	COMMAND_TO_CREATE_TARGET

A Makefile for our header-only library, for example, would look like the following:

# compiler
CC = g++
CFLAGS = -g -O0 -Wall -Wextra -I. -DHEADERONLYLIB

# build targets
all: build/main.o
	$(CC) $(CFLAGS) build/main.o -o build/headerOnlyLibExample

build/main.o: main.cpp
	$(CC) $(CFLAGS) -c main.cpp -o build/main.o

# non-build targets
.PHONY: init
init:
	mkdir -p build

.PHONY: run
run:
	./build/headerOnlyLibExample

.PHONY: clean
clean:
	rm -rf build

I only show the version that would work on UNIX here because, as I mentioned, this is where Makefiles are still in use. We will also see later that we would not actually write Makefiles ourselves, but rather generate them, so we are not losing anything by not providing a version that would work on Windows.

In the example above, we define two variables, CC (compiler) and CFLAGS (compiler flags), on lines 2-3. We then define a total of 5 targets, 2 of which are used to build our program. These are given on lines 6-7 and 9-10, respectively. The all target is the default target that gets built and typically refers to the main artefact we want to produce, e.g. an executable or a library. This target depends on build/main.o (its prerequisite), as can be seen on line 6. If build/main.o is available, then we go ahead and execute the command on line 7, where we reference the variables defined earlier, e.g. $(CC) $(CFLAGS) will be replaced by g++ -g -O0 -Wall -Wextra -I. -DHEADERONLYLIB. This command creates the executable headerOnlyLibExample in the build folder.

If, however, build/main.o is not available, then Make searches for a target that can create it. This target is specified on line 9. It depends on the main.cpp file (its prerequisite), which we can assume to always exist. The target produces the build/main.o object file from the main.cpp file, and the command is again shown on the line below (line 10). Notice the additional -c flag, which instructs g++ to only generate the object file. Once this file is generated, the all target can execute and produce its executable.

There are additional .PHONY targets; these indicate that the targets themselves are not files, but rather just names. For example, the init target tells Make to generate a new build folder, and since init is not a file that the target produces, we declare it as .PHONY.

If we want to build our code now, we simply type make + target, e.g. make all, make init, make clean, etc. The all target is a special target and assumed to be the default target, so if you just type make, this call will execute the make all command. If you have a particularly large project you want to compile with Make, use the -j {number_of_cpus} flag to build the project in parallel. For example, to build the all target with 4 cores, type make -j 4.
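
To make this concrete, a typical session with the Makefile above might look as follows (a sketch; the target names are the ones we just defined):

# create the build directory first, as the compile commands place their output there
make init
# build the default (all) target, using 4 cores
make -j 4
# run the executable, then tidy up again
make run
make clean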

Ninja

To me, Ninja is somewhat of a modern version of Make. If you look at its build files, you’ll see a lot of inspiration taken from Make, but they were deliberately designed to be generated by machines rather than written by humans. Why? This allows Ninja to focus on making the build process as fast as possible while sacrificing readability. Ninja build scripts are not supposed to be written by hand but rather created by a generator, which we will look at below.

To identify a project using Ninja, you’ll have to look for a file with the *.ninja extension, typically build.ninja. I haven’t come across a single project making sole use of handwritten Ninja files, so I can’t give you a good example. However, many libraries use build systems that generate Ninja files, which are then used to build the library.

A ninja version of the Makefile we looked at above would look like the following:

# Variables
cc = g++
cflags = -g -O0 -Wall -Wextra -I. -DHEADERONLYLIB
builddir = build

# Rules
rule compile
  command = $cc $cflags -c $in -o $out
  description = Compile C++ $out

rule link
  command = $cc $in -o $out
  description = Linking $out

rule mkbuilddir
  command = mkdir -p $builddir
  description = Make build directory

rule run
  command = ./$builddir/headerOnlyLibExample
  description = Run the program

rule clean
  command = rm -rf $builddir
  description = Clean build directory

build $builddir/main.o: compile main.cpp
  cflags = $cflags

build $builddir/headerOnlyLibExample: link $builddir/main.o

# Phony targets
build init: mkbuilddir

build run: run
  pool = console

build clean: clean

default $builddir/headerOnlyLibExample

Instead of hardcoding every command directly, we specify rules for compiling, linking, etc., and then apply them to our source files. Take the compile rule on lines 7-9, for example: this rule specifies how to compile a file that is provided ($in) and produce an output ($out). On line 27, we then use this rule to compile the main.cpp file ($in) into the main.o file ($out), which is located in the build directory ($builddir, set on line 4). The other targets proceed similarly, and we set the default target on line 40, which corresponds to the all target of a Makefile.

We instruct Ninja to execute different targets with the ninja + target syntax, similar to what we saw earlier for Makefiles as well. The default target is simply run by typing ninja, and the other targets would be executed as ninja init, ninja run, ninja clean, etc. Also, we can instruct Ninja to run in parallel using the -j flag as we saw with Makefiles, e.g. ninja -j 4 will build the default target on 4 cores.
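
As a quick sketch, a complete session with the build.ninja file above could look like this (target names as defined in our Ninja file):

# create the build directory first
ninja init
# build the default target (our executable), using 4 cores
ninja -j 4
# run the executable, then tidy up again
ninja run
ninja clean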

MSBuild

MSBuild, then, is Microsoft’s flagship build system. It is uncommon to write its files yourself, similar to the comments made about Make and Ninja; instead, they would be produced by a generator. Or, if you are on Windows and are using Visual Studio anyway, then this build file will be generated for you as part of your project. An MSBuild file can be identified by its *.vcxproj file extension, and when you open it you will be in XML heaven. I don’t know what it is about XML files that appeals so much to Microsoft, but their fetish for XML files can be seen in all their other products as well. If you have ever opened a Word, Excel, PowerPoint, etc. file in text mode, you know what I mean. Internally, all their office documents are stored as XML files. I digress.

Let’s see, then, how our simple build script can be transformed into an MSBuild-compatible build script.

<?xml version="1.0" encoding="utf-8"?>
<Project DefaultTargets="Build" xmlns="http://schemas.microsoft.com/developer/msbuild/2003">

  <Import Project="$(VCTargetsPath)\Microsoft.Cpp.Default.props" />

  <PropertyGroup Label="Globals">
    <RootNamespace>HeaderOnlyLibExample</RootNamespace>
  </PropertyGroup>

  <ItemGroup Label="ProjectConfigurations">
    <ProjectConfiguration Include="Debug|x64">
      <Configuration>Debug</Configuration>
      <Platform>x64</Platform>
    </ProjectConfiguration>
  </ItemGroup>

  <PropertyGroup Condition="'$(Configuration)|$(Platform)'=='Debug|x64'" Label="Configuration">
    <ConfigurationType>Application</ConfigurationType>
    <UseDebugLibraries>true</UseDebugLibraries>
    <PlatformToolset>v143</PlatformToolset>
    <CharacterSet>MultiByte</CharacterSet>
  </PropertyGroup>

  <PropertyGroup>
    <IntermediateOutputPath>build\$(Configuration)\obj\</IntermediateOutputPath>
    <OutDir>build\$(Configuration)\bin\</OutDir>
    <TargetName>HeaderOnlyLibExample</TargetName>
  </PropertyGroup>

  <Import Project="$(VCTargetsPath)\Microsoft.Cpp.props" />

  <ItemDefinitionGroup>
    <ClCompile>
      <PreprocessorDefinitions>HEADERONLYLIB;%(PreprocessorDefinitions)</PreprocessorDefinitions>
      <AdditionalIncludeDirectories>.</AdditionalIncludeDirectories>
      <WarningLevel>Level3</WarningLevel>
      <Optimization>Disabled</Optimization>
    </ClCompile>
    <Link>
      <SubSystem>Console</SubSystem>
      <GenerateDebugInformation>true</GenerateDebugInformation>
    </Link>
  </ItemDefinitionGroup>

  <ItemGroup>
    <ClCompile Include="main.cpp" />
  </ItemGroup>

  <Import Project="$(VCTargetsPath)\Microsoft.Cpp.targets" />

  <Target Name="clean">
    <RemoveDir Directories="build" />
  </Target>

  <Target Name="run" DependsOnTargets="Build">
    <Exec Command="$(OutDir)$(TargetName)$(TargetExt)" />
  </Target>

</Project>

If you’ve just glanced at it and said, “I’m done with Windows, I’m going back to UNIX”, I understand, but let’s just try to understand the basic file syntax. We have a few <Import Project=... /> statements that bring in functionality for MSBuild at the required locations. The order is important and can’t be changed; oh no, MSBuild will penalise you for that with error messages that send you down a rabbit hole that has nothing to do with the actual error (seriously, play around with it and see for yourself).

We can define some global properties in the <PropertyGroup Label="Globals"> on lines 6-8 and then define some project-specific configurations on lines 10-15 in the <ItemGroup Label="ProjectConfigurations"> group. In this case, we only support Debug builds and 64-bit compilation. The Debug build is essentially what we did before, i.e. we never applied any optimisation (indicated by the -O0 flag, as well as the -g flag to include debug information). The opposite would be a Release build, where we would turn on compiler optimisations with -O3, for example.

Lines 17-22 then enforce these conditions, i.e. they set up the environment for building a 64-bit Debug build. On lines 24-28, we specify some properties that apply to all build targets, since, unlike lines 17-22, we do not specify a condition here. This property group essentially just states where we want to put all build artefacts, such as the object files, but also the executable files.

Lines 32-43 add the preprocessor macro HEADERONLYLIB to the compiler, as well as define some debug-specific settings, such as setting the compiler warning level and disabling compiler optimisation. Lines 45-47 then list all source files, and we specify some additional targets on lines 51-53 and 55-57 for cleaning and running our project, respectively.

To use this script, let’s assume that we have saved the above file as build.vcxproj, then, we would compile and build our executable by simply typing msbuild build.vcxproj. If we wanted to execute a specific target, such as run, then we can do that with the /t: flag for the target, e.g. msbuild build.vcxproj /t:run.
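
Putting that together, a minimal session in a Developer Command Prompt might look like this (a sketch, assuming the file is saved as build.vcxproj; the /m flag enables parallel builds):

REM build the default (Build) target in parallel
msbuild build.vcxproj /m
REM build and then run the executable via our custom target
msbuild build.vcxproj /t:run
REM remove the build directory again
msbuild build.vcxproj /t:clean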

CMake

CMake is the industry standard build system, at least when it comes to compiled languages. Other, niche-specific build systems rule different markets, such as web or mobile development. We’ll exclude those in our discussion. A project making use of CMake will have a CMakeLists.txt file in the root directory of the project.

CMake doesn’t know how to build your code; instead, it provides a high-level syntax that is used to generate build scripts, for example for Make, Ninja, or MSBuild. The best part is that you don’t even have to care about it: CMake will select the most appropriate one for your platform, i.e. MSBuild on Windows and Make on UNIX.

Let’s look at how we can turn our project into a CMake file, the CMakeLists.txt file in our root directory has the following content:

cmake_minimum_required(VERSION 3.10)
project(HeaderOnlyLibExample)
add_executable(headerOnlyLibExample main.cpp)
target_compile_definitions(headerOnlyLibExample PRIVATE HEADERONLYLIB)
target_include_directories(headerOnlyLibExample PRIVATE ${CMAKE_SOURCE_DIR})

5 lines of code, 2 of which are boilerplate and always required (the first two lines). So it took us only 3 lines of code to instruct CMake to compile our main.cpp file (line 3), add the HEADERONLYLIB macro to the compiler flags (line 4), and include the project root directory (here specified with the CMake variable ${CMAKE_SOURCE_DIR}) during compilation (line 5, similar to the -I. flag we saw earlier). This will build either a Debug or Release version, depending on how you invoke CMake; compare that to the 59 lines of MSBuild that can only produce a Debug build. You see why CMake is so powerful and preferred in the industry.

To build our code with CMake, it is customary to create a build directory first, i.e. mkdir build. While we could define custom targets to do that within the CMake file, it is not very CMake-like and so is not done here (I have no interest in teaching technically correct solutions, but rather best practices). We change into the build directory, i.e. cd build, and then execute the CMake file (which is now one directory level up) with cmake .., which will initialise the project. It is at this stage that we can specify different build types, such as Debug or Release, and specify the build script we want to use, e.g. Make, Ninja, or MSBuild. To see which build scripts are available, run CMake with the -h flag, i.e. cmake .. -h; all build generators will be listed at the bottom. For example, to generate a Ninja build script in debug mode, you would run CMake with the command

cmake -DCMAKE_BUILD_TYPE=Debug -G Ninja ..

This will just configure the project, so to build the executable, you would have to follow this command with

cmake --build . --config Debug -j 4

which will build the executable in the current (.) directory. Notice that we specify the target build type again here (Debug); you only need one definition, i.e. either during configuration (-DCMAKE_BUILD_TYPE=Debug) or during building (--config Debug). Which one you need depends on the build script you have chosen. However, if you specify them in both steps, CMake will use the one it needs and ignore the other when it is not needed, so the above is a fool-proof way of ensuring you get a Debug build. Use Release instead of Debug to speed up the executable. The -j 4 flag again tells CMake to use 4 cores during compilation.
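
To summarise the whole CMake workflow in one place, here is the full sequence for our example project (a sketch for UNIX; with a multi-config generator such as MSBuild, the executable would end up in a configuration-specific subdirectory instead):

# configure the project with a build type and generator
mkdir build && cd build
cmake -DCMAKE_BUILD_TYPE=Debug -G Ninja ..
# build on 4 cores
cmake --build . --config Debug -j 4
# run the executable produced in the build directory
./headerOnlyLibExample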

CMake itself is a vast topic requiring its own series, and one will likely follow in the future, but for the moment, we have seen how to define a very simple CMake file that’ll do the job of building our executable and linking it against our header-only library, with very few lines of code.

Meson

When it comes to build systems, we are not short of options. To understand why Meson is around, and why, unfortunately, I believe it no longer has a compelling reason to be used, we need to look at the build system landscape before Meson. We saw the elegance of CMake above: a short, precise syntax. But things were not always that good when it came to CMake. CMake is by far the most used build system for compiled projects such as C++, but many people started to use it simply because everyone else was using it, not by choice.

The first issue was that the syntax used to be ugly. CMake tried to be almost a programming language in its own right, with support for functions, if/else statements, loops, etc. Because CMake lets you do pretty much anything, people started hacking their CMake files to a point where they would only work for a specific compiler on a specific platform, and cross-platform compilation was no longer possible. These brittle build files made it difficult to work with CMake. Add to that documentation that was pretty bad (and to this day still is), and people were trying to leave CMake behind and look for alternatives. Enter Meson.

I honestly love Meson and its eccentric developer Jussi Pakkanen (he may not agree with this characterisation, and that’s ok). I have used Meson extensively in the past and really wanted to use it for all my projects. But I have given up on that and instead have gone back to CMake. You may ask yourself why. The CMake developers were acutely aware of the negative feedback from the community and started to turn things around. Since CMake version 2.8.12 (but really version 3.0 and above), we have entered the Modern CMake era (it really is called that). The syntax that led to brittle builds was deprecated and replaced by a set of logical statements that are easy to use and ensure that builds remain cross-platform compatible.

Since that time, I have found that CMake is, again, the build system everyone should be using, simply because everyone else is using it, and this will make it much easier for you to bring other people’s code into your codebase. While the documentation is still pretty bad for CMake, there are good alternatives available; my favourite resource is the book by Craig Scott – Professional CMake – which teaches pretty much everything there is to know about CMake in an easy-to-follow and understandable manner.

Yet, some projects insist on using Meson in the CFD community, most notably SU2, an open-source CFD solver developed by a few universities around the world, and so I wanted to cover it here as well. Also, and this is a bit more selfish, I still look fondly at the Meson project and don’t want to completely forget about it, even if I am not planning on using it anymore in the future. So writing about it here gives me an excuse to play around with the build system again.

To see whether a project uses Meson, look for a meson.build file in the root directory. It is usually accompanied by a meson_options.txt file, which allows you to set some project-specific options. In our case, we are only going to create the meson.build file and copy the following content into it:

project('HeaderOnlyLibExample', 'cpp')
executable('headerOnlyLibExample', 'main.cpp',
  cpp_args : ['-DHEADERONLYLIB'],
  include_directories : include_directories('.'),
)

We have two commands, so technically only two lines, although the second command was split over a few lines, as is customary when writing Meson build files. The project command defines some global parameters. We could have set some default compiler options, such as the C++ standard to use, or the project version number, but we have only given the name and language (cpp, i.e. C++) of the project here.

The second command, executable, defines the name of the executable and the required source files to create it, followed by some compiler options we want to pass on. The cpp_args (compiler arguments for the C++ compiler) use the syntax -DHEADERONLYLIB to pass a preprocessor macro to the compiler, which suggests we are hardcoding this definition for a UNIX compiler. However, Meson will abstract away the -D and replace it with the appropriate flag on Windows, i.e. /D.

With this meson.build file defined, we set up the project first with the command meson setup build. The last argument is the build directory to use, and you can give it any name, but build or builddir are common choices. Next, with the project set up, we can compile it with the command meson compile -C build -j 4. Here, the -C flag tells the compile command that we want to build the project defined in the build directory. This is useful if you want to build different targets or versions of your code (say, a Debug and a Release build) at the same time, as you can place them in different directories. The -j 4 flag, as before, indicates the number of processors to use.
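
For completeness, the whole Meson workflow for our example then reads (a sketch; the executable name and build directory follow the meson.build file above):

# configure the project into the build directory
meson setup build
# compile on 4 cores
meson compile -C build -j 4
# run the executable from the build directory
./build/headerOnlyLibExample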

I should say that specifying the -j 4 flag for any of the build systems we looked at thus far is, for this example, pretty much pointless, as we only have a single file to compile. To see an effect, we need to have at least as many source files as the number of cores specified by the -j flag. It was just shown in the previous examples to indicate how you could speed up the build process.

Autotools

This is probably the hardest section for me to write. I have the same ethical concerns teaching you how to write an Autotools build script as I would if I were to teach you how to scam people out of their money. Autotools is a toxic build system, and I treat it as such; keep this preamble in mind throughout the following discussion. For a project started in 1994, it looks like it was created in the 1970s and was probably already outdated by the time it was first released. There is a summary of some of the criticism and the response to it, which I would summarise as:

  • Criticism: Autotools pretends to be cross-platform when it demonstrably isn’t. It also introduced the configure step, which we saw in CMake and Meson, that has to run before we can produce our builds. I don’t share the latter criticism; I find the configure step very useful.
  • Response to criticism: Autotools gives you so much flexibility that you can do anything with it.

I find both sides a bit laughable, as they don’t really nail, at least for me, the reason we should not be using Autotools anymore. It is extremely useful to have a configure step before the compilation step, because we can define different build targets (Debug/Release) or even point to different directories for third-party libraries, tests, etc. I don’t think this fine-grained control over your build is a bad thing. But the response to the criticism really frustrates me (if you read the longer version in the Wikipedia article linked above).

First of all, the limitations described in the response are no longer up to date. One of the points raised in the response confuses me: it states that you can’t inject scripts or customise your build in build systems such as CMake. This is just not true; CMake does allow you to inject scripts into the build and configure steps. As far as I can tell, this feature has been available since at least 2014 (but that is only as far back as the documentation goes; it may have been there longer). The response to the criticism was written in 2021, 7 years later. Strong opinions, but little fact-checking.

Secondly, I don’t want my build system to give me the freedom to do whatever I want; if I need that, I use a Makefile, but there is a reason people no longer write Makefiles manually. In fact, there is a reason why people don’t want build systems that are overpowered: too much flexibility can result in messy build system code. The build system Meson, which we looked at above, is based on this principle; any proposal that would make Meson a Turing-complete language is automatically rejected.

For me, the real reason why Autotools is so toxic is that it locks you, and anyone else who wants to use your code or library, into a UNIX environment; it just doesn’t support native cross-platform compilation. It imposes so many limitations that sometimes it is a better idea to simply ignore a project if it only supports Autotools as a build system. Historically speaking, pretty much all CFD libraries used Autotools, but thankfully they are moving away from it. The only library still in use today that I can think of that uses Autotools exclusively is PETSc, a library to solve linear systems of the form \mathbf{Ax}=\mathbf{b}, similar to the library we developed (just with lots more solvers).

Ok, but let’s say you are toxic yourself and really want to lock your users into UNIX; you don’t want to give them even a chance to think of building on Windows (heck, you may even make your library commercial and charge people money for it, see, I have just taught you how to scam people out of money …). To identify a project that uses Autotools, look for a configure, configure.ac, and Makefile.am file in the root directory. For our header-only library, the configure.ac file would have the following content:

AC_INIT([HeaderOnlyLibExample], [1.0], [tom@cfd.university])
AM_INIT_AUTOMAKE([foreign])
AC_PREREQ([2.69])
AC_CONFIG_SRCDIR([main.cpp])
AC_CONFIG_HEADERS([config.h])
AC_PROG_CXX
AC_CONFIG_FILES([Makefile])
AC_OUTPUT

We define a few global properties of the project on the first line and then list a few prerequisites. Line 4 tells Autotools the required source file, and line 5 names a configuration header (which will be generated automatically for us); we specify that it is a C++ project on line 6 and then say that we want a Makefile to be generated in this case. So far so good. Let’s look at the Makefile.am next:

bin_PROGRAMS = headerOnlyLibExample
headerOnlyLibExample_SOURCES = main.cpp
AM_CXXFLAGS = -g -O0 -Wall -Wextra -I. -DHEADERONLYLIB
headerOnlyLibExample_LDADD =

# Define a 'run' target for convenience
.PHONY: run
run: headerOnlyLibExample
	./headerOnlyLibExample

# Define a 'clean-local' target to clean up additional files
clean-local:
	rm -rf build

EXTRA_DIST = readme.md

We hardcode compiler flags on line 3 that are bound to work only on a UNIX compiler (and most definitely only with the GNU GCC compiler suite; good luck trying to support a different compiler), and define a few additional .PHONY targets, which we discussed earlier as well.

The configure script that we give to our end users has to be generated from the configure.ac and Makefile.am files. We do that with the following commands:

autoheader && aclocal && autoconf && automake --add-missing

Executing these 4 commands in this order will generate the configure file, which you can now execute by simply typing ./configure; this will prepare your project to be built. This step produces a Makefile, which you can then execute by typing make. This will generate the executables and libraries, depending on what you are trying to build, and you can copy them into a different (permanent) directory using the make install command. By default, they will go into your /usr/local/bin or /usr/local/lib directory, but you can change that during the configuration step by providing a different path with the --prefix flag, e.g. ./configure --prefix=/path/to/executable/.

To see all of the available options we can pass to the configuration step (which Autotools has added for us automatically), run it with the -h flag, i.e. ./configure -h.
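
So the end-to-end Autotools workflow, from the maintainer’s input files to an installed program, is the following (a sketch, assuming we install into $HOME/libraries rather than the default /usr/local):

# generate the configure script from configure.ac and Makefile.am (maintainer step)
autoheader && aclocal && autoconf && automake --add-missing
# configure, build, and install (end-user steps)
./configure --prefix=$HOME/libraries
make -j 4
make install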

OK, so I don’t want to make this section any longer than it needs to be, but I want to take stock before we move on. Using Autotools, we have to first provide two separate input files. These files are then used to generate a configuration file using 4 different commands. The configuration file, in turn, is used to generate the final build file (a Makefile) and we can use that to build our project (in this case a single source file). Do you still remember all the steps, all the commands you have to call, and all the variables you have to specify in the different files? No?

Can you see why an opinionated build system such as CMake or Meson is easier to use? To achieve the same thing we needed to write 2-5 lines of code and configure and build the project with two separate commands. That’s it. You see, allowing a build system to let you do whatever you want comes at the cost of having to go through a lot of different steps. Most of the time, we don’t need that flexibility, and if we do, I’d argue that writing a Python script to handle specific tasks is a better idea (incidentally, you can execute Python scripts as part of the CMake or Meson build step).

Anyways, don’t be a scammer, use a better alternative to Autotools.

Case study: Compiling the CGNS library

Ok, so let’s recap: we have looked at a few different solutions to build projects in general. Third-party libraries will typically use one of these build solutions, and now that you have a rough idea of what they look like and how to execute them, I want to look at a particular library and practice compiling it.

The library we are going to compile is the CGNS library, which stands for CFD General Notation System. It is a library that predominantly allows you to store a computational grid and a solution (velocity, pressure, temperature, etc.), but it has support for a few more things. The nice thing about the CGNS format is that we can store both structured and unstructured grids with their corresponding solutions. It can be processed by most commercial CFD solvers, as well as grid generators, and it is somewhat similar in spirit to what *.pdf files are for exchanging documents (i.e. *.pdf files can be opened on any operating system).

The CGNS library was the first library I attempted to compile. Back then, it could only be compiled on UNIX with Autotools; there were no other build options included. It was possible to get it compiled on Windows somehow (evidenced by the fact that the mesh generator I was using back then worked on Windows, which in turn required a Windows version of the library), but I couldn’t figure it out. I spent about 2 weeks before giving up in frustration and switching to Ubuntu. Having made the switch, I was able to get it compiled and working within 30 minutes, and I didn’t even know about build systems back then, let alone Autotools.

Things have changed, thankfully, and CGNS now provides support for CMake, so cross-platform compilation is actually not that hard these days. We’ll compile it using both CMake and Autotools, as these are the supported build systems, and you’ll get an idea of how to use them in practice. Again, I am apprehensive about showing you how to work with Autotools, but the truth is some CFD libraries, as already mentioned, only support Autotools, so I would be doing you a disservice by not showing you how to do this.

In general, I think it is always a good idea to learn how to compile libraries from scratch yourself. Especially, learning how to deal with error messages (and there will be error messages!). But, if you are in a rush and actually googled your way here to specifically compile the CGNS library, I have provided scripts for Windows and UNIX (Linux, macOS) that you can download at the top of this article. These scripts will automatically download, compile, and install the CGNS library with all of its dependencies (i.e. what I describe below using the CMake route). If you have trouble following along, you may want to use those scripts.

If you want to work with these automatic installation scripts, it is likely that they will not execute out of the box and will be blocked by your operating system. For Windows, right-click on the file and go to properties. At the bottom, there is a checkbox titled Unblock; select the checkbox, close the properties, and you should be able to execute the script. On UNIX, type the command chmod +x installCGNS.sh into your terminal while in the same folder as the script to make it executable. And with that out of the way, let’s get started.

Prerequisites for the CGNS library

In order to get the CGNS library compiled, we’ll need a few tools. First of all, we need a C++ compiler and CMake. If you are on Windows, make sure Visual Studio (the Community version is fine) is installed, along with its C++ tools, including CMake. You can check that by going to the Visual Studio Installer, clicking on the Modify button, and ensuring that the C++ development tools are installed. This is shown in the screenshot below.

The package Desktop development with C++ has a few packages installed by default that you can see on the right. We need MSVC (minimum version v143), which contains Microsoft’s compiler tools. C++ CMake tools for Windows contains CMake, which we’ll need. I have also highlighted C++ Clang tools for Windows. This is not strictly required, but Clang has produced some very decent tools which we will use later in this series, so you may as well download them now. The Clang compiler is also rather decent and works across Windows and UNIX.

On UNIX, it is rather straightforward: using your package manager, you can install packages the usual way. On Ubuntu, for example, you get a sensible default development environment with the command sudo apt install -y build-essential, and then you can bring in the additional required tools with sudo apt install -y cmake and sudo apt install -y ninja-build.

For macOS, you’ll need to install the default development tools that come with Xcode by running the following command in your terminal: xcode-select --install. You may then wish to install Homebrew, which allows you to install additional tools such as CMake and Ninja using brew install cmake and brew install ninja.
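
In other words, the tooling for this article can be pulled in with a couple of one-liners (package names as used by apt and Homebrew at the time of writing):

# Ubuntu
sudo apt install -y build-essential cmake ninja-build
# macOS (after xcode-select --install and installing Homebrew)
brew install cmake ninja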

Other than the tools required to get the library compiled, we need a few additional libraries before we can compile the CGNS library. CGNS used to implement its own data storage format, but it has since been superseded by the hdf5 format, which is the de facto standard for storing large datasets in scientific applications. The hdf5 library can be built without any additional libraries, but in my experience, you should at least install zlib in order to avoid nasty compilation bugs later.

Using CMake and Ninja

Using CMake and Ninja means we can compile either on Windows or on UNIX; it doesn’t really matter. I am showing how to do it here for Windows but will make comments where changes are required for UNIX (pretty much only when specifying file locations and paths).

To get started, create a directory somewhere to download the different libraries. It doesn’t really matter where, as we will select the final destination of the libraries during the compilation. I am being lazy, so the Desktop it is for me! Download the CGNS, hdf5, and zlib libraries and put them in your folder (or Desktop, I won’t judge).

Open the Developer PowerShell for VS <version>, where <version> is the one you have installed (in my case, 2022). Unfortunately, Windows still defaults to 32-bit when compiling, which can give you some nasty and strange bugs later. You will need to change the default from 32 to 64-bit, which is the correct architecture for pretty much every PC these days (unless you are on a Surface Pro, but even these devices will probably be 64-bit in the near future; and if you are reading this on a Surface Pro, or whatever they are called in the future, with a 64-bit chip, welcome to the future!).

You can follow this discussion to see how to switch to 64-bit. If you use the Windows script provided above to install CGNS automatically, you will get instructions on how to change to 64-bit as well. If you are on UNIX (either Linux or macOS), this, of course, doesn’t apply to you, and you can just open a terminal (because any sensible operating system these days will detect that you are on a 64-bit machine and compile code for your (correct) architecture automatically … that’s the whole point of compiling code in the first place!). I digress; with your terminal open, you should now have access to all of your development tools.

Compiling the ZLib library

Navigate into the zlib directory and create a build folder with the following command (both Windows and UNIX):

mkdir build

Switch into that directory with (both Windows and UNIX):

cd build

At this point, I typically like to run CMake without any options, inspect the options that are available afterwards, and then rerun CMake with all the options that I require. I will, however, set the build tool, here Ninja, as this will make my life easier later. To run CMake without any further options, type (both Windows and UNIX):

cmake -G Ninja ..

This will execute CMake and look for the CMakeLists.txt file in the parent directory (indicated by the ..). After the command has finished, type

cmake -LAH .

This will list all available options that you can set. For the zlib library, the only things I want to set are the build type and the installation directory. To set these, we use the variables CMAKE_BUILD_TYPE and CMAKE_INSTALL_PREFIX, which we set on the command line with the -D flag. I’ll put all libraries, once compiled, into my C-drive where I have created a directory called libraries, but you can put them in any other location if you want. Then, the full command becomes:

cmake -DCMAKE_BUILD_TYPE=Release -DCMAKE_INSTALL_PREFIX=C:\libraries\ ..

On UNIX, specify a directory that makes sense to you. If you don’t specify a directory here, CMake will attempt to install the library into the default system directory /usr/local. This will typically make it easier for CMake and other tools to discover these libraries, but it also requires admin privileges to install your library (so you would have to run the next command with sudo rights on Ubuntu). We have now configured the project, so it is time to build it. We can use the following command for that:

cmake --build . --target install --config Release -j 4

We instruct CMake to first build the project (using 4 cores; change that to the number of cores on your system) and afterwards install the library. Installing here simply means copying all generated libraries and header files into the install directory we specified earlier with the CMAKE_INSTALL_PREFIX variable. Once this command has finished, you can verify that the compilation and installation have worked by looking into the C:\libraries folder (or /usr/local/) and checking that you now have an include and lib folder at a minimum, which contain the header include files and libraries.
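
For reference, the equivalent configure and build commands on UNIX might look like this (a sketch, assuming you want to install into $HOME/libraries, which is an arbitrary choice):

mkdir build && cd build
cmake -DCMAKE_BUILD_TYPE=Release -DCMAKE_INSTALL_PREFIX=$HOME/libraries -G Ninja ..
cmake --build . --target install --config Release -j 4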

All other libraries we will compile and install next will copy their output into these directories as well.

Compiling the HDF5 library

The steps to compile the hdf5 library are pretty much the same. First, generate the build directory and change into it. Run CMake with cmake -G Ninja .. and wait until this step has finished (the hdf5 library is quite large, so this can take a while). You’ll probably encounter warnings about some libraries not being found; that is ok, you can ignore them, as we don’t need to install all libraries for hdf5 to work.

Next, inspect what variables we can set with cmake -LAH. This time I want to set a few more configuration flags, as I need to tell CMake where the zlib library is installed. If you look through the list, you’ll find the following variables: ZLIB_INCLUDE_DIR and ZLIB_LIBRARY_RELEASE. These variables specify where to find the header include files and the library itself, respectively. So let’s set them correctly, along with the build type and installation directory:

cmake -DCMAKE_BUILD_TYPE=Release -DCMAKE_INSTALL_PREFIX=C:\libraries\ -DZLIB_INCLUDE_DIR=C:\libraries\include -DZLIB_LIBRARY_RELEASE=C:\libraries\lib\zlibstatic.lib -G Ninja ..
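
On UNIX, the equivalent configuration might look like this (a sketch, assuming zlib was installed into $HOME/libraries; note the different library name, as discussed below):

cmake -DCMAKE_BUILD_TYPE=Release -DCMAKE_INSTALL_PREFIX=$HOME/libraries -DZLIB_INCLUDE_DIR=$HOME/libraries/include -DZLIB_LIBRARY_RELEASE=$HOME/libraries/lib/libz.a -G Ninja ..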

Again, ignore any warnings; you know that the configuration was successful if you can see a build.ninja file generated within the build folder. On UNIX, the static library will not have a *.lib ending but rather *.a, and it will simply be called libz.a. We compile and install the library the same way as the zlib library with:

cmake --build . --target install --config Release -j 4

Compiling the CGNS library

Well, rinse and repeat. I am not showing you anything new in this section, and hopefully this will bring home the point that using a build system is a good idea. We use CMake here to compile and install our libraries, and regardless of the library we are building, we are always working with the same commands (the same interface). Very convenient for the end user (us). So head into your CGNS directory, create a build folder, change into it, and then execute cmake -G Ninja .. to preconfigure the project. Check the options again with cmake -LAH and then make the following changes:

cmake -DCMAKE_BUILD_TYPE=Release -DCMAKE_INSTALL_PREFIX=C:\libraries\ -DHDF5_C_INCLUDE_DIR=C:\libraries\include -DHDF5_hdf5_LIBRARY_RELEASE=C:\libraries\lib\hdf5.lib -G Ninja ..

One interesting point to note: when running cmake -LAH, there is a variable called CGNS_BUILD_SHARED:BOOL=ON, which indicates whether we want to build a static or dynamic library. So we have control over this here, and you can see that the default is set to dynamic (it used to be static).
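
If you wanted a static CGNS library instead, you could flip that switch during configuration (a sketch; all other flags stay as shown above):

cmake -DCGNS_BUILD_SHARED=OFF -DCMAKE_BUILD_TYPE=Release -DCMAKE_INSTALL_PREFIX=C:\libraries\ -DHDF5_C_INCLUDE_DIR=C:\libraries\include -DHDF5_hdf5_LIBRARY_RELEASE=C:\libraries\lib\hdf5.lib -G Ninja ..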

If the build.ninja file is present, then, well, you guessed it, we type:

cmake --build . --target install --config Release -j 4

You should now have the CGNS library installed. You can use the same workflow for pretty much any other library, as these days most will support CMake as their build system.

Using Autotools and Make

A few things you should note before proceeding. If you are working on Windows and using WSL for development, extract all downloaded zip (or archive) files through WSL using the tar command. Say you have downloaded hdf5-1.14.3.tar.gz and you want to extract it into /home/<username>/temp/hdf5 (make sure this directory exists before attempting to extract into it, i.e. create it using mkdir -p /home/<username>/temp/hdf5, replacing <username> with your actual username). Then you can extract the files (assuming you are in the same directory as the hdf5-1.14.3.tar.gz file) using:

tar -xvf hdf5-1.14.3.tar.gz -C /home/<username>/temp/hdf5

This step may not be necessary, but you can get nasty configuration errors if you extract on Windows and then attempt to compile on UNIX, as Windows and UNIX use different characters to end lines, which can confuse the configure scripts. To be safe, use the command above for all three libraries when extracting them.

Ok, so let’s assume we have our libraries extracted into /home/<username>/temp/zlib, /home/<username>/temp/hdf5, and /home/<username>/temp/cgns. All of these libraries do have a configure file, meaning we can use Autotools to compile all three libraries. Let’s start with zlib.

Compiling the ZLib library

As we alluded to earlier, Autotools produces builds in two separate passes: the first is the configuration, the second is the build step (there is a third step for installing targets, i.e. copying the compiled libraries and generated headers into a permanent directory). To get an idea of what configuration options are available, we can run the configure script with the -h flag, i.e. ./configure -h. The flag that we pretty much always want to set is the --prefix=<DIR> flag, which indicates where we want to install the library. On Windows, we used the C:/libraries/ directory, so let’s say we want to store our libraries on UNIX in /home/<username>/libraries. Then, we can configure zlib with

./configure --prefix=/home/<username>/libraries

This will configure the project which should not take too long. This step will generate a Makefile as we saw earlier, which we can use to compile the project with:

make -j 4

Remember, the -j 4 here indicates how many cores we want to use for building the project; use as many as you have available. Once the compilation is done, install the library with

make install

You should now have a few new folders within the /home/<username>/libraries directory (namely include/ and lib/), and you should see the zlib library, both as a static (libz.a) and a dynamic (libz.so) file, within the lib/ folder, as well as the header include files (zlib.h and zconf.h) in the include/ directory.

If you don’t specify the --prefix flag, as mentioned earlier, the library will be installed into the /usr/local/lib and /usr/local/include directories. You need admin rights to copy files into these system directories, so you need to run sudo make install to have the necessary privileges.

Compiling the HDF5 library

The steps are always the same: first we run ./configure, then make, then make install. The only difference between libraries is the first (configuration) command, where we can configure the library to suit our needs. For the hdf5 library, for example, we want to specify the path to the zlib installation.

Note that if you did not specify the --prefix flag when you compiled zlib, hdf5 should be able to find zlib in the default /usr/local/ directory (where it would then get installed). But good software engineering practices tell us we should not pollute our /usr/local/ directory, because we may update a library in the future and then break existing code that depends on the now older library version. Changes may have been introduced to interfaces that are not backwards compatible (this happens all the time!), so keep all your libraries in separate directories. It is just good practice, and that is what we want to learn.

If we run the ./configure -h command within the hdf5 directory, then we will see a bunch of options. We are only interested, for the moment, in configurations specific to the zlib library. We can filter the output using grep, which allows us to only print lines that contain a keyword. To do this, run

./configure -h | grep zlib

We have used the pipe symbol (|) here, which states that we should run the command on its left (./configure -h) and pass that output to the command on its right (grep zlib). grep will then only display lines that contain the word zlib. We see the option --with-zlib=DIR, which is what we were looking for. This option lets the hdf5 library know where we installed the zlib library. Thus, our configure command becomes:

./configure --prefix=/home/<username>/libraries --with-zlib=/home/<username>/libraries

This step will take, again, a while. Once completed, compile the code with as many cores as available (about 3000 files to compile):

make -j 4

Finally, install the library

make install

That’s it. The folder /home/<username>/libraries should now have all the hdf5 libraries and header include files, as well as a few more executables within the /home/<username>/libraries/bin folder. We are finally ready for the CGNS library.

Compiling the CGNS library

This should now be more of the same, really. The only hiccup you may face: if you go into the CGNS directory, you’ll notice that there is no configure file! CGNS uses a non-standard directory structure (annoyingly), but you can locate the configure file within the src/ directory. So once you are inside the CGNS/src directory, do the usual steps, i.e. first figure out what options to set with the ./configure -h command.

In general, if you want to tell the configure step where to find a library, the syntax is usually --with-LIBNAME=DIR, and it is no different for the CGNS library (verify that by running ./configure -h | grep hdf5). Thus, we need to specify where to find the hdf5 library (the root folder containing the lib/ and include/ directories). The configuration command becomes:

./configure --prefix=/home/<username>/libraries --with-hdf5=/home/<username>/libraries

If you run just ./configure -h, you’ll notice the option --enable-shared=all, which you can set to --enable-shared=yes; this will also build a shared object (*.so), or dynamic library as we have come to know it, as part of the build. By default, the Autotools files will build only the static library (libcgns.a).

Now compile the library as per usual:

make -j 4

and finally, install it with:

make install

Congratulations, you now know how to use Autotools. It’s not something to be proud of, but sometimes we have to exercise that knowledge when working with libraries that still cling to ye good ol’ days.

Summary

Let’s recap what we have looked at in this article. We now know that there is a plethora of build systems out there. We have looked at a few of them, the ones you will most commonly find when building libraries yourself from source.

If you have a choice, you should always use CMake to build libraries. Most libraries will support CMake, even if they use a different tool internally for development. The best example I can come up with is google-test, a library that allows you to automatically test your software. Google uses a build system called Bazel for all of its internal build requirements, yet this library features both a Bazel and a CMake file to build it. We also saw that other libraries, such as the CGNS and hdf5 libraries, provide support for both CMake and Autotools, so CMake seems to be the common denominator.

Avoid using Autotools in the 21st century; we have come a long way when it comes to build systems, and Autotools is no longer fit for purpose. It was always going to be a UNIX-first (or UNIX-only?) tool, and while it is not impossible to get it to work on Windows, it is definitely not intended to be used that way. Cross-platform compilation should be our goal for all software we write, and Autotools has no place in this space.

You should now have the required knowledge to build any third-party library you want to use from source. Regardless of which library you end up using (or wanting to use), always check the documentation, especially the installation steps, as they may contain additional information on how the library ought to be built. The steps outlined in this article provide a structure or framework, which may need some adjustment based on that documentation.