In this article, we will be looking at package managers for C++ and how we can use them to compile and install libraries with as little as a single command line input. Specifically, we look at Conan, an open-source package manager for C/C++ which is gaining a lot of momentum in C++ circles. We’ll look at the difficulties a package manager for compiled languages has to overcome and present a few alternatives to Conan and how they overcome these. Ultimately, though, we’ll see that Conan is the only viable option and we should learn at least the basics so that we can use it if we want or need to.
Compiling libraries yourself takes time, is frustrating and often error-prone. Offloading this task to a package manager means you get to write more code yourself. You may realise that dependency management becomes very easy; so much so, that you start using more and more dependencies.
By the end of the article, you will have an understanding of why managing libraries in C++ is difficult compared to other (interpreted) languages such as Python and JavaScript. We’ll walk through the basic steps of using Conan to compile and install dependencies for us and see how to integrate Conan into a simple starter project, from which you can grow your own code.
In this series
- Part 1: Understanding static, dynamic, and header-only C++ libraries
- Part 2: How to write a CFD library: Discretising the model equation
- Part 3: How to write a CFD library: Basic library structure
- Part 4: How to write a CFD library: The Conjugate Gradient class
- Part 5: How to write a CFD library: The vector class
- Part 6: How to write a CFD library: The sparse matrix class
- Part 7: How to write a CFD library: Compiling and testing
- Part 8: How to integrate external CFD libraries in your code
- Part 9: How to handle C++ libraries with a package manager
Why C++ package managers are difficult to maintain
If you come from a different programming language, you will probably already be familiar with the concept of a package manager. We have pip and conda for Python, npm and yarn for JavaScript, Maven and Gradle for Java, Pkg for Julia, pub for Dart, and LuaRocks for Lua. There are plenty more examples, but you get the idea. A programming language these days typically comes with its own package manager.
What is a package manager then? Its primary goal is to bring in dependencies (or libraries as we have been mostly calling them, but these terms are used interchangeably), i.e. code that other people have written for us to use. It handles the downloading of the code from some remote repository (all these package managers have their own online repository), configuration if required, and exposing the library to our code, so we can use it straight away.
So why does C++, then, not come with a package manager by default? Well, because everything in C++ tends to be a bit more complicated than it needs to be (or so it seems; there are good reasons to endure this complexity). C++ has a few available package managers, but these are developed by third parties and there is always a risk that these projects will not be maintained in the future. However, they seem to have some serious commitment behind them for the moment, and so it is worth exploring them in our quest to make dependency management as painless as possible.
Let’s look at what is, in my view, the main reason why C++ package managers have a difficult job: ABI compatibility.
ABI compatibility
We looked at memory management a while back, and some terms we introduced there may be useful in the current discussion. If you need a refresher, have a look at the article and then continue here.
Differences in the hardware
Most will be familiar with the term API. It is used quite frequently in programming circles and stands for Application Programming Interface. Even if you have not come across this term, you will have implemented an API if you followed along in this series. The API is nothing else than the names of the functions, and their arguments, that our library exposes. For example, our Vector class defined functions such as `unsigned size() const;`, `double operator*(const Vector &other);`, etc. The collection of all of these function names and their arguments forms the API. It is what other people have to call to use our library.
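To make this concrete, here is a minimal, illustrative sketch of what such an API declaration could look like in a header. The constructor and the `operator[]` overload are assumptions added for illustration, not necessarily the exact signatures from earlier parts of this series:

// vector.hpp -- illustrative API sketch, not the exact class from this series
class Vector {
public:
    explicit Vector(unsigned size);          // construct a vector of a given size (assumed)
    unsigned size() const;                   // number of entries
    double operator*(const Vector &other);   // dot product with another vector
    double &operator[](unsigned index);      // element access (assumed)
};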
ABI stands for the Application Binary Interface and is how your API looks in compiled form. The API is for us humans, the ABI is for the compilers and computers. So what is ABI compatibility then? When we compile our code into object files and then ultimately into a binary file (executable), the compiler translates what we want to achieve in code to what our computer can do on the hardware. So we can see our compiler as a translator of sorts.
If we want to stick with the analogy that our compiler is a translator, then the code we write is a document in one language (say, English) and the compiled executable for a specific platform is the translated document (say French on Windows (why not!) and Spanish for macOS). If I take my executable from Windows (a French document) and try to execute that on macOS (i.e. giving my French document to a person who only understands Spanish), it’ll fail to execute. This is ABI in action.
So what is the cause here then? First of all, there can be hardware differences. We still live in an age where 32-bit processors are around, but on PCs, we all should have a 64-bit CPU by now. There are some differences between these two architectures; for example, a 32-bit processor only has access to 2^32 memory locations, while a 64-bit processor has an available address space of 2^64. Our compiler will take that into account and arrange memory access differently on these two architectures. This is where the first incompatibility comes in.
Another difference between 32-bit and 64-bit is the increased number and size of registers on the 64-bit. We discussed the memory hierarchy and how memory is brought into the CPU from the RAM through the L-level caches, all the way to the registers. If our registers change, then the compiler will prepare different memory-loading instructions. Even worse, if we try to run a 64-bit compiled executable on a 32-bit machine, we are potentially expecting to make use of registers that don’t exist! Consequently, we won’t be able to run the 64-bit executable.
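If you want to see this difference on your own machine, a tiny snippet like the following (not part of our library code) prints the pointer size, which is typically 8 bytes for a 64-bit build and 4 bytes for a 32-bit build:

#include <iostream>

int main() {
    // On a 64-bit build a pointer is typically 8 bytes (2^64 address space),
    // on a 32-bit build it is 4 bytes (2^32 address space).
    std::cout << "pointer size: " << sizeof(void *) << " bytes" << std::endl;
    return 0;
}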
When we discussed why we want to use C++ in the first place, one of the arguments was that C++ is a compiled language. Using a compiler allows us to make use of its capabilities to optimise our code for speed, and so the compiler may rewrite some of our functions. Just turning on optimisation may introduce additional incompatibilities if the compiler decides to optimise the code in a way that is only applicable to one of these architectures.
Differences in the operating system
The hardware is only one piece of the ABI incompatibility; the operating system plays a crucial role here as well. It boils down to convention. An operating system may call functions in a different way, or expect the arguments to a function to be passed in a specific order, and all of that forces the compiler to produce a binary that is specific to a given operating system.
An important concept to understand here is name mangling. Name mangling is what your compiler does to your function names. If you call your C++ function `solve()`, there is no guarantee that the compiler will actually keep that name. In fact, in C++ your compiler will pretty much always change that name to something it wants, which may not make much sense to us. It has to do that because C++ supports function overloading, i.e. reusing the name of a function with different arguments. The compiler needs to differentiate between your different functions, so it will assign a different name to each of them. Let’s look at an example code:
class Math {
public:
    Math() = default;
    double add(double a, double b) { return a + b; }
    int add(int a, int b) { return a + b; }
};

void solve() {
    Math math;
    math.add(1, 2);
    math.add(1.0, 2.0);
}

int main() {
    solve();
    return 0;
}
It doesn’t do anything useful, but it exposes a few functions for us. Assuming this file was saved as `main.cpp`, we can compile an object file from it on UNIX with `g++ -c main.cpp` (producing a `main.o` file) or with `cl /nologo /EHsc /c main.cpp` on Windows (producing a `main.obj` file).
On UNIX, we can then proceed with a tool called `nm`, which prints out all symbols (that is, the names of the functions in our case) in a file. Calling that on our object file, i.e. `nm main.o`, we now see function names such as `_Z5solvev`, `_ZN4Math3addEii` and `_ZN4Math3addEdd` (these may be different for you). We can see the function names still present, e.g. `solve` and the class `Math` plus its function `add`, but with additional identifiers. For example, the last two letters `ii` or `dd` indicate that two `int`egers or `double`s are passed as arguments. In this way, we have two separate function names and the compiler can differentiate between the two.
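On most UNIX systems you can also pipe the output of `nm` through `c++filt` to demangle these names back into readable C++ signatures, which is a nice way to confirm what the compiler did. The exact addresses and symbol types will differ on your machine; roughly, you would see something like:

$ nm main.o | c++filt
# addresses and symbol types will differ; the point is the readable names
0000000000000000 W Math::add(double, double)
0000000000000000 W Math::add(int, int)
0000000000000000 T solve()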
We can use `nm` on Windows as well, assuming it is installed (I got it when installing Perl). Using it as `nm main.obj` provides us with mangled names such as `?solve@@YAXXZ`, `?add@Math@@QAEHHH@Z`, and `?add@Math@@QAENNN@Z`. So, you see, if I compile my code on UNIX, it creates an object file where my `solve()` function has been renamed by the compiler to `_Z5solvev`. If I then try to execute it on Windows, the runtime may be looking for a function called `?solve@@YAXXZ` instead and it wouldn’t find anything. So this doesn’t work; as we can see, simply using a different operating system breaks the ABI as well.
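As a side note, this is also why you will sometimes see `extern "C"` in library headers: it asks the compiler to use the much simpler C naming rules for a function, so its symbol stays essentially unmangled, at the cost of giving up overloading for that function. A tiny illustrative example:

// With C linkage, the symbol for this function is essentially just "solve"
// (no C++ name mangling), which is why plain C libraries are much easier to
// consume across different compilers. Overloading is not possible here.
extern "C" void solve();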
The following talk goes into much more detail about the issues that arise for compiled libraries and why package managers for C++ are more difficult to maintain. It is a somewhat longer talk but does give a pretty good overview.
An overview of available package managers for C++
As alluded to above, we have a few options when it comes to handling dependencies automatically. I want to look at three of them and give a brief overview of their design philosophies and why I would or wouldn’t recommend using them. C++ package managers do not just have to provide a platform to exchange and download code from, but they also have to ensure ABI compatibility. I’ll discuss how they achieve that below.
Conan
Conan is a free and open-source package manager for C/C++, despite being developed by JFrog, a company specialising in software delivery. It is written in Python, and thus the easiest way to obtain Conan is actually to use Python’s package manager pip (using the command `pip install conan`; you may need to install pip first). The documentation is decent, although I sometimes wish certain features were covered better. They may update the documentation over time, though.
Conan’s philosophy is that you provide a file called `conanfile.txt` or `conanfile.py`, of which the `*.txt` version is much simpler (the `*.py` file allows you to create more advanced configurations). This file contains the libraries that you want to install, any options you want to set (e.g. whether you want to use a static or dynamic version of the library) and a generator. The generator specifies how you want to consume the output from Conan, i.e. how you want to integrate it into your build system. Typically, we would use a CMake project, as this covers both Windows and UNIX, but other options, such as Make, Meson, MSBuild, etc., are available.
In order to check which libraries are available through Conan, you can browse Conan Center, which is the remote server that stores all libraries. It is also possible to create your own remote server to host your own, private libraries if that is what you want; this is a solution that may appeal to companies with in-house proprietary libraries that they don’t want to release to the public.
How does Conan then handle the ABI issues discussed above? Well, the easiest solution is that someone has already compiled and built the library for the target platform you are operating on, with the same processor architecture (32-bit vs. 64-bit) and in the same build type (Debug or Release). If that library already exists in compiled form (binary form), then Conan will simply download the library from its server and that’s all, you don’t have to compile anything yourself. But if it doesn’t exist, then Conan has enough information available to compile and build the library on your machine. After it has been compiled, it will be stored on your PC so if you need the library again in the future, it will be copied from here and not compiled again.
You interact with Conan through the command line; if you simply type `conan`, it will return a list of possible commands. The most important command is `conan install`, which will look at your `conanfile.txt` or `conanfile.py` and then install the libraries listed in this file (i.e. go to Conan Center, download either the binaries or a so-called recipe and then build the library from that recipe). It will output text files that can be integrated into your build system, which is steered by the generators specified in the conanfile.
I personally find Conan quite pleasant to use; it is intuitive and doesn’t take a lot to learn. It may be a bit more complicated to write your own recipes, but that is only really required if you are writing your own library and want others to be able to consume it through Conan. And if you have developed a particularly useful library, you might not even have to write that recipe yourself; someone else may beat you to it and provide one for you.
We will look into the most common use case of Conan in the next section, but if you want to get a better overview of the matter, I’d recommend this talk, which provides you with some more background on how to use Conan, but also on what changed when Conan 2.0 was introduced (and there were a few changes).
vcpkg
Who came up with that name? Seriously? I can never remember in which order the letters should be and they don’t seem to abbreviate anything, or at least not to anything that is openly shared on their website (I get that the pkg part probably means package, but vc? Anyone?). Oh wait, what are you saying? Microsoft is developing vcpkg? The same people who introduced non-standard C++ statements for dynamic libraries that only work on Windows? So the same people who can’t build libraries the way everyone else is (without having to hack C++) are developing a tool to manage and handle our libraries? Well, I think the name is perfect, actually! And surely this tool will provide a smooth journey when dealing with dependencies (if you don’t get sarcasm, read on, it gets much worse, the name is not the only issue …).
Microsoft developing a package manager is, to me, the same as employing a blind bus driver or using Facebook as a cloud storage for sensitive data. Would you get on board the bus? Would you store your PIN and card details on Facebook? Having said all that, cvpgkf (or whatever it was called) seems to be a pretty decent tool; it just comes with a lot of baggage (for me, anyway). If you are a Microsoft fanboy or fangirl, then this tool may be for you.
vkpibg is entirely stored on GitHub, and since we don’t have write access to someone else’s repository, we can’t compile and build binaries and then upload them to GitHub. So, instead, vpghlk will always compile and build libraries on your PC when you want to use them for the first time and then cache them so that they become available the next time you want to use them again. So you always compile and build from source, which is a pretty good idea. This is different to Conan, where we first check if compiled versions are already available.
Otherwise, working with vp0kfc feels very similar to Conan, but unlike Conan, you have to clone their repository from GitHub and run a bootstrap script before you can use it. After this is done, you type `vcpkg <command>`, just like in Conan, for example, `vcpkg install`, which will then install all dependencies listed in a file called `vcpkg.json`. The `vcpkg.json` file is the equivalent of `conanfile.txt` and lists all the dependencies/libraries you wish to install.
There is a much more in-depth talk that shows how to use this tool, which you may want to check out to get a better overview of what it can do for you.
Hunter
The Hunter package manager is somewhat different to Conan and vipkl. It is essentially an extension to CMake, rather than a standalone package manager in its own right. After providing a few configuration steps within our `CMakeLists.txt` file, we can use it by simply declaring the libraries we want to use, and it will then resolve them at configuration time within CMake, as sketched below.
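As a rough sketch (not taken verbatim from the Hunter documentation), a Hunter-based `CMakeLists.txt` tends to look something like the following; the release URL, the SHA1 checksum and the package/target names are placeholders you would replace with the values the Hunter documentation gives for the package you actually want:

cmake_minimum_required(VERSION 3.15)

# HunterGate must be included before project(); URL and SHA1 are placeholders.
include("cmake/HunterGate.cmake")
HunterGate(
    URL  "https://github.com/cpp-pm/hunter/archive/<release>.tar.gz"
    SHA1 "<checksum of that archive>"
)

project(HunterExample)

hunter_add_package(ZLIB)            # downloaded and built at configure time
find_package(ZLIB CONFIG REQUIRED)  # then consumed like any other CMake package

add_executable(${PROJECT_NAME} main.cpp)
target_link_libraries(${PROJECT_NAME} ZLIB::zlib)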
In its own right, I like the approach of defining all dependencies within your CMake file. However, in software engineering we talk about the single responsibility principle, which states that any one module should only do one thing. Typically it is applied to classes, where each class should only do one thing (e.g. one class reads a mesh, another class then processes the mesh (setting up data structures), yet another calculates mesh quality metrics (one quality metric per class), etc. You would not have a single massive class called `Mesh` doing all of that at the same time).
The single responsibility principle in this case can be understood to mean that CMake should look after compiling your code, not dealing with dependencies (in the sense of checking if they are available and if not, find them). Just because it can, doesn’t mean it should. Opinions on this vary. On a side note: CMake already provides support for this with ExternalProject and FetchContent so you can achieve the same without having to use Hunter. The appeal of using Hunter, though, is that you can achieve dependency management by writing fewer lines of build instructions.
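For comparison with that plain-CMake route, a minimal FetchContent-based sketch could look roughly like the following. The repository URL points at the CGNS project on GitHub, but the tag and, in particular, the target name you link against are assumptions on my part and depend on how the library’s own CMake build names things:

cmake_minimum_required(VERSION 3.15)
project(CGNStest)

include(FetchContent)

# Declare where the dependency comes from; the tag is an assumption on my part.
FetchContent_Declare(
    cgns
    GIT_REPOSITORY https://github.com/CGNS/CGNS.git
    GIT_TAG        v4.3.0
)

# Download and add the dependency to our build at configure time.
FetchContent_MakeAvailable(cgns)

add_executable(${PROJECT_NAME} main.cpp)
target_link_libraries(${PROJECT_NAME} <cgns-target-name>)  # placeholder target name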
The reason I am not advocating using Hunter is that it is CMake-specific, it is developed by a relatively small group of people (though development seems to be active and ongoing), and, at the time of writing, Hunter is still at version `0.x`, meaning we still don’t have a stable release. But if you feel generous and want to support a bunch of motivated people writing a package manager when there are good alternatives available, backed with massive financial support, then go for it.
EasyBuild + Lmod
I want to love EasyBuild, I really do, but similar to Autotools, it is pretty much a UNIX-only tool. In their case, it makes sense, though, as they are targeting high-performance computing (HPC) platforms, which pretty much always run UNIX as an operating system. The 1% of HPC clusters that run Windows (yes, they seemingly exist) probably only exist as demos to show that you could run Windows on a cluster if you are crazy enough.
EasyBuild follows the philosophy of building everything, and I mean everything, from scratch. Say you want to install the CGNS library, as we did in the previous article; EasyBuild would then look at the requirements that are needed to build CGNS (this is similar to what Conan calls a recipe) and then build these. But you don’t just need hdf5 and zlib installed, you also need a compiler, and a compiler will need some additional dependencies of its own. So, before you even build any library, EasyBuild will compile a compiler for your specific platform. See, I told you, we build everything from scratch.
EasyBuild is just one part of the equation; it requires a modules package, of which Lmod is the preferred one. Lmod is written in Lua (which gives us a perfect excuse to learn yet another programming language!). The module package makes different libraries available by setting and unsetting environment variables. Say you want to install the CGNS library, but you need a few versions to be compatible with all the different libraries you want to work with. In that case, EasyBuild will download the source files and compile and build all of them for you, while Lmod will provide the interface that allows you to load any version of the library. If you load a library that depends on another library, Lmod will silently load that in the background for you as well.
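In practice, that workflow looks something like the shell session below; the easyconfig and module names are placeholders, since real easyconfigs are named after the library version and the compiler toolchain they were written for:

# Build CGNS (and all of its dependencies, compilers included) from an easyconfig;
# --robot tells EasyBuild to resolve and build missing dependencies automatically.
eb CGNS-<version>-<toolchain>.eb --robot

# Afterwards, Lmod exposes the result as a loadable module.
module avail CGNS
module load CGNS/<version>-<toolchain>
module list   # also shows the dependencies Lmod loaded silently for us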
EasyBuild + Lmod is really a treat to work with, but if you are gunning for cross-platform compatibility, this is not the right tool to use. I am always in favour of building code that runs on more than one platform, so either Conan or vgplkh may be better suited in this case to handle dependencies. If you are, however, locked into using UNIX anyway, this may be a viable tool for you.
Case study: Using a package manager to consume the CGNS library
We looked at how to compile and build the CGNS library ourselves in the previous article, so I wanted to pick up this example and show you how to do that with either Conan or vgkpcf. We use a simple example project which has the following structure
root
├── build/
├── main.cpp
└── CMakeLists.txt
Here, the `build/` folder will contain all output from the compilation. The `main.cpp` file has the following content:
#include <iostream>
#include "cgnslib.h"

int main() {
    int cgFile = 0;
    float version = 0.0;
    cg_open("test.cgns", CG_MODE_WRITE, &cgFile);
    cg_version(cgFile, &version);
    cg_close(cgFile);

    std::cout << "cgns version: " << version << std::endl;
    return 0;
}
It includes the CGNS header file on line 2, and then simply proceeds to open a new CGNS file in write mode on line 7, checks its version on line 8, closes the file on line 9 and then prints the version on line 11. If everything goes correctly, we should see the same library version printed on line 11 that we installed with either method shown below.
The `CMakeLists.txt` file contains the following content:
cmake_minimum_required(VERSION 3.15)
project(CGNStest)

find_package(CGNS REQUIRED)

add_executable(${PROJECT_NAME} main.cpp)
target_link_libraries(${PROJECT_NAME} CGNS::CGNS)
We specify a minimum CMake version (which is pretty inconsequential here; we are not making use of any advanced, recent features, so anything above 3.0 should work just fine) and a name for our project. On line 4, we tell CMake that the CGNS library is required and that, without it, we can’t produce an executable. On line 6, we generate said executable, which depends on the `main.cpp` file, and then on line 7, we specify that we want to link our executable against the CGNS library. Don’t get hung up too much on the specific syntax; just try to understand in spirit what we are trying to achieve here. We will look at CMake in a lot more detail in a future series.
Conan
We first need to make sure Conan is installed. As mentioned above, given that this is a Python package, we simply install it through pip with
pip install conan
The first time we want to use it, we have to generate a default profile. This profile will tell Conan what operating system we are using, what our processor architecture is (32-bit or 64-bit), and what type of build we want to produce (Debug or Release). We can generate several profiles or overwrite specific settings for specific libraries later, but we need to have a default place to start from. To do that, you would invoke the following command
conan profile detect
This command will spit out a warning saying that it tried to guess sensible default values. I have tried it on both Windows and Ubuntu and in both cases, it works quite well, so it does a good job of sensing your environment. We are now ready to use Conan to manage libraries for us.
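For reference, you can print the detected profile with `conan profile show`. On Windows with Visual Studio, the default profile looks roughly like the following (the values shown here are illustrative and will differ depending on your compiler and setup):

[settings]
arch=x86_64
build_type=Release
compiler=msvc
compiler.cppstd=14
compiler.runtime=dynamic
compiler.version=193
os=Windows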
First, we have to generate a `conanfile.txt`, which will sit in the root directory. Our project structure now becomes
root
├── build/
├── main.cpp
├── conanfile.txt
└── CMakeLists.txt
and the content of the `conanfile.txt` is as follows:
[requires]
cgns/4.3.0
[generators]
CMakeToolchain
CMakeDeps
[options]
cgns*:shared=False
The `[requires]` section specifies the library, along with its version, that we want to use. To find out which libraries are available, we have to browse through Conan Center. For example, if we search for CGNS, we will find it in version 4.3.0. On the left-hand side at the bottom, it shows us how to specify it in our `[requires]` section.
There is also a link to the recipe on GitHub, where we can look for some options that we can set for this library. If you click through to the recipe, within the `all/` directory there is a file called `conanfile.py`. You will find a Python dictionary called `options` with entries for `shared`, `fPIC`, `with_hdf5`, and `parallel`. These are the options that we can set for the library, and you see them in the `[options]` section of the `conanfile.txt` shown above. Here we set the `shared` option to `False` for all library versions (indicated by the asterisk), meaning we are going to build a static library.
The `conanfile.py` file also has a Python dictionary called `default_options`, which specifies the default options in case no options are specified in the `conanfile.txt`. Even if `shared` is set to `False` by default, it makes sense to set the option here explicitly to avoid surprises in the future should any default option change.
The `[generators]` section, then, provides the interface to Conan, i.e. how we want to consume the output that Conan generates for us. As I mentioned before, CMake is probably the best default option as CMake should be our preferred build system, but we have support for other build systems here as well. Unfortunately, there doesn’t seem to be a good reference page for this in the Conan docs, so the best way to find out which generators are available is by running Conan with an invalid generator name (how about banana?). Conan will then print all currently available generators to the screen.
With the `conanfile.txt` completed, we first have to create an output directory, which in our case is the `build/` directory. Then, we invoke the following command
conan install . --output-folder=build --build=missing
It will install everything required into the output folder, and it will compile any libraries from source if they are missing. Once this step is done, we can configure and compile our CMake project in the usual way. During the configuration step, though, we have to pass in one additional variable to point to the Conan toolchain file. A toolchain file contains information about essential compilation resources. It is usually used to specify options for cross-compilation (i.e. setting up the build for a different operating system), but Conan (ab)uses it to inject all library dependencies here.
If you look inside the generated `build/conan_toolchain.cmake` toolchain file, it contains two important entries:
list(PREPEND CMAKE_LIBRARY_PATH
"C:/Users/<username>/.conan2/p/b/cgnse905a92186320/p/lib"
"C:/Users/<username>/.conan2/p/hdf5e507ea1cd290d/p/lib"
"C:/Users/<username>/.conan2/p/zlibee1f000851145/p/lib")
list(PREPEND CMAKE_INCLUDE_PATH
"C:/Users/<username>/.conan2/p/b/cgnse905a92186320/p/include"
"C:/Users/<username>/.conan2/p/hdf5e507ea1cd290d/p/include"
"C:/Users/<username>/.conan2/p/hdf5e507ea1cd290d/p/include/hdf5"
"C:/Users/<username>/.conan2/p/zlibee1f000851145/p/include")
It sets the library and include paths for CMake to find (in this case, I am on Windows; on UNIX it would be in `/home/<username>/.conan2/`). We see, as before, that we need to install not just the CGNS library, but also hdf5 and zlib. Conan and CMake hide this fact from us, so we just need to declare the libraries we want to use and they will resolve additional dependencies in the background for us.
So, now that we have an idea of what a toolchain file is, we start the configuration step, assuming we are within the `build/` directory, with:
cmake -DCMAKE_BUILD_TYPE=Release -DCMAKE_TOOLCHAIN_FILE=./conan_toolchain.cmake ..
Afterwards, we compile the code with the usual build command
cmake --build . --config Release
Depending on which platform you are on, the executable will now be located either within the `build/Release` folder (Windows) or just the `build/` folder (UNIX). Executing `build/Release/CGNStest.exe` (Windows) or `build/CGNStest` (UNIX) will print the following message to the console:
cgns version: 4.3
This is the print statement from our `main.cpp` file, and we can see that the CGNS library was indeed successfully installed and found, and that we are getting the correct version.
This shows how to use Conan in a nutshell. It is much more powerful than that and you can use it to do a few more things, but most commonly, as an end-user, you wouldn’t be doing much more than what we have looked at in this section. So if you understood everything in this section, then you are already a Conan power user.
vcpkg (no thank you)
I had never used vbplg before, so I thought it would be a good idea to get to grips with it and write up its basic usage here. It can’t be that difficult, I thought, but the more I looked into this tool, the messier it got, and it doesn’t seem to be just my opinion. So instead, I’ll summarise here why I think you should stay away from this tool as much as possible (sorry, Microsoft fanboys and girls).
Before you start using vcfgh, you have to run a bootstrap script which will set up your environment. This script is doing weird stuff! First of all, it seems to be confused about what you already have installed on your system. Take, for example, Jason Turner’s shot at vgpgg: when he tried using it for the first time, the first thing it did was download CMake, which was already installed on his machine. Furthermore, it detected an outdated compiler and, even worse, different versions for the C and C++ compilers. How, I have no idea!
Arguably his video is somewhat outdated by now (and I’ll get to that in a second), so I did not observe the same behaviour. However, when you run the bootstrap script, it proudly announces that it will collect data about you to, of course, improve your user experience. It can’t possibly be used for more sinister reasons, can it? $20 million settlement over illegal data collection, and lawsuit over code piracy (yes, piracy) at Microsoft.
Microsoft is kind enough to allow you to opt out (I guess legally they have to), so you can run the bootstrap script with user data collection turned off. This is what I did. Looking through the logs, though, I was surprised to find the following line:
[DEBUG] VS telemetry opted in at SOFTWARE\WOW6432Node\Microsoft\VSCommon\17.0\SQM\\OptIn
I did not! So vgpjh decided to ignore my request. It definitely recognised that I did not want my data to be collected, because if I run the bootstrap script with and without data collection enabled, the output looks different. I re-ran the bootstrapper, just to be sure, removed temporary directories and ran viplg again, but I still got this line. This is very questionable behaviour and, to be honest, reason enough not to use it.
This is coming from someone who has ditched Chrome for Brave, and Google for Brave search for the same reasons (and this sentence is probably the quickest way to destroy your SEO ranking on Google, ah well …).
The second issue I have is the design philosophy itself. The package seems to have changed radically and, while that in itself is not necessarily an issue, it means that any documentation you look up, other than the official documentation from Microsoft, is pretty much outdated (like Jason Turner’s video mentioned above).
This means there is just a single resource you should use, but most people will either use YouTube or ChatGPT first to get an idea and this simply will not work. This point will become less important over time, assuming that Microsoft is not introducing further breaking changes. If you always look at the documentation first anyway, then this may not apply. But, if you look at the documentation, the next surprise awaits.
Ok, I get it, vcght introduced a manifest mode, which I am in full support of. This requires a manifest JSON file which lists all the dependencies that you want to install. When you then type `vcpkg install`, it will look through your manifest and install the required libraries, including resolving any additional dependencies which are needed to compile the libraries you are interested in. I get all that. But how can I specify that I want to build a library in release or debug mode? How can I specify that I want to build a static or dynamic library?
If you look at the manifest reference page, this is not mentioned. I have looked through a few other places as well, and something as basic as this seems to be hidden very well in the documentation (or, perhaps you don’t get a choice here).
If you still say that you can live with a package manager that can’t reliably detect your compiler, downloads dependencies it doesn’t need, collects your user data against your will and provides you with documentation that doesn’t document how to use your package manager, perhaps the next point will convince you.
It just doesn’t work. Period.
Let me explain. Let’s say you want to install CGNS, as we did with Conan. Then, you would provide a `vcpkg.json` file of this form:
{
    "dependencies": [
        "cgns"
    ]
}
This doesn’t look too bad (and is pretty self-explanatory). So next, we type `vcpkg install --triplet x64-windows` (yes, we have to specify that we want to build for 64-bit, which is not the default, although it seems to be becoming the default now) and we get an error message like so:
error: building zlib:x64-windows failed with: BUILD_FAILED
That’s interesting, something went wrong when building the first dependency. So you comb through your log files, realise Microsoft collects your data against your will, but you also stumble across the following line:
Downloading https://repo.msys2.org/mingw/i686/mingw-w64-i686-libwinpthread-git-9.0.0.6373.5be8fcd83-1-any.pkg.tar.zst
error: Failed to download from mirror set
Ok, so you check out the mirror and, sure enough, `mingw-w64-i686-libwinpthread-git-9.0.0.6373.5be8fcd83-1-any.pkg.tar.zst` doesn’t exist. `mingw-w64-i686-libwinpthread-git-9.0.0.6448.b03cbfb95-1-any.pkg.tar.zst` does exist, but that isn’t detected. And sure, you could argue that there is probably a way to tell vghpl to use a newer version of this dependency, but that would be missing the point.
First of all, I don’t want to interact with low-level stuff. If I wanted to, I could just download, compile and install the packages myself. The whole point of a package manager is that we don’t have to deal with this low-level stuff ourselves. Being unable to resolve the dependencies correctly is a pretty bad sign for a package manager. You could argue that this may be one of the few packages for which the build fails while most other packages work. Ok, I get your point, but let’s move on to the next, much more problematic point:
Why is vgfth downloading MinGW as a dependency in the first place? MinGW is a GNU compiler suite that essentially makes UNIX tools available on Windows. Why is a package manager developed by Microsoft downloading UNIX compilers to compile the libraries you have requested? We saw in the previous article that zlib can be compiled natively on Windows using its own `cl` compiler. So why does it need MinGW? It bamboozles me, and I don’t have a good answer here.
So there you have it: a broken package manager that is unable to manage packages, developed by Microsoft, using UNIX tools to compile dependencies, and despite illegal data collection to improve my user experience, my user experience is still pretty abysmal (how are they using my data again?).
Summary
So what have we learned in this article? Mainly that Microsoft is fueling my inner dyslexia, but in more serious terms, we looked at package managers that help us deal with dependencies. Specifically, we highlighted Conan, vcpkg (yes, I know how to spell it …), Hunter and EasyBuild, while taking a deeper look at Conan, as well as reasons why I can’t recommend vcpkg.
Conan is a great package manager, and every time I work with it, I am just very satisfied. It took me some time to master managing dependencies and installing libraries, and for a long time I only felt comfortable doing that on UNIX. With Conan, however, anyone can become a C++ power user and inject libraries into their code with a single command.
Package managers are supposed to make our life easier and allow us to concentrate on the important part: writing code (and not debugging the build process, yes vcpkg, I am looking at you). Given the relative ease with which you can pick up Conan, I would recommend starting with this package manager for all of your C++ projects. It’s a great and user-friendly entry into the world of managing dependencies automatically.

Tom-Robin Teschner is a senior lecturer in computational fluid dynamics and course director for the MSc in computational fluid dynamics and the MSc in aerospace computational engineering at Cranfield University.