The 6 biggest unsolved challenges in CFD

Computational Fluid Dynamics (CFD) is under constant development, yet progress stagnated during the early 2000s. This led to a flurry of reports and articles highlighting the biggest challenges we face in CFD. Among them, NASA's 2030 CFD vision report is one of the most thoroughly researched, highlighting six key challenges we need to overcome to advance CFD capabilities beyond the current status quo. The report also proposes a set of four grand challenges that can only be achieved once sufficient progress has been made in all six key areas.

In this article, we’ll have a look at the key challenges that NASA identified through surveys, workshops, and discussions with key stakeholders. Given that the 2030 CFD vision was published in the mid-2010s, we are roughly halfway between its publication and 2030, so we can already comment on whether some of these challenges in CFD have been overcome. We’ll also look at a list of topics you can work on today to check for yourself whether we have made enough progress in the key areas to start working towards the four grand challenges.

Introduction

In the early 1970s, McDonnell Douglas and Lockheed introduced their iconic DC-10 and L-1011 TriStar jets, respectively (see images below). Capable of long-range flight, these aircraft dominated the skies with their distinctive triple-engine design, rivalled only by Boeing’s 747 jumbo jet in design, range, and capacity. In the late 1980s, both the DC-10 and the L-1011 TriStar were approaching retirement age, and airlines started to look for alternatives to fill the gap they would leave.

Boeing kept updating its 747 with new variants, which it continued to do for decades to come, while McDonnell Douglas and Lockheed looked to come up with their own alternatives. The competition was fierce, and compared to the 1970s, the American manufacturers suddenly had to take the European threat, operating under the name of Airbus, seriously. Airbus, too, proposed its own contenders, the A330 and A340, to compete in this market segment. Airlines were spoiled for choice for once.

Boeing realised that airlines weren’t interested in an extended Boeing 767. With airlines able to play the manufacturers off against each other, Boeing had to come up with a design from scratch that would satisfy all of their requirements. Its original design was the Boeing 777 Trijet (see image below), resembling the look of the DC-10 and L-1011 TriStar, though that was later dropped in favour of the twinjet design we know today and either love or hate.

DC-10 (Source)
L-1011 TriStar (Source)
Original Boeing 777 Trijet design (Source)

Why do I bring this up? Well, for starters, I could talk all day about aircraft and their history (and some of my students will have had the (dis)pleasure of me doing so in one of my lectures), but the aerospace industry also has a long-standing track record of taking emerging and disruptive technology and pushing it to the limit, far beyond what its use case was at the point of adoption.

We are living through one of these phases at the moment, where hydrogen is being proposed as an alternative to Jet A-1 fuel, requiring a completely different approach to designing aircraft. While the exterior will look the same, the internal systems will have to operate differently. When the Boeing 787 Dreamliner was introduced, carbon fibre was adopted on a massive scale (not without its own initial problems), the A320 pushed for fly-by-wire technology, and, well, the Boeing 777 was the first aircraft to be fully designed in CAD.

Unsurprisingly, then, the aerospace industry was also the main driver behind pushing CFD beyond its existing use cases, and we have it to thank for CFD being as powerful as it is today. Through a constant stream of investment, research, and innovation, CFD methods have been developed, matured, and deployed into commercial CFD solvers, and not just the aerospace industry but other industries as well benefit from these developments.

It is only sensible, then, that the aviation industry is leading the way in mapping out the current challenges in CFD, challenges that we all face, not just the aerospace industry.

NASA spearheaded a committee in the mid-2010s to identify the key challenges in CFD, comprising members from academia, industry, and governmental bodies with expertise in aeronautical engineering, applied mathematics, and computer science. Initial challenges were collected through a survey of individuals with a stake in CFD, followed by a workshop. The outcome was synthesized in their landmark study, which we nowadays know as NASA’s 2030 CFD vision.

This published report was a direct response to what many leading figures within the CFD community felt was stagnating CFD development (but don’t worry—around that time, I started my PhD, disaster averted …).

The report details key challenges in CFD that currently represent bottlenecks in research. Although it was issued in the mid-2010s, there are still many open research questions that need addressing. Today, I thought it would be good to examine the 2030 CFD vision and comment on recent successes, emerging challenges, and opportunities that could not have been foreseen in the mid-2010s.

Around the time of the 2030 vision, other authors were cashing in on the same idea and proposed their own challenges in CFD. Unsurprisingly, they all reflect more or less the same challenges, albeit using slightly different words and terminology. One study is of particular interest, though: the work of Spalart and Venkatakrishnan, who are, in their own right, influential and widely recognised figures within the CFD community, and who published a study on the challenges in CFD faced by the aeronautical industry.

I guess that’s a good way to spend retirement, no judgment. I do receive the odd email blurb from Spalart (it’s an email list of about 170 participants or so, so no private communication), educating us all on the correct usage of the Spalart-Allmaras model. It’s very informative (and not grumpy at all). I digress.

In their study, they produced the following image:

challenges in cfd in the aerospace sector
Taken from Spalart and Venkatakrishnan (I may or may not have modified the picture slightly …)

Here, areas shown in green indicate regular and confident use of CFD, purple areas represent emerging trends where CFD is used more and more, while red areas represent applications where CFD is not yet routinely used. Even though this image is from 2016, picking any of the purple or red areas is an easy way to generate research ideas or indeed a PhD research proposal.

We can see that the green areas refer mainly to steady-state RANS simulations, while the purple areas are mostly related to unsteady simulations, be it unsteady RANS (URANS) or some form of scale-resolved turbulence modelling approach (e.g. DES, SAS, LES).

Oh, and the areas shown in black represent common sense, but also a target for Boeing to achieve. Perhaps this could be Boeing’s 2030 vision, although I really hope they achieve it much quicker!

So, in the remaining part of this article, I want to go through the NASA 2030 CFD vision report, highlight the challenges it identifies, and then look at some areas you can pick to formulate your own research project, should you wish to contribute to overcoming the current challenges in CFD.

Challenges in CFD: The six key areas of development

NASA identified six key areas that currently represent bottlenecks in the CFD development process. To advance CFD into the next generation of tools and methods and push current CFD technology to the next level, we need to overcome all of them jointly. Let’s review each area individually.

High-performance computing

What a great challenge to start on, as this was already partially addressed in my previous article on GPU computing in CFD. However, NASA does provide some more details here. GPUs are just part of the equation, but their reduced power consumption per unit of compute is one aspect that makes them so attractive.

However, HPC systems of the future are heterogeneous in nature (CPUs and GPUs), and in order to run CFD software efficiently, we’ll need to be able to leverage both at the same time. Most legacy CFD codes are written with a pure distributed-memory approach in mind, and switching to a hybrid approach, utilising CPUs and GPUs at the same time as well as mixing distributed- and shared-memory parallelism, will likely require entirely new codes (or legacy codes with substantial investment in this area).

Programming languages and libraries also need to catch up, as the complexity of writing a scalable CFD solver across different HPC hardware architectures requires expertise in both computer science and CFD, a set of skills that, in my experience, are usually mutually exclusive. Thus, writing a scalable CFD solver is not the result of a single motivated individual; it becomes an inherently multidisciplinary endeavour, and if you want to write the next Fluent, OpenFOAM, or the like, you’ll need to find a strong partner with skills complementary to yours.

CFD codes don’t scale well. For a long time, HPC research was concerned with producing the first exascale-capable cluster, which we have now achieved. However, utilising that amount of power is an entirely different challenge. Good scalability across 1,000 cores is already a strong achievement, but if we want to utilise the exascale clusters that are becoming more widely available, we need to scale efficiently across 1,000,000 cores (and more) to leverage that computational power.
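
To see why this is so hard, here is a minimal sketch (my own illustration, not from the report) of Amdahl's law: even a tiny serial fraction, say from I/O or a non-parallelised setup step, caps the achievable speedup long before we get anywhere near a million cores.

```python
# Minimal illustration of Amdahl's law: speedup = 1 / ((1 - p) + p / n),
# where p is the parallelisable fraction of the code and n the core count.
def amdahl_speedup(parallel_fraction: float, cores: int) -> float:
    return 1.0 / ((1.0 - parallel_fraction) + parallel_fraction / cores)

for cores in (1_000, 10_000, 100_000, 1_000_000):
    s = amdahl_speedup(0.999, cores)          # assume 99.9% of the code scales
    print(f"{cores:>9} cores: speedup {s:8.1f}, efficiency {s / cores:6.2%}")

# Even with 99.9% of the code parallelised, a million cores yield a speedup
# of only ~1,000, i.e. a parallel efficiency of roughly 0.1%.
```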

Using 1,000,000 cores brings its very own challenges. The amount of data required, and produced, can no longer be processed by a human, and thus both pre- and post-processing need to be fully automated by tools that can handle this level of parallelisation. We are seeing more and more tools being developed to address this issue, but it remains to be seen whether they scale efficiently across processors.

And even if we have all of these issues resolved, we ultimately need to develop and test our code on the same HPC architectures it will run on. It is no good developing your own tool on your laptop or desktop PC and checking that it works well on a consumer graphics card or 4-core CPU if the scaling of your code breaks down after adding a few more cores or GPUs. Developers need access to large machines to profile and test their implementations and make them scalable across a large number of heterogeneous compute nodes.

Physical modelling of turbulence

By the time the 2030 CFD vision was written, available compute resources on HPC clusters were just about capable of running large eddy simulations of wings at realistic Reynolds numbers. This is shown in the following image:

challenges in cfd in turbulence modelling
Taken from Witherden and Jameson

Here, the available computational power is plotted on the x-axis and the required memory on the y-axis. In the mid-2010s, we were at petascale compute capabilities, and we can see that exascale capabilities are required; these have since been achieved, so large-scale LES simulations are now becoming possible. Great, one challenge solved: HPC facilities improve just by sitting and waiting.

One issue with turbulence models is that they are calibrated and adjusted over a wide range of flows, though these calibrations are done for simplistic flow types (flat plates, channel and pipe flows, shear layers, etc.). Sure, they give results for more complex geometries, but they are known for their sometimes poor performance in predicting secondary flow structures and separation. Experimental data for calibration, as well as validation and verification studies, need to be made available to a wider audience so that it can benefit from the test data.

RANS-based turbulence models assume that the flow is fully turbulent. If the laminar to turbulent transition plays an important role (as it does, for example, for stall predictions on airfoils and wings), then classic RANS models stand no chance of predicting this phenomenon accurately (and this is the reason we can only confidently use RANS in areas where the flow is steady and attached, i.e. see the green areas in the Figure above by Spalart and Venkatakrishnan).

If we take a look at Otto Aviation’s Celera 500L, an aircraft that is hyped to be a revolution in the aerospace sector for its radical design to promote laminar flow (I have done some CFD studies on it and remain sceptical about its actual advantage based on the numbers), then we would have no chance to predict lift and drag using conventional RANS models.

Transition-based RANS models have been developed and applied with modest success, but the most popular variant, the gamma-Re-theta transition model coupled with the k-omega SST (also known as the Langtry-Menter model), is based on correlations alone and not physics. If it is applied to flows outside its correlation range, it can fail just like any other RANS model applied to flows for which it hasn’t been calibrated. Furthermore, this is a development driven by ANSYS, and some details of the model haven’t been published.

There are alternatives, such as the k-kl-omega model, which are based on physics and thus stand a better chance of predicting this phenomenon. However, this model focuses only on natural and bypass transition, ignoring cross-flow and separation-induced transition. If the latter two are important, for example cross-flow-induced transition at wingtips, which influences the prediction of aerodynamic flutter, then even this model would be unable to capture it. Sure enough, aeroelasticity and flutter are purple areas in Spalart and Venkatakrishnan’s figure above.

To complicate things further, the accuracy of the solution is not just a function of the turbulence model being used but also of the grid on which the flow is solved. Some models are more susceptible to grid-induced modelling errors than others, and transition-based RANS models have some of the strictest (and most prohibitive?!) meshing constraints. As far as I am aware, and as of the time of writing, there still isn’t any best-practice guideline available on what grid resolutions are required for solving turbulent flows with transition-based RANS models.

With DES and LES simulations, we require that about 80% of the turbulent kinetic energy is resolved in the farfield (or away from any wall geometry). This is easier said than done, as most solvers (if not all) provide no built-in support to test this (as there is no uniform agreement on how it can be verified). Sure, there are approximations, but it is typically down to the end-user to select one and implement it themselves to ensure sufficient flow detail is resolved for DES and LES to work correctly.
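
As an illustration, here is a minimal sketch of one such approximation: the fraction of resolved turbulent kinetic energy, computed cell by cell from time series of the resolved velocity components and the modelled sub-grid kinetic energy. The synthetic arrays are placeholders; in practice, these quantities would be exported from your solver.

```python
import numpy as np

def resolved_tke_fraction(u, v, w, k_sgs):
    """Fraction of turbulent kinetic energy resolved by the grid,
    M = k_res / (k_res + k_sgs), evaluated cell by cell from time
    series of the resolved velocity components."""
    k_res = 0.5 * (np.var(u, axis=0) + np.var(v, axis=0) + np.var(w, axis=0))
    return k_res / (k_res + k_sgs)

# Synthetic placeholder data: 200 time samples on 1,000 cells.
rng = np.random.default_rng(0)
u, v, w = (rng.normal(size=(200, 1000)) for _ in range(3))
k_sgs = np.full(1000, 0.2)              # modelled sub-grid TKE per cell

M = resolved_tke_fraction(u, v, w, k_sgs)
print(f"cells resolving >= 80% of the TKE: {np.mean(M >= 0.8):.1%}")
```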

Numerical Algorithms

If you have ever run a RANS simulation at moderate to high Reynolds numbers, then you will be acutely aware of the challenges of getting a converged solution. Most of the time, convergence is impossible, rooted in the various modelling assumptions that are inherent to different turbulence models. In these cases, we typically opt for surrogate measures to attest convergence, for example, by checking whether integral quantities such as the lift or drag coefficient have converged to a mean value over a predefined averaging window.
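
A minimal sketch of such a surrogate check (my own illustration, nothing standardised): declare a lift-coefficient history converged once its mean over the latest averaging window stops drifting relative to the previous window. The window size, tolerance, and synthetic history are placeholder assumptions.

```python
import numpy as np

def integral_quantity_converged(history, window=500, tol=1e-3):
    """Return True if the windowed mean of an integral quantity (e.g. the
    lift coefficient) changes by less than `tol` (relative) between the
    last two non-overlapping averaging windows."""
    if len(history) < 2 * window:
        return False
    last = np.mean(history[-window:])
    prev = np.mean(history[-2 * window:-window])
    return abs(last - prev) < tol * max(abs(prev), 1e-12)

# Placeholder history: a lift coefficient settling around 0.85 with noise.
iters = np.arange(5000)
cl = (0.85 + 0.3 * np.exp(-iters / 400)
      + 0.002 * np.random.default_rng(1).normal(size=iters.size))
print("converged:", integral_quantity_converged(cl))
```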

If turbulence models themselves cannot be made to behave better to allow for convergence, more work is required on convergence acceleration techniques capable of speeding up the convergence rate, such as CFL-based evolution strategies (which, when they work, are a bit of magic! I have gotten convergence to machine precision (1e-14) within about 20 iterations for a simple test case).
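
As a minimal sketch of this family of techniques, here is the switched evolution relaxation (SER) idea of growing the CFL number as the nonlinear residual drops; the growth exponent, clipping bounds, and residual history below are illustrative assumptions, not recommended settings.

```python
def update_cfl(cfl, res_prev, res_now, exponent=1.0, cfl_min=0.5, cfl_max=1e4):
    """Switched evolution relaxation (SER): scale the CFL number by the
    ratio of successive residual norms, clipped to a sensible range."""
    ratio = max(res_prev / max(res_now, 1e-300), 1e-3)
    return min(max(cfl * ratio ** exponent, cfl_min), cfl_max)

# Illustrative residual history: each implicit step halves the residual.
cfl, residual = 1.0, 1.0
for step in range(10):
    new_residual = 0.5 * residual
    cfl = update_cfl(cfl, residual, new_residual)
    residual = new_residual
    print(f"step {step:2d}: residual {residual:.2e}, CFL {cfl:8.1f}")
```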

Even if we do get these techniques to work, there is no universal technique available that can accurately judge convergence over a wide range of flow cases. Domain expertise needs to be utilised to judge convergence (e.g. lift and drag for airfoils, wall-shear stresses for reattaching flows, heat transfer rates for chemically reacting flows, etc.), which keeps judging convergence a hot topic of debate. A universal measure remains absent a decade on from the 2030 CFD vision report.

Another area of improvement that falls under the numerical algorithms group is uncertainty quantification. While advances have been made, these remain unexplored in most cases. Typically, we do a grid dependency study and use that to quantify the uncertainty to some extent. However, the field of uncertainty quantification is vast and by no means limited to CFD, yet few additional tools are used to quantify simulation uncertainties.
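
To make the grid-dependency route concrete, here is a minimal sketch of the grid convergence index (GCI) procedure in the spirit of Roache, using solutions from three systematically refined grids; the drag values and the refinement ratio are made-up placeholders.

```python
import math

def gci_fine(f1, f2, f3, r=2.0, safety=1.25):
    """Grid convergence index (after Roache) from three solutions on
    systematically refined grids: f1 = fine, f2 = medium, f3 = coarse,
    with a constant refinement ratio r between grids."""
    p = math.log(abs(f3 - f2) / abs(f2 - f1)) / math.log(r)   # observed order
    rel_err = abs((f2 - f1) / f1)
    return safety * rel_err / (r ** p - 1.0), p

# Placeholder drag coefficients from three grids (fine to coarse).
cd_fine, cd_medium, cd_coarse = 0.0271, 0.0284, 0.0311
gci, order = gci_fine(cd_fine, cd_medium, cd_coarse)
print(f"observed order of accuracy: {order:.2f}")
print(f"GCI on the fine grid: {gci:.2%} of the fine-grid value")
```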

Even if uncertainty and error quantification could be used on a larger scale, it is not as simple as dropping some tools into our CFD workflow. Quantifying errors and uncertainties in our boundary and initial conditions is complicated. Turbulent quantities are approximated based on an average turbulent intensity and an assumed (approximated) turbulent length scale, which are rarely constant across a boundary, yet we use these averaged quantities to let our solution converge.
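
For illustration, here is a minimal sketch of the textbook estimates typically used to turn an assumed turbulence intensity and length scale into inlet values of k, epsilon, and omega; the free-stream numbers are placeholders, and the formulas are the commonly quoted approximations rather than anything flow-specific.

```python
import math

def inlet_turbulence(u_inf, intensity, length_scale, c_mu=0.09):
    """Textbook estimates for RANS inlet conditions from an assumed
    turbulence intensity I and integral length scale l:
      k       = 1.5 (U I)^2
      epsilon = C_mu^(3/4) k^(3/2) / l
      omega   = sqrt(k) / (C_mu^(1/4) l)
    """
    k = 1.5 * (u_inf * intensity) ** 2
    epsilon = c_mu ** 0.75 * k ** 1.5 / length_scale
    omega = math.sqrt(k) / (c_mu ** 0.25 * length_scale)
    return k, epsilon, omega

# Placeholder free stream: 50 m/s, 1% intensity, 0.01 m length scale.
k, eps, omega = inlet_turbulence(50.0, 0.01, 0.01)
print(f"k = {k:.3f} m^2/s^2, epsilon = {eps:.2f} m^2/s^3, omega = {omega:.1f} 1/s")
```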

Modelling errors are also difficult to quantify. While we can confidently state the error we make by approximating a derivative or an interpolation with a first-, second-, or higher-order approximation, other areas remain more challenging to quantify. RANS turbulence models, for example, are full of assumed correlations between mean-field quantities and unresolved turbulent quantities, closure coefficients, and sometimes additional terms required to model some assumed behaviour.

While we can quantify errors against experimental data of a turbulent solution, we are not able to determine how much of that error is due to the turbulence model, the boundary or initial conditions, the numerical approximations, or even the grid, although the grid dependency study at least tells us how much the solution varies with successive refinements of the grid.

An additional issue lies with the existing methods used in CFD; typically, linearised error estimations are used to quantify errors, which work well for steady-state flows but have been shown to grow without bound for time-dependent flows. Since a move towards DES and LES is required to overcome existing challenges in CFD, we’ll have to explore additional ways to quantify errors and uncertainties for time-dependent flows.

Geometry and Grid generation

This is probably my favourite area in all of this, not because I am a particularly enthusiastic grid generation person (although I find the grid generation process very satisfying), but because there is so much work to be done in this area, and no one seems to be interested in doing anything about it.

I once compiled a statistic by looking at the number of papers published on turbulence modelling and the number of papers published on grid generation and related CAD-to-CFD topics. Do you want to have a guess at how many more papers there are in the field of turbulence compared to grid generation? 42. First of all, 42 is a pretty good number by itself, but it also shows the sheer discrepancy in research funding.

It turns out that it’s not just CFD practitioners who can’t stand the mesh generation process; even governmental funding bodies, who presumably have never done grid generation, can’t be bothered to invest in this topic. I find it fascinating, but then again, I find code documentation fascinating, and I know that no one shares this fascination with me (based on reader numbers).

This consistent underfunding of grid generation tools is now coming back to bite us as we move into the exascale computing era. Mesh generation needs to be automated and scale well over a number of cores, yet most mesh generators are incapable of generating high-quality grids automatically.

Since the CFD 2030 vision was published, new tools have sprung up, in particular Ansys Fluent meshing, which I think is one of the (if not the) best meshing tools out there. It’s pretty straightforward to use, can be easily automated (a task that took me 3 weeks to automate in Pointwise was reduced to an hour with Fluent meshing), and is pretty robust. However, it can still fail for complex geometries, which is mostly related to the underlying CAD file that was given to the meshing tool.

This brings me to the next point: there is no good format to exchange CAD data with a CFD solver or meshing tool. Most other industries are quite happy with a rough approximation of the shape, whereas CFD has very tight requirements for its CAD files. Ideally, they should be watertight (which they rarely are), should not feature any singularities, and should avoid any round-off-induced geometry issues, such as inexact edges (which will mess with the mesh). Gammon et al. give a pretty good overview of the various types of issues found in CAD files.

I have played around with the idea of proposing my own CAD format (standard) to alleviate this problem, though some good formats do exist. Among the best, I think, is the *.obj file format, which has support for requiring watertightness (though it does not enforce it), allows for surface grouping (required to assign different boundary conditions to different parts of the geometry), and is not as wasteful with memory as *.stl files. The downside is that, just like *.stl files, it only features linear elements.
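
For illustration, here is a minimal sketch of the kind of check a CFD-aware format or pre-processor could enforce: on a closed, manifold triangle surface, every edge must be shared by exactly two faces. The face lists are hand-made placeholders rather than output from a real OBJ parser.

```python
from collections import Counter

def is_watertight(faces):
    """True if every undirected edge of a triangle surface mesh is shared
    by exactly two faces (a necessary condition for a closed surface)."""
    edges = Counter()
    for a, b, c in faces:
        for edge in ((a, b), (b, c), (c, a)):
            edges[tuple(sorted(edge))] += 1
    return all(count == 2 for count in edges.values())

# Placeholder: the four faces of a tetrahedron form a closed surface ...
tetra = [(0, 1, 2), (0, 3, 1), (1, 3, 2), (2, 3, 0)]
print(is_watertight(tetra))            # True
# ... remove one face and the surface has a hole.
print(is_watertight(tetra[:-1]))       # False
```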

A *.obj-style file format that optionally enforces watertightness and has support for non-linear elements would probably be a good step forward towards having a CFD-specific and fit-for-purpose CAD file format. But, would you believe it, it takes time to create such a format, implement it, and test it, and did I mention that no one cares to provide research funding for that? Anyway, I think this would be an important and hugely impactful feature for CFD applications, but there does not seem to be a lot of excitement about working on a new file format.

Knowledge extraction

To date, the prevailing method of extracting data from CFD simulations is by using a post-processing tool, be it an onboard post-processor within the CFD solver or external software such as Tecplot, Paraview, Fieldview, etc. Extracting information from simulations will become more and more challenging as the number of grid points increases, and downloading individual simulations onto a personal computer will become impossible.

Large-scale clusters with dedicated visualisation nodes, equipped with high-performance graphics cards and sufficient RAM, are required to visualise these simulations. But even then, extracting key information requires some form of data processing, which mostly remains a task of clicking a series of buttons within the post-processor or automating that through bespoke user scripts. All of these factors will make it more challenging to extract information effectively from large-scale simulations.

With the adoption of finite-element methods in CFD, in particular the Discontinuous Galerkin (DG) method, post-processing software also needs to catch up with visualising higher-order elements. While the CGNS file format allows storage of higher-order standard elements (up to fourth order), post-processors lack the ability to visualise them directly (rendering the use of higher-order elements somewhat limiting). In practice, higher-order elements still need to be decomposed into smaller, linear elements just for visualisation purposes.
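
As a minimal sketch of what that decomposition looks like, the snippet below splits a six-node (quadratic) triangle into four linear triangles using its corner and mid-edge nodes; the node ordering shown is one common convention and may differ from your solver's or file format's.

```python
def split_quadratic_triangle(nodes):
    """Split a 6-node quadratic triangle into 4 linear triangles for
    visualisation. Assumed node ordering: corners 0, 1, 2 followed by
    mid-edge nodes 3 (edge 0-1), 4 (edge 1-2), 5 (edge 2-0)."""
    n0, n1, n2, n3, n4, n5 = nodes
    return [(n0, n3, n5), (n3, n1, n4), (n5, n4, n2), (n3, n4, n5)]

# Example: one quadratic triangle with global node ids 10..15.
print(split_quadratic_triangle([10, 11, 12, 13, 14, 15]))
# -> [(10, 13, 15), (13, 11, 14), (15, 14, 12), (13, 14, 15)]
```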

If knowledge extraction from a single CFD simulation isn’t complicated enough, getting information from a series of CFD simulations over a large range of input parameters is equally challenging. Creating a reduced-order model (ROM) from a collection of high-fidelity CFD data remains widely under-utilised: we pay a high upfront cost to set up a ROM that can then be used to extract information efficiently.
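
A minimal sketch of the most common starting point for such a ROM: proper orthogonal decomposition (POD) via a singular value decomposition of a snapshot matrix. The random snapshot data are placeholders standing in for exported CFD fields.

```python
import numpy as np

def pod_basis(snapshots, energy=0.99):
    """Proper orthogonal decomposition of a snapshot matrix
    (rows = degrees of freedom, columns = snapshots). Returns the mean,
    the smallest basis capturing the requested energy fraction, and its size."""
    mean = snapshots.mean(axis=1, keepdims=True)
    u, s, _ = np.linalg.svd(snapshots - mean, full_matrices=False)
    cumulative = np.cumsum(s**2) / np.sum(s**2)
    rank = int(np.searchsorted(cumulative, energy)) + 1
    return mean, u[:, :rank], rank

# Placeholder snapshot matrix: 10,000 cells, 50 snapshots.
rng = np.random.default_rng(0)
snapshots = rng.normal(size=(10_000, 50))
mean, modes, rank = pod_basis(snapshots)
coeffs = modes.T @ (snapshots - mean)          # reduced representation
print(f"{rank} POD modes retain 99% of the snapshot energy")
```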

CFD data is also typically used in conjunction with experimental data, where available. Instead of viewing each component as a separate entity to assess errors and uncertainties, data-fusion activities will be required to unlock further potential in CFD: from simple outlier detection to time-dependent data collected in experiments being fed into a CFD solver as boundary conditions, overcoming, for example, the issue of specifying averaged turbulent intensities and integral turbulent length scales.

Multidisciplinary Analysis and Optimization

We are moving into an age where CFD simulations on their own are no longer the only way to extract information. Instead, we are seeing more and more coupling of CFD solvers with other physics-based solvers to simulate multiphysics problems. An example most will probably be familiar with is Fluid-Structure Interaction (FSI), where we couple a fluid and a structural solver: pressure information from the CFD solver is passed to the structural solver, which uses it as a non-uniform load to compute displacements and stresses.

While there are some frameworks available in which FSI simulations can be done with ease, there is a multitude of other physics-based solvers out there that may benefit from coupling their solution with that of CFD. The challenge in multidisciplinary analysis and optimisation is thus one of finding a general way to couple two or more physics-based solvers together to increase the knowledge that can be extracted (i.e. not just the pressure from CFD, but also the displacement from a solid solver).
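
To make the coupling idea concrete, here is a minimal, hypothetical sketch of a partitioned FSI loop with explicit (staggered) coupling; the FluidSolver and StructuralSolver classes and their methods are invented stand-ins for whatever solver interfaces you actually couple, not the API of any real library.

```python
# Hypothetical partitioned fluid-structure coupling loop (explicit/staggered).
# FluidSolver and StructuralSolver are invented placeholders, not a real API.

class FluidSolver:
    def advance(self, dt, interface_displacement):
        """Deform the mesh with the given displacement, advance one time
        step, and return the pressure load on the coupling interface."""
        return [0.0] * len(interface_displacement)   # placeholder load

class StructuralSolver:
    def advance(self, dt, interface_load):
        """Apply the interface load, advance one time step, and return
        the new interface displacement."""
        return [0.0] * len(interface_load)           # placeholder displacement

fluid, structure = FluidSolver(), StructuralSolver()
displacement = [0.0] * 100                           # initial interface state
dt, n_steps = 1e-3, 10

for step in range(n_steps):
    load = fluid.advance(dt, displacement)           # fluid -> pressure load
    displacement = structure.advance(dt, load)       # structure -> displacement

print("final interface displacement (first entries):", displacement[:3])
```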

In the 2030 CFD vision report, it was proposed that a robust framework must be found to couple solvers from different disciplines, and preCICE is one candidate working towards becoming this framework. It offers coupling of different solvers out of the box and comes with all of the goodies you’d expect: coupling of transient equations, support for parallelisation, and a way to efficiently map data between different simulations.

One thing it does not do yet, but which the 2030 CFD vision postulates is required, is the propagation of uncertainties from solver to solver to keep track of the overall simulation uncertainty. Since this is an area that is still under active development for CFD simulations themselves, we do not yet have a way to manage this in a multidisciplinary framework.

An additional barrier for these types of simulations is the exchange of data, not just through a library like preCICE, but in general for storing the data after simulations have finished. If we run an FSI simulation, how do we store the associated data? We have a CFD solution and one from a solid solver, and both will likely be stored in separate data files. For CFD purposes, we have the CGNS data format, which has been designed as a standardised way to store CFD data, and we will need similar developments for multidisciplinary data exchange formats.

NASA’s 4 Grand Challenges in CFD

The six areas we looked at above provide the building blocks for the 2030 CFD vision. These are all individual challenges in CFD that individuals or research teams may want to tackle, but in order to demonstrate that readiness beyond the simplest of test cases has been achieved, NASA postulated four grand challenges which, if successfully simulated, would attest to sufficient maturity in all of these areas.

Given NASA’s aeronautical focus, these grand challenges in CFD research are all tied to aeronautical applications. While the geometries may be aerospace-specific, changing the geometry translates most of these challenges to other industries as well. We will look at each grand challenge next.

Challenge 1: Wall resolved LES simulation of complex industrial problems

Unsurprisingly, turbulence modelling is one of the key challenges in CFD we need to overcome, and it is tightly coupled with available computing resources. NASA is foreseeing the simulation of a complete aircraft configuration at critical points in the flight envelope using scale-resolved turbulence modelling, i.e. wall-modelled LES or DES.

Advances in HPC alone are not sufficient, and CFD solvers need to adopt algorithmic improvements to run simulations on exascale HPC clusters. Successful demonstration of this grand challenge will require simulations at low-speed approaches, take-off, and dynamic manoeuvres, where the behaviour is strongly influenced by turbulence characteristics such as separation and shock-boundary layer interactions.

Challenge 2: Off-design turbofan engine transient simulation

In this grand challenge, the complete simulation of an engine is envisaged. This will require rotating geometries, scale-resolved turbulence simulations, resolved combustion processes, conjugate heat transfer, and the prediction of aerodynamic noise. The challenge here is the coupling of potentially more than one solver, as well as the disparate time scales, i.e. going from the smallest time scales found in the combustion process through the turbulent time scales and all the way up to the energy-containing eddies, which operate at the largest time scales.

Achieving this grand challenge will allow for virtual engine testing, which will include the prediction of compressor surge and stall, as well as the intricate details of the combustion process itself, which is tightly coupled to turbine cooling. Thus, while high confidence can be placed in the flow around the engine intake, as the flow moves further downstream and complexities such as combustion and cooling are added, uncertainties grow with each modelling stage, and they need to be effectively captured and managed.

Challenge 3: Multidisciplinary Analysis and Optimization (MDAO) of a highly flexible advanced aircraft configuration

The basic goal of this grand challenge is to demonstrate that CFD can be used to optimise complex aircraft geometries for different phases of the flight mission. This will require the coupling of at least CFD with a structural and flight-dynamics solver, which will be used to determine aerodynamic limits (buffeting), structural limits (plastic deformation), as well as manoeuvrability within safe limits (preventing stalls).

The first step is the coupling of at least three separate solvers, which will require a significant amount of compute resources, and if unsteady flight conditions with scale-resolved turbulence are envisaged, uncertainty propagation becomes paramount to avoid divergence between the simulated conditions and flight tests.

Challenge 4: Probabilistic analysis of a powered space access configuration

The final grand challenge is designed to touch upon all six areas of improvement that we looked at above. It includes the aerothermal simulation of the fluid flow around a representative space vehicle, including the modelling of heat transfer and the material properties. Uncertainty propagation and capture are additionally required, and as we saw with the other grand challenges in CFD, scale-resolved turbulence modelling in conjunction with HPC is required to demonstrate successful performance within this challenge.

What did they miss? Current emerging challenges in CFD

The 2030 CFD vision was researched in the early 2010s and published shortly after. This meant that some emerging technologies could not have been foreseen, or they were present but were not seen as important enough to have an influence on CFD itself. There is one area in particular that was difficult to assess in the early 2010s, which I think deserves an honourable mention here, as it is fast taking over various areas within the CFD toolchain, and I foresee it becoming ever more important.

I am talking, of course, about artificial intelligence (AI), and within this discipline, machine and deep learning in particular.

We have already looked at ChatGPT’s capabilities for writing CFD solvers, and we have established that it can be a good tool for small tasks but not for writing entire solvers. However, the real promise of deep learning in CFD is to replace the high cost of the simulations themselves with a deep neural network surrogate.

There are already a few research groups working on replacing Navier-Stokes-based solvers with machine learning, and their results look impressive. However, looking impressive is not the measure by which we need to judge these results; ultimately, any result obtained from a deep neural network must be judged against the same high expectations that we have of Navier-Stokes-based CFD solvers.

Conservation of mass, momentum, and energy is paramount, especially in compressible flow simulations, and this is an area where machine learning algorithms are not yet capable of providing satisfactory results. If a machine learning algorithm is trained with too little data or is allowed to fit the data exactly (overfitting), it will not generalise well to flow scenarios for which it wasn’t trained and thus becomes useless. So there will always be a degree of inaccuracy, and this will likely be enough to violate our conservation laws.
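
As a minimal sketch of how such a violation can at least be measured, the snippet below evaluates the discrete continuity (mass conservation) residual of a 2D incompressible velocity field; the analytic test field is a placeholder standing in for a network's prediction.

```python
import numpy as np

def continuity_residual(u, v, dx, dy):
    """Discrete continuity residual du/dx + dv/dy of a 2D incompressible
    velocity field; for an exactly divergence-free field this is ~0."""
    dudx = np.gradient(u, dx, axis=1)
    dvdy = np.gradient(v, dy, axis=0)
    return dudx + dvdy

# Placeholder "prediction": a 2D Taylor-Green vortex, which is divergence-free.
n = 128
x = np.linspace(0.0, 2.0 * np.pi, n)
X, Y = np.meshgrid(x, x)
u, v = np.cos(X) * np.sin(Y), -np.sin(X) * np.cos(Y)

div = continuity_residual(u, v, x[1] - x[0], x[1] - x[0])
print(f"max |continuity residual|: {np.abs(div).max():.2e}")
# Substitute a neural-network prediction for (u, v): a residual well above
# the discretisation error signals a violated conservation law.
```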

Thus, we need to use our machine learning frameworks in a clever way to exploit their potential, and this is where research will need to focus its attention if machine learning is to be adopted on a widespread scale within the CFD community. An additional issue is that uncertainty quantification will become rather difficult, so any advances made in this area will need to be judged in conjunction with losses made in the area of uncertainty quantification.

Towards next-generation CFD

OK, so there are a lot of challenges in CFD and outstanding areas of research we need to overcome. Keeping all of that in our heads may be rather difficult on a first read. Thankfully, the CFD 2030 report also provides a handy roadmap and a list of key areas of investigation, which I wanted to highlight here as well: they give a visual overview of all of the challenges we have looked at above and provide a list of challenges in CFD we can use to formulate our own research projects.

The 2015 – 2030 roadmap

The 2015-2030 roadmap provides, well, a roadmap to overcome these issues and challenges in CFD. Since some time has already passed since the original inception of this report, some areas will have already progressed towards achieving these challenges, while others will lag behind. To get a detailed understanding of where we are, an in-depth literature review would need to be performed, which is beyond the scope of this article (and I am sure you’ll thank me for not bombarding you with 500 academic references).

Let’s look at the roadmap then:

a roadmap for challenges in cfd
Taken from Slotnick et al. (NASA 2030 Vision)

We can see all areas of research on the left, and we can see that HPC is the main driver at the top. Without HPC, CFD is practically impossible for any serious geometry of interest. We also see how different research streams interact with others, as well as technology milestones we have to hit in order to demonstrate the maturity of any given development.

From reading the academic literature, I know for a fact that the first milestone, LES over an entire aircraft, has already been achieved, and so things are moving in the right direction. Having worked on the FlowSimulator myself, a platform for the multidisciplinary simulation of CFD, structures, flight dynamics, and pretty much anything else required for the simulation of a full aircraft (including optimisation), I am confident that similar developments elsewhere will result in the next technology demonstrations being achieved as well.

It remains to be seen whether we will achieve all of the technology demonstrations by the year 2030; this will likely remain a challenge.

Research topics you can work on to overcome challenges in CFD

Computational tools and CFD solvers are constantly evolving, with many pumping out one or two new versions each year. With that come improvements in HPC, automation, and implemented algorithms that tackle one of the six key areas we looked at above. Thus, constant evaluation of current solver capabilities is required to assess whether any noticeable progress has been made towards achieving the 2030 CFD vision.

NASA provided a list of possible topics that need to be investigated to monitor progress against all six key areas of improvement and to work towards the four grand challenges in CFD themselves. Given their aeronautical orientation, their list is heavily geared towards these types of applications, but they can equally be applied in various other industries.

I have copied this list below and modified it somewhat so it is not just applicable to aerospace applications where applicable (some areas are still heavily aerospace-focused). This should give aerospace engineers, as well as engineers from other disciplines, a better idea of which areas need to be focused on:

  • Investigation of flow separations over smooth surfaces (wings), bluff-bodies (cars), as well as shock-induced separation
  • Laminar to turbulent boundary layer flow transition and subsequent reattachment (predicting laminar separation bubbles on wings/car diffusers, etc.)
  • Viscous wake interactions and boundary layer interactions or merging
  • Flows in corners or junctions (wing-fuselage intersection)
  • Simulation of icing and its build-up during flight
  • Passive and active flow control to control the separation point on a given surface
  • Turbomachinery flows (here, the main issue is complexity, i.e. resolving rotating parts, scale-resolving turbulence, combustion, and convective heat transfer)
  • Mixing and cooling in aerothermal flows (heat exchangers), predicting heat transfer rates accurately
  • Reactive flows, including gas chemistry and combustion
  • Simulation of the jet exhaust
  • Aero-acoustics for passenger comfort and noise-level predictions (airplanes, cars)
  • Resolving vortices with a high degree of confidence around blade tips (helicopters, wind turbines, propellers) and wings (wingtip vortices)
  • Wake hazard reduction and avoidance (airplane spacing during approach and landing, motorsport applications for enhanced overtaking capabilities)
  • Correlation between wind tunnel and flight testing (scaling effects)
  • Rotor aero/structural/controls, wake and multirotor interactions, acoustic loading, ground effects
  • Shock/boundary layer, shock/jet interactions
  • Predicting and mitigating sonic boom
  • Store/booster separation
  • Planetary retro-propulsion
  • Aerodynamic/radiative heating
  • Plasma flows
  • Ablator aerothermodynamics

Summary

If you have followed this article to its end, you now have a pretty good idea of the current challenges in CFD. The six key challenges we need to overcome are:

  • High-performance computing
  • Physical modelling of turbulence
  • Numerical Algorithms
  • Geometry and Grid generation
  • Knowledge extraction
  • Multidisciplinary Analysis and Optimization

It is likely that some of these challenges in CFD will still persist beyond the year 2030, especially in the area of uncertainty management and multidisciplinary coupling of solvers at various scales.

I hope to have conveyed the key challenges in CFD, as well as the areas to work on if you intend to check for yourself how much progress has been made in different areas towards the 2030 CFD vision. These areas are challenging but rewarding to work on, and the progress made so far does suggest that we are on a good trajectory to tackle most of these issues by the year 2030.

Finally, what the 2030 CFD vision did not foresee was the advancement of machine learning and the proliferation of its applications within CFD. It is my personal opinion that we will continue to see case studies, research streams, and success stories in various areas of CFD where machine learning is applied to great effect. This area offers a great benefit to the CFD community, yet there is no widespread uptake as of the time of writing. If you are looking for an emerging research topic, then why not consider machine and deep learning for CFD applications?