Volcano Eruptions On the Rise with Solar Minimum


Armstrong Economics Blog/Nature Re-Posted Dec 1, 2022 by Martin Armstrong

We are now in Solar Cycle 25, with peak sunspot activity expected in 2025. Solar Cycle 24, which ended in December 2019, was of average length at 11 years, but it was the fourth-smallest in intensity since regular record keeping began with Solar Cycle 1 in 1755. We are still in Solar Minimum conditions at this time. Solar Maximum is predicted to occur midway through this cycle, arriving perhaps as soon as November 2024 but no later than March 2026, with the ideal peak most likely reached by July 2025.

Right now, the solar wave is conforming more to our model than to NASA's. The Sun has become far more active than NASA forecast or expected. NASA is beginning to worry that Solar Cycle 25 could become the Strongest Cycle Since Records Began. Effectively, in our model terminology, Solar Cycle 25 may be a Panic Cycle. In other words, we appear to be headed into the strongest cycle on record following the weakest cycle. That is high volatility in cycle terminology.

So what does this mean for Markets?

Since this Solar Minimum may continue into 2024, that appears to be a major turning point on our global food index. Most of our models on markets are showing Panic Cycles in the 2027-2028 time period. That appears to be more war than nature.

I warned that Socrates, which monitors everything around the world, noticed a distinct correlation: more volcanoes erupt during Solar Minimum. There have been many studies on the impact of UV and gamma radiation during solar changes and events. Gamma-rays are a form of electromagnetic radiation, as are radio waves, infrared radiation, ultraviolet radiation, X-rays, and microwaves. Gamma-rays can be used to treat cancer, and gamma-ray bursts are studied by astronomers.

I have also reported that our correlation models show that solar minimums correspond to increased volcanic activity. Volcanic winters take place during solar minimums. It certainly seems that gamma-rays may be the reason volcanoes erupt more during these periods. This also reduces food production and increases disease, presumably because of a rise in malnutrition. However, since gamma-rays are also used to treat cancer, there is at least a basis to warrant further investigation into whether the increase in gamma-rays during solar minimums impacts certain susceptible people or those with particular DNA sequences.

1816 Year Without a Summer

We tend to ignore volcanoes since they are not in our backyard. The deadly aspect of these volcanic eruptions is not the loudness of the boom, but how much ash they throw up into the atmosphere, which then blocks the sun, creating a Volcanic Winter.

Go to the beach on a partly cloudy day. When a cloud blocks the sun, it suddenly gets cool. This is common sense. As far as volcanoes blocking the sun, all someone has to do is read The Year Without Summer: 1816 and the Volcano That Darkened the World and Changed History.

Tambora

Mount Tambora (VEI 7) erupted in 1815 and threw so much ash into the air that it snowed during the following summer in New York City. The year became known as Eighteen-Hundred-and-Froze-to-Death. This account from history tells the story of 1816 as a year when sunlight could not penetrate the natural pollution from Tambora. Weather patterns were disrupted worldwide for months, allowing excessive rain, frost, and snowfall through much of the Northeastern U.S. and Europe in the summer of 1816.


The global cooling altered natural weather patterns and resulted in a serious food shortage that set off a mass migration from New England to the Midwest within the USA as people tried to find the sun. Some saw this as an omen, and there was also a religious revival.

Almost one year has now passed since the Hunga Tonga-Hunga Ha’apai volcano erupted (VEI 5). Only now are we starting to realize that this eruption was the biggest volcanic event in human history. The 22,000 km² area around the Tonga volcano has now been mapped. This has curiously taken place during the weakest solar minimum on record. More significantly, such a major explosion takes about one year before its true impact is understood worldwide.

Mauna Loa, which is the largest active volcano in the world, covers half the island of Hawaii. It has erupted 33 times since 1843, making this an average cycle of 5.4 years. It tends not to be as extremely violent as many others. Hawaii’s Mauna Loa has now erupted for the first time in decades, and nearby Kilauea is also erupting, both on the archipelago’s Big Island. Dual eruptions haven’t been seen since 1984.

The last eruption took place in 1984, making this prolonged quiet period the volcano’s longest in recorded history. More interestingly, it is near Kilauea, which erupted in 2018. The concern is not this volcano by itself. We are looking at increased volcanic activity around the world, for the danger is a volcanic winter coming on top of the shortages manufactured by COVID restrictions.

Activity has been detected in Alaska under Mount Edgecumbe, a volcano near Sitka that had been dormant for 800 years. This volcano was believed to be extinct since it had not been active for at least 800 years. Earthquakes began earlier this year.

Just in August, Indonesia’s Anak Krakatau volcano erupted in seven explosions within two days, unleashing a 1,500-metre-high column of scorching ash.

The Ahyi Seamount, a large submarine volcano whose summit lies 449 feet deep in the Pacific Ocean below the Northern Mariana Islands, more than 3,700 miles west of Honolulu, has also been showing signs of activity.

Meanwhile, over in Italy, the Stromboli volcano also erupted during October 2022. Several explosions at Italy’s Stromboli volcano sent enormous plumes of smoke into the sky and major streams of lava into the Tyrrhenian Sea over the weekend.

Over in Russia, the Shiveluch volcano has become active, and a powerful explosion is now considered possible at any moment. It is one of the largest volcanoes in the Russian Far East, and its ash plume has risen to around 13,000 ft altitude.

There were 5 eruptions around the world last year at 5 or higher on the Volcanic Explosivity Index (VEI). The first two eruptions here in 2022 were Bezymianny in Russia during May and Popocatépetl in Mexico during June. There was only one in 2020, but 5 during 2019. It appears we are witnessing a rise in global activity, starting from a general major low in overall volcanic activity.

We have a string of Directional Changes between 2022 and 2025. We may be looking at rising volcanic activity into 2025. We will run our models on intensity as well. The undersea Hunga Tonga-Hunga Ha’apai eruption of December 2021 into January 14-15, 2022 had a volcanic explosivity equivalent to VEI 5. It was an eruption of a magnitude greater than the 1991 eruption of Pinatubo in the Philippines. According to a news article, the main undersea international fiber-optic communication cable that had been severed in multiple places by the eruption had been repaired by February 21, 2022, and internet connectivity was restored the following day.

The sheer magnitude of this eruption tends to warn that we may in fact witness a very significant rise in both the frequency and the magnitude of eruptions into 2025. The VEI describes the size of explosive volcanic eruptions based on magnitude and intensity. The numerical scale (from 0 to 8) is a logarithmic scale and is therefore similar to the Richter and other magnitude scales for the size of earthquakes.

The largest eruption of magma took place at Yellowstone at Huckleberry Ridge about 2.1 million years ago. Our cycle models on Yellowstone have turned up, and the “ideal” target would be the year 2100. The difference between a VEI 5 and a VEI 6 is a factor of 10 to 100.
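As a rough rule of thumb (using the commonly cited bulk-tephra thresholds for the index, not figures from this article), each step on the VEI corresponds to roughly a tenfold jump in erupted volume:

```latex
V_{\min}(\mathrm{VEI}=n) \approx 10^{\,n-5}\ \mathrm{km}^3 \qquad (n \ge 5)
```

so a VEI 6 eruption ejects at least ten times the material of a VEI 5, and the top of the VEI 6 class is about a hundred times the bottom of the VEI 5 class, which is where the "factor of 10 to 100" comes from.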

A Technical Study of Relationships in Solar Flux, Water and Other Gases in the Upper Atmosphere, Using the October 2022 NASA & NOAA Data


From the attached report on climate change for October 2022 data, we have two charts showing how much the global temperature has actually gone up since we started to measure CO2 in the atmosphere in 1958. To show this graphically, Chart 8a was constructed by plotting CO2 as a percent increase from when it was first measured in 1958 (the black plot, with the scale on the left); it shows CO2 going up by about 32.4% from 1958 to October of 2022. That is a very large change, as anyone would have to agree. Now how about temperature? When we look at the percentage change in temperature, also from 1958, using Kelvin (which does measure the change in heat), we find that the change in global temperature (heat) is almost unmeasurable at less than 0.4%.

As you can see, the increase in energy (heat) is not visually observable in this chart, hence the need for Chart 8, which shows the minuscule increase in thermal energy reported by NASA in relation to the change in CO2, using a different scale.

This is Chart 8, which is the same as Chart 8a except for the scales. The scale on the right side had to be expanded 10 times (the range is 50% on the left and 5% on the right) to be able to see the plot in the same chart in any detail. The red plot, starting in 1958, shows that the thermal energy in the earth’s atmosphere increased by 0.40%, while CO2 increased by 32.4%, which is about 80 times the increase in temperature. So is there really a meaningful link between them that would give us a major problem?

Based on these trends, determined by Excel, not me, in 2028 CO2 will be 428 ppm and temperatures will be a bit over 15.0° Celsius, and in 2038 CO2 will be 458 ppm and temperatures will be 15.6° Celsius.
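For readers who want to reproduce the comparison behind Charts 8 and 8a, here is a minimal sketch of the arithmetic. The baseline and end-point values are rough assumptions of mine (Mauna Loa CO2 of about 315 ppm in 1958 and 417 ppm in 2022, and a global mean temperature near 287-288 K), not numbers pulled from the report itself.

```python
# Percentage-change comparison sketched from assumed values, not the report's data.
co2_1958, co2_2022 = 315.0, 417.0      # ppm (approximate Mauna Loa values)
temp_1958, temp_2022 = 287.0, 288.0    # Kelvin (approximate global means)

co2_pct = (co2_2022 - co2_1958) / co2_1958 * 100
temp_pct = (temp_2022 - temp_1958) / temp_1958 * 100

print(f"CO2 increase since 1958:        {co2_pct:.1f}%")   # roughly 32%
print(f"Temperature increase in Kelvin: {temp_pct:.2f}%")   # well under 0.4%
```

The point of using Kelvin is that the percentage change is then taken against absolute temperature, which is what makes the thermal change look so small.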

The NOAA and NASA numbers tell us the true story of the changes in the planet’s atmosphere.

The full 40-page report explains how these charts were developed.

Methane is saturated


Methane: Much Ado About Nothing

David Archibald

Thanks to Modtran, an online program maintained by the University of Chicago, we know that carbon dioxide’s heating effect is logarithmic. The first 20 ppm of carbon dioxide heats the atmosphere by 1.5°C. At the current concentration of 412 ppm, each extra 100 ppm is only good for 0.1°C. Carbon dioxide is tuckered out as a greenhouse gas.
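The logarithmic fall-off can be illustrated with a toy calculation. The simple law ΔT = S × ln(C2/C1) and the sensitivity S below are assumptions chosen only so that adding 100 ppm at today's ~412 ppm yields about 0.1°C, as stated above; this is not Modtran output.

```python
import math

S = 0.1 / math.log(512 / 412)   # ~0.46 C per natural-log unit of CO2 (assumed)

def warming(c1, c2):
    """Incremental warming from raising CO2 from c1 to c2 ppm under the toy log law."""
    return S * math.log(c2 / c1)

for start in (100, 200, 300, 412, 600, 800):
    print(f"{start:4d} -> {start + 100:4d} ppm: {warming(start, start + 100):.3f} C")
```

Each successive 100 ppm buys less warming than the last, which is the saturation effect being described.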

But what of methane, which is the excuse du jour for wrecking livelihoods, towns, industries and whole economies? Methane, with a half-life of nine years in the atmosphere, is carbon dioxide’s little brother in the pantheon of the satanic gases.

Witness this headline about antics in New Zealand:

We return to Modtran to see what that oracle will tell us about methane’s heating effect. This is the model output converted to degrees C:

While not as pronounced as carbon dioxide’s drop off in heating effect with concentration, the effect is still there such that at the current concentration of 1.9 ppm, each extra 0.1 ppm heats the atmosphere by 0.05°C. With the methane concentration currently rising by 0.1 ppm every 20 years, the atmosphere will get an extra 0.2°C of heating by 2100. The reader can decide whether or not he/she/it need be worried by this projection.
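The 0.2°C figure follows directly from the numbers in the paragraph above; here is the arithmetic spelled out, using only the article's own inputs (the 2022 starting point is my assumption).

```python
warming_per_ppm = 0.05 / 0.1     # degrees C per ppm of added methane (from the text)
rise_rate = 0.1 / 20             # ppm per year (from the text)
years_to_2100 = 2100 - 2022      # assumed starting point

extra_ch4 = rise_rate * years_to_2100          # ~0.39 ppm
extra_warming = warming_per_ppm * extra_ch4    # ~0.2 C
print(f"Extra CH4 by 2100: {extra_ch4:.2f} ppm -> about {extra_warming:.2f} C")
```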

But methane has only been going up at that rate for a few years. The atmospheric concentration of carbon dioxide has been measured since 1958. Methane measurements only started in the mid-1980s, and this is what the data looks like:

There is a steep rise at the beginning but then from the early 1990s to 2010 the concentration went sideways for nigh on 20 years. The Cape Grim concentration is particularly flat. NASA has helpfully provided a graph of rate-of-change:

There are three years – 2000, 2001 and 2004 – in which the methane level went down. Let’s disregard the noise and look at the bigger picture: the rate of increase declined for 20 years and then went up for 20 years. A few more decades of observations might show whether or not this is cyclic.

But farms that have been going for generations might be wiped out by unnecessary concern about methane while we are waiting for that data.  So we will make a stab at the underlying science. Two factors are likely involved.

Firstly, plant productivity has been going up with the increase in the atmospheric carbon dioxide concentration. Parts of the West Australian desert now have 30% more plant matter than a scant 30 years ago. The same is true of the vast stretch of forest and tundra across northern Russia. Unless this vegetation is consumed by fire, its fate is to be a source of methane via termites or rotting. So the hand of Man is not necessarily involved in a rising methane level.

Secondly, the Sun was more active in the second half of the 20th century than it had been in the previous eleven thousand years. That stopped in 2006 with the end of the Modern Warm Period. The Sun has become less active, as shown by this graph of solar extreme ultraviolet produced by the University of Bremen:

Our current solar cycle, 25, is tracking lower than any of the previous four. The natural enemy of methane is ozone, the most reactive gas in nature. Ozone is produced in the upper atmosphere by radiation with wavelengths of 242 nanometres or less acting on oxygen. So less ozone has been produced since 2006, and this is when the atmospheric methane level stopped falling and started rising again.

Case closed. Nothing to see here. Move along. Only idiots would get hung up on such a minuscule effect that we can’t change anyway. There are real problems coming at humanity that will take all our attention. Destroying the production base in the interim will only make our situation worse.

David Archibald is the author of The Anticancer Garden in Australia

Atmospheric Physics: Nitrous Oxide and Climate


From the CO2 Coalition

Download the entire PDF Nitrous Oxide

Gregory R. Wrightstone

Nitrous oxide (N2O) has now joined carbon dioxide (CO2) and methane (CH4) in the climate alarm proponents’ pantheon of anthropogenic “demon” gases. In their view, increasing concentrations of these molecules are leading to unusual and unprecedented warming and will, in turn, lead to catastrophic consequences for both our ecosystems and humanity.

Countries around the world are in the process of greatly reducing or eliminating the use of nitrogen fertilizers based on heretofore poorly understood properties of nitrous oxide. Reductions of N2O emissions of 40 to 45 percent are being proposed in Canada, and of up to 50 percent in the Netherlands. Sri Lanka’s complete ban on fertilizer in 2021 led to the total collapse of its primarily agricultural economy.

To provide critically needed information on N2O, the CO2 Coalition has published an important and timely paper evaluating the warming effect of the gas and its role in the nitrogen cycle. Armed with this vital information, policymakers can now proceed to make informed decisions about the costs and benefits of mandated reductions of this beneficial molecule.

This new paper joins previous CO2 Coalition reports on other greenhouse gases, carbon dioxide and methane.

Key takeaways from the paper:

  • At current rates, a doubling of N2O would occur in more than 400 years.
  • Atmospheric warming by N2O is estimated to be 0.064°C per century.
  • Increasing crop production requires continued application of synthetic nitrogen fertilizer in order to feed a growing population.

N2O and its warming potential

The first portion of the paper is highly technical and reviews the greenhouse warming potential of N2O. Like CO2, nitrous oxide is a linear, chemically inert molecule that absorbs infrared radiation. However, N2O has a longer lifetime in the atmosphere than CH4 because it is more resistant to chemical or physical breakdown. Increasing atmospheric concentrations of N2O likely contribute some amount of warming to the Earth’s atmosphere. To assess how much is likely, the authors consider well-validated radiation transfer theory and available experimental evidence rather than very complex general circulation climate models, which have proven unreliable.

The current N2O concentration at sea level is 0.34 parts per million (ppm), increasing at a rate of about 0.00085 ppm/year. This rate of increase has been steady since 1985 with no indication of acceleration. A comparison with CO2, at a present concentration of approximately 420 ppm, is in order. For current concentrations of greenhouse gases, the radiative forcing per added N2O molecule is about 230 times larger than the forcing per added CO2 molecule. This sounds bad, but what are the facts?

The rate of increase of CO2 molecules is approximately 2.5 ppm/year, or about 3,000 times larger than the rate of increase of N2O molecules. So, the contribution of nitrous oxide to the annual increase in forcing is 230/3,000, or about 1/13 that of CO2. If the main greenhouse gases CO2, CH4 and N2O have contributed about 0.1 C/decade of the warming of the Earth observed over the past few decades, this would correspond to about 0.00064 degrees Celsius per year, or 0.064°C per century, of warming from N2O, an amount that is barely observable. At the present rate of increase, a doubling of the N2O concentration would take more than four centuries and, according to Figure 5 of the paper, the increase in warming would be imperceptibly small.
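The back-of-the-envelope numbers in this paragraph can be checked with the figures quoted above (the forcing ratio, concentrations and growth rates all come from the paper; nothing else is added):

```python
n2o_conc, n2o_rate = 0.34, 0.00085   # ppm and ppm/year (from the paper)
co2_rate = 2.5                       # ppm/year (from the paper)
forcing_ratio = 230                  # forcing per added N2O molecule vs added CO2 molecule

rate_ratio = co2_rate / n2o_rate                    # ~2,900, i.e. "about 3,000"
relative_contribution = forcing_ratio / rate_ratio  # ~1/13 of CO2's added forcing
doubling_time = n2o_conc / n2o_rate                 # ~400 years

print(f"CO2/N2O growth-rate ratio: {rate_ratio:.0f}")
print(f"N2O forcing growth relative to CO2: about 1/{1 / relative_contribution:.0f}")
print(f"Years to double N2O at the current rate: {doubling_time:.0f}")
```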

The nitrogen cycle

Along with water and carbon, nitrogen is of key importance to plant life and the right proportion of it is critical for optimal growth. Carbon is available to plants from CO2 in the atmosphere; nitrogen must be made available in the soil. To this end various microorganisms and plant species, with the aid of symbiotic microorganisms, fix diatomic nitrogen (N2) from the atmosphere into the soil, where it enters complicated cycles of nitrogen-containing compounds that can move more or less freely in soil and serve many plants. Through the activity of microorganisms (recent work shows that archaea are of comparable importance to bacteria) the nitrogen cycle ends by releasing N2, and to a much lesser extent N2O, back into the atmosphere. Because of losses to the atmosphere and leaching to waterways, soil nitrogen needs to be replenished continuously to optimize plant growth.

Agricultural and natural vegetative growth contribute comparable amounts to the nitrogen cycle. Optimum crop growth requires large amounts of nitrogen. Some nitrogen is provided by animal manure and decaying plants. However, these sources of nitrogen are insufficient for the needs of agriculture to feed a growing world population.

Figure 14 from the paper compares the relationship between the increasing use of artificial nitrogen fertilizer and the increasing yields of various crops in the U.S. from 1866 onward. The strong correlation between nitrogen fertilization and crop yields is striking. Figure 13 shows a similar correspondence worldwide between the use of nitrogen fertilizer and the yield of cereal crops. Of course, changes in complicated processes cannot be ascribed to a single cause. Also of considerable importance in crop production are other mineral fertilizers like phosphorus and potassium, better plant varieties like hybrid corn and increasing concentrations of atmospheric CO2. However, the crucial role of nitrogen fertilizers in tremendously increasing crop yields is unmistakable.

Figure 14 – Crop yields for corn, wheat, barley, grass hay, oats and rye in the United States.

Figure 13 – Annual world production of nitrogen fertilizer used in agriculture (blue, in Tg) and world production of all cereal crops (orange, in Gigatonnes) from 1961 to 2019

Feeding a world population that is growing at a rate of 1.1 percent per year is no trivial matter. Devastating famines from the past have been kept at bay during the last century by the fundamental scientific developments noted above. At the moment many governments, under the influence of “green” pressure groups, exhibit a dangerous inclination to limit the use of nitrogen fertilizers to move farmers “back to nature” in order to save the world from “climate disaster.” In the Netherlands, the government is considering forcing large numbers of farmers out of business to supposedly prevent catastrophic warming from N2O emissions. As this new paper shows, N2O emissions will have a trivial effect on temperature increases. Farmers themselves, not government bureaucrats, should determine the optimum amounts of nitrogen fertilizer to maximize crop yields.

Agriculture free of artificial fertilizers, despite it being highly labor-intensive and producing very low yields, may be feasible for a small niche of the world population willing and able to pay for it. However, it is inconceivable that the growing masses, or even the current world population, can be fed without the intelligent, science-based use of nitrogen and other fertilizers.

‘’Green’’ illusions cannot feed billions of people.

Wheat with and without nitrogen fertilizer – Deli Chen – University of Melbourne

The Dirty Secrets inside the Black Box Climate Models


By Greg Chapman
“The world has less than a decade to change course to avoid irreversible ecological catastrophe, the UN warned today.” The Guardian Nov 28 2007
“It’s tough to make predictions, especially about the future.” Yogi Berra
Introduction
Global extinction due to global warming has been predicted more times than climate activist Leo DiCaprio has traveled by private jet. But where do these predictions come from? If you thought they were just calculated from the simple, well-known relationship between CO2 and solar energy spectrum absorption, you would only expect to see about a 0.5°C increase from pre-industrial temperatures as a result of CO2 doubling, due to the logarithmic nature of the relationship.
Figure 1: Incremental warming effect of CO2 alone [1]
The runaway 3-6°C and higher temperature increases predicted by models depend on coupled feedbacks from many other factors, including water vapour (the most important greenhouse gas), albedo (the proportion of energy reflected from the surface – e.g. more/less ice or clouds, more/less reflection) and aerosols, just to mention a few, which theoretically may amplify the small incremental CO2 heating effect. Because of the complexity of these interrelationships, the outcomes can't be directly calculated, so the only way to make predictions is with climate models.
The purpose of this article is to explain to the non-expert how climate models work, rather than to focus on the issues underlying the actual climate science, since the models are the primary ‘evidence’ used by those claiming a climate crisis. The first problem, of course, is that no model forecast is evidence of anything. It’s just a forecast, so it’s important to understand how the forecasts are made, the assumptions behind them and their reliability.
How do Climate Models Work?
In order to represent the earth in a computer model, a grid of cells is constructed from the bottom of the ocean to the top of the atmosphere. Within each cell, the component properties, such as temperature, pressure, solids, liquids and vapour, are uniform.
The size of the cells varies between models and within models. Ideally, they should be as small as possible, as properties vary continuously in the real world, but the resolution is constrained by computing power. Typically, a cell covers an area of around 100 km × 100 km, even though there is considerable atmospheric variation over such distances, requiring each of the physical properties within the cell to be averaged to a single value. This introduces an unavoidable error into the models even before they start to run.
The number of cells in a model varies, but the typical order of magnitude is around 2 million.
Figure 2: Typical grid used in climate models [2]
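To make the idea concrete, here is a minimal sketch of the kind of cell-based representation being described. The fields and the toy resolution are illustrative assumptions; real general circulation models carry far richer state in each cell and use on the order of millions of cells.

```python
from dataclasses import dataclass

@dataclass
class Cell:
    temperature: float   # K: one averaged value for the whole ~100 km x 100 km cell
    pressure: float      # hPa: likewise a single cell-wide average
    humidity: float      # kg of water vapour per kg of air

# A toy 3-D grid: longitude x latitude x vertical level, each entry one Cell.
NX, NY, NZ = 36, 18, 5   # ~10-degree cells and 5 levels: far coarser than real models
grid = [[[Cell(288.0, 1013.25, 0.01) for _ in range(NZ)]
         for _ in range(NY)]
        for _ in range(NX)]

print(f"Total cells: {NX * NY * NZ}")   # real models are on the order of 2 million
```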

Once the grid has been constructed, the component properties of each these cells must be determined. There aren’t, of course, 2 million data stations in the atmosphere and ocean. The current number of data points is around 10,000 (ground weather stations, balloons and ocean buoys), plus we have satellite data since 1978, but historically the coverage is poor. As a result, when initialising a climate model starting 150 years ago, there is almost no data available for most of the land surface, poles and oceans, and nothing above the surface or in the ocean depths. This should be understood to be a major concern.
Figure 3: Global weather stations circa 1885 [3]

Once initialised, the model goes through a series of timesteps. At each step, for each cell, the properties of the adjacent cells are compared. If one such cell is at a higher pressure, fluid will flow from that cell to the next. If it is at higher temperature, it warms the next cell (whilst cooling itself). This might cause ice to melt or water to evaporate, but evaporation has a cooling effect. If polar ice melts, there is less energy reflected that causes further heating. Aerosols in the cell can result in heating or cooling and an increase or decrease in precipitation, depending on the type.
Increased precipitation can increase plant growth as does increased CO2. This will change the albedo of the surface as well as the humidity. Higher temperatures cause greater evaporation from oceans which cools the oceans and increases cloud cover. Climate models can’t model clouds due to the low resolution of the grid, and whether clouds increase surface temperature or reduce it, depends on the type of cloud.
It’s complicated! Of course, this all happens in 3 dimensions and to every cell resulting in considerable feedback to be calculated at each timestep.
The timesteps can be as short as half an hour. Remember, the terminator, the point at which day turns into night, travels across the earth’s surface at about 1700 km/hr at the equator, so even half hourly timesteps introduce further error into the calculation, but again, computing power is a constraint.
While the changes in temperatures and pressures between cells are calculated according to the laws of thermodynamics and fluid mechanics, many other changes aren’t calculated. They rely on parameterisation. For example, the albedo forcing varies from icecaps to Amazon jungle to Sahara desert to oceans to cloud cover and all the reflectivity types in between. These properties are just assigned and their impacts on other properties are determined from lookup tables, not calculated. Parameterisation is also used for cloud and aerosol impacts on temperature and precipitation. Any important factor that occurs on a subgrid scale, such as storms and ocean eddy currents must also be parameterised with an averaged impact used for the whole grid cell. Whilst the effects of these factors are based on observations, the parameterisation is far more a qualitative rather than a quantitative process, and often described by modelers themselves as an art, that introduces further error. Direct measurement of these effects and how they are coupled to other factors is extremely difficult and poorly understood.
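The timestep logic and the role of parameterisation can be sketched in a few lines. Everything here, the surface types, albedo values, exchange coefficient and heating term, is invented for illustration; the point is only that some quantities are calculated from neighbouring cells while others are simply looked up.

```python
# Toy 1-D row of cells: heat exchange with neighbours is "calculated",
# while absorbed sunlight depends on an albedo taken from a lookup table
# (the parameterisation step described above). All values are invented.
ALBEDO_LOOKUP = {"ice": 0.6, "ocean": 0.06, "forest": 0.15, "desert": 0.4}

surface = ["ice", "ocean", "ocean", "forest", "desert"]
temps = [250.0, 280.0, 285.0, 290.0, 300.0]   # K, one averaged value per cell
K_EXCHANGE = 0.1                              # fraction of the difference moved per step
SOLAR_IN = 0.5                                # toy per-step heating term

for step in range(3):
    new_temps = temps[:]
    for i, t in enumerate(temps):
        # parameterised heating: looked up, not calculated
        new_temps[i] += SOLAR_IN * (1 - ALBEDO_LOOKUP[surface[i]])
        # calculated exchange: warmer neighbours warm this cell, cooler ones cool it
        for j in (i - 1, i + 1):
            if 0 <= j < len(temps):
                new_temps[i] += K_EXCHANGE * (temps[j] - t)
    temps = new_temps
    print(f"step {step + 1}: " + ", ".join(f"{t:.1f}" for t in temps))
```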
Within the atmosphere in particular, there can be sharp boundary layers that cause the models to crash. These sharp variations have to be smoothed.
Energy transfers between atmosphere and ocean are also problematic. The most energetic heat transfers occur at subgrid scales that must be averaged over much larger areas.
Cloud formation depends on processes at the millimetre level and is just impossible to model. Clouds can both warm as well as cool. Any warming increases evaporation (which cools the surface), resulting in an increase in cloud particulates. Aerosols also affect cloud formation at a micro level. All these effects must be averaged in the models.
When the grid approximations are combined with every timestep, further errors are introduced, and with half-hour timesteps over 150 years, that’s over 2.6 million timesteps! Unfortunately, these errors aren’t self-correcting. Instead, this numerical dispersion accumulates over the model run, but there is a technique that climate modelers use to overcome this, which I describe shortly.
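The timestep count quoted above is simple arithmetic, assuming half-hour steps run continuously over the full 150-year period:

```latex
150\ \mathrm{yr} \times 365.25\ \tfrac{\mathrm{days}}{\mathrm{yr}} \times 48\ \tfrac{\mathrm{steps}}{\mathrm{day}} \approx 2.63 \times 10^{6}\ \mathrm{steps}
```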
Figure 4: How grid cells interact with adjacent cells [4]

Model Initialisation
After the construction of any type of computer model, there is an initialisation process whereby the model is checked to see whether the starting values in each of the cells are physically consistent with one another. For example, if you are modelling a bridge to see whether the design will withstand high winds and earthquakes, you make sure that before you impose any external forces onto the model structure other than gravity, it meets all the expected stresses and strains of a static structure. After all, if the initial conditions of your model are incorrect, how can you rely on it to predict what will happen when external forces are imposed on the model?
Fortunately, for most computer models, the properties of the components are quite well known and the initial condition is static, the only external force being gravity. If your bridge doesn’t stay up on initialisation, there is something seriously wrong with either your model or design!
With climate models, we have two problems with initialisation. Firstly, as previously mentioned, we have very little data for time zero, whenever we choose that to be. Secondly, at time zero, the model is not in a static steady state, as is the case for pretty much every other computer model that has been developed. At time zero, there could be a blizzard in Siberia, a typhoon in Japan, monsoons in Mumbai and a heatwave in southern Australia, not to mention the odd volcanic explosion, which could all be gone in a day or so.
There is never a steady state point in time for the climate, so it’s impossible to validate climate models on initialisation.
The best climate modelers can hope for is that their bright shiny new model doesn’t crash in the first few timesteps.
The climate system is chaotic, which essentially means any model will be a poor predictor of the future – you can’t even make a model of a lottery ball machine (a comparatively much simpler and smaller interacting system) and use it to predict the outcome of the next draw.
So, if climate models are populated with little more than educated guesses instead of actual observational data at time zero, and errors accumulate with every timestep, how do climate modelers address this problem?
History matching
If the system that’s being computer modelled has been in operation for some time, you can use that data to tune the model and then start the forecast before that period finishes to see how well it matches before making predictions. Unlike other computer modelers, climate modelers call this ‘hindcasting’ because it doesn’t sound like they are manipulating the model parameters to fit the data.
The theory is that even though climate model construction has many flaws, such as large grid sizes, patchy data of dubious quality in the early years, and poorly understood physical phenomena driving the climate that have been parameterised, you can tune the model during hindcasting, within the parameter uncertainties, to overcome all these deficiencies.
While it’s true that you can tune the model to get a reasonable match with at least some components of history, the match isn’t unique.
When computer models were first being used last century, the famous mathematician John von Neumann said:
“with four parameters I can fit an elephant, with five I can make him wiggle his trunk”
In climate models there are hundreds of parameters that can be tuned to match history. What this means is there is an almost infinite number of ways to achieve a match. Yes, many of these are non-physical and are discarded, but there is no unique solution as the uncertainty on many of the parameters is large and as long as you tune within the uncertainty limits, innumerable matches can still be found.
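The non-uniqueness is easy to demonstrate with a toy over-parameterised model; the synthetic data and the 6-parameter polynomial below are invented purely for illustration and have nothing to do with any actual climate model.

```python
import numpy as np

rng = np.random.default_rng(0)
t_hist = np.linspace(0, 1, 5)                        # sparse "historical" record
y_hist = 0.5 * t_hist + 0.1 * rng.standard_normal(5)

# Over-parameterised model: degree-5 polynomial (6 parameters, only 5 data points).
A = np.vander(t_hist, 6, increasing=True)

# Fit 1: the minimum-norm least-squares solution.
p1, *_ = np.linalg.lstsq(A, y_hist, rcond=None)

# Fit 2: add a null-space component; the parameters change but the fit to the
# historical points is exactly unchanged.
_, _, Vt = np.linalg.svd(A)
p2 = p1 + 5.0 * Vt[-1]

t_future = np.array([1.5, 2.0])                      # the "forecast" period
F = np.vander(t_future, 6, increasing=True)
print("history misfit, fit 1:", np.abs(A @ p1 - y_hist).max())   # ~0
print("history misfit, fit 2:", np.abs(A @ p2 - y_hist).max())   # ~0
print("forecast, fit 1:", F @ p1)
print("forecast, fit 2:", F @ p2)                    # very different forecasts
```

Both parameter sets match the history equally well, yet they diverge as soon as the model is pushed beyond the matched period, which is exactly the problem the shotgun analogy below describes.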
An additional flaw in the history matching process is the length of some of the natural cycles. For example, ocean circulation takes place over hundreds of years, and we don’t even have 100 years of data with which to match it.
In addition, it’s difficult to history match all climate variables. While global average surface temperature is the primary objective of the history matching process, other data, such as tropospheric temperatures, regional temperatures and precipitation, and diurnal minimums and maximums, are poorly matched.
Even so, can the history matching of the primary variable, average global surface temperature, constrain the accumulating errors that inevitably occur with each model timestep?
Forecasting
Consider a shotgun. When the trigger is pulled, the pellets from the cartridge travel down the barrel, but there is also lateral movement of the pellets. The purpose of the shotgun barrel is to dampen the lateral movements and to narrow the spread when the pellets leave the barrel. It’s well known that shotguns have limited accuracy over long distances and there will be a shot pattern that grows with distance.  The history match period for a climate model is like the barrel of the shotgun. So what happens when the model moves from matching to forecasting mode?
Figure 5: IPCC models in forecast mode for the Mid-Troposphere vs Balloon and Satellite observations [5]
Like the shotgun pellets leaving the barrel, numerical dispersion takes over in the forecasting phase. Each of the 73 models in Figure 5 has been history matched, but outside the constraints of the matching period, they quickly diverge.
Now at most only one of these models can be correct, but more likely, none of them are. If this were a real scientific process, the hottest two thirds of the models would be rejected by the Intergovernmental Panel on Climate Change (IPCC), and further study would be focused on the models closest to the observations. But they don’t do that, for a number of reasons.
Firstly, if they reject most of the models, there would be outrage amongst the climate scientist community, especially from the rejected teams due to their subsequent loss of funding. More importantly, the so called 97% consensus would instantly evaporate.
Secondly, once the hottest models were rejected, the forecast for 2100 would be about a 1.5°C increase (due predominantly to natural warming) and there would be no panic, and the gravy train would end.
So how should the IPCC reconcile this wide range of forecasts?
Imagine you wanted to know the value of bitcoin 10 years from now so you can make an investment decision today. You could consult an economist, but we all know how useless their predictions are. So instead, you consult an astrologer, but you worry whether you should bet all your money on a single prediction. Just to be safe, you consult 100 astrologers, but they give you a very wide range of predictions. Well, what should you do now? You could do what the IPCC does, and just average all the predictions.
You can’t improve the accuracy of garbage by averaging it.
An Alternative Approach
Climate modelers claim that a history match isn’t possible without including CO2 forcing. This may be true using the approach described here, with its many approximations, only tuning the model to a single benchmark (surface temperature) and ignoring deviations from others (such as tropospheric temperature), but analytic (as opposed to numeric) models have achieved matches without CO2 forcing. These are models based purely on historic climate cycles that identify the harmonics using a mathematical technique of signal analysis, which deconstructs long- and short-term natural cycles of different periods and amplitudes without considering changes in CO2 concentration.
In Figure 6, a comparison is made between the IPCC predictions and a prediction from just one analytic harmonic model that doesn’t depend on CO2 warming. A match to history can be achieved through harmonic analysis, and it provides a much more conservative prediction that correctly forecasts the current pause in temperature increase, unlike the IPCC models. The purpose of this example isn’t to claim that this model is more accurate (it’s just another model), but to dispel the myth that there is no way history can be explained without anthropogenic CO2 forcing and to show that it’s possible to explain the changes in temperature with natural variation as the predominant driver.
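For readers curious what "identifying the harmonics" looks like in practice, here is a minimal sketch of the idea: with the candidate periods fixed, the amplitude and phase of each cycle fall out of an ordinary linear least-squares fit. The synthetic "temperature" series and the 60- and 20-year periods are assumptions of mine for illustration; this is not the harmonic model from reference [6] or its data.

```python
import numpy as np

years = np.arange(1880, 2023)
rng = np.random.default_rng(1)
# Synthetic record: a weak trend plus a 60-year oscillation plus noise.
temps = (0.005 * (years - 1880)
         + 0.1 * np.sin(2 * np.pi * years / 60.0)
         + 0.03 * rng.standard_normal(years.size))

periods = [60.0, 20.0]   # assumed natural-cycle periods, in years
cols = [np.ones_like(years, dtype=float), years - years.mean()]
for P in periods:
    cols += [np.sin(2 * np.pi * years / P), np.cos(2 * np.pi * years / P)]
X = np.column_stack(cols)

coef, *_ = np.linalg.lstsq(X, temps, rcond=None)
fitted = X @ coef
print("residual std:", round(float(np.std(temps - fitted)), 3))
for P, a, b in zip(periods, coef[2::2], coef[3::2]):
    print(f"{P:.0f}-year cycle amplitude: {np.hypot(a, b):.3f}")
```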
Figure 6: Comparison of the IPCC model predictions with those from a harmonic analytical model [6]

In summary:
Climate models can’t be validated on initialisation due to lack of data and a chaotic initial state.
Model resolutions are too low to represent many climate factors.
Many of the forcing factors are parameterised as they can’t be calculated by the models.
Uncertainties in the parameterisation process mean that there is no unique solution to the history matching.
Numerical dispersion beyond the history matching phase results in a large divergence in the models.
The IPCC refuses to discard models that don’t match the observed data in the prediction phase – which is almost all of them.
The question now is, do you have the confidence to invest trillions of dollars and reduce standards of living for billions of people, to stop climate model predicted global warming or should we just adapt to the natural changes as we always have?
Greg Chapman  is a former (non-climate) computer modeler.
Footnotes
[1] https://www.adividedworld.com/scientific-issues/thermodynamic-effects-of-atmospheric-carbon-dioxide-revisited/
[2] https://serc.carleton.edu/eet/envisioningclimatechange/part_2.html
[3] https://climateaudit.org/2008/02/10/historical-station-distribution/
[4] http://www.atmo.arizona.edu/students/courselinks/fall16/atmo336/lectures/sec6/weather_forecast.html
[5] https://www.drroyspencer.com/2013/06/still-epic-fail-73-climate-models-vs-measurements-running-5-year-means/
Whilst climate models are tuned to surface temperatures, they predict a tropospheric hotspot that doesn’t exist. This on its own should invalidate the models.
[6] https://wattsupwiththat.com/2012/01/09/scaffeta-on-his-latest-paper-harmonic-climate-model-versus-the-ipcc-general-circulation-climate-models/