A Technical Study of Relationships among Solar Flux, Water, and Other Gases in the Upper Atmosphere, Using the October 2022 NASA and NOAA Data

From the attached report on climate change for October 2022 data, we have two charts showing how much the global temperature has actually risen since we began measuring CO2 in the atmosphere in 1958. To show this graphically, Chart 8a was constructed by plotting CO2 as a percentage increase from its first measurement in 1958 (the black plot, scale on the left); it shows CO2 rising by about 32.4% from 1958 to October 2022. That is a very large change, as anyone would have to agree. Now how about temperature? When we look at the percentage change in temperature, also from 1958, using kelvin (which does measure the change in heat), we find that the change in global temperature (heat) is almost unmeasurable at less than 0.4%.
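The percentage arithmetic above can be checked in a few lines; the 1958 baseline and 2022 values below are approximate figures assumed for illustration, not taken from the report itself:

```python
# Approximate values (assumptions for illustration, not the report's exact data)
co2_1958 = 315.0       # ppm, Mauna Loa, 1958
co2_2022 = 417.0       # ppm, October 2022
temp_1958_k = 287.1    # global mean temperature in kelvin (~13.9 C)
temp_rise_k = 1.1      # approximate rise since 1958, in kelvin

co2_pct = (co2_2022 - co2_1958) / co2_1958 * 100
temp_pct = temp_rise_k / temp_1958_k * 100

print(f"CO2 change since 1958:  {co2_pct:.1f}%")   # ~32.4%
print(f"Temp change since 1958: {temp_pct:.2f}%")  # under 0.4%
```

The contrast between the two percentages is entirely a consequence of the kelvin baseline: dividing a ~1 K rise by ~287 K necessarily yields a small fraction.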

As you can see, the increase in energy (heat) is not visually observable in this chart; hence the need for Chart 8, which shows the minuscule increase in thermal energy reported by NASA in relation to the change in CO2, using a different scale.

This is Chart 8, which is the same as Chart 8a except for the scales. The scale on the right side had to be expanded ten times (the range is 50% on the left and 5% on the right) to show the plot in any detail on the same chart. The red plot, starting in 1958, shows that the thermal energy in the Earth's atmosphere increased by 0.40%, while CO2 increased by 32.4%, about 80 times the increase in temperature. So is there really a meaningful link between them that would give us a major problem?

Based on these trends (determined by Excel, not me), in 2028 CO2 will be 428 ppm and temperature will be a bit over 15.0°C, and in 2038 CO2 will be 458 ppm and temperature will be 15.6°C.

The NOAA and NASA numbers tell us the true story of the changes in the planet's atmosphere.

The full 40-page report explains how these charts were developed.

Methane is saturated

Methane: Much Ado About Nothing

David Archibald

Thanks to Modtran, an online program maintained by the University of Chicago, we know that carbon dioxide’s heating effect is logarithmic.  The first 20 ppm of carbon dioxide heats the atmosphere by 1.5°C. At the current concentration of 412 ppm each extra 100 ppm is only good for 0.1°C. Carbon dioxide is tuckered out as a greenhouse gas.
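The logarithmic falloff can be sketched numerically. The single-logarithm fit below is calibrated (an assumption) to the 0.1°C-per-100-ppm figure quoted above; Modtran itself computes the full radiative transfer, so this is only an illustration of the shape of the curve:

```python
import math

# Single-logarithm fit, calibrated (an assumption) so that +100 ppm at
# 412 ppm yields the 0.1 C quoted in the text.
a = 0.1 / math.log(512 / 412)   # degrees C per e-folding of CO2 concentration

def extra_warming(c_from_ppm, c_to_ppm):
    """Warming from raising CO2 between the two concentrations on this fit."""
    return a * math.log(c_to_ppm / c_from_ppm)

print(round(extra_warming(412, 512), 3))  # 0.1   (first extra 100 ppm)
print(round(extra_warming(512, 612), 3))  # 0.082 (next 100 ppm: less again)
```

Each successive 100 ppm adds less warming than the last, which is the "tuckered out" behaviour described above.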

But what of methane, which is the excuse du jour for wrecking livelihoods, towns, industries and whole economies? Methane, with a half-life of nine years in the atmosphere, is carbon dioxide's little brother in the pantheon of the satanic gases.

Witness this headline about antics in New Zealand:

We return to Modtran to see what that oracle will tell us about methane’s heating effect. This is the model output converted to degrees C:

While not as pronounced as carbon dioxide’s drop off in heating effect with concentration, the effect is still there such that at the current concentration of 1.9 ppm, each extra 0.1 ppm heats the atmosphere by 0.05°C. With the methane concentration currently rising by 0.1 ppm every 20 years, the atmosphere will get an extra 0.2°C of heating by 2100. The reader can decide whether or not he/she/it need be worried by this projection.
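That projection is simple arithmetic on the rates quoted above (taking 2022 as the starting year, an assumption):

```python
# Rates as quoted in the text
rise_ppm_per_year = 0.1 / 20        # methane rising ~0.1 ppm every 20 years
warming_per_ppm = 0.05 / 0.1        # ~0.05 C per extra 0.1 ppm at current levels

years_to_2100 = 2100 - 2022         # assumes 2022 as the starting point
warming_by_2100 = rise_ppm_per_year * years_to_2100 * warming_per_ppm
print(round(warming_by_2100, 1))    # ~0.2 C
```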

But methane has only been going up at that rate for a few years. The atmospheric concentration of carbon dioxide has been measured since 1958; methane measurements only started in the mid-1980s, and this is what the data look like:

There is a steep rise at the beginning but then from the early 1990s to 2010 the concentration went sideways for nigh on 20 years. The Cape Grim concentration is particularly flat. NASA has helpfully provided a graph of rate-of-change:

There are three years – 2000, 2001 and 2004 – in which the methane level went down. Let's disregard the noise and look at the bigger picture: the rate of increase declined for 20 years and then rose for 20 years. A few more decades of observations might show whether or not this is cyclic.

But farms that have been going for generations might be wiped out by unnecessary concern about methane while we are waiting for that data.  So we will make a stab at the underlying science. Two factors are likely involved.

Firstly, plant productivity has been going up with the increase in the atmospheric carbon dioxide concentration. Parts of the West Australian desert now have 30% more plant matter than a scant 30 years ago. The same is true of the vast stretch of forest and tundra across northern Russia. Unless this vegetation is consumed by fire, its fate is to be the source of methane via termites or rotting. So the hand of Man is not necessarily involved in a rising methane level.

Secondly, the Sun was more active in the second half of the 20th century than it had been in the previous eleven thousand years. That stopped in 2006 with the end of the Modern Warm Period. The Sun has become less active, as shown by this graph of solar extreme ultraviolet produced by the University of Bremen:

Our current solar cycle, 25, is tracking lower than any of the previous four. The natural enemy of methane is ozone, the most reactive gas in nature. Ozone is produced in the upper atmosphere by radiation with wavelengths at or below 242 nanometres acting on oxygen. So less ozone has been produced since 2006, which is when the atmospheric methane level stopped falling and started rising again.

Case closed. Nothing to see here. Move along. Only idiots would get hung up on such a minuscule effect that we can’t change anyway. There are real problems coming at humanity that will take all our attention. Destroying the production base in the interim will only make our situation worse.

David Archibald is the author of The Anticancer Garden in Australia

Nitrous Oxide and Climate

From the CO2 Coalition

Download the entire PDF: Nitrous Oxide

Gregory R. Wrightstone

Nitrous oxide (N2O) has now joined carbon dioxide (CO2) and methane (CH4) in the climate alarm proponents' pantheon of anthropogenic "demon" gases. In their view, increasing concentrations of these molecules are leading to unusual and unprecedented warming and will, in turn, lead to catastrophic consequences for both our ecosystems and humanity.

Countries around the world are in the process of greatly reducing or eliminating the use of nitrogen fertilizers based on heretofore poorly understood properties of nitrous oxide. Reductions of N2O emissions are being proposed in Canada by 40 to 45 percent and in the Netherlands by up to 50 percent. Sri Lanka’s complete ban on fertilizer in 2021 led to the total collapse of their primarily agricultural economy.

To provide critically needed information on N2O, the CO2 Coalition has published an important and timely paper evaluating the warming effect of the gas and its role in the nitrogen cycle. Armed with this vital information, policymakers can now proceed to make informed decisions about the costs and benefits of mandated reductions of this beneficial molecule.

This new paper joins previous CO2 Coalition reports on other greenhouse gases, carbon dioxide and methane.

Key takeaways from the paper:

  • At current rates, a doubling of N2O would occur in more than 400 years.
  • Atmospheric warming by N2O is estimated to be 0.064°C per century.
  • Increasing crop production requires continued application of synthetic nitrogen fertilizer in order to feed a growing population.

N2O and its warming potential

The first portion of the paper is highly technical and reviews the greenhouse warming potential of N2O. Like CO2, nitrous oxide is a linear, chemically inert molecule that absorbs infrared radiation. However, N2O has a longer lifetime in the atmosphere than CH4 because it is more resistant to chemical or physical breakdown. Increasing atmospheric concentrations of N2O likely contribute some amount of warming to the Earth’s atmosphere. To assess how much is likely, the authors consider well-validated radiation transfer theory and available experimental evidence rather than very complex general circulation climate models, which have proven unreliable.

The current N2O concentration at sea level is 0.34 parts per million (ppm), increasing at a rate of about 0.00085 ppm/year. This rate of increase has been steady since 1985 with no indication of acceleration. A comparison with CO2, at a present concentration of approximately 420 ppm, is in order. For current concentrations of greenhouse gases, the radiative forcing per added N2O molecule is about 230 times larger than the forcing per added CO2 molecule. This sounds bad, but what are the facts?

The rate of increase of CO2 molecules is approximately 2.5 ppm/year, or about 3,000 times larger than the rate of increase of N2O molecules. So the contribution of nitrous oxide to the annual increase in forcing is 230/3,000, or about 1/13 that of CO2. If the main greenhouse gases CO2, CH4 and N2O have contributed about 0.1°C/decade of the warming of the Earth observed over the past few decades, this would correspond to about 0.00064 degrees Celsius per year, or 0.064°C per century, of warming from N2O, an amount that is barely observable. At the present rate of increase, a doubling of the N2O concentration would take more than four centuries and, according to Figure 5 of the paper, the increase in warming would be imperceptibly small.
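The ratios in this paragraph are easy to reproduce from the quoted figures (all approximate):

```python
# Figures as quoted in the text (approximate)
n2o_ppm = 0.34            # current sea-level N2O concentration, ppm
n2o_rate = 0.00085        # ppm/year
co2_rate = 2.5            # ppm/year
forcing_ratio = 230       # forcing per added N2O molecule vs per added CO2

molecule_rate_ratio = co2_rate / n2o_rate        # ~3,000
n2o_share = forcing_ratio / molecule_rate_ratio  # ~1/13 of CO2's added forcing
doubling_years = n2o_ppm / n2o_rate              # time to double at current rate

print(round(molecule_rate_ratio))  # 2941, i.e. "about 3,000"
print(round(1 / n2o_share))        # 13
print(round(doubling_years))       # 400
```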

The nitrogen cycle

Along with water and carbon, nitrogen is of key importance to plant life and the right proportion of it is critical for optimal growth. Carbon is available to plants from CO2 in the atmosphere; nitrogen must be made available in the soil. To this end various microorganisms and plant species, with the aid of symbiotic microorganisms, fix diatomic nitrogen (N2) from the atmosphere into the soil, where it enters complicated cycles of nitrogen-containing compounds that can move more or less freely in soil and serve many plants. Through the activity of microorganisms (recent work shows that archaea are of comparable importance to bacteria) the nitrogen cycle ends by releasing N2, and to a much lesser extent N2O, back into the atmosphere. Because of losses to the atmosphere and leaching to waterways, soil nitrogen needs to be replenished continuously to optimize plant growth.

Agricultural and natural vegetative growth contribute comparable amounts to the nitrogen cycle. Optimum crop growth requires large amounts of nitrogen. Some nitrogen is provided by animal manure and decaying plants. However, these sources of nitrogen are insufficient for the needs of agriculture to feed a growing world population.

Figure 14 from the paper compares the relationship between the increasing use of artificial nitrogen fertilizer and the increasing yields of various crops in the U.S. from 1866 onward. The strong correlation between nitrogen fertilization and crop yields is striking. Figure 13 shows a similar correspondence worldwide between the use of nitrogen fertilizer and the yield of cereal crops. Of course, changes in complicated processes cannot be ascribed to a single cause. Also of considerable importance in crop production are other mineral fertilizers like phosphorus and potassium, better plant varieties like hybrid corn and increasing concentrations of atmospheric CO2. However, the crucial role of nitrogen fertilizers in tremendously increasing crop yields is unmistakable.

Figure 14 – Crop yields for corn, wheat, barley, grass hay, oats and rye in the United States.

Figure 13 – Annual world production of nitrogen fertilizer used in agriculture (blue, in Tg)
and world production of all cereal crops (orange, in Gigatonnes) from 1961 to 2019

Feeding a world population that is growing at a rate of 1.1 percent per year is no trivial matter. Devastating famines from the past have been kept at bay during the last century by the fundamental scientific developments noted above. At the moment many governments, under the influence of "green" pressure groups, exhibit a dangerous inclination to limit the use of nitrogen fertilizers to move farmers "back to nature" in order to save the world from "climate disaster." In the Netherlands, the government is considering forcing large numbers of farmers out of business to supposedly prevent catastrophic warming from N2O emissions. As this new paper shows, N2O emissions will have a trivial effect on temperature increases. Farmers themselves, not government bureaucrats, should determine the optimum amounts of nitrogen fertilizer to maximize crop yields.

Agriculture free of artificial fertilizers, despite being highly labor-intensive and producing very low yields, may be feasible for a small niche of the world population willing and able to pay for it. However, it is inconceivable that the growing masses, or even the current world population, can be fed without the intelligent, science-based use of nitrogen and other fertilizers.

"Green" illusions cannot feed billions of people.

Wheat with and without nitrogen fertilizer – Deli Chen – University of Melbourne

The Dirty Secrets inside the Black Box Climate Models

By Greg Chapman
“The world has less than a decade to change course to avoid irreversible ecological catastrophe, the UN warned today.” The Guardian Nov 28 2007
“It’s tough to make predictions, especially about the future.” Yogi Berra
Global extinction due to global warming has been predicted more times than climate activist Leonardo DiCaprio has traveled by private jet. But where do these predictions come from? If you thought they were simply calculated from the well-known relationship between CO2 and absorption of the solar energy spectrum, you would expect only about a 0.5°C increase from pre-industrial temperatures as a result of CO2 doubling, due to the logarithmic nature of the relationship.
Figure 1: Incremental warming effect of CO2 alone [1]
The runaway 3–6°C and higher temperature-increase predictions depend on coupled feedbacks from many other factors, including water vapour (the most important greenhouse gas), albedo (the proportion of energy reflected from the surface – e.g. more/less ice or clouds means more/less reflection) and aerosols, to mention just a few, which theoretically may amplify the small incremental CO2 heating effect. Because of the complexity of these interrelationships, the effects can't be directly calculated, so the only way to make predictions is with climate models.
The purpose of this article is to explain to the non-expert how climate models work, rather than to focus on the issues underlying the actual climate science, since the models are the primary 'evidence' used by those claiming a climate crisis. The first problem, of course, is that no model forecast is evidence of anything. It's just a forecast, so it's important to understand how the forecasts are made, the assumptions behind them and their reliability.
How do Climate Models Work?
In order to represent the earth in a computer model, a grid of cells is constructed from the bottom of the ocean to the top of the atmosphere. Within each cell, the component properties, such as temperature, pressure, solids, liquids and vapour, are uniform.
The size of the cells varies between models and within models. Ideally, they should be as small as possible, as properties vary continuously in the real world, but the resolution is constrained by computing power. Typically, the cell area is around 100 km × 100 km, even though there is considerable atmospheric variation over such distances, requiring each of the physical properties within the cell to be averaged to a single value. This introduces an unavoidable error into the models even before they start to run.
The number of cells in a model varies, but the typical order of magnitude is around 2 million.
Figure 2: Typical grid used in climate models [2]
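The 2-million figure follows from the grid geometry; the vertical-level count below is an assumed, typical value rather than that of any particular model:

```python
# Rough cell count: ~100 km x 100 km columns over the Earth's surface,
# times an assumed number of vertical levels (hypothetical figure).
earth_surface_km2 = 510e6        # Earth's surface area, km^2
cell_area_km2 = 100 * 100
vertical_levels = 40             # assumed; real models vary

columns = earth_surface_km2 / cell_area_km2   # ~51,000 columns
cells = columns * vertical_levels
print(f"{cells:,.0f} cells")     # ~2 million
```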

Once the grid has been constructed, the component properties of each of these cells must be determined. There aren't, of course, 2 million data stations in the atmosphere and ocean. The current number of data points is around 10,000 (ground weather stations, balloons and ocean buoys), plus we have satellite data since 1978, but historically the coverage is poor. As a result, when initialising a climate model starting 150 years ago, there is almost no data available for most of the land surface, poles and oceans, and nothing above the surface or in the ocean depths. This should be understood to be a major concern.
Figure 3: Global weather stations circa 1885 [3]

Once initialised, the model goes through a series of timesteps. At each step, for each cell, the properties of the adjacent cells are compared. If one such cell is at a higher pressure, fluid will flow from that cell to the next. If it is at a higher temperature, it warms the next cell (whilst cooling itself). This might cause ice to melt or water to evaporate, and evaporation has a cooling effect. If polar ice melts, less energy is reflected, which causes further heating. Aerosols in the cell can result in heating or cooling, and an increase or decrease in precipitation, depending on the type.
Increased precipitation can increase plant growth as does increased CO2. This will change the albedo of the surface as well as the humidity. Higher temperatures cause greater evaporation from oceans which cools the oceans and increases cloud cover. Climate models can’t model clouds due to the low resolution of the grid, and whether clouds increase surface temperature or reduce it, depends on the type of cloud.
It’s complicated! Of course, this all happens in 3 dimensions and to every cell resulting in considerable feedback to be calculated at each timestep.
The timesteps can be as short as half an hour. Remember, the terminator, the point at which day turns into night, travels across the earth’s surface at about 1700 km/hr at the equator, so even half hourly timesteps introduce further error into the calculation, but again, computing power is a constraint.
While the changes in temperatures and pressures between cells are calculated according to the laws of thermodynamics and fluid mechanics, many other changes aren’t calculated. They rely on parameterisation. For example, the albedo forcing varies from icecaps to Amazon jungle to Sahara desert to oceans to cloud cover and all the reflectivity types in between. These properties are just assigned and their impacts on other properties are determined from lookup tables, not calculated. Parameterisation is also used for cloud and aerosol impacts on temperature and precipitation. Any important factor that occurs on a subgrid scale, such as storms and ocean eddy currents must also be parameterised with an averaged impact used for the whole grid cell. Whilst the effects of these factors are based on observations, the parameterisation is far more a qualitative rather than a quantitative process, and often described by modelers themselves as an art, that introduces further error. Direct measurement of these effects and how they are coupled to other factors is extremely difficult and poorly understood.
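As a toy illustration of parameterisation, subgrid properties can be read from a lookup table rather than computed. The surface types and albedo values below are illustrative assumptions only, not drawn from any actual model:

```python
# Illustrative albedo lookup table (values are assumptions for this sketch):
# fraction of incoming shortwave radiation reflected by each surface type.
ALBEDO = {
    "ocean": 0.06,
    "forest": 0.15,
    "desert": 0.40,
    "ice": 0.60,
    "cloud": 0.50,
}

def cell_albedo(surface_fractions):
    """Area-weighted albedo of a grid cell from its surface-type fractions."""
    return sum(ALBEDO[kind] * frac for kind, frac in surface_fractions.items())

# A mostly-ocean cell with some ice and cloud cover
print(round(cell_albedo({"ocean": 0.7, "ice": 0.2, "cloud": 0.1}), 3))  # 0.212
```

The averaging itself is the point: whatever actually happens at subgrid scale is replaced by one assigned number per cell.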
Within the atmosphere in particular, there can be sharp boundary layers that cause the models to crash. These sharp variations have to be smoothed.
Energy transfers between atmosphere and ocean are also problematic. The most energetic heat transfers occur at subgrid scales that must be averaged over much larger areas.
Cloud formation depends on processes at the millimeter level and is simply impossible to model directly. Clouds can both warm and cool. Any warming increases evaporation (which cools the surface), resulting in an increase in cloud particulates. Aerosols also affect cloud formation at a micro level. All these effects must be averaged in the models.
When the grid approximations are combined with every timestep, further errors are introduced, and with half-hour timesteps over 150 years, that's over 2.6 million timesteps! Unfortunately, these errors aren't self-correcting. Instead, this numerical dispersion accumulates over the model run, but there is a technique that climate modelers use to overcome this, which I describe shortly.
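The timestep count is simple to verify:

```python
years = 150
steps_per_day = 48                     # half-hour timesteps
timesteps = years * 365.25 * steps_per_day
print(f"{timesteps:,.0f}")             # 2,629,800 -- over 2.6 million
```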
Figure 4: How grid cells interact with adjacent cells [4]

Model Initialisation
After the construction of any type of computer model, there is an initialisation process whereby the model is checked to see whether the starting values in each of the cells are physically consistent with one another. For example, if you are modelling a bridge to see whether the design will withstand high winds and earthquakes, you make sure that, before you impose any external forces onto the model structure other than gravity, it meets all the expected stresses and strains of a static structure. After all, if the initial conditions of your model are incorrect, how can you rely on it to predict what will happen when external forces are imposed on the model?
Fortunately, for most computer models, the properties of the components are quite well known and the initial condition is static, the only external force being gravity. If your bridge doesn’t stay up on initialisation, there is something seriously wrong with either your model or design!
With climate models, we have two problems with initialisation. Firstly, as previously mentioned, we have very little data for time zero, whenever we chose that to be. Secondly, at time zero, the model is not in a static steady state as is the case for pretty much every other computer model that has been developed. At time zero, there could be a blizzard in Siberia, a typhoon in Japan, monsoons in Mumbai and a heatwave in southern Australia, not to mention the odd volcanic explosion, which could all be gone in a day or so.
There is never a steady state point in time for the climate, so it’s impossible to validate climate models on initialisation.
The best climate modelers can hope for is that their bright shiny new model doesn’t crash in the first few timesteps.
The climate system is chaotic, which essentially means any model will be a poor predictor of the future – you can't even make a model of a lottery ball machine (a comparatively much simpler and smaller interacting system) and use it to predict the outcome of the next draw.
So, if climate models are populated with little more than educated guesses instead of actual observational data at time zero, and errors accumulate with every timestep, how do climate modelers address this problem?
History matching
If the system that’s being computer modelled has been in operation for some time, you can use that data to tune the model and then start the forecast before that period finishes to see how well it matches before making predictions. Unlike other computer modelers, climate modelers call this ‘hindcasting’ because it doesn’t sound like they are manipulating the model parameters to fit the data.
The theory is, that even though climate model construction has many flaws, such as large grid sizes, patchy data of dubious quality in the early years, and poorly understood physical phenomena driving the climate that has been parameterised, that you can tune the model during hindcasting within parameter uncertainties to overcome all these deficiencies.
While it’s true that you can tune the model to get a reasonable match with at least some components of history, the match isn’t unique.
When computer models were first being used last century, the famous mathematician John von Neumann said:
“with four parameters I can fit an elephant, with five I can make him wiggle his trunk”
In climate models there are hundreds of parameters that can be tuned to match history. What this means is there is an almost infinite number of ways to achieve a match. Yes, many of these are non-physical and are discarded, but there is no unique solution as the uncertainty on many of the parameters is large and as long as you tune within the uncertainty limits, innumerable matches can still be found.
An additional flaw in the history matching process is the length of some of the natural cycles. For example, ocean circulation takes place over hundreds of years, and we don’t even have 100 years of data with which to match it.
In addition, it's difficult to history match all climate variables. While global average surface temperature is the primary objective of the history matching process, other data, such as tropospheric temperatures, regional temperatures and precipitation, and diurnal minimums and maximums, are poorly matched.
Even so, can the history matching of the primary variable, average global surface temperature, constrain the accumulating errors that inevitably occur with each model timestep?
Consider a shotgun. When the trigger is pulled, the pellets from the cartridge travel down the barrel, but there is also lateral movement of the pellets. The purpose of the shotgun barrel is to dampen the lateral movements and to narrow the spread when the pellets leave the barrel. It’s well known that shotguns have limited accuracy over long distances and there will be a shot pattern that grows with distance.  The history match period for a climate model is like the barrel of the shotgun. So what happens when the model moves from matching to forecasting mode?
Figure 5: IPCC models in forecast mode for the Mid-Troposphere vs Balloon and Satellite observations [5]
Like the shotgun pellets leaving the barrel, numerical dispersion takes over in the forecasting phase. Each of the 73 models in Figure 5 has been history matched, but outside the constraints of the matching period, they quickly diverge.
Now at most one of these models can be correct, but more likely, none of them are. If this were a real scientific process, the hottest two thirds of the models would be rejected by the Intergovernmental Panel on Climate Change (IPCC), and further study focused on the models closest to the observations. But they don't do that, for a number of reasons.
Firstly, if they reject most of the models, there would be outrage amongst the climate scientist community, especially from the rejected teams due to their subsequent loss of funding. More importantly, the so called 97% consensus would instantly evaporate.
Secondly, once the hottest models were rejected, the forecast for 2100 would be about a 1.5°C increase (due predominantly to natural warming); there would be no panic, and the gravy train would end.
So how should the IPCC reconcile this wide range of forecasts?
Imagine you wanted to know the value of bitcoin 10 years from now so you can make an investment decision today. You could consult an economist, but we all know how useless their predictions are. So instead, you consult an astrologer, but you worry whether you should bet all your money on a single prediction. Just to be safe, you consult 100 astrologers, but they give you a very wide range of predictions. Well, what should you do now? You could do what the IPCC does, and just average all the predictions.
You can’t improve the accuracy of garbage by averaging it.
An Alternative Approach
Climate modelers claim that a history match isn't possible without including CO2 forcing. This may be true using the approach described here, with its many approximations, tuning the model to a single benchmark (surface temperature) and ignoring deviations from others (such as tropospheric temperature); but analytic (as opposed to numeric) models have achieved matches without CO2 forcing. These are models based purely on historic climate cycles that identify the harmonics using a mathematical technique of signal analysis, which deconstructs long- and short-term natural cycles of different periods and amplitudes without considering changes in CO2 concentration.
In Figure 6, a comparison is made between the IPCC predictions and a prediction from just one analytic harmonic model that doesn't depend on CO2 warming. A match to history can be achieved through harmonic analysis, and it provides a much more conservative prediction that correctly forecasts the current pause in temperature increase, unlike the IPCC models. The purpose of this example isn't to claim that this model is more accurate (it's just another model), but to dispel the myth that there is no way history can be explained without anthropogenic CO2 forcing, and to show that it's possible to explain the changes in temperature with natural variation as the predominant driver.
Figure 6: Comparison of the IPCC model predictions with those from a harmonic analytical model [6]
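The signal-analysis idea can be sketched in a few lines: fit a small set of sinusoids plus a linear trend to a series by least squares. The series and the periods below (60 and 20 "years") are synthetic and illustrative, not those of the published model:

```python
import numpy as np

# Synthetic "temperature" series: a slow trend, one long cycle, and noise
rng = np.random.default_rng(0)
t = np.arange(0.0, 120.0)                        # years
series = (0.005 * t
          + 0.2 * np.sin(2 * np.pi * t / 60)
          + 0.05 * rng.standard_normal(t.size))

# Design matrix: constant, trend, and a sin/cos pair for each assumed period
periods = [60.0, 20.0]
cols = [np.ones_like(t), t]
for p in periods:
    cols += [np.sin(2 * np.pi * t / p), np.cos(2 * np.pi * t / p)]
A = np.column_stack(cols)

# Least-squares fit, then reconstruct the fitted series
coef, *_ = np.linalg.lstsq(A, series, rcond=None)
fitted = A @ coef
print(round(float(np.corrcoef(series, fitted)[0, 1]), 2))  # close to 1
```

The catch, as with the numerical models, is that a good fit to history does not by itself prove the fitted cycles are physically real.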

In summary:
Climate models can't be validated on initialisation due to lack of data and a chaotic initial state.
Model resolutions are too low to represent many climate factors.
Many of the forcing factors are parameterised as they can’t be calculated by the models.
Uncertainties in the parameterisation process mean that there is no unique solution to the history matching.
Numerical dispersion beyond the history matching phase results in a large divergence in the models.
The IPCC refuses to discard models that don’t match the observed data in the prediction phase – which is almost all of them.
The question now is, do you have the confidence to invest trillions of dollars and reduce standards of living for billions of people, to stop climate model predicted global warming or should we just adapt to the natural changes as we always have?
Greg Chapman  is a former (non-climate) computer modeler.
[1] https://www.adividedworld.com/scientific-issues/thermodynamic-effects-of-atmospheric-carbon-dioxide-revisited/
[2] https://serc.carleton.edu/eet/envisioningclimatechange/part_2.html
[3] https://climateaudit.org/2008/02/10/historical-station-distribution/
[4] http://www.atmo.arizona.edu/students/courselinks/fall16/atmo336/lectures/sec6/weather_forecast.html
[5] https://www.drroyspencer.com/2013/06/still-epic-fail-73-climate-models-vs-measurements-running-5-year-means/
Whilst climate models are tuned to surface temperatures, they predict a tropospheric hotspot that doesn’t exist. This on its own should invalidate the models.
[6] https://wattsupwiththat.com/2012/01/09/scaffeta-on-his-latest-paper-harmonic-climate-model-versus-the-ipcc-general-circulation-climate-models/

Study finds CO2 lags Temperature


New Paradigm-Shifting Study Finds Annual CO2 Flux Is Driven By Temperature-Dependent Sea Ice Flux

By Kenneth Richard on 7 November 2022


Annual carbon dioxide (CO2) and methane (CH4) change rates lag behind changes in sea ice extent by 7 months and 5 months, respectively. This robust correlation is consistent with the conclusion that CO2 (and CH4) changes are responsive to temperature, not the other way around.

It is commonly believed that the annual "squiggle" of the Mauna Loa CO2 cycle is driven by hemispheric seasonal contrasts in terrestrial photosynthesis.

But scientists (Hambler and Henderson, 2022) instead find that it is variation in high-latitude temperatures, affecting sea ice extent, that dominates as the driver of the annual CO2 (and methane) fluxes, not photosynthesis.

They affirm that temperature (T) changes lead CO2 change rates by about 7-10 months, suggesting the direction of causality is T→CO2, not CO2→T.

Temperature also drives sea ice peak melt vs. accumulation rates. This cause-effect directionality can also be clearly seen in analyses of sea ice flux vs. annual CO2 rate changes.

“The phase relationship between temperature and carbon dioxide has been examined to help elucidate the possible direction of causality and the lags we find between timeseries are consistent with carbon dioxide being the response variable.”
“Carbon dioxide is very strongly correlated with sea ice dynamics, with the carbon dioxide rate at Mauna Loa lagging sea ice extent rate by 7 months. Methane is very strongly correlated with sea ice dynamics, with the global (and Mauna Loa) methane rate lagging sea ice extent rate by 5 months. Sea ice melt rate peaks in very tight synchrony with temperature in each Hemisphere.”
Image Source: Hambler and Henderson, 2022

Why Climate Change is a Fraud

Armstrong Economics Blog/Climate Re-Posted Nov 1, 2022 by Martin Armstrong

This is one of the oldest methods of brainwashing a population known to history. The high priests had discovered the cycles of the heavens. They would pretend to turn the sun dark, for they had managed to calculate when an eclipse would take place. They would call the people together, announce what they would do, and the people watched the moon block out the sun and believed that the high priest could control the heavens. Astrology itself comes from the Babylonians, who conducted a massive correlation study to predict the future.


There is a cycle to everything. The climate ALWAYS changes, and there are warming periods and cooling periods. These charlatans are no different than the Babylonian high priests pretending to block the sun with the moon on their command. Science was turned on its head after a discovery in 1772 near Vilui, Siberia, of an intact frozen woolly rhinoceros, which was followed by the more famous discovery of a frozen mammoth in 1787. You may be shocked, but these discoveries of frozen animals with grass still in their stomachs set in motion these two schools of thought since the evidence implied you could be eating lunch and suddenly find yourself frozen, only to be discovered by posterity.


The discovery of the woolly rhinoceros in 1772, and then frozen mammoths, sparked the imagination that things were not linear after all. These major discoveries truly contributed to the Age of Enlightenment, when there was a burst of knowledge erupting in every field of inquiry. Such finds of frozen mammoths in Siberia continue to this day. This has challenged theories on both sides of this debate to explain such catastrophic events. These frozen animals in Siberia suggest sudden, strange events are possible, much as the casts of victims buried alive by the volcanic eruption of 79 AD at Pompeii in Roman Italy attest. Animals can be grazing and then freeze abruptly. Climate change has been around for billions of years, long before man invented the combustion engine.

Even the field of geology began to create great debates that perhaps the earth simply burst into a catastrophic convulsion and, indeed, the planet was cyclical — not linear. This view of sequential destructive upheavals at irregular intervals or cycles emerged during the 1700s. This school of thought was perhaps best expressed by a forgotten contributor to the knowledge of mankind, George Hoggart Toulmin, in his rare 1785 book, “The Eternity of the World”:

“… convulsions and revolutions violent beyond our experience or conception, yet unequal to the destruction of the globe, or the whole of the human species, have both existed and will again exist … [terminating] … an astonishing succession of ages.”

Id., pp. 3, 110


In 1832, Professor A. Bernhardi argued that the North Polar ice cap had once extended into the plains of Germany. To support this theory, he pointed to the existence of huge boulders, which have become known as “erratics,” that he suggested were pushed along by the advancing ice. This was a shocking theory, for it was certainly a nonlinear view of natural history. Bernhardi was thinking out of the box. However, in natural science, people listen and review theories, unlike in social science, where theories are ignored if they challenge what people want to believe. In 1834, Johann von Charpentier (1786-1855) argued that deep grooves cut into the Alpine rock were, as Karl Schimper also concluded, caused by an advancing Ice Age.

This body of knowledge has been completely ignored by the global warming/climate change religious cult. They know nothing about nature or cycles, and they are completely ignorant of history, or even of the discovery of these ancient creatures that froze with food still in their mouths. They cannot explain these events, nor account for the vast body of knowledge written by people who actually did research instead of trying to cloak an agenda in pretend science.

Our model has projected we are entering another “grand minimum,” which will overtake the sun beginning in 2020 and will last through the 2050s, resulting in diminished magnetism, infrequent sunspot production, and less ultraviolet (UV) radiation reaching Earth. This all means we are facing a global cooling period on the planet that may span 31 to 43 years. The last grand-minimum event produced the mini-Ice Age in the mid-17th century. Known as the Maunder Minimum, it occurred between 1645 and 1715, during a longer span of time when parts of the world became so cold that the period was called the Little Ice Age, which lasted from about 1300 to 1850.

Most people have NEVER heard of the Beaufort Gyre, a massive wind-driven current in the Arctic Ocean that actually has far more influence over sea ice than anything we can throw into the atmosphere. The Beaufort Gyre has been regulating climate and sea ice formation for millennia. Recently, however, something has changed; it is not something that would create global warming but threatens a new Ice Age.

There is a normal cycle of about 5.4 years in which the gyre reverses direction and spins counter-clockwise, expelling ice and freshwater into the eastern Arctic Ocean and the North Atlantic. The 5.4-year cycle is interesting, for it relates to the 8.6-year pi cycle interval. The immediate cycle has suddenly expanded to two 8.6-year intervals, bringing it to 17.2 years as we head into 2022.

What you must understand is that the Beaufort Gyre now holds as much freshwater as all of the Great Lakes combined. Why is that important? Saltwater freezes at a lower temperature than the 32 degrees F at which freshwater freezes. Because salt lowers the freezing point, ice with salt on it melts at air temperatures where fresh ice would stay frozen, which is why we salt the roads in an ice storm.
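The size of the salt effect can be roughed out with the ideal freezing-point-depression formula, dT = i · Kf · m. The values below are assumptions for illustration (NaCl as a proxy for sea salt, typical ocean salinity of 35 g per kg of water); real seawater actually freezes near -1.9 °C, a bit higher than this ideal estimate.

```python
# Rough sketch of why salt water freezes below 0 °C, using the ideal
# freezing-point-depression formula dT = i * Kf * m.
Kf = 1.86           # cryoscopic constant of water, K·kg/mol
i = 2.0             # van 't Hoff factor for fully dissociated NaCl
molar_mass = 58.44  # g/mol for NaCl
salinity = 35.0     # g salt per kg water, typical seawater

m = salinity / molar_mass   # molality, mol/kg
dT = i * Kf * m             # freezing-point depression in kelvin
print(f"freezes near {0.0 - dT:.1f} °C")
```

The ideal formula overestimates the depression slightly because sea salt ions are not fully independent at this concentration, but it captures the point: the freshwater pooled in the gyre freezes at a noticeably higher temperature than the surrounding ocean.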

Now, think of the Beaufort Gyre as a carousel of ice and freshwater. Because it is now spinning both faster and in its usual clockwise direction, it has been collecting more and more freshwater from the three main sources:

  1. Melting sea ice
  2. Runoff from the Arctic Ocean from Russian and North American rivers
  3. Lower-salinity water coming in from the Bering Sea

Indeed, Yale has warned that this current could “Cool the Climate in Europe,” which is precisely what we are witnessing. Cyclically, the Beaufort Gyre will reverse direction, and when it does, the clear and present danger will be the natural expulsion of a massive amount of icy fresh water into the North Atlantic. Remember, freshwater freezes at a higher temperature than saltwater.

This is not a theory. We have records of previous reversals in this cycle of the Beaufort Gyre from the 1960s and 1970s, when a surge of fresh Arctic water released into the North Atlantic resulted in the water freezing. There has been a lot of work done on this subject, which, of course, is ignored by the climate change agenda that only seeks to blame human activity. Nevertheless, AAAS, of which I am a member, states plainly:

“Arctic sea ice affects climate on seasonal to decadal time scales, and models suggest that sea ice is essential for longer anomalies such as the Little Ice Age.” 

Socrates has been given just about every possible database I could find over the past 50 years. Because of the extended 17.2-year cycle in the Beaufort Gyre, the risk is that a larger-than-normal expulsion of freshwater into the Atlantic will disrupt the Gulf Stream, which is the sole reason Europe has a moderate climate. But that has NOT always been the case. We know that the Barbarian invasions of Rome during the 3rd century were primarily driven by a colder climate in the north. The invasion of the Sea Peoples ended the Bronze Age, and those from the north migrated into the south, storming Mesopotamia and Northern Africa.


It is just not created by humans.

Perhaps we are now at the tipping point, and they cannot keep saying that an extremely cold winter is also caused by CO2 and global warming. The collapse of the Gulf Stream has nothing to do with CO2. This may result in a major confrontation when it becomes clear that these people have been seriously wrong, and what they are doing to the economy by trying to shut down fossil fuels at this point in time could result in tens of millions of deaths if the Gulf Stream collapses.

Climate Change = Military Strategy

Armstrong Economics Blog/Uncategorized Re-Posted Oct 28, 2022 by Martin Armstrong

COMMENT: Dear Mr. Armstrong,
As always, thank you so much for your incredible insight into what is happening in the world. You are the first news source I read in the morning because I know your site is always two steps ahead of everyone else! I wished to ask you about something you often reference: that many of these globalists are trying to destroy fossil fuels for “climate change”.

However, going by their actual behavior, it seems that they don’t *genuinely* believe in climate change, otherwise they wouldn’t be flying around in private jets, owning ocean-front property, or creating more carbon emissions than many small countries. Do you think they actually believe the earth is in danger or are they trying to force the majority of humanity back into a third world state because without the ability to travel, or heat/cool homes, communicate with one another, or have access to clean water, meat and nutritious food, etc., we not only will lose much of the population to sickness and starvation, but those who are left would become much more dependent on the state (them) and therefore easier to control?

Depopulation and crushing humanity into a smaller, weaker, more controllable feudal-system peasantry seems more like the actual reason for destroying fossil fuels, food security, and private ownership, with “climate change” merely being their flimsy excuse to do so. Do you think any of those pulling the strings on all this really believes what they’re saying about climate change?


REPLY: The elite could not care less about climate change. They know it is laughable. Even John McCain was pushing it only because it would hurt Russia by cutting off its resources, and he was pushing nuclear energy to replace fossil fuels only as a strategic chess move against Russia, his eternal enemy.

These ELITE people do not care about the climate. They all travel to Davos in private jets. Al Gore and UN Secretary-General Antonio Guterres told delegates at Davos in 2019 that humanity is “losing the race” against climate change. Al Gore conspired with Greenpeace and used Greta for their publicity stunts. I would say he is a believer, but do not count on his intelligence. He has never questioned the limited data.

The Elite have used climate change as the spearhead for the Great Reset, which is all about coming up with propaganda to cover up the Sovereign Debt default. Trust me. Many at the top are laughing their ass off at how they have sold this nonsense to people who just eat it up. John McCain was preaching nuclear energy back in the 2008 election to end dependency on foreign oil purely for military purposes, not the environment. They have brainwashed many, and the brainwashed just repeat the nonsense without the slightest investigation of their own. Oh, there is a drought, and skeletons are now revealed in Hoover Dam, so that is proof of climate change! But they fail to take the next step and ask if that has taken place before. There are natural cycles to climate, so it always changes.

The Elite used Greta, and once they managed to get the press selling their BS and believing this changed the planet, they discarded her. They use people to achieve their goals, and that is all this war is really about: to destroy the economy of Russia, occupy the country, and end fossil fuels so that Russia will be reduced to a vassal state. They probably have not reduced the population by 50% ANYHOW. The real powers do not care about Climate Change. They know this is all nonsense.

A Technical Study of Relationships in Solar Flux, Water and other Gasses in the upper Atmosphere, Using the September, 2022 NASA & NOAA Data

From the attached report on climate change for September 2022 data, we have two charts showing how much the global temperature has actually gone up since we started measuring CO2 in the atmosphere in 1958. To show this graphically, Chart 8a was constructed by plotting CO2 as a percent increase from when it was first measured in 1958 (the black plot, with the scale on the left); it shows CO2 going up by about 32.4% from 1958 to September of 2022. That is a very large change, as anyone would have to agree. Now how about temperature? Well, when we look at the percentage change in temperature, also from 1958, using Kelvin (which does measure the change in heat), we find that the change in global temperature (heat) is almost unmeasurable, at only 0.4%.

As you can see, the increase in energy (heat) is not visually observable in this chart; hence the need for another chart, Chart 8, to show the minuscule increase in thermal energy reported by NASA in relationship to the change in CO2, shown next using a different scale.

This is Chart 8, which is the same as Chart 8a except for the scales. The scale on the right side had to be expanded 10 times (the range is 50% on the left and 5% on the right) to be able to see the plot in the same chart in any detail. The red plot, starting in 1958, shows that the thermal energy in the earth’s atmosphere increased by 0.40%, while CO2 increased by 32.4%, which is about 80 times the increase in temperature. So is there really a meaningful link between them that would give us a major problem?

Based on these trends, determined by Excel, not me, in 2028 CO2 will be 428 ppm and temperatures will be a bit over 15.0 °C, and in 2038 CO2 will be 458 ppm and temperatures will be 15.6 °C.
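The percent-change arithmetic behind Charts 8 and 8a can be reproduced in a few lines. The inputs below are assumed round numbers for illustration (roughly 315 ppm CO2 in 1958, about 417 ppm in 2022, a 1958 global mean near 14 °C, and roughly 1.1 K of warming since then), not values taken from the report itself.

```python
# Percent change of CO2 (ppm) versus temperature (kelvin) since 1958,
# using illustrative round-number inputs.
co2_1958, co2_2022 = 315.0, 417.0
t_1958_k = 14.0 + 273.15          # 1958 global mean, converted to kelvin
warming_k = 1.1                   # warming since 1958, in kelvin

co2_pct = (co2_2022 / co2_1958 - 1) * 100   # ~32.4 %
temp_pct = warming_k / t_1958_k * 100        # ~0.38 %
print(f"CO2: {co2_pct:.1f} %, temperature: {temp_pct:.2f} %")
```

The comparison of the two percentages hinges on the choice of the Kelvin zero point as the baseline; measured against any other reference (say, the 1958 mean in Celsius), the same 1.1 K of warming yields a very different percentage.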

The NOAA and NASA numbers tell us the True story of the

Changes in the Planet’s Atmosphere

The full 40-page report explains how these charts were developed.

Download: blackbody-temperature-2022-09.pdf (https://centinel2012.files.wordpress.com/2022/10/blackbody-temperature-2022-09.pdf)

Humor & Self-Referral

Armstrong Economics Blog/Humor Re-Posted Oct 17, 2022 by Martin Armstrong

Comment: Hello Marty:

Hope we can all laugh again after this coming winter is in the rearview mirror…

The Indians on the Aamjiwnaang First Nation reservation in Grand Bend asked their new chief if the coming winter was going to be cold or mild.

Since he was a chief in a modern society, he had never been taught the old secrets. When he looked at the sky, he couldn’t tell what the winter was going to be like.

Nevertheless, to be on the safe side, he told his tribe that the winter was indeed going to be cold and that the members of the village should collect firewood to be prepared.

But, being a practical leader, after several days, he got an idea. He went to the phone booth, called the Canadian Weather Service and asked, ‘Is the coming winter going to be cold?’
‘It looks like this winter is going to be quite cold,’ the meteorologist at the weather service responded.

So the chief went back to his people and told them to collect even more firewood in order to be prepared.

A week later, he called the Canadian Weather Service again. ‘Does it still look like it is going to be a very cold winter?’

‘Yes,’ the man at Weather Service again replied, ‘it’s going to be a very cold winter.’

The chief again went back to his people and ordered them to collect every scrap of firewood they could find.

Two weeks later, the chief called the Canadian Weather Service again. ‘Are you absolutely sure that the winter is going to be very cold?’

‘Absolutely,’ the man replied.  ‘It’s looking more and more like it is going to be one of the coldest winters we’ve ever seen.’

‘How can you be so sure?’ the chief asked.

The weatherman replied, ‘The Indians are collecting an astounding amount of firewood !’