The US Blows Hot And Cold


  •   Sunday, March 12, 2023


Guest Post by Willis Eschenbach

I got to thinking about the raw unadjusted temperature station data. Despite the many flaws in individual weather stations making up the US Historical Climate Network (USHCN), as revealed by Anthony Watts’ SurfaceStations project, the USHCN is arguably one of the best country networks. So I thought I’d take a look at what it reveals.

The data is available here, with further information about the dataset here. The page says:

UNITED STATES HISTORICAL CLIMATOLOGY NETWORK (USHCN) Daily Dataset M.J. Menne, C.N. Williams, Jr., and R.S. Vose National Climatic Data Center, National Oceanic and Atmospheric Administration

These files comprise CDIAC’s most current version of USHCN daily data.

These appear to be the raw, unhomogenized, unadjusted daily data files. Works for me. I started by looking at the lengths of the various records.

Figure 1. Lengths of the 1,218 USHCN temperature records. The picture shows a “Stevenson Screen”, the enclosure used to protect the instruments from direct sunlight so that they are measuring actual air temperature.

This is good news. 97.4% of the temperature records are longer than 30 years, and 99.7% are longer than 20 years. So I chose to use them all.
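As a rough illustration of this first step, here is a minimal sketch (my reconstruction, not the author's code) of how the record lengths could be tallied, assuming the daily TMIN files have already been parsed into pandas Series indexed by date. The station dictionary below is synthetic stand-in data.

```python
# Minimal sketch of the record-length tally (synthetic stand-in data; the
# real analysis would first parse the 1,218 USHCN daily files).
import numpy as np
import pandas as pd

rng = np.random.default_rng(0)
stations = {}
for i in range(50):                       # the real network has 1,218 stations
    n_days = int(rng.integers(15, 120)) * 365
    idx = pd.date_range("1900-01-01", periods=n_days, freq="D")
    stations[f"USH{i:05d}"] = pd.Series(rng.normal(10.0, 8.0, n_days), index=idx)

lengths = pd.Series(
    {sid: (s.index[-1] - s.index[0]).days / 365.25 for sid, s in stations.items()}
)
print(f"records > 30 years: {(lengths > 30).mean():.1%}")
print(f"records > 20 years: {(lengths > 20).mean():.1%}")
```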

Next, I considered the trends of the minimum and maximum temperatures. I purposely did not consider the mean (average) trend, for a simple reason. We experience the daily maximum and minimum temperatures, the warmest and coldest times of the day. But nobody ever experiences an average temperature. It’s a mathematical construct. And I wanted to look at what we actually can sense and feel.

First I considered minimum temperatures. I began by looking at which stations were warming and which were cooling. Figure 2 shows that result.

Figure 2. USHCN minimum temperature trends by station. White is cooling, red is warming.

Interesting. Clearly, “global” warming isn’t. The minimum temperature at 30% of the USHCN stations is getting colder, not warmer. However, overall, the median trend is still warming. Here’s a histogram of the minimum temperature trends.

Figure 3. Histogram of 1,218 USHCN minimum temperature trends. See Menne et al. for estimates of what the various adjustments would do to this raw data.
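For readers who want to reproduce this kind of result, a per-station trend calculation along these lines might look like the following sketch (again my reconstruction, reusing the synthetic `stations` dictionary from the earlier snippet): average the daily minima to annual means, fit a least-squares line, and express the slope in °C per century.

```python
# Sketch of the per-station trend calculation (my reconstruction). With real
# USHCN TMIN data, the printed shares should approximate the ~30% cooling
# figure and ~1.1 °C/century median discussed in the post.
import numpy as np
import pandas as pd

def trend_deg_per_century(daily: pd.Series) -> float:
    annual = daily.resample("YS").mean().dropna()        # annual means
    years = annual.index.year.to_numpy(dtype=float)
    slope = np.polyfit(years, annual.to_numpy(), 1)[0]   # °C per year
    return slope * 100.0                                 # °C per century

trends = pd.Series({sid: trend_deg_per_century(s) for sid, s in stations.items()})
print(f"share of stations cooling: {(trends < 0).mean():.1%}")
print(f"median trend: {trends.median():+.2f} °C/century")
```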

Overall, the daily minimum temperatures have been warming. However, they’re only warming at a median rate of 1.1°C per century … hardly noticeable. And I have to say that I’m not terrified of warmer nights, particularly since most of the warmer nights are occurring in the winter. In my youth, I spent a couple of winter nights sleeping on a piece of cardboard on the street in New York, with newspapers wrapped around my legs under my pants for warmth.

I can assure you that I would have welcomed a warmer nighttime temperature …

The truth that climate alarmists don’t want you to notice is that extreme cold kills far more people than extreme warmth. A study in the medical journal The Lancet showed that from 2000 to 2019, extreme cold killed about four and a half million people per year, while extreme warmth killed only about half a million.

Figure 4. Excess deaths from extreme heat and cold, 2000-2019

So I’m not worried about an increase in minimum temperatures—that can only reduce mortality for plants, animals, and humanoids alike.

But what about maximum temperatures? Here are the trends of the USHCN stations as in Figure 2, but for maximum temperatures.

Figure 5. USHCN maximum temperature trends by station. White is cooling, red is warming.

I see a lot more white. Recall from Figure 2 that 30% of minimum temperature stations are cooling. But with maximum temperatures, about half of them are cooling (49.2%).

And here is the histogram of maximum temperatures. Basically, half warming, half cooling.

Figure 6. Histogram of 1,218 USHCN maximum temperature trends.

For maximum temperatures, the overall median trend is a trivial 0.07°C per century … color me unimpressed.

Call me crazy, but I say this is not any kind of an “existential threat”, “problem of the century”, or “climate emergency” as is often claimed by climate alarmists. Instead, it is a mild warming of the nights and no warming of the days. In fact, there’s no “climate emergency” at all.

And if you are suffering from what the American Psychiatric Association describes as “the mental health consequences of events linked to a changing global climate including mild stress and distress, high-risk coping behavior such as increased alcohol use and, occasionally, mental disorders such as depression, anxiety and post-traumatic stress” … well, I’d suggest you find a new excuse for your alcoholism, anxiety, or depression. That dog won’t hunt.

My very best to everyone from a very rainy California. When we had drought over the last couple of years, people blamed evil “climate change” … and now that we’re getting lots of rain, guess what people are blaming?

Yep, you guessed it.

w.

As Always: I ask that when you comment you quote the exact words you’re discussing. This avoids endless misunderstandings.

Adjustments: The raw data I’ve used above is often subjected to several different adjustments, as discussed here. One of the largest adjustments is for the time of observation, usually referred to as TOBS. The effect of the TOBS adjustment is to increase the overall trend in maximum temperatures by about 0.15°C per century (±0.02) and in minimum temperatures by about 0.22°C per century (±0.02). So if you wish, you can add those values to the trends shown above. Me, I’m not too fussed about an adjustment of a tenth or two of a degree per century; I’m not even sure the network can measure to that level of precision. And it certainly is not perceptible to humans.

There are also adjustments for “homogeneity”, for station moves, instrument changes, and changes in conditions surrounding the instrument site.

Are these adjustments all valid? Unknown. For example, the adjustments for “homogeneity” assume that one station’s record should be similar to a nearby station’s … but a look at the maps above shows that’s not the case. I know that where I live, it very rarely freezes. But less than a quarter mile (0.4 km) away, on the opposite side of the hill, it freezes a half-dozen times a year or so … homogeneous? I don’t think so.

The underlying problem is that in almost all cases there is no overlap in the pre- and post-change records. This makes it very difficult to determine the effects of the changes directly, and so indirect methods have to be used. There’s a description of the method for the TOBS adjustment here.

This also makes it very hard to estimate the effect of the adjustments. For example:

To calculate the effect of the TOB adjustments on the HCN version 2 temperature trends, the monthly TOB adjusted temperatures at each HCN station were converted to an anomaly relative to the 1961–90 station mean. Anomalies were then interpolated to the nodes of a 0.25° × 0.25° latitude–longitude grid using the method described by Willmott et al. (1985). Finally, gridpoint values were area weighted into a mean anomaly for the CONUS for each month and year. The process was then repeated for the unadjusted temperature data, and a difference series was formed between the TOB adjusted and unadjusted data.

To avoid all of that uncertainty, I’ve used the raw unadjusted data. 

Addendum Regarding The Title: There’s an Aesop’s Fable, #35:

“A Man had lost his way in a wood one bitter winter’s night. As he was roaming about, a Satyr came up to him, and finding that he had lost his way, promised to give him a lodging for the night, and guide him out of the forest in the morning. As he went along to the Satyr’s cell, the Man raised both his hands to his mouth and kept on blowing at them. ‘What do you do that for?’ said the Satyr. ‘My hands are numb with the cold,’ said the Man, ‘and my breath warms them.’ After this they arrived at the Satyr’s home, and soon the Satyr put a smoking dish of porridge before him. But when the Man raised his spoon to his mouth he began blowing upon it. ‘And what do you do that for?’ said the Satyr. ‘The porridge is too hot, and my breath will cool it.’ ‘Out you go,’ said the Satyr, ‘I will have nought to do with a man who can blow hot and cold with the same breath.’”

The actual moral of the story is not the usual one that people draw from the fable, that the Man is fickle and the Satyr can’t trust him.

The Man is not fickle. His breath is always the same temperature … but what’s changing are the temperatures of his surroundings, just as they have been changing since time immemorial.

We call it “weather”.

The test that exonerates CO2



By Javier Vinós

Most people don’t have a clear understanding of the greenhouse effect (GHE). It is not complicated, but it is usually not well explained. It is often described as “heat-trapping,” but that is incorrect. Greenhouse gases (GHG) do not trap heat, even if more heat resides within the climate system due to their presence in the atmosphere. The truth is that after adjusting to a change in GHG levels, the planet still returns all the energy it receives from the Sun. Otherwise, it would continue warming indefinitely. So, there is no change in the energy returned. How, then, do GHGs produce the GHE?

GHGs cause the atmosphere to be more opaque to infrared radiation. As solar radiation heats mainly the ocean and land surface of the planet, GHGs absorb thermal emission from the surface in the lower troposphere and immediately pass that energy along to other molecules (typically N2 and O2) through collisions, which occur much faster than the time it would take to re-emit the radiation. This warms the lower troposphere. Density and temperature decrease rapidly with height through the troposphere, so molecules in the upper troposphere are colder and farther apart. There, GHGs finally get a chance to emit IR radiation before colliding with another molecule, radiating away thermal energy; as a result, GHGs have a cooling effect in the upper troposphere and stratosphere.

Because GHGs make the atmosphere more opaque to IR radiation, when they are present the emission to space from the planet normally does not take place from the surface (as happens on the Moon). Part of it still takes place from the surface through the atmospheric window, but most of it takes place from higher in the atmosphere. We can define a theoretical effective emission height as the average height at which the Earth’s outgoing longwave radiation (OLR) is emitted. The temperature at which the Earth emits is the temperature at the effective emission height in the atmosphere. That temperature, when measured from space, is 250 K (-23°C), not the 255 K calculated for a theoretical blackbody Earth. It corresponds to a height of about 5 km, which we call the effective emission height.
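As a quick sanity check on the 255 K figure, here is the standard textbook calculation (my addition, not from the post): balancing absorbed sunlight against Stefan-Boltzmann emission gives the blackbody emission temperature.

```python
# Standard textbook sanity check (my addition, not from the post): balancing
# absorbed sunlight against Stefan-Boltzmann emission gives the blackbody
# emission temperature quoted above.
S = 1361.0         # solar constant, W/m2
albedo = 0.30      # approximate Bond albedo of Earth
sigma = 5.670e-8   # Stefan-Boltzmann constant, W m-2 K-4

Te = (S * (1 - albedo) / (4 * sigma)) ** 0.25
print(f"blackbody emission temperature ~ {Te:.0f} K")   # ~255 K, vs 250 K measured
```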

The last piece we need to understand the GHE is the lapse rate, which in the troposphere is positive, meaning that temperature decreases with height. Without a positive lapse rate, the GHE does not work. Since GHGs cause the planet to emit from a higher altitude, due to making the atmosphere more opaque to IR radiation, that altitude is colder due to the lapse rate. The Earth still needs to return all the energy received from the Sun, but colder molecules emit less. So, the planet will go through a period when it will emit less than it should, warming the surface and the lower troposphere until the new height of emission achieves the temperature necessary to return all the energy, at which point the planet stops warming.

The GHE simply states that the surface temperature (Ts) is the emission temperature (Te) plus the lapse rate (Γ) times the emission height (Ze):


Ts = Te + ΓZe

Held & Soden (2000) illustrated it in figure 1:



This is how the GHE actually works. An increase in CO2 means an increase in the height of emission. Since the temperature of emission must remain the same, the temperature from the surface to the new height of emission must increase. The increase is small but significant. As Held and Soden say:



“The increase in opacity due to a doubling of CO2 causes Ze to rise by ≈150 meters. This results in a reduction in the effective temperature of the emission across the tropopause by ≈(6.5 K/km)(150 m) ≈ 1 K.” – Held and Soden (2000)

So, the temperature at the surface must increase by 1K. That’s the direct warming caused by the doubling of CO2, before the feedbacks (mainly water vapor) kick in, further raising the height of emission.
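Spelled out in code, the arithmetic in that quote is just the lapse rate times the rise in emission height, together with the surface-temperature identity given above (my restatement, using the round numbers from the text):

```python
# The Held & Soden arithmetic from the quote, spelled out (my restatement).
Te, lapse, Ze = 250.0, 6.5, 5.0      # K, K/km, km (values from the text)
print(f"Ts = Te + Γ·Ze ~ {Te + lapse * Ze:.0f} K")    # ~282 K, near the observed mean

dZe = 0.150                           # km; rise in emission height for 2xCO2
print(f"direct 2xCO2 warming ~ {lapse * dZe:.1f} K")  # ~1 K
```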

This also yields an interesting prediction. If the warming is due to an increase in CO2, then when the increase takes place and the altitude of emission rises, the planet should emit less OLR, since the new altitude is colder; that reduced OLR is the warming mechanism. Once the warming has taken place, the OLR becomes the same as before the GHG increase. It says so in Held and Soden’s figure 1 caption: “Note that the effective emission temperature (Te) remains unchanged.” Same Te, same OLR. So, if CO2 is responsible for the surface temperature increase, we should first expect less OLR and then the same OLR. If at any time we detect more OLR, that would indicate another cause for the warming. Anything that makes the surface warmer, except GHGs, will increase the temperature of emission, increasing OLR.

So, this is the test:

– Surface warming but less or same OLR: CO2 is guilty as charged

– Surface warming and more OLR: CO2 is innocent
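As a toy formalization of that decision rule (my paraphrase; the trend values are illustrative, not measurements):

```python
# The author's test as a toy decision rule (my paraphrase; the numbers
# below are illustrative, not measurements).
def verdict(surface_trend: float, olr_trend: float) -> str:
    if surface_trend <= 0:
        return "no warming to explain"
    return "CO2 guilty as charged" if olr_trend <= 0 else "CO2 innocent"

# Dewitte & Clerbaux (2018), discussed next, report warming with rising OLR:
print(verdict(surface_trend=+0.15, olr_trend=+0.2))   # -> CO2 innocent
```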

And the test results can be evaluated, for example, with Dewitte and Clerbaux (2018):



“[D]ecadal changes of the Outgoing Longwave Radiation (OLR) as measured by the Clouds and Earth’s Radiant Energy System from 2000 to 2018, the Earth Radiation Budget Experiment from 1985 to 1998, and the High-resolution Infrared Radiation Sounder from 1985 to 2018 are analyzed. The OLR has been rising since 1985, and correlates well with the rising global temperature.” – Dewitte and Clerbaux (2018)

CO2 is innocent. Its fingerprint is not found at the crime scene. Something else is warming the planet and causing the increase in OLR.

Bibliography:

Dewitte, S. and Clerbaux, N., 2018. Decadal Changes of Earth’s Outgoing Longwave Radiation. Remote Sensing, 10(10), p.1539.
https://www.mdpi.com/2072-4292/10/10/1539/pdf

Held, I.M. and Soden, B.J., 2000. Water Vapor Feedback and Global Warming. Annual Review of Energy and the Environment, 25(1), pp.441-475.
https://www.annualreviews.org/doi/pdf/10.1146/annurev.energy.25.1.441

Stephens, G.L., O’Brien, D., Webster, P.J., Pilewski, P., Kato, S. and Li, J.L., 2015. The Albedo of Earth. Reviews of Geophysics, 53(1), pp.141-163.


A DIY Guide To Demystifying “Greenhouse Gas” Claims…The Science That Cuts Corners


Reposted from https://notrickszone.com/2023/01/14/a-diy-guide-to-demystifying-greenhouse-gas-claims-the-science-that-cuts-corners/


By P Gosselin on 14. January 2023


By Fred F. Mueller

Do you feel helpless when trying to assess the veracity of “climate doom is looming” claims we are constantly bombarded with?

For ordinary citizens who have not acquired at least a Ph.D. in atmospheric physics or a comparable climate-relevant science, it seems nearly impossible to tell right from wrong when it comes to assessing such claims. Do so-called greenhouse gases really reflect infrared energy back to earth in such quantities that this affects earth’s temperature?

Don’t give up trying to understand the relevant basics: there are rather simple ways to get an idea of what this is all about. Even without a scientific background, most people have at least good common sense. And that’s all it takes to get a grasp of how vigorously and chaotically enormous energy fluxes slosh up and down, back and forth between earth’s surface and the skies.

Fig. 1. The setting sun illuminating a fairly thin veil of clouds from below – thus injecting energy into the space between the earth’s surface and the cloud cover.

Part 1 – some basics

Let’s first clarify where the heat that allows us to live rather comfortably in our habitats is coming from and where it goes to. Despite the enormous energy content of the molten core of our planet, the bulk of our energy comes from the sun, which sends us energy mainly using three forms of electromagnetic radiation: visible, ultraviolet and infrared light.

At the top of the atmosphere, every square meter oriented towards the sun thus receives a fairly constant power influx of 1361 to 1362 W/m2. Although not a true constant, this value is often referred to as the solar constant 1).

The alleged greenhouse effect

The notion of a “greenhouse effect” in our atmosphere has been used and misused incredibly often, resulting in a mess of erroneous perceptions not only among the public, but even in the scientific world. A striking example of an obvious misrepresentation can be seen in the lead-in picture of the Wikipedia chapter on the topic, Fig. 2.

Fig. 2. The lead-in picture of the Wikipedia chapter about the “greenhouse effect” (Author: Efbrazil 2), CC 4.0)

This graphic highlights the extent to which Wikipedia gives the impression of having fallen prey to climate activism. The complex reality of transfers and transformations of energy on our planet, involving soils, waters, gases, clouds, aerosols, heat storage, conduction and convection, chemical reactions and phase transformations, as well as a host of additional factors, is simply swept under the carpet, and all their combined effects are attributed solely to the odious “greenhouse gases”.

This Wikipedia chapter is a saddening example of the downfall of an allegedly scientific encyclopedia into spreading rather crude ideology under the guise of educating the public. The chapter comprises more than 7,000 words and tries to underscore its claim of being “scientific” with a list of 80 citations, including papers about the atmospheric conditions on far-away cosmic bodies such as Titan and Venus. But this cannot excuse the use of such a grossly misleading graphic as the lead-in picture for the abstract. Such tricks are commonly used in tabloids and yellow journals. Wikipedia touts itself as an encyclopedia addressing not only scientists but also laymen and the general public, and should therefore take all the more care not to disseminate content that may be misunderstood by people lacking a scientific background.

Fig. 3. This more detailed representation of the energy fluxes on earth, elaborated by NASA, is still misleading with respect to some decisive facts (Picture by NASA 3), public domain). Note: this graphic and the corresponding link were withdrawn after completion of the article. In a subsequent part, the replacement graphic and its amendments will be treated in detail. Nevertheless, this graphic and its errors were displayed for a prolonged time, warranting a suitable discussion.

Although the more detailed Fig. 3 elaborated by NASA gives a better impression of the many different factors influencing energy transfer fluxes between earth’s surface and space, it still misleads in a subtle way that makes it unfit to convey a correct understanding of the vital facts. Let’s look at the main inconsistencies.

Mean values intended to mask natural variations

One of the favourite tricks of climate prophets of doom is to suggest that all major factors influencing our climate are more or less constant, with the sole exception of “greenhouse gases”. They exploit the fact that the CO2 level of the atmosphere is rising while, for at least the past 150 years, meteorologists have also seen a moderate rise in the temperature levels they monitor at their stations. Though both trends are far from being in lockstep, this coincidence of trends has been declared proof of causality, although no clear mechanism or quantitative deduction has hitherto been established. Despite many striking discrepancies, e.g. with respect to the natural cycles of CO2 or the absorption and sequestration of CO2 in our oceans, the perceived rise in temperatures has been almost exclusively attributed to CO2.

Misusing water vapor

Another diversion has been to declare that water vapor simply reinforces the leading role of CO2. This might be viewed as a real masterpiece of twisting reality, since water vapor not only has a much higher efficiency with respect to absorbing (and re-emitting) infrared radiation (see Fig. 4), but also exceeds the content of CO2 in the atmosphere by factors between 25 (the median value at sea level) and up to 100!

Fig. 4. Comparing the spectral IR radiance of a surface at 14 °C with the overlapped absorption bands of CO2 (brownish) and water vapor (bluish) shows the highly superior absorption capacity of water vapor for the IR emission of soil or water at 14 °C (which is the “mean” temperature of earth’s surface). Please mind the different scales of the x axes: linear for the spectral radiance, logarithmic for the absorption. (Graphics: SpectralCalc 4) (above), NASA, Robert Rohde 5), public domain (below)).

Notwithstanding these inconsistencies, the climate science community has in its vast majority adopted this approach. This might be attributable to the fact that the quantity of water vapor in the atmosphere is subject to wild temporal and local variations, between nearly zero – e.g. at high altitudes and very low temperatures – and sometimes up to 4% at sea level.

Cutting corners

Additionally, water vapor tends to condense or freeze out of the atmosphere, especially when forming clouds, in ways that have up to now resisted any realistic attempt to describe them mathematically. Trying to establish realistic three-dimensional models of the water vapor distribution over a certain location at a given moment, and to calculate the resulting effects on absorption and re-emission of IR radiation, thus remains a much more arduous task than using a single value for each and every condition, as can conveniently be done when attributing the whole “greenhouse effect” solely to CO2. And voilà, truckloads of complicated research work may simply be skipped. This approach also greatly reduces the scale of expenditures in data acquisition, manpower, computer time – and in waiting time before reaping academic awards. After all, the beacon for all climate science, the IPCC, is doing it too, e.g. by simply omitting water vapor from its account of “greenhouse gases”, see Fig. 5.

Fig. 5. Contribution to observed climate change from 12 different drivers, as taken from the Summary for Policymakers of the sixth IPCC assessment report, adapted from figure SPM.2c (Graphic: Erik Fisk, CC 4.0 6))

The numerous advantages of such a cutting of (scientific) corners might be one of the main driving forces for the deplorable tendency towards the “single number fallacy” explained by Kip Hansen 7) as being “the belief that complex, complicated and even chaotic subjects and their data can be reduced to a significant and truthful single number.”

Unfortunately for us, that’s exactly what the official climate science is doing. Under the headline “One number to track human impact on climate”, NOAA scientists released the first AGGI 8) (aggregated greenhouse gas index) in 2006 as “a way to help policymakers, educators, and the public understand the cumulative impact of greenhouse gases on climate over time”.

The minuscule driving forces of “greenhouse gases”

When trying to assess the real impact of “greenhouse gases” on earth’s energy balance, the first step should be to assess the driving force they are alleged to exert on the input and output of energy fluxes. Corresponding parameters can be found in a table within the Wikipedia chapter about greenhouse gases 9). They reveal that, in the view of the leading climate scientists, just four gases have had a relevant influence on the budget of energy exchange between incoming and outgoing radiation since the alleged start of “human-induced climate change” in 1750. These are:

Carbon dioxide           +2.05 W/m2
Methane                  +0.49 W/m2
Nitrous oxide            +0.17 W/m2
Tropospheric ozone       +0.40 W/m2
                         ===========
Total GHG contribution   +3.11 W/m2

This figure is extraordinarily small when compared with the enormous temporal and local variability of energy fluxes within our planet’s ocean/atmosphere/soil system over short time periods, and amounts to just a low single-digit percentage of the daily variations. This will be treated in more detail in a following chapter.
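For scale, here is the arithmetic behind that comparison (my addition; the diurnal swing is an assumed illustrative value, since the surface solar flux at a given spot typically varies by several hundred W/m2 between night and midday):

```python
# Scale comparison behind this paragraph (my arithmetic; the diurnal swing is
# an assumed illustrative value of a few hundred W/m2).
ghg_forcing = 2.05 + 0.49 + 0.17 + 0.40   # W/m2, from the table above
solar_constant = 1361.0                    # W/m2
assumed_diurnal_swing = 500.0              # W/m2, illustrative assumption

print(f"total GHG forcing: {ghg_forcing:.2f} W/m2")
print(f"as a share of the solar constant: {ghg_forcing / solar_constant:.2%}")
print(f"as a share of the assumed diurnal swing: {ghg_forcing / assumed_diurnal_swing:.1%}")
```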

Peculiarly enormous greenhouse effect range 

On a side note, it is interesting to see that the IPCC gives an enormous range for the greenhouse effect of CO2 (its “climate sensitivity” 10)), which is estimated to range “likely” between 1.5 and 4.5 °C. The figure represents the alleged rise of earth’s mean temperature in °C for every doubling of the CO2 level of the atmosphere. Given this extraordinarily broad range of ±50%, one might be surprised that IPCC, NOAA and Wikipedia authors advance temperature-rise values for greenhouse gases calculated to up to three “significant” digits. This too might be attributable to the feeling of certainty about climate-relevant figures instilled in the public by the “one number fits all” mentality prevalent in the current climate science community.

References

  1. https://en.wikipedia.org/wiki/Solar_constant
  2. https://en.wikipedia.org/wiki/Greenhouse_effect
  3. http://science-edu.larc.nasa.gov/energy_budget/pdf/Energy_Budget_Litho_10year.pdf (Note: This link seems to have been deactivated very recently.)
  4. https://www.spectralcalc.com/blackbody_calculator/blackbody.php
  5. File:CO2 H2O absorption atmospheric gases unique pattern energy wavelengths of energy transparent to others.png – Wikimedia Commons
  6. https://commons.wikimedia.org/wiki/File:Physical_Drivers_of_climate_change.svg
  7. https://wattsupwiththat.com/2023/01/03/unknown-uncertain-or-both/
  8. https://research.noaa.gov/article/ArtMID/587/ArticleID/2877/Greenhouse-gas-pollution-trapped-49-more-heat-in-2021-than-in-1990-NOAA-finds
  9. https://en.wikipedia.org/wiki/Greenhouse_gas
  10. https://www.metoffice.gov.uk/research/climate/understanding-climate/climate-sensitivity-explained



It is Time to Talk About “Capacity Factors”

January 12, 2022

Dr. Lars Schernikau, energy economist and commodity trader, Switzerland/Singapore, https://www.linkedin.com/in/larsschernikau/


In electricity generation, capacity factor, utilization, and load factor are not the same.

A lot of confusion exists in the press, and certainly in politics, and even amongst “energy experts”, about the use of the term “capacity factor”. It may be excused, since the distinction made in this article only became relevant with the penetration of variable “renewable” energy, such as wind and solar, into our energy systems.

  • Worldwide average solar natural capacity factors (CF) reach about 11-13%. The best locations in California, Australia, South Africa, or the Sahara may reach above 25%, but they are rare. (see www.globalsolaratlas.info, setting direct normal solar irradiance)
  • Worldwide average wind natural capacity factors (CF) reach about 21-24%. The best off-shore locations in Northern Europe may reach above 40%. Most of Asia and Africa have hardly any usable wind, and the average CF would be below 15%, except for small areas on parts of the coasts of South Africa and Vietnam. (see www.globalwindatlas.info, setting mean power density)

Natural capacity factors in Europe tend to be higher for wind than for solar. Wind installations in Northern Europe may reach an average of over 30% (higher for more expensive offshore, lower onshore), but less than 15% in India and less than 8% in Indonesia.

Average, and the emphasis is on average, annual solar PV capacity factors reach around 10-11% in Germany, ~17% in Spain, ~25% in California, and may reach 14-19% in India, but less than 15% in Indonesia’s populated areas. Carbajales-Dale et al. 2014 confirm higher capacity factors for wind than for solar; they estimate global average wind capacity factors at around 21-24% and solar at around 11-13% (see figure above).

The figure further below illustrates a two-week period in May 2022 (when I wrote this chapter of our book on capacity factors), when the average wind capacity factor reached only ~5% for ALL German wind installations (on- and offshore).


To avoid confusion, I try to use “natural capacity factor” in my writing wherever possible:

  • The “natural capacity factor (CF)” is the % of the maximum possible output of the “power plant” (coal, gas, nuclear, solar, wind, hydro, etc.) achieved under the natural conditions of the site, assuming no operational or technological failures or outages.
  • I define “utilization” as the % of the power plant’s workable capacity used on average over the year, which is reduced only because of technological, operational, or economic outages or curtailments… completely independent of the CF.
  • The “net load factor” – in my definition – is then the product of natural capacity factor × utilization.

Thus, when we speak of the natural capacity factor, we are only referring to the nature-derived capacity factor, not the technologically or operationally driven “utilization” (often referred to as uptime, plant load factor, or PLF). In other words, when technology fails, or a power plant is turned off on purpose, this reduces the utilization but not the natural capacity factor.


As mentioned, the natural capacity factor is due to the site, not the solar PV installation. Thus, even a perfect PV material still has to live with natural capacity factors averaging 10-25% annually, not counting other losses from conditioning, transmission, balancing, or storing highly intermittent sources of electricity (Schernikau and Smith 2021).

The press has mentioned several times that coal or gas have capacity factors of 60% or less on average. This is at best misleading, more likely knowingly wrong for political reasons. Such a number is not the nature-derived capacity factor; it is the utilization, which declines with higher penetration of wind and solar and contributes to rising electricity system costs.

Utilization should never be, and cannot meaningfully be, compared to natural capacity factors; they are very distinct. Conventional power plants have near 100% natural capacity factors, but their operational and technological utilization often falls significantly below 90%, partly, but not only, because of the priority given to wind and solar in the system. Because of their high CF, the net load factor of a conventional power plant is only slightly lower than its utilization.

Because the utilization of wind and solar is often near 100%, their net load factor is often only slightly lower than their natural capacity factor.
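In code, the three definitions and their relationship are just one multiplication (a sketch with illustrative numbers, not data from the book):

```python
# The three quantities defined above, in code (a sketch of the definitions
# with illustrative numbers, not data from the book).
def net_load_factor(natural_cf: float, utilization: float) -> float:
    """Net load factor = natural capacity factor x utilization."""
    return natural_cf * utilization

# A conventional plant is utilization-limited; wind is CF-limited:
coal = net_load_factor(natural_cf=0.99, utilization=0.55)
wind = net_load_factor(natural_cf=0.22, utilization=0.97)
print(f"coal: net load factor {coal:.0%} despite a ~100% natural CF")
print(f"wind: net load factor {wind:.0%}, capped by the site's natural CF")
```

This makes the press confusion concrete: the “60% or less” quoted for coal describes utilization, not the nature-derived capacity factor.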

Figure: Germany’s wind generation 25 April to 10 May 2022 during a 2-week wind lull
Source: Agora 2022, Figure 10 in Book “The Unpopular Truth… about Electricity and the Future of Energy”, http://www.unpopular-truth.com


Needless to say, the natural capacity factor of wind and solar (even for hydro, because of natural river flows) cannot be predicted or guaranteed for any given time frame. The natural capacity factor can be estimated on an annual basis but still varies widely even annually (see Europe in 2021) and is very erratic, sometimes for days and weeks reaching near 0% for wind and solar, even in top locations.

Thus, natural capacity factors worldwide are a direct result of the location of the wind or solar installation; they do not in any way depend on and cannot be influenced by the technology employed.

The last point is important… no technological advances can change the natural availability of wind, solar, or river flows and therefore influence the natural capacity factor of a given installation. Technology CAN and WILL improve how much usable electricity you get out of the natural input product (wind, solar, river flow, gas, coal, uranium, etc.)… this is called conversion efficiency, and its limits are discussed further below.

Since the easy locations have already been “used up”, one can expect average natural capacity factors to decline over time… contrary to what Net-Zero plans assume (see International Energy Agency (IEA), McKinsey & Company, or International Renewable Energy Agency (IRENA)).

  1. For a photovoltaic (PV) park, the natural capacity factor CF depends entirely on the intensity and duration of the sunlight, which is affected by seasonality and cloudiness, day and night, and the ability to maintain the PV panel surface’s transparency, e.g., dust in the Sahara or snow in winter.
  2. Wind farms’ natural capacity factors depend on the site’s wind speed distribution and the saturation speed of the wind turbine. The CF of a wind turbine is determined by the number of hours per year in which the wind farm operates at or above the saturation wind speed (Smith and Schernikau 2022). If the design wind saturation speed is set low, e.g., 4-5 m/s, the wind farm produces little energy, even at high capacity factors. Typically, wind saturation speeds are 12-15 m/s (a toy calculation follows this list).
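The following toy calculation illustrates the point in item 2 (my sketch, not from the book). It assumes Rayleigh-distributed wind speeds and an idealized power curve, cubic up to the saturation (rated) speed and flat above it, with cut-in and cut-out speeds ignored.

```python
# Toy natural-CF calculation (my sketch, not from the book). Assumptions:
# Rayleigh-distributed wind speeds; idealized power curve, cubic up to the
# saturation (rated) speed and flat above it; cut-in/cut-out ignored.
import numpy as np

def natural_cf(mean_speed_ms: float, rated_speed_ms: float, n: int = 200_000) -> float:
    rng = np.random.default_rng(42)
    scale = mean_speed_ms * np.sqrt(2.0 / np.pi)            # Rayleigh scale parameter
    v = rng.rayleigh(scale, size=n)                         # sampled wind speeds, m/s
    output = np.clip((v / rated_speed_ms) ** 3, 0.0, 1.0)   # fraction of rated power
    return float(output.mean())

print(f"7 m/s mean wind, 13 m/s rated: CF ~ {natural_cf(7, 13):.0%}")   # ~25-30%
print(f"5 m/s mean wind, 13 m/s rated: CF ~ {natural_cf(5, 13):.0%}")   # ~10%
```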

It now becomes obvious why the installed capacity needs to be much larger for wind and solar than for dispatchable power such as nuclear, coal, gas, or hydro. This significant relative increase in energy generation capacity to produce the same available, but unpredictable, energy output is coupled with a significantly higher raw material input and energy input factor for variable “renewable” energy which must be offset from any fuel savings.

Germany is a good example: total installed power capacity more than doubled in the past 20 years, essentially all of the addition consisting of wind and solar (see figure below)

  • Wind and solar installed capacity is now above 125 GW, more than 150% of Germany’s peak power demand of around 80 GW
  • Germany’s conventional installed power capacity, consisting of coal, gas, and nuclear, still barely matches peak power demand
  • With all this capacity addition in Germany, wind and solar made up less than 30% of total electricity generation in 2021 and about 5% of total energy consumption

Figure: German installed power capacity, electricity production, and primary energy


Source: Schernikau Research and Analysis based on Fraunhofer 2022, AGE 2021, Agora 2022
Figure 7 in Book “The Unpopular Truth…about Electricity and the Future of Energy”, http://www.unpopular-truth.com

The low natural capacity factor of wind and solar installations – without any doubt – is one of the key reasons for their low net-energy efficiency (https://dx.doi.org/10.2139/ssrn.4000800).


On Conversion Efficiency

The figure below summarizes energy conversion efficiencies for wind and solar and the physical laws they follow. Conversion efficiency measures the ratio between the useful output of an energy conversion machine and the input, in energy terms, thus after accounting for the capacity factor.

Figure: Laws of physics limit technological improvements for wind and solar
Source: Schernikau and Smith Research and Analysis, Figure 11 in Book “The Unpopular Truth… about Electricity and the Future of Energy”, http://www.unpopular-truth.com

For more details, please see our book “The Unpopular Truth… about Electricity and the Future of Energy” (on Amazon)… or www.unpopular-truth.com

This article can also be accessed at

https://www.linkedin.com/pulse/time-talk-capacity-factors-lars-schernikau

Climate Sensitivity from 1970-2021 Warming Estimates – Most Likely Upper Bound


Crossposted from https://wattsupwiththat.com/2022/12/20/climate-sensitivity-from-1970-2021-warming-estimates/

First note that a climate sensitivity near 1.0 °C per doubling of CO2 would mean essentially no net feedback.


Reposted from Dr. Roy Spencer’s Global Warming Blog.

by Roy W. Spencer, Ph.D.

In response to reviewers’ comments on a paper John Christy and I submitted regarding the impact of El Nino and La Nina on climate sensitivity estimates, I decided to change the focus enough to require a total re-write of the paper.

The paper now addresses the question: If we take all of the various surface and sub-surface temperature datasets and their differing estimates of warming over the last 50 years, what does it imply for climate sensitivity?

The trouble with estimating climate sensitivity from observational data is that, even if the temperature observations were globally complete and error-free, you still have to know pretty accurately what the “forcing” was that caused the temperature change.

(Yes, I know some of you don’t like the forcing-feedback paradigm of climate change. Feel free to ignore this post if it bothers you.)

As a reminder, all temperature change in an object or system is due to an imbalance between rates of energy gained and energy lost, and the global warming hypothesis begins with the assumption that the climate system is naturally in a state of energy balance. Yes, I know (and agree) that this assumption cannot be demonstrated to be strictly true, as events like the Medieval Warm Period and Little Ice Age can attest.

But for the purpose of demonstration, let’s assume it’s true in today’s climate system, and that the only thing causing recent warming is anthropogenic greenhouse gas emission (mainly CO2). Does the current rate of warming suggest (as we are told) that a global warming disaster is upon us? I think this is an important question to address, separate from the question of whether some of the recent warming is natural (which would make AGW even less of a problem).

Lewis and Curry (most recently in 2018) addressed the ECS question in a similar manner by comparing temperatures and radiative forcing estimates between the late 1800s and early 2000s, and got answers somewhere in the range of 1.5 to 1.8 deg. C of eventual warming from a doubling of the pre-industrial CO2 concentration (2XCO2). These estimates are considerably lower than what the IPCC claims from (mostly) climate model projections.

Our approach is somewhat different from Lewis & Curry. First, we use only data from the most recent 50 years (1970-2021), which is the period of most rapid growth in CO2-caused forcing, the period of most rapid temperature rise, and about as far back as one can go and talk with any confidence about ocean heat content (a very important variable in climate sensitivity estimates).

Secondly, in contrast to Lewis & Curry’s differencing of two time periods’ averages separated by 100+ years, our approach uses a time-dependent model of vertical energy flows, which I have blogged on before. It is run at monthly time resolution, which allows us to examine (for instance) the recent acceleration in the rise of deep-ocean heat content (OHC).

In response to reviewers’ comments, I extended the domain from the non-ice-covered (60N-60S) oceans to global coverage (including land), and added borehole-based estimates of deep-land warming trends (I believe a first for this kind of work). The model remains a 1D model of temperature departures from assumed energy equilibrium, within three layers, shown schematically in Fig. 1.

One thing I learned along the way is that, even though borehole temperatures suggest warming extending to almost 200 m depth (the cause of which seems to extend back several centuries), modern Earth System Models (ESMs) have embedded land models that extend to only 10 m depth or so.

Another thing I learned (in the course of responding to reviewers’ comments) is that the assumed history of radiative forcing has a pretty large effect on diagnosed climate sensitivity. I have been using the RCP6 radiative forcing scenario from the previous (AR5) IPCC report, but in response to reviewers’ suggestions I am now emphasizing the SSP245 scenario from the most recent (AR6) report.

I run all of the model simulations with either one or the other radiative forcing dataset, initialized in 1765 (a common starting point for ESMs). All results below are from the most recent (SSP245) effective radiative forcing scenario preferred by the IPCC (which, it turns out, actually produces lower ECS estimates).

The Model Experiments

In addition to the assumption that the radiative forcing scenarios are a relatively accurate representation of what has been causing climate change since 1765, there is also the assumption that our temperature datasets are sufficiently accurate to compute ECS values.

So, taking those on faith, let’s forge ahead…

I ran the model with thousands of combinations of heat transfer coefficients between model layers and the net feedback parameter (which determines ECS) to get 1970-2021 temperature trends within certain ranges.
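To make the setup concrete, here is a toy version of this kind of model (my sketch, not Dr. Spencer's code): three anomaly layers with assumed heat capacities and inter-layer heat-transfer coefficients, a net feedback parameter, and a crude linear stand-in for the forcing ramp over 1970-2021. The diagnosed ECS is simply the 2xCO2 forcing divided by the net feedback parameter.

```python
# Toy 3-layer version of this kind of model (my sketch, not Dr. Spencer's
# code). Layers: surface/mixed layer, 0-2000 m ocean, deep ocean. All
# coefficients and the linear forcing ramp are assumed illustrative values.
import numpy as np

SECONDS_PER_MONTH = 3.156e7 / 12
C = np.array([4.0e8, 8.0e9, 6.0e9])   # layer heat capacities, J m-2 K-1 (assumed)
k = np.array([0.5, 0.2])              # inter-layer heat transfer, W m-2 K-1 (assumed)
lam, F_2x = 1.8, 3.7                  # net feedback (W m-2 K-1) and 2xCO2 forcing (W m-2)

def run(forcing: np.ndarray) -> np.ndarray:
    """Euler-step the layer temperature anomalies; returns a (months, 3) array."""
    T, out = np.zeros(3), np.empty((len(forcing), 3))
    for m, F in enumerate(forcing):
        flux01 = k[0] * (T[0] - T[1])   # surface -> 0-2000 m ocean
        flux12 = k[1] * (T[1] - T[2])   # 0-2000 m -> deep ocean
        dT = np.array([F - lam * T[0] - flux01, flux01 - flux12, flux12])
        T = T + dT / C * SECONDS_PER_MONTH
        out[m] = T
    return out

months = 52 * 12                                  # 1970-2021
F = np.linspace(1.0, 2.8, months)                 # crude stand-in for the SSP245 ERF ramp
T = run(F)
decades = np.arange(months) / 120.0
print(f"surface trend ~ {np.polyfit(decades, T[:, 0], 1)[0]:+.2f} C/decade")
print(f"diagnosed ECS = F_2x / lam = {F_2x / lam:.2f} C")
```

In the real exercise the feedback parameter and transfer coefficients are the unknowns, varied over thousands of combinations until the modeled trends match the observed ones; the toy above fixes them so that the implied ECS lands near the post's 2.09 deg. C central estimate.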

For land surface temperature trends I used 5 “different” land datasets: CRUTem5 (+0.277 C/decade), GISS 250 km (+0.306 C/decade), NCDC v3.2.1 (+0.298 C/decade), GHCN/CAMS (+0.348 C/decade), and Berkeley 1 deg. (+0.280 C/decade).

For global average sea surface temperature I used HadCRUT5 (+0.153 C/decade), Cowtan & Way (HadCRUT4, +0.148 C/decade), and Berkeley 1 deg. (+0.162 C/decade).

For the deep ocean, I used the Cheng et al. 0-2000m global average ocean temperature (+0.0269 C/decade), and Cheng’s estimate of the 2000-3688m deep-deep-ocean warming, which amounts to a (very uncertain) +0.01 deg. C total warming over the last 40 years. The model must produce the surface trends within the range represented by those datasets, and produce 0-2000 m trends within +/-20% of the Cheng deep-ocean dataset trends.

Since deep-ocean heat storage is such an important constraint on ECS, in Fig. 3 I show the 1D model run that best fits the 0-2000m temperature trend of +0.0269 C/decade over the period 1970-2021.

Finally, the storage of heat in the land surface is usually ignored in such efforts. As mentioned above, climate models have embedded land surface models that extend to only 10 m depth. Yet, borehole temperature profiles have been analyzed that suggest warming up to 200 m in depth (Fig. 4).

This great depth, in turn, suggests that there has been a multi-century warming trend occurring, even in the early 20th Century, which the IPCC ignores and which suggests a natural source for long-term climate change. Any natural source of warming, if ignored, leads to inflated estimates of ECS and of the importance of increasing CO2 in climate change projections.

I used the black curve (bottom panel of Fig. 4) to estimate that the near-surface layer is warming 2.5 times faster than the 0-100 m layer, and 25 times faster than the 100-200 m layer. In my 1D model simulations, I required this amount of deep-land heat storage (analogous to the deep-ocean heat storage computations, but requiring weaker heat transfer coefficients for land and different volumetric heat capacities).

The distributions of diagnosed ECS values I get over land and ocean are shown in Fig. 5.

The final, global average ECS from the central estimates in Fig. 5 is 2.09 deg. C. Again, this is somewhat higher than the 1.5 to 1.8 deg. C obtained by Lewis & Curry, but part of this is due to larger estimates of ocean and land heat storage used here, and I would suspect that our use of only the most recent 50 years of data has some impact as well.

Conclusions

I’ve used a 1D time-dependent model of temperature departures from assumed energy equilibrium to address the question: Given the various estimates of surface and sub-surface warming over the last 50 years, what do they suggest for the sensitivity of the climate system to a doubling of atmospheric CO2?

Using the most recent estimates of effective radiative forcing from Annex III in the latest IPCC report (AR6), the observational data suggest lower climate sensitivities (ECS) than promoted by the IPCC, with a central estimate of +2.09 deg. C for the global average. This is at the bottom end of the latest IPCC (AR6) very likely range of 2.0 to 5.0 deg. C.

I believe this is still likely an upper bound for ECS, for the following reasons.

  1. Borehole temperatures suggest there has been a long-term warming trend, at least up into the early 20th Century. Ignoring this (whatever its cause) will lead to inflated estimates of ECS.
  2. I still believe that some portion of the land temperature datasets has been contaminated by long-term increases in Urban Heat Island effects, which are indistinguishable from climatic warming in homogenization schemes.

Urban Night Lighting Observations Demonstrate The Land Surface Temperature Dataset is ‘not fit for purpose’


Crossposted from https://wattsupwiththat.com/2022/12/18/urban-night-lighting-observations-challenge-interpretation-of-land-surface-temperature-observations/


Foreword by Anthony:

This excellent study demonstrates what I have been saying for years – the land surface temperature dataset has been compromised by a variety of localized biases, such as the heat sink effect I describe in my July 2022 report: Corrupted Climate Stations where I demonstrate that 96% of stations used to measure climate have been producing corrupted data. Climate science has the wrongheaded opinion that they can “adjust” for all of these problems. Alan Longhurst is correct when he says: “…the instrumental record is not fit for purpose.”

One wonders how long climate scientists can go on deluding themselves about this useless and highly warm-biased data. – Anthony


Guest essay by Alan Longhurst – From Dr. Judith Curry’s Climate Etc.

The pattern of warming of surface air temperature recorded by the instrumental data is accepted almost without question by the science community as being the consequence of the progressive and global contamination of the atmosphere by CO2. But if they were properly inquisitive, it would not take them long to see what is wrong with that over-simplification: the evidence is perfectly clear, and simple enough for any person of good will to understand.

In 2006 NASA Goddard published two plots showing that the USA data[1] did not follow the same warming trend as the rest of the world. Rural data numerically dominate the USA archive, while urban data massively dominate almost everywhere else. Observations began very early in the USA – being introduced by Jefferson in 1776 – and emphasis had already been placed then on providing assistance to farmers.

They are consistent with the ‘global warming’ that so worries us today being an urban affair, caused not by CO2 pollution of the global atmosphere but by the heat of combustion of the petroleum we burn in our vehicles, our homes and where we work – all of which is additive to the radiative consequences of our buildings and impermeable cement and asphalt surfaces. However, towns and cities in fact occupy only a very small fraction of the land surface of our planet: about 0.53% (or 1.25% if their densely populated suburbs are included) according to a recent computation done with rule-based mapping. But it is in this very small fraction of the land surface that most of the data in the CRUTEM or GISTEMP archives have been recorded.

Consequently, very few surface air temperature observations have been made in the small villages which, with their farms and grazing lands, are scattered through the otherwise uninhabited grassland, forest, mountain, desert and tundra. Nor is it widely understood that our presence there has been associated with progressive change since the introduction of steel and steam to plough the grasslands and to cut forests for timber.[2]

A measure of the brightness or intensity of night lighting, the BI index, was derived by NASA from the work of Mark Imhoff, who calibrated and ranked night lights in seven stable classes – one rural, two peri-urban and four urban.[3] The BI index for the airport of Toulouse is 59, and that for the central district of Cairo is 167. Care must be taken with apparent anomalies such as that of Millau, an active little town of 20,000 people which nevertheless has a BI = 0, as does Gourdon, which has only 4,000. This is because the MeteoFrance instruments at Millau have been placed on a bare hilltop on the far side of a deep, unbuilt valley adjacent to the town, so they record only the conditions of the surrounding countryside.

It is not only in major cities that the effects of urbanisation can be detected; the effect also appears in data from some very small places that would otherwise be considered rural, as at Lerwick, a port in the Shetland Islands with a population of <7,000. Here, the GHCN-M data from KNMI show a warming of about 0.9°C over the period 1978-2018, while during the same period the day/night temperature difference increased by 0.3°C. Retention of heat at night is characteristic of urban warming.

But Gourdon, a compact little rural village not far from my home in western France, has a BI of only 7 for a population of only 3,900. It is situated in farmland that was abandoned 150 years ago when the vines died, and it is now given over to sheep, goats and scrub vegetation. Little hamlets in this region are now often dark at night, and their road signs may warn you that you are entering a ‘Starlit village’.

Despite its deep isolation, there is a manned MeteoFrance data station in Gourdon, which over a 60-year period has recorded a very gradual and small summer warming since the mid-20th century, associated with perfectly stable winter conditions.

Since buildings and human activity have undoubtedly changed at Gourdon over this long period, perhaps especially through the growth of rural tourism, this effect was probably predictable. The same is seen in data from other small places such as Lerwick, discussed above, whose population is about twice that of Gourdon.

The BI values for night lighting are in no way influenced by the fact that the thermometric data with which each is associated have later been merged with data from another station to achieve regional homogeneity. Consequently, it is appropriate to associate them with night-light data in the hope of isolating the effects of local combustion of hydrocarbons in towns and cities from what we must attribute to solar variation. The consequences of homogenisation of the surface air temperature data are avoided here by the use of GHCN-M data from the KNMI site – which are as close to the original observations, adjusted only for on-site problems, as it is now possible to get.

The urban warming phenomenon has been observed and understood for almost two hundred years.  Meteorologist Luke Howard (quoted by H.H. Lamb) wrote in 1833 concerning his studies of temperature at the Royal Society building in central London and also at Tottenham and Plaistow, then some distance beyond the town:

‘But the temperature of the city is not to be considered as that of the climate; it partakes too much of an artificial warmth, induced by its structure, by a crowded population, and the consumption of great quantities of fuel in fires: as will appear by what follows… we find London always warmer than the country, the average excess of its temperature being 1.579°F… a considerable portion of heated air is continually poured into the common mass from the chimnies; to which we have to add the heat diffused in all directions, from founderies, breweries, steam engines, and other manufacturing and culinary fires.’ [4]

To Luke Howard’s list must now be added the consequences of the combustion of hydrocarbon fuels in vehicles, mass transport systems, power plants and industrial enterprises located within the urban perimeter, together with cement/asphalt surfaces and their relative contributions by day and night.[5]

The energy budget of the agglomeration of Toulouse in southern France is probably typical of such places: anthropogenic heat release is of order 100 W/m2 in winter and 25 W/m2 in summer in the city core, and somewhat less in the residential suburbs. Observations of the resulting evolution of surface air temperatures in central Toulouse are compatible with the effect anticipated from a seasonal inventory of all heat sources. Below the urban canopy layer, a budget for heat production and loss through advection into surrounding rural areas has been computed, and this loss is found to be important under some wind conditions. In this and many other urbanisations there is also an important seasonality of heat release by passing road traffic, which forms a major component of the heating budget, since national highway systems commonly pass close to major centres of population.[6]

Larger cities, larger effects: in the core of the city of Tokyo during the 1990s the seasonal heat flux range was 400-1600 W/m2, and the entire Tokyo coastal plain appears to be contaminated by urban heat generated within the city, especially in summer, when warming may extend to 1 km altitude, much higher than the simple nocturnal heat island over large cities.[7] The long-term evolution of urban climates is well illustrated in Europe where, in the second half of the 20th century, their natural association with regional climate was abruptly replaced by a simple warming trend that took them almost 2°C above the baseline of the previous 250 years.

Although, globally, the energy from urban heat is equivalent to only a very small fraction of the heat transported in the atmosphere, models suggest that it may be capable of disrupting natural circulation patterns sufficiently to induce distant as well as local effects on the global surface air temperature pattern. Significant release of this heat into the lower atmosphere is concentrated in three relatively small mid-latitude regions – eastern North America, western Europe and eastern Asia. Its inclusion (as a steady input at the 86 model points where observations of fossil fuel use suggest it exceeds 0.4 W/m2) has been tested in the NCAR Community Atmospheric Model CAM3.

Comparison of the control and perturbation runs showed significant regional effects from the release of heat from these three regions. In winter at high northern latitudes, very significant temperature changes are induced: according to the authors, ‘there is strong warming up to 1 K in Russia and northern Asia…. the north-eastern US and southern Canada have significant warming, up to 0.8 K in the Canadian Prairies’.

The suggestion that the global surface air temperature data – on which the hypothesis of anthropogenic climate warming hangs – are heavily contaminated by other heat sources is not novel. The map below shows the locations of 173 stations used by McKitrick and Michaels for a statistical analysis of the contamination of the global temperature archives by urban heat, with which they rejected the null hypothesis that the spatial pattern of temperature trends is independent of socio-economic effects – which was, and still is, the position taken by the IPCC, for which McKitrick was then a reviewer.[8]
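In outline, that kind of test can be sketched as a simple regression (my illustration, not McKitrick & Michaels' actual method): regress station warming trends on the night-light brightness index and ask whether the fitted slope differs from zero. The data below are synthetic stand-ins for the real station files.

```python
# Sketch of this kind of test (my illustration, not McKitrick & Michaels'
# actual method). Under the null hypothesis that trends are independent of
# urbanization, the fitted slope should be near zero. Synthetic stand-in data.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
bi = rng.integers(0, 180, size=300).astype(float)        # night-light BI index
trends = 0.5 + 0.004 * bi + rng.normal(0.0, 0.5, 300)    # °C/century, synthetic

fit = stats.linregress(bi, trends)
print(f"slope: {fit.slope:+.4f} °C/century per BI unit (p = {fit.pvalue:.1g})")
```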

In the present context, this study seemed worth repeating, so a file of 31 clusters of BI indices was gathered from the ‘Get Neighbours’ lists that are shown when accessing GISTEMP data. These clusters comprise 1,200 data files representing 776 towns or cities and 424 rural places – of which 355 are totally dark at night. They therefore represent a wide range of individual station histories – many longer than 100 years – and are sufficient for the task. Just 53 of the rural sites listed are in Western Europe, the remainder being located in the vast, night-dark expanses of Asia – where the cluster based on the arctic island of Novaya Zemlya includes only three stations with significant night lights, of which one is the city of Murmansk.

The cluster centred southeast of Lake Baikal includes two cities (of 329,000 and 212,000 inhabitants, yet with BIs of only 28 and 13) together with 39 small places – of which 28 are totally dark at night – while the cluster immediately to the west of Baikal includes 19 such places.   But not all bright locations have large populations, because intensive industrial farms – solar-panel energised – can dominate regional night lighting, as they do in some Gulf States: there a single experimental farm generates a BI of 122, while the 3,012 people who live at Shiwaik generate a BI of 181.

The map below indicates the central locations of 30 of the clusters in relation to the distribution of native vegetation type.[9]

Central stations of each cluster

(Columns: cluster radius in km; number of stations with BI = 0; number with BI > 25; number in places with population below 1,000; latitude and longitude of the central station in degrees. A ‘?’ marks values missing in the source.)

     Place name                          Radius km   BI=0   BI>25   Npop<1K     Lat     Long
 1   Gourdon, France                          288      5       1        6      44.7     1.4
 2   Valentia Observatory, S. Ireland         400     14       2       14      51.9    10.2
 3   Santiago de Compostela, Spain            406      7      23        2      42.9     6.4
 4   Muenster, Germany                        109      1       7        0      52.4     7.7
 5   Innsbruck, Austria                       107      9       2        4      47.3    11.4
 6   Bursa, Turkey                            224     12       1        2      40.4    25.1
 7   El Suez, Egypt                           532      7      21        0      25.4    32.5
 8   Abadan, Iran                             628      6      17        0      30.4    48.5
 9   Gdov, Russia                             224     14       5       10      58.7    27.5
10   Saransk, W. Russia                       434      9       9        1      54.1    45.2
11   Tobolsk, Russia                          482      8       7        5      58.1    68.2
12   Lviv, Ukraine                            293     10       5        2      49.8    23.9
13   Simferopol, Crimea                       397     14       4        2      44.7    34.4
14   Tulun, Russia                            485     19       4        9      54.0    98.0
15   Tatarsk, Russia                          308     14       1        6      55.2    75.9
16   Krasnojarsk, Russia                      391     13       2        7      56.0    92.7
17   Ostrov Golomyanny, Russia                277     38       2       24      79.5    90.6
18   Malye Karmakuly, Russia                   82     30       1       23      72.4    52.7
19   Kokshetau, Kazakhstan                    460     15       3        2      53.3    69.4
20   Cardara, Kazakhstan                      212     12       0        1      41.3    68.0
21   Nagov, China (Tibet)                     696     30       0        4      31.4    92.1
22   Selagunly, Russia                        846     26       0        5      66.2   114.0
23   Loksak, Russia                           493     31       0       11      54.7   130.0
24   Gyzylarbat, Turkmenistan                 636     20       5        5      38.9    56.3
25   Ust Tzilma, Pechora Basin                451     16       1        7      65.4    52.3
26   Cape Kigilyak, N. Siberia               1055     37       0        9      73.3   139.9
27   Dashbalbar, Mongolia                     435     29       1        6      49.5   114.4
28   Guanghua, China                          465     17       2        ?      32.3   111.7
29   Youyang, China                           417     26       0        ?      28.3   108.7
30   Poona, W. India                          681      4       7        0      18.5    73.8
31   C. India                                 601      1      17        0      23.2    71.3
32   Mai Sariang, Burma                        57     10       4        1      18.2    97.9
33   Central Japan                            203      5      13        1      34.4   132.6

These data may be used to investigate the supposed warming of Europe and Asia that so worries the public.  In far-eastern Russia and neighbouring territories, 8 clusters are listed which include 296 place-names lacking any night-lighting at all, together with just five small towns having night-light indices of only 1.  In such places it is the natural cycle of climate conditions – modified locally by progressive anthropogenic change in ground cover – that dominates the pattern of air temperature, and in rural regions there is a rather simple relationship between population size and BI, as the sketch below illustrates.
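A minimal sketch of how that relationship might be checked, assuming the (population, BI) pairs have been copied out of the cluster lists; the sample values here are invented placeholders, not the collected data.

```python
# Log-linear check of the claimed population-BI relationship. The sample
# values are invented placeholders; real inputs would be the (population, BI)
# pairs taken from the cluster lists.
import numpy as np

population = np.array([300, 1_000, 3_000, 10_000, 50_000, 200_000])
bi         = np.array([  0,     2,     8,     20,     60,    140])

# Straight-line fit of BI against log10(population)
slope, intercept = np.polyfit(np.log10(population), bi, 1)
print(f"BI ~ {slope:.1f} * log10(pop) {intercept:+.1f}")
```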

Towns and villages occupy only a very small fraction of the continental land surface of our planet, currently about 0.53% – or 1.25% if their densely-populated suburbs are included – according to a recent study using rule-based mapping.  Although it is peripheral to the present discussion, it must be emphasised that conditions in the sparsely-inhabited rural or natural regions are not static at secular scale – everywhere, including in Asia, grasslands and prairies have been grazed or ploughed, and forests clear-cut and replaced with secondary growth.

Consequently, the distribution of population is highly aggregated and associated – as it must be – with regional economic development.  This is illustrated in the images below, which show that in western Europe access to the sea is critical, as it is in Japan, while in night-dark Ukraine and Russia it is in the zones of temperate broadleaf forest and temperate steppe that settlement and urban development have been most active.[10]  The arctic tundra belt is very sparsely populated but does include a few industrialised cities, of which Archangelsk is the largest.

As noted above, comparison of the CAM3 control and perturbation runs shows that the release of combustion heat from these three regions has important distant effects on the modelled SAT pattern, especially in winter at high northern latitudes.  In northern North America in particular, where the instrumental record is excellent, this effect should be readily observable; there, as elsewhere, night lighting is highly aggregated and associated – as it must be – with regional economic development.[11]

In eastern Asia, 8 clusters include 268 places that are dark at night, together with just 47 having some night-lighting, mostly of intensity below 20; they include only one city (BI = 153).   In such regions it is the multi-decadal cycle of solar activity that dominates the evolution of air temperature, modified by local effects of change in vegetation and ground cover.

But it is really a misuse of the term ‘rural’ to apply it to the small inhabited places scattered across northern Asia, for this implies some similarity with landscapes such as that surrounding Gourdon, devoted now or in the past to farming and herding.  Small villages in Asiatic Russia have nothing to do with rurality: their houses and streets have simply been set down in natural terrain – in the wildlands, if you will – that is subsequently ignored; there are no crops, gardens or greenhouses, and the activities of the population are not clear.  The wide unpaved streets bear very few motor vehicles, and there is no street lighting.  Many are described as administrative centres and some have a small dirt runway for light aircraft, while a few seem not to be connected to the rest of the world even by seasonal dirt roads.

Here are two small places in northern Siberia with very different seasonal temperature regimes, of which one is clearly well on its way to urbanisation.  Each lies between 65°N and 70°N on the banks of the river Lena.

Zhigansk is a long-settled little town founded in 1632 by Cossacks sent to pacify and tax the region; it is now an administrative centre housing 3,500 people, laid out beside the river on a rectangular grid.  It has no road access to the outside world until the Lena freezes in winter.

Kjusjur, just south of the mouth of the Lena in a subarctic environment, was founded in 1924 as the administrative centre for the region and has a population of 1,345; routine meteorological observations began in 1924 and continue today.  About 100 small houses and one larger building are set on unpaved streets beside the stony bank of the river; it has neither a runway nor a river landing place, but rough tracks leave the settlement to north and south and must be impassable for much of the year.[12]

Two motor vehicles can be seen in Kjusjur and a few small boats are pulled up on the beach, while there are about ten motor vehicles in Zhigansk; neither place has any street lighting.  Zhigansk has a dirt airstrip with a radar installation that perhaps also houses the meteorological station.   Each has a temperature regime appropriate to its situation, and although it was what I was looking for, I am surprised by the strength of the response to urbanisation at Zhigansk.   I was also expecting that each would respond – at least in very general terms – to solar forcing, and so it does: the cooling of the 1940s and 1950s, which in those years caused so much concern about a coming glaciation, is clear.

A compilation of Arctic data and proxies took 64°N as the limit of the Arctic region, within which 59 stations were used to analyse the pattern of regional co-variability of SAT anomalies, based on PCA techniques.[13]   This demonstrated a quasi-periodicity of 50-80 years in ice cover in the Svalbard region: at least eight previous periods of relatively low ice cover can be identified back to about 1200.
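A minimal sketch of that kind of PCA, with a random matrix standing in for the 59 Arctic anomaly series; via the SVD, the leading shared mode and its explained variance drop out directly.

```python
# Sketch of a PCA of co-variability across station anomaly series: 59 Arctic
# stations in rows, years in columns. The random matrix is a stand-in for
# real SAT anomalies.
import numpy as np

rng = np.random.default_rng(0)
anoms = rng.standard_normal((59, 100))   # placeholder: 59 stations x 100 years

# Centre each station series, then take the SVD; rows of vt are the
# principal-component time series, and s**2 gives the variance in each mode
centred = anoms - anoms.mean(axis=1, keepdims=True)
u, s, vt = np.linalg.svd(centred, full_matrices=False)

explained = s**2 / np.sum(s**2)
pc1 = vt[0]                              # time series of the leading mode
print(f"Variance captured by leading mode: {explained[0]:.1%}")
```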

Hindcasting climate states is not easy: a recent synthesis of tree-ring data from the Yamal peninsula rashly states that in Siberia ‘industrial era warming is unprecedented…. elevated summer temperatures above those…for the past seven millennia‘.  However, documents and observations show that this is one generalisation too far.  In summer 1846, as recorded by H.H. Lamb, warming across the Arctic extended from Archangel to eastern Siberia, where the captain of a Russian survey ship noted that the River Lena was hard to locate in a vast flooded landscape and could be followed only by the ‘rushing of the stream’, which ‘rolled trees, moss and large masses of peat’ against his ship, whose crew secured from the flood ‘an elephant’s head’.

The temperature reconstruction below is from the annual growth of larches on the Yamal peninsula at the mouth of the Ob.[14]  It testifies that the early decades of the 19th century did indeed include a period of very cold conditions on the arctic coast, while supporting the reality of periods of warmth likely to have caused melting of the permafrost of tundra regions.
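For context, here is a hedged sketch of the generic calibration step behind such tree-ring reconstructions – regress instrumental summer temperature on a ring-width index over the overlap years, then apply the fit to the whole chronology. The numbers are placeholders, not the Yamal data.

```python
# Generic calibration step for a ring-width temperature reconstruction:
# fit instrumental summer temperature against a ring-width index over an
# overlap period, then apply the fit to the full chronology. All values
# below are invented placeholders, not the Yamal data.
import numpy as np

ring_index = np.array([0.8, 1.1, 0.9, 1.3, 1.0, 0.7, 1.2])      # overlap years
summer_T   = np.array([9.5, 10.4, 9.8, 11.0, 10.1, 9.2, 10.7])  # degrees C

slope, intercept = np.polyfit(ring_index, summer_T, 1)
print(f"T ~ {slope:.2f} * index + {intercept:.2f}")

# Apply the calibration to a (placeholder) pre-instrumental index value
print(f"Reconstructed T for index 0.6: {slope * 0.6 + intercept:.1f} C")
```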

In any case, irruptions of warm Atlantic water into the eastern Arctic – including the present one – are well recorded in the archives of whaling, sealing and the cod fisheries.  The present period of a warm Arctic climate is therefore not novel: there is an abundant record from the cod fisheries in the Barents Sea and beyond, not to speak of the documentation of the intermittence of open seas left by the sealers and whalers of northern waters.

The surface air temperature data are dominated by observations made in towns and cities, so that the secular evolution of the record is determined neither by the gaseous composition of the atmosphere nor by solar radiation: instead, it is dominated by the consequences of our ever-increasing combustion of fossil hydrocarbons in motor cars, public transit and home heating systems, as well as in the industrial plants and factories where most of us must work.  To this must be added the daily accumulation of solar heat in the stonework or cement of our buildings facing each other along narrow passages.

One conclusion is unavoidable from this simple exploration of the surface air temperature archive: as used today by the IPCC and the climate-change science community, the instrumental record is not fit for purpose.  It is contaminated by data obtained from that tiny fraction of Earth’s surface where most of us spend our brief span of years indoors.

Footnotes

[1] Hansen, J. et al., NASA press release, and J. Geophys. Res. 106 (D20), 23947-23963.

[2] Ellis, E.C. et al. (2010) Glob. Ecol. Biogeog. 19, 589-606

[3] R.A. Ruedy (pers. comm.); see GISS notice dated Aug 28, 1998, at the Sources website.

[4] from H.H. Lamb

[5] See, for example, Li, X. et al. (2020) Sci. Data 7, 168-177.

[6] Pigeon, G. et al. (2007) Int. J. Climat. 27, 1969-1981

[7] Ichinose, T. et al. (1999) Atmos. Environ. 33, 3897-3909; Fujibe, F. (2009) 7th Int. Conf. Urban Clim., Yokohama.

[8] McKitrick, R. and Michaels, P.J. (2004) Clim. Res. 26 (2), 159-173, and (2007) J. Geophys. Res. (27), 265-268.

[9] Map from Gao, J. and O’Neill, B.C. (2020) Nature Communications 11: 2302, https://doi.org/10.1038/s41467-020-15788; image from eoimages.gsfc.nasa.gov.

[10] Map from Gao, J. and O’Neill, B.C. (2020) Nature Communications 11: 2302, https://doi.org/10.1038/s41467-020-15788; image from eoimages.gsfc.nasa.gov.

[11] Ellis, E.C. et al. (2010) Glob. Ecol. Biogeog. 19, 589-606, and ‘Anthropogenic Biomes: 10,000 BCE to 2015 CE’, https://doi.org/10.3390/land9050129.

[12] Images from Google Maps software

[13] Overland, J.A. et al. (2003) J. Clim. pp-pp.

[14] Polyakov, I.V. et al. (2003) J. Clim. 16, 2067-2077.

There is no looming Climate Catastrophe


Crossposted from https://wattsupwiththat.com/2022/12/18/data-shows-theres-no-climate-catastrophe-looming-climatologist-dr-j-christy-debunks-the-narrative/

Video: https://youtu.be/qJv1IPNZQao

Dr John Christy, Distinguished Professor of Atmospheric Science and Director of the Earth System Science Center at the University of Alabama in Huntsville, has been a compelling voice on the other side of the climate change debate for decades. Christy, a self-proclaimed “climate nerd”, developed an unwavering desire to understand weather and climate at the tender age of 10, and remains as devoted to understanding the climate system to this day. Using data sets built from scratch, Christy, together with other scientists including NASA scientist Roy Spencer, has been testing the theories generated by climate models to see how well they hold up to reality. Their findings? On average, the latest models warm the deep layer of the atmosphere about twice as fast as observations show, presenting a deeply flawed and unrealistic representation of the actual climate. In this long-form interview, Christy – who receives no funding from the fossil fuel industry – provides data-substantiated clarity on a host of issues, further refuting the climate crisis narrative.

What if CO2 gas cannot exist at the top of the atmosphere?



What If Real-World Physics Do Not Support The Claim Top-Of-Atmosphere CO2 Forcing Exists?

By Kenneth Richard on 22. December 2022

The longstanding claim is that CO2 (greenhouse gas) top-of-atmosphere (TOA) forcing drives climate change. But it is too cold at the TOA for CO2 (or any greenhouse gas) to exist.

Image Sources: Schneider et al., 2020, NASA, UCAR, CGA

TOA greenhouse gas forcing is a fundamental tenet of the CO2-drives-climate-change belief system. And yet the “global-mean longwave radiative forcing of CO2 at TOA” (Schneider et al., 2020) may not even exist.

It is easily recognized that water vapor (greenhouse gas) forcing cannot occur beyond a certain temperature threshold, because water freezes out as H2O moves farther from the surface’s warmth.

According to NASA, the TOA is recognized as approximately 100 km above the surface. The temperature near that atmospheric height is about -90°C.

CO2 is in its solid (dry ice) form at -78°C and below.

Therefore, TOA CO2 radiative forcing cannot exist if CO2 cannot be a greenhouse gas at the TOA.

Supreme Court delves into North Carolina redistricting case with significant election implications


https://justthenews.com/nation/states/center-square/supreme-court-hear-arguments-wednesday-north-carolina-redistricting

Case focuses on how much authority state legislatures have over elections and whether courts can intervene.

The U.S. Supreme Court is set to hear oral arguments in a North Carolina redistricting case on Wednesday that could have broad implications for both the state and the nation.

Justices will hear Moore v. Harper on Wednesday to determine whether state courts can override lawmakers in the redistricting process.

North Carolina Republicans argue in a brief “the text of the Elections Clause provides the answer: it assigns state legislatures the federal function of regulating congressional elections.”

Justices on the Democrat-controlled state Supreme Court cited state constitutional provisions in blocking the congressional districts created by the General Assembly last year and imposing a different map devised by a court-appointed “special master.”

But Republicans argue that because the Constitution’s Election Clause “is supreme over state law, the States may not limit the legislature’s discretion.”

The outcome of the case will impact redistricting efforts in states across the country, and in the process could shift the balance of political power. But critics contend the case could reverberate into other decisions as well.

A brief submitted by the League of Women Voters argues a ruling in support of North Carolina Republicans’ independent state legislature theory “would not be confined to redistricting plans, applicable to only particular offices or legislative bodies.”

“Adopting the petitioners’ theory would open the door to the retroactive abrogation of all state court rulings that have invoked state constitutional grounds to strike down state statutes – but only as to federal elections,” the brief read.

In North Carolina, the 2022 congressional map adopted by the state Supreme Court resulted in a congressional delegation split 7-7 between Republicans and Democrats in the November election. Critics of the 2022 map approved by the General Assembly contend it would have produced 10 Republican seats in Congress. Republicans in the General Assembly plan to redraw the map in 2023, and the case will weigh on that process.

Republicans argue state courts have no role in policing federal elections.

“The Framers (of the U.S. Constitution) could have assigned the power over federal elections in the first instance to states, without specifying which entity of state government would have primary responsibility,” the brief argues. “But recognizing that prescribing the times, places, and manner of federal elections is fundamentally a legislative role, the Framers specified that this delegated power would be exercised by ‘the Legislature thereof.'”

The November election shifted control of the North Carolina Supreme Court from a 4-3 Democrat majority to a 5-2 Republican majority, suggesting that regardless of the outcome of Moore v. Harper, the state’s high court may be more likely to uphold a map from the Republican-controlled General Assembly.

The case will have an impact on the General Assembly’s ability to impose rules for free and fair federal elections, such as voter identification laws opposed by Democrats, as well as mail voting and voter registration rules.

Common Cause North Carolina contends a positive outcome for Republicans would “overturn our historic victory against gerrymandering from earlier this year and unleash voter suppression against the people of North Carolina.”

“It would make it harder to turn public demands into public policy that reflects the needs and values of our communities,” according to an announcement urging followers to “stop North Carolina’s most brazen power grab yet.”