FBI Agent Testimony: Warnings of a “hack and leak” ahead of the Hunter Biden revelations


https://technofog.substack.com/p/fbi-agent-testimony-warnings-of-a

The Reactionary


Analyzing the deposition of FBI Supervisory Special Agent Elvis Chan

Techno Fog




Today, we learned more about the FBI’s influence operation during the 2020 election. Yesterday, as part of a civil rights suit against the Biden Administration, Missouri Attorney General (and Senator-elect) Eric Schmitt released the deposition transcript of Dr. Anthony Fauci.

And just now, he posted the transcript of FBI Supervisory Special Agent Elvis Chan. Read it here.

The importance of SSA Chan can’t be overstated. He was on the front lines of the FBI’s efforts to curb speech on social media during the 2020 election. It was Chan who reportedly held “weekly meetings with major social media companies to warn against Russian disinformation attempts ahead of the 2020 election.”

Chan’s testimony provides insight into these efforts. Here are the highlights.

Chan and the 2016 DNC “Hack”

For starters, just so you get an idea of who you’re dealing with, Chan is a firm believer in the still-unproven theory that Russia hacked the DNC/DCCC and then leaked those materials “over the course of the 2016 election.” It gets better: Chan was the supervisor of a squad that helped investigate the 2016 DNC hack.

One can’t help but ask whether Chan, a “supervisor,” could have obtained the DNC server, or whether he even thought to request it. As observed by our friend Stephen McIntyre, Chan was in contact with DNC/Hillary campaign lawyer Michael Sussmann about that hack.

Stephen McIntyre @ClimateAudit

Elvis Chan, the FBI agent named in connection with Facebook suppression of Hunter laptop coverage, was one of Michael Sussmann’s FBI contacts. In late Sept 2016, he was involved in assessing the zip file published by Guccifer 2 in Sept/Oct 2016. Small world. (Sussmann DX-147)


Julie Kelly 🇺🇸 @julie_kelly2

What a news day–FBI agents who warned Facebook to suppress coverage of Hunter Biden laptop identified in new court filing. Laura Dehmlow of the “Foreign Influence Task Force” and Elvis Chan, cyber analyst in San Francisco FBI field office. https://t.co/dj58tiO6yx

In fact, Chan believes Russia could have influenced the 2016 presidential election:

The 2020 Election – Security Meetings between the US Government and Social Media Companies.

Leading up to the 2020 election, Chan was present during meetings between social media and tech companies, such as Facebook, Microsoft, Google, Twitter, Yahoo, and Reddit, and the U.S. Government, which was represented by CISA (Cybersecurity and Infrastructure Security Agency), Office of the Director of National Intelligence (ODNI), Department of Homeland Security, and the FBI.

The FBI and other agencies would provide the social media/tech companies with “strategic information” regarding foreign – and specifically Russian – “influence campaigns.” One example of this information sharing had to do with the Russian company “Internet Research Agency” (a troll farm that was indicted by Mueller before those charges were ultimately dismissed by Barr), which had moved their operations to Ghana and Nigeria. Chan and the FBI believe the Internet Research Agency “is trying to make inroads in western Africa.” A warning to the West: your memes are from Russians in Western Africa.

Chan testified that once the social media companies get this information they take down the accounts. The FBI doesn’t “control” what the companies do; they just provide “information” so the social media companies can take whatever steps they deem appropriate. One of those appropriate steps – one of those ways to “protect their platforms” – is to take down those accounts. In fact, Chan concluded in his thesis that the US government essentially assisted with “account takedowns.” It was a joint effort.

In other words, the social media companies don’t need directions from the US government to remove content because there’s an understanding between the parties. This would include content that the US government deemed to be foreign (Russian) “social media influence campaigns” that focus on current events or “amplify existing content.” This is all for the Russian government’s purpose to “sow discord”:

Chan also explained how the FBI would share the “disinformation” or “misinformation” with social media companies. It would take place around the time of the quarterly meetings, or more frequently through secure e-mails if the FBI field offices thought it necessary. For example, the FBI might notify Facebook that a certain IP address is associated with the Internet Research Agency. The accounts flagged by the FBI are always removed by the social media companies, in large part because of pressure from Congressional Committees. As explained by Chan:

Around this same time, there were visits from Congressional staffers to pressure social media companies. Senior-level staffers have even visited Facebook, Google, and Twitter as part of these influence – or censorship – campaigns.

Chan continued:

Information about the 2020 Election and the suppression of American content.

During the 2020 election, the FBI’s San Francisco field office had an election “command post,” which flagged “disinformation” regarding “the time, place or manner of elections in various states.” The flagged content would be referred to the applicable social media platform where it was posted. It didn’t matter whether the content was from Americans or from a foreign actor. Chan explained that the FBI was conducting these actions upon instructions from the DOJ, which had informed them “that this type of information was criminal in nature.”

The social media companies sometimes disagreed: roughly half the time they wouldn’t remove the flagged content. Chan estimated a 50% success rate.

The Purported 2020 Russian “hack and leak” operation.

There’s a lot to cover here, and hopefully I provided enough background to get to the juicier stuff. Or at least the content that applies, in some ways, to the more concerning suppression of information during the 2020 election and after, such as the Hunter Biden laptop and COVID-19 information.


Here’s Chan discussing FBI warnings about a “2016-style DNC hack-and-dump operation” and instructions to “stay vigilant” about similar operations that might take place “before the [2020] election”:

Chan didn’t warn the companies based on actionable intelligence. Instead, the FBI gave this warning multiple times out of “an abundance of caution” and based on what allegedly transpired in 2016 with the DNC/DCCC hack.

At around that time, Chan wasn’t aware that the FBI had the Hunter Biden laptop in its possession. He only became aware when this information was published by news outlets. Hunter Biden, according to Chan, was never mentioned in the FBI meetings with the social media companies. Facebook, however, asked about the Hunter Biden information. The FBI’s response? “No comment.” Chan explains:

Chan was presented with a declaration from Yoel Roth, the then-Twitter Head of Trust and Safety, in which Roth stated he was informed by people in the intelligence community to “expect” attacks on individuals linked to political campaigns. Chan’s recollection differed: he stated there was only the potential for such attacks.

Roth also stated in his sworn declaration there were rumors “that a hack-and-leak operation would involve Hunter Biden.” Chan couldn’t recall Hunter Biden ever being mentioned in those meetings.

Q: How would you interpret what he said when he says he learned that there were rumors that a hack-and-leak operation would involve Hunter Biden? What do you think he’s referring to?

A: Yeah, in my estimation, we never discussed Hunter Biden specifically with Twitter.

Chan also claimed ignorance of Roth’s contention that Twitter’s belief the Hunter Biden materials could have been hacked was based on “the information security community’s initial reactions.” Chan wasn’t sure if this included the FBI, or if Twitter had reached out to the FBI about the Hunter Biden information.

But – if the material were “hacked” – the social media companies were put on notice that they needed policies to address that situation. A harmless question? More of a hint to get social media companies to agree to remove the content. Especially because the FBI had plans to ask the social media companies themselves to remove hacked content.

I hope it’s clear what happened. There’s no smoking gun – no direct e-mail from Chan or from the FBI to Twitter or Facebook, from what we’ve seen, to remove the Hunter Biden story. That’s by design: there didn’t need to be. The instructions from the US government about “hack and leak” operations were quite clear, and the agency did nothing to dissuade social media companies from “believing” the Hunter Biden materials were hacked. The beauty of this plan, if you can call it that, was that the FBI and Twitter (and Facebook) all gave themselves cover by pointing the finger at the other.

In a close election, that’s what we call tipping the scales.

Glenn Greenwald @ggreenwald

It’s a bit difficult to maintain the US Security State had no role in Twitter’s censorship regime when *the General Counsel of the FBI* – centrally involved in Russiagate and all sorts of politicized abuses – ended up as Twitter’s Deputy General Counsel, with paws in everything.

Matt Taibbi @mtaibbi

We can now tell you part of the reason why. On Tuesday, Twitter Deputy General Counsel (and former FBI General Counsel) Jim Baker was fired. Among the reasons? Vetting the first batch of “Twitter Files” – without knowledge of new management.

In related news… Twitter Deputy General Counsel (and former FBI General Counsel) James Baker has been “exited.” The story is still developing, but part of the reason appears to be that he was “vetting” the Twitter Files, which show FBI/Twitter communications and which informed the decision to ban the Hunter Biden story. More to come as that story develops…




Musk Unveils ‘Twitter Files’


Newsmax TV Published originally on Rumble on December 5, 2022 

Elon Musk’s release of the “Twitter Files” confirms what Republicans have already been saying: the social media giant was colluding with the FBI and the 2020 Biden campaign.

Twitter Weaponizes Against Dems ReeEEeE Stream 11-25-22


TheSaltyCracker Published originally on Rumble on November 25, 2022

This is what they deserve.

DOJ Once Again Changes Trump Seizure Evidence List Dropping “Empty Classified Folders”, and Continues Refusing to Give President Trump Lawyers the Affidavit Used for Search Warrant


Posted originally on the conservative tree house on November 25, 2022 | Sundance

In a recent court filing [Document Here] President Trump, through his legal counsel, has asked Judge Cannon to unredact and unseal the search warrant affidavit used as the predicate for the FBI raid on Mar-a-Lago.  Apparently, the DOJ has yet to provide President Trump with the constitutionally required predicate documents to support its search.

Additionally, the DOJ previously leaked to the media that “empty folders with classified banners” were part of the evidence cache it collected.  According to the filing, the DOJ has since presented three different versions of its evidence collection list, with the most recent list dropping any claims of “two empty folders with classified banners.”

[Source, page 4]

While asking the court to provide the affidavit to the defense team, the lawyers for President Trump note that the Fourth Amendment protects everyone against warrantless searches and seizures, and that the same protection also guarantees the target the right to receive and review the claimed justification for the warrant.

The unredacted affidavit must be supplied so that the defense can determine whether the search warrant was legally valid and properly predicated.  General search warrants are not legally permitted; the warrant must specify what is being searched and why.  The DOJ is fighting against the affidavit’s release, and the Trump lawyers are asking the judge to make a decision.

[Source with complete filing]

The issue of compartmented (siloed) information, specifically as a tool and technique of the aloof DC system to retain control and influence, is a matter we have discussed on these pages for several years.

Quite literally anything can be classified as a ‘national security interest’ in the deep state effort to retain the illusion of power over the proles, i.e., us. It is the exact reason why Congress exempts itself from laws and regulations written for everyone else.

In this case we are watching the DOJ National Security Division (DOJ-NSD) deny the production of the material that supports the framework of their search warrant.  Again, if Main Justice has nothing to hide, then why is it not willing to stand openly behind the predicate for its search?

Greenies Need Lithium Exponentially


The Daily Chart: Greenies Need to Take More Lithium

So we’re supposed to make the transition to an all-electric future, with our homes, cars and factories all powered by “renewable” sources, and with the energy stored in batteries for when the wind doesn’t blow and the sun doesn’t shine. Never mind the “nameplate” capacity factors for wind and solar—have any of the “green energy” advocates done some elementary math on how much more lithium we’ll need to scale up batteries? Here’s one estimate:
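The chart behind that estimate is not reproduced here, but the flavor of the arithmetic can be sketched in a few lines of Python. Every input below (fleet size, lithium per battery pack, annual lithium production) is a rough, publicly quoted ballpark used purely for illustration, not a sourced figure:

# Back-of-envelope sketch only; all inputs are rough ballpark assumptions, not sourced data.
vehicles          = 1.4e9   # approximate global light-vehicle fleet
lithium_per_ev_kg = 8.0     # very rough lithium content of a ~60-70 kWh battery pack
annual_li_tonnes  = 1.0e5   # rough world lithium production (lithium content), early 2020s

needed_tonnes = vehicles * lithium_per_ev_kg / 1000.0   # kg -> tonnes
print(f"{needed_tonnes / 1e6:.0f} million tonnes of lithium for the road fleet alone")
print(f"{needed_tonnes / annual_li_tonnes:.0f} years of current production, before grid storage")

Even with generous assumptions, the required lithium works out to the order of a century of current production, which is the scale-up question being raised here.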

Ukraine, a Ponzi Scheme, and a Top Democrat Donor Raise Serious Questions


Repost

https://redstate.com/bonchie/2022/11/13/ukraine-a-ponzi-scheme-and-a-top-democrat-donor-raise-serious-questions-n658409


By Bonchie | 4:00 PM on November 13, 2022

AP Photo/J. Scott Applewhite, Pool

As RedState reported, crypto exchange FTX collapsed after its much-lauded founder, Sam Bankman-Fried, appeared to make improper transfers of customer money. Somewhere between $1 billion and $2 billion of that money has now gone missing, and Bankman-Fried has also disappeared.

What makes this so interesting, though, isn’t just that a lot of really wealthy people got scammed. It’s that Bankman-Fried also happens to be one of the top donors to the Democratic Party. In fact, outside of George Soros, no one has done more to bankroll Democrat efforts since the 2020 election. Joe Biden alone received a whopping $5.2 million.

But here’s where things get even weirder. Apparently, while the United States was bankrolling Ukraine and its war effort, that country’s leaders were investing money into FTX.

It was also revealed that FTX had partnered with Ukraine to process donations to their war efforts within days of Joe Biden pledging billions of American taxpayer dollars to the country. Ukraine invested into FTX as the Biden administration funneled funds to the invaded nation, and FTX then made massive donations to Democrats in the US.

There are so many questions that arise from this. For example, why is Ukraine, which we are all assured is broke and needs US taxpayer money, playing around with a Democrat-linked crypto company? This wasn’t just about accepting donations through the portal. The report specifically says that Ukraine actively invested money in FTX.

While that was happening, FTX’s founder was handing out tens of millions of dollars, from the Bahamas, to help elect Democrats back in the United States. That is one of the shadiest things I’ve ever witnessed in politics.

Yes, the chain of custody of the funds involved is difficult to establish. When and where money was sent is something only an investigation of FTX’s internal operation can ascertain. Still, the appearances here are just horrific. Were Democrats funneling taxpayer money to Ukraine, only for some of it to be sent to FTX so it could be funneled back to Democrat campaigns? That’s a question that must be answered, and any attempt to gloss over it will raise major red flags.

I don’t think I’m going out on a limb by suggesting that if another company had been scamming people while bankrolling the Republican Party, it would be major news. There would be calls for investigations as far as the eye could see to figure out whether Republican politicians were using that company as a passthrough to avoid campaign finance laws. Never mind that simply receiving funds from a Ponzi scheme, even without ill intent, is really bad on its own.

This entire situation stinks to high heaven. It appears that Republicans will end up taking the House of Representatives. When that becomes official, GOP members need to dive headfirst into this and figure out what in the world happened. Because having a Democrat mega-donor get exposed like this while also having Ukraine tied up in the mix is too much to ignore.

Front-page contributor for RedState. Visit my archives for more of my latest articles and help out by following me on Twitter @bonchieredstate.


Copyright ©2022 RedState.com/Salem Media. All Rights Reserved.

The Dirty Secrets inside the Black Box Climate Models


By Greg Chapman
“The world has less than a decade to change course to avoid irreversible ecological catastrophe, the UN warned today.” The Guardian Nov 28 2007
“It’s tough to make predictions, especially about the future.” Yogi Berra
Introduction
Global extinction due to global warming has been predicted more times than climate activist Leo DiCaprio has traveled by private jet.  But where do these predictions come from? If you thought they were just calculated from the simple, well-known relationship between CO2 and solar energy spectrum absorption, you would expect only about a 0.5 °C increase from pre-industrial temperatures as a result of CO2 doubling, due to the logarithmic nature of the relationship.
Figure 1: Incremental warming effect of CO2 alone [1]
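To make the “logarithmic nature” point concrete, here is a minimal sketch in Python using the widely quoted simplified forcing expression ΔF ≈ 5.35 ln(C/C0) W/m². The no-feedback sensitivity is left as a free parameter: the 0.13 value is chosen only so the output reproduces the roughly 0.5 °C per doubling cited above from footnote [1], while 0.30 is the more conventional Planck-response figure; neither is endorsed here.

import math

def co2_forcing(c_ppm, c0_ppm=280.0, alpha=5.35):
    # Simplified radiative forcing (W/m^2); alpha = 5.35 is the widely quoted coefficient.
    return alpha * math.log(c_ppm / c0_ppm)

# Each doubling adds the same forcing increment, regardless of the starting level.
for c in (280, 560, 1120):
    print(f"{c} ppm: {co2_forcing(c):.2f} W/m^2")

# Direct (no-feedback) warming scales with the assumed sensitivity (K per W/m^2).
for lam in (0.13, 0.30):
    print(f"lambda = {lam}: {lam * co2_forcing(560):.2f} C per doubling of CO2")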
The runaway 3-6 °C and higher temperature increases predicted by the models depend on coupled feedbacks from many other factors, including water vapour (the most important greenhouse gas), albedo (the proportion of energy reflected from the surface – e.g. more/less ice or clouds, more/less reflection) and aerosols, just to mention a few, which theoretically may amplify the small incremental CO2 heating effect. Because of the complexity of these interrelationships, they can’t be directly calculated, and the only way to make predictions is with climate models.
The purpose of this article is to explain to the non-expert how climate models work, rather than to focus on the issues underlying the actual climate science, since the models are the primary ‘evidence’ used by those claiming a climate crisis. The first problem, of course, is that no model forecast is evidence of anything. It’s just a forecast, so it’s important to understand how the forecasts are made, the assumptions behind them and their reliability.
How do Climate Models Work?
In order to represent the earth in a computer model, a grid of cells is constructed from the bottom of the ocean to the top of the atmosphere. Within each cell, the component properties, such as temperature, pressure, solids, liquids and vapour, are uniform.
The size of the cells varies between models and within models. Ideally, they should be as small as possible as properties vary continuously in the real world, but the resolution is constrained by computing power. Typically, the cell area is around 100×100 km2 even though there is considerable atmospheric variation over such distances, requiring each of the physical properties within the cell to be averaged to a single value. This introduces an unavoidable error into the models even before they start to run.
The number of cells in a model varies, but the typical order of magnitude is around 2 million.
Figure 2: Typical grid used in climate models [2]
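As a toy illustration of the grid-of-cells idea (far coarser than any real model, and purely for exposition), the model state is just a set of arrays holding one averaged value per property per cell:

import numpy as np

# Toy grid: one averaged value per property per cell, per the uniform-cell assumption above.
n_lev, n_lat, n_lon = 5, 18, 36                        # ~10 degree cells, 5 vertical levels
temperature = np.full((n_lev, n_lat, n_lon), 288.0)    # K
pressure    = np.full((n_lev, n_lat, n_lon), 1013.0)   # hPa
humidity    = np.full((n_lev, n_lat, n_lon), 0.6)      # fraction, 0-1

print(temperature.size, "cells here, versus roughly 2 million in a production model")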

Once the grid has been constructed, the component properties of each of these cells must be determined. There aren’t, of course, 2 million data stations in the atmosphere and ocean. The current number of data points is around 10,000 (ground weather stations, balloons and ocean buoys), plus we have satellite data since 1978, but historically the coverage is poor. As a result, when initialising a climate model starting 150 years ago, there is almost no data available for most of the land surface, poles and oceans, and nothing above the surface or in the ocean depths. This should be understood to be a major concern.
Figure 3: Global weather stations circa 1885 [3]
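A minimal sketch of that initialisation gap: below, a toy surface-temperature field is filled in from ten hypothetical “stations” by inverse-distance weighting. The method and numbers are illustrative only; the point is that almost every cell starts life as an interpolated guess rather than a measurement.

import numpy as np

rng = np.random.default_rng(0)
n_lat, n_lon = 18, 36
stations = rng.uniform(size=(10, 2)) * [n_lat, n_lon]   # 10 hypothetical station positions
obs = 288.0 + rng.normal(scale=5.0, size=10)            # their observed temperatures (K)

field = np.zeros((n_lat, n_lon))
for i in range(n_lat):
    for j in range(n_lon):
        d = np.hypot(stations[:, 0] - i, stations[:, 1] - j) + 1e-6
        w = 1.0 / d**2                                   # inverse-distance weights
        field[i, j] = np.sum(w * obs) / np.sum(w)
# Every one of the 648 cells now holds a value, but only 10 of them rest on data.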

Once initialised, the model goes through a series of timesteps. At each step, for each cell, the properties of the adjacent cells are compared. If one such cell is at a higher pressure, fluid will flow from that cell to the next. If it is at a higher temperature, it warms the next cell (whilst cooling itself). This might cause ice to melt or water to evaporate, but evaporation has a cooling effect. If polar ice melts, less energy is reflected, which causes further heating. Aerosols in the cell can result in heating or cooling and an increase or decrease in precipitation, depending on the type.
Increased precipitation can increase plant growth, as does increased CO2. This will change the albedo of the surface as well as the humidity. Higher temperatures cause greater evaporation from oceans, which cools the oceans and increases cloud cover. Climate models can’t model clouds due to the low resolution of the grid, and whether clouds increase surface temperature or reduce it depends on the type of cloud.
It’s complicated! Of course, this all happens in 3 dimensions and to every cell resulting in considerable feedback to be calculated at each timestep.
The timesteps can be as short as half an hour. Remember, the terminator, the point at which day turns into night, travels across the earth’s surface at about 1700 km/hr at the equator, so even half hourly timesteps introduce further error into the calculation, but again, computing power is a constraint.
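A bare-bones caricature of one such timestep, reduced to nearest-neighbour heat exchange on a small toy grid, looks something like this (illustrative only; real models solve the full thermodynamic and fluid equations in three dimensions):

import numpy as np

rng = np.random.default_rng(1)
T = 288.0 + 10.0 * rng.normal(size=(18, 36))     # toy surface-temperature field (K)

def step(T, k=0.1):
    # One explicit step: each cell relaxes toward the average of its four neighbours,
    # a crude stand-in for heat flowing from warmer cells to cooler ones.
    neighbours = (np.roll(T, 1, 0) + np.roll(T, -1, 0) +
                  np.roll(T, 1, 1) + np.roll(T, -1, 1))
    return T + k * (neighbours - 4.0 * T)

for _ in range(48):                               # 48 half-hour steps = one model day
    T = step(T)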
While the changes in temperatures and pressures between cells are calculated according to the laws of thermodynamics and fluid mechanics, many other changes aren’t calculated. They rely on parameterisation. For example, the albedo forcing varies from icecaps to Amazon jungle to Sahara desert to oceans to cloud cover and all the reflectivity types in between. These properties are just assigned and their impacts on other properties are determined from lookup tables, not calculated. Parameterisation is also used for cloud and aerosol impacts on temperature and precipitation. Any important factor that occurs on a subgrid scale, such as storms and ocean eddy currents must also be parameterised with an averaged impact used for the whole grid cell. Whilst the effects of these factors are based on observations, the parameterisation is far more a qualitative rather than a quantitative process, and often described by modelers themselves as an art, that introduces further error. Direct measurement of these effects and how they are coupled to other factors is extremely difficult and poorly understood.
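The lookup-table flavour of parameterisation can be sketched as follows. The albedo numbers are rough, generic textbook values chosen only to show the mechanism; they are not taken from any particular model.

# Assigned, not calculated: rough illustrative albedos keyed by surface type.
ALBEDO = {"ocean": 0.06, "forest": 0.15, "desert": 0.40, "ice": 0.60, "cloud": 0.50}

def cell_albedo(surface_type, cloud_fraction):
    # One blended number stands in for an entire 100 x 100 km cell.
    return (1 - cloud_fraction) * ALBEDO[surface_type] + cloud_fraction * ALBEDO["cloud"]

print(cell_albedo("ocean", 0.3))   # 0.7 * 0.06 + 0.3 * 0.50 = 0.192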
Within the atmosphere in particular, there can be sharp boundary layers that cause the models to crash. These sharp variations have to be smoothed.
Energy transfers between atmosphere and ocean are also problematic. The most energetic heat transfers occur at subgrid scales that must be averaged over much larger areas.
Cloud formation depends on processes at the millimeter level and is simply impossible to model. Clouds can both warm as well as cool. Any warming increases evaporation (which cools the surface), resulting in an increase in cloud particulates. Aerosols also affect cloud formation at a micro level.  All these effects must be averaged in the models.
When the grid approximation errors are combined with those introduced at every timestep, the errors add up quickly, and with half-hour timesteps over 150 years, that’s over 2.6 million timesteps! Unfortunately, these errors aren’t self-correcting. Instead, this numerical dispersion accumulates over the model run, but there is a technique that climate modelers use to overcome this, which I describe shortly.
Figure 4: How grid cells interact with adjacent cells [4]
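To get a feel for the scale of that accumulation, a trivial calculation with a hypothetical, arbitrarily chosen per-step error shows why it matters whether the error is random or systematic: unbiased numerical noise grows roughly with the square root of the number of steps, while any consistent bias grows linearly.

import math

n_steps = 150 * 365 * 48          # ~2.6 million half-hour steps over 150 years
eps = 1e-6                        # hypothetical per-step error, arbitrary units
print(n_steps)                    # 2,628,000
print(eps * math.sqrt(n_steps))   # random-walk growth: ~0.0016
print(eps * n_steps)              # systematic-bias growth: ~2.6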

Model Initialisation
After the construction of any type of computer model, there is an initialisation process whereby the model is checked to see whether the starting values in each of the cells are physically consistent with one another. For example, if you are modelling a bridge to see whether the design will withstand high winds and earthquakes, you make sure that, before you impose any external forces onto the model structure other than gravity, it meets all the expected stresses and strains of a static structure. After all, if the initial conditions of your model are incorrect, how can you rely on it to predict what will happen when external forces are imposed in the model?
Fortunately, for most computer models, the properties of the components are quite well known and the initial condition is static, the only external force being gravity. If your bridge doesn’t stay up on initialisation, there is something seriously wrong with either your model or design!
With climate models, we have two problems with initialisation. Firstly, as previously mentioned, we have very little data for time zero, whenever we choose that to be. Secondly, at time zero the model is not in a static steady state, as is the case for pretty much every other computer model that has been developed. At time zero, there could be a blizzard in Siberia, a typhoon in Japan, monsoons in Mumbai and a heatwave in southern Australia, not to mention the odd volcanic explosion, all of which could be gone in a day or so.
There is never a steady state point in time for the climate, so it’s impossible to validate climate models on initialisation.
The best climate modelers can hope for is that their bright shiny new model doesn’t crash in the first few timesteps.
The climate system is chaotic, which essentially means any model will be a poor predictor of the future – you can’t even make a model of a lottery ball machine (a comparatively much simpler and smaller interacting system) and use it to predict the outcome of the next draw.
So, if climate models are populated with little more than educated guesses instead of actual observational data at time zero, and errors accumulate with every timestep, how do climate modelers address this problem?
History matching
If the system that’s being computer modelled has been in operation for some time, you can use that data to tune the model and then start the forecast before that period finishes to see how well it matches before making predictions. Unlike other computer modelers, climate modelers call this ‘hindcasting’ because it doesn’t sound like they are manipulating the model parameters to fit the data.
The theory is that, even though climate model construction has many flaws (large grid sizes, patchy data of dubious quality in the early years, and poorly understood physical phenomena that have had to be parameterised), you can tune the model during hindcasting, within the parameter uncertainties, to overcome all these deficiencies.
While it’s true that you can tune the model to get a reasonable match with at least some components of history, the match isn’t unique.
When computer models were first being used last century, the famous mathematician John von Neumann said:
“with four parameters I can fit an elephant, with five I can make him wiggle his trunk”
In climate models there are hundreds of parameters that can be tuned to match history. What this means is there is an almost infinite number of ways to achieve a match. Yes, many of these are non-physical and are discarded, but there is no unique solution as the uncertainty on many of the parameters is large and as long as you tune within the uncertainty limits, innumerable matches can still be found.
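A toy version of the non-uniqueness problem, using entirely synthetic data: two structurally different “models” (a sixth-degree polynomial and a trend-plus-60-year-cycle fit) are tuned to the same 100-year record. Both match the record about equally well, yet give wildly different values once pushed beyond it.

import numpy as np

rng = np.random.default_rng(2)
t = np.arange(100.0)                                             # 100 "years" of history
history = 0.005 * t + 0.1 * np.sin(2 * np.pi * t / 60) + rng.normal(scale=0.03, size=t.size)

poly = np.polynomial.Polynomial.fit(t, history, deg=6)           # model A: pure polynomial
A = np.column_stack([np.ones_like(t), t,
                     np.sin(2 * np.pi * t / 60), np.cos(2 * np.pi * t / 60)])
coef, *_ = np.linalg.lstsq(A, history, rcond=None)               # model B: trend + 60-yr cycle

rmse = lambda pred: np.sqrt(np.mean((pred - history) ** 2))
print("match-period RMSE:", rmse(poly(t)), rmse(A @ coef))       # similar, small residuals
print("extrapolated to year 200:",
      poly(200.0),
      coef @ [1.0, 200.0, np.sin(2 * np.pi * 200 / 60), np.cos(2 * np.pi * 200 / 60)])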
An additional flaw in the history matching process is the length of some of the natural cycles. For example, ocean circulation takes place over hundreds of years, and we don’t even have 100 years of data with which to match it.
In addition, it’s difficult to history match all climate variables. While global average surface temperature is the primary objective of the history matching process, other data, such as tropospheric temperatures, regional temperatures and precipitation, and diurnal minimums and maximums, are poorly matched.
Even so, can the history matching of the primary variable, average global surface temperature, constrain the accumulating errors that inevitably occur with each model timestep?
Forecasting
Consider a shotgun. When the trigger is pulled, the pellets from the cartridge travel down the barrel, but there is also lateral movement of the pellets. The purpose of the shotgun barrel is to dampen the lateral movements and to narrow the spread when the pellets leave the barrel. It’s well known that shotguns have limited accuracy over long distances and there will be a shot pattern that grows with distance.  The history match period for a climate model is like the barrel of the shotgun. So what happens when the model moves from matching to forecasting mode?
Figure 5: IPCC models in forecast mode for the Mid-Troposphere vs Balloon and Satellite observations [5]
Like the shotgun pellets leaving the barrel, numerical dispersion takes over in the forecasting phase. Each of the 73 models in Figure 5 has been history matched, but outside the constraints of the matching period, they quickly diverge.
Now at most only one of these models can be correct, but more likely none of them are. If this were a real scientific process, the hottest two thirds of the models would be rejected by the Intergovernmental Panel on Climate Change (IPCC), and further study would be focused on the models closest to the observations. But they don’t do that, for a number of reasons.
Firstly, if they rejected most of the models, there would be outrage amongst the climate science community, especially from the rejected teams, due to their subsequent loss of funding. More importantly, the so-called 97% consensus would instantly evaporate.
Secondly, once the hottest models were rejected, the forecast for 2100 would be about a 1.5 °C increase (due predominantly to natural warming), there would be no panic, and the gravy train would end.
So how should the IPCC reconcile this wide range of forecasts?
Imagine you wanted to know the value of bitcoin 10 years from now so you can make an investment decision today. You could consult an economist, but we all know how useless their predictions are. So instead, you consult an astrologer, but you worry whether you should bet all your money on a single prediction. Just to be safe, you consult 100 astrologers, but they give you a very wide range of predictions. Well, what should you do now? You could do what the IPCC does, and just average all the predictions.
You can’t improve the accuracy of garbage by averaging it.
An Alternative Approach
Climate modelers claim that a history match isn’t possible without including CO2 forcing. This may be true using the approach described here, with its many approximations, tuning the model to only a single benchmark (surface temperature) and ignoring deviations from others (such as tropospheric temperature), but analytic (as opposed to numeric) models have achieved matches without CO2 forcing. These are models based purely on historic climate cycles: they identify the harmonics using a mathematical technique of signal analysis, which deconstructs long- and short-term natural cycles of different periods and amplitudes, without considering changes in CO2 concentration.
In Figure 6, a comparison is made between the IPCC predictions and a prediction from just one analytic harmonic model that doesn’t depend on CO2 warming. A match to history can be achieved through harmonic analysis, and it provides a much more conservative prediction that correctly forecasts the current pause in temperature increase, unlike the IPCC models. The purpose of this example isn’t to claim that this model is more accurate (it’s just another model), but to dispel the myth that there is no way history can be explained without anthropogenic CO2 forcing, and to show that it’s possible to explain the changes in temperature with natural variation as the predominant driver.
Figure 6: Comparison of the IPCC model predictions with those from a harmonic analytical model [6]
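For the curious, the generic harmonic-decomposition idea can be sketched in a few lines: detrend a record, keep the strongest cycles from an FFT, and extrapolate trend plus cycles forward. The data here are synthetic and the sketch illustrates only the technique, not the specific published model shown in Figure 6.

import numpy as np

rng = np.random.default_rng(3)
t = np.arange(160.0)                                        # 160 "years" of synthetic data
record = (0.004 * t + 0.12 * np.sin(2 * np.pi * t / 60)
          + 0.05 * np.sin(2 * np.pi * t / 9.1) + rng.normal(scale=0.03, size=t.size))

trend = np.polyfit(t, record, 1)                            # linear trend
resid = record - np.polyval(trend, t)
spec = np.fft.rfft(resid)
freqs = np.fft.rfftfreq(t.size, d=1.0)                      # cycles per "year"
keep = np.argsort(np.abs(spec))[-3:]                        # three strongest harmonics

def harmonic_model(tt):
    out = np.polyval(trend, tt)
    for k in keep:
        amp, phase = 2 * np.abs(spec[k]) / t.size, np.angle(spec[k])
        out = out + amp * np.cos(2 * np.pi * freqs[k] * tt + phase)
    return out

print(harmonic_model(np.arange(160.0, 200.0)))              # trend-plus-cycles forecast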

In summary:
Climate models can’t be validated on initialisation due to lack of data and a chaotic initial state.
Model resolutions are too low to represent many climate factors.
Many of the forcing factors are parameterised as they can’t be calculated by the models.
Uncertainties in the parameterisation process mean that there is no unique solution to the history matching.
Numerical dispersion beyond the history matching phase results in a large divergence in the models.
The IPCC refuses to discard models that don’t match the observed data in the prediction phase – which is almost all of them.
The question now is: do you have the confidence to invest trillions of dollars and reduce the standard of living of billions of people in order to stop the global warming predicted by climate models, or should we just adapt to the natural changes as we always have?
Greg Chapman  is a former (non-climate) computer modeler.
Footnotes
[1] https://www.adividedworld.com/scientific-issues/thermodynamic-effects-of-atmospheric-carbon-dioxide-revisited/
[2] https://serc.carleton.edu/eet/envisioningclimatechange/part_2.html
[3] https://climateaudit.org/2008/02/10/historical-station-distribution/
[4] http://www.atmo.arizona.edu/students/courselinks/fall16/atmo336/lectures/sec6/weather_forecast.html
[5] https://www.drroyspencer.com/2013/06/still-epic-fail-73-climate-models-vs-measurements-running-5-year-means/
Whilst climate models are tuned to surface temperatures, they predict a tropospheric hotspot that doesn’t exist. This on its own should invalidate the models.
[6] https://wattsupwiththat.com/2012/01/09/scaffeta-on-his-latest-paper-harmonic-climate-model-versus-the-ipcc-general-circulation-climate-models/

Maricopa County Arizona Has Election Vote Counting and Tabulation Issues Again


Posted originally on the conservative tree house on November 8, 2022 | Sundance

The counties with ballot counting issues remain consistent over the years until someone steps in and fixes the root cause of the problem: Democrat election officials.  Nothing destroys election integrity faster than county election problems that repeat in the exact same precincts year after year.

Unfortunately, Maricopa County, Arizona, is one of those regional areas with major election integrity problems every voting cycle, and this 2022 midterm election is no different.

According to multiple reports, Maricopa County ballot tabulation machines are not working again.  Approximately 20% of the ballot tabulation machines in Maricopa County are not working, which is causing delays, frustration and voter concern over the integrity of the election.  Voters have been told to leave their ballots in a box for tabulation later at a central location.  Many voters are not willing to ‘trust’ the process.

ARIZONA – Vote-counting machines weren’t working in about 20% of polling sites in Maricopa County, Arizona, as Election Day voting in the midterms began, county officials said.

The Maricopa County Recorder’s Office said technicians were called to fix the tabulator machines that weren’t working, the Fox10 TV station in Phoenix reported. It’s not clear how many of the machines were malfunctioning in the state’s most populous county.

“About 20% of the locations out there where there’s an issue with the tabulator … they try and run (completed ballots) through the tabulator, and they’re not going through,” Maricopa County Board of Supervisors chairman Bill Gates said in a video posted on Facebook. Long lines of voters were appearing throughout the county as officials tried to reassure people that all votes would be counted. (read more)

STAY IN LINE and VOTE!