Thursday, August 26, 2010

Global Warming? Part II

I have had some exposure to the models used by climate researchers at NOAA. I can tell you, the models are frequently ad hoc and contain numerous fudge factors and corrections to massage the data: throw out outliers, adjust this term during one time period and that term during another, and so on. Further, many temperature measurements are based on proxies--e.g. assuming tree rings are wider during warmer years--but there's simply no way to determine how much wider per degree C.
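To make the proxy point concrete, here is a minimal sketch with synthetic data and made-up numbers--not any lab's actual procedure--of how such a calibration is typically done: regress ring width on temperature over an instrumental overlap period, then invert the fit for years with no thermometer. A modest error in the fitted slope shifts the whole reconstruction.

```python
# Minimal sketch, synthetic data only: calibrate a tree-ring proxy against an
# instrumental overlap period, then invert the fit to "reconstruct" an earlier
# temperature. All numbers here are hypothetical.
import numpy as np

rng = np.random.default_rng(0)

true_slope = 0.15                            # mm of ring width per degree C (unknown in practice)
overlap_temp = rng.normal(14.0, 0.5, 50)     # 50 years of overlapping thermometer data
ring_width = 1.0 + true_slope * (overlap_temp - 14.0) + rng.normal(0, 0.05, 50)

# Ordinary least-squares calibration: ring width as a function of temperature.
slope, intercept = np.polyfit(overlap_temp, ring_width, 1)

# Invert the fit for a ring laid down before thermometers existed.
old_ring_width = 0.92
for s in (0.9 * slope, slope, 1.1 * slope):  # +/-10% error in the calibrated slope
    print(f"assumed slope {s:.3f} mm/C -> reconstructed T = {(old_ring_width - intercept) / s:.2f} C")
```

A ten percent error in that one coefficient moves the reconstructed temperature by more than a degree, which is the whole signal being argued about.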

I'm not saying that their models are wrong, just that, having implemented models like these before, I understand enough of the math to know that a minor mistake in a fudge factor--one meant to allow dissimilar measurements to be used as if they were from the same dataset--can make a huge difference in the validity of the model. That's before you even get to simple implementation errors that make the results "look right" while still being completely wrong.
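As a rough illustration of that sensitivity (synthetic data, made-up numbers, not any agency's actual code): splice two instrument records together using an assumed bias correction, and watch how a 0.05 C error in that one correction changes the computed trend.

```python
# Minimal sketch, synthetic data: splicing two instrument records with a
# cross-calibration offset. A small error in that offset changes the apparent
# long-term trend even though neither record changed.
import numpy as np

years = np.arange(1961, 2021)
true_temp = 0.01 * (years - 1961)            # true trend: 0.01 C/yr, noise-free for clarity

# Instrument A covers 1961-1990; instrument B (reading 0.30 C high) covers 1991-2020.
record_a = true_temp[years <= 1990]
record_b = true_temp[years > 1990] + 0.30

def spliced_trend(assumed_bias):
    """Trend (C/decade) of the merged series after subtracting the assumed bias of B."""
    merged = np.concatenate([record_a, record_b - assumed_bias])
    return np.polyfit(years, merged, 1)[0] * 10

for corr in (0.30, 0.25, 0.35):              # exact correction, then +/-0.05 C errors
    print(f"assumed bias {corr:.2f} C -> trend {spliced_trend(corr):+.3f} C/decade")
```

With the exact correction the trend comes out right; a 0.05 C error in the splice shifts the computed trend by roughly ten percent, and nothing in the merged series looks obviously wrong.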

For example, consider the story told by data that turned out to be wrong.

In this case, the scientists found out that their ERSST model was producing warmer results, by about 0.2C, than other instruments. It turned out that in 2001, the satellite providing the data was boosted to a different orbit, and the model failed to take that into account. It took 10 years before anyone thought that there might be a problem! Up until then, everyone apparently assumed the earth had warmed by 0.2C suddenly in 2001. Worse, they assumed that the data for 1971-2000 was wrong and massaged it to fit the 2001+ data. "In early 2001, CPC was requested to implement the 1971–2000 normal for operational forecasts. So, we constructed a new SST normal for the 1971–2000 base period and implemented it operationally at CPC in August of 2001" (Journal of Climate).
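Here is a rough sketch of that failure mode (synthetic data; the 0.2 C figure from the story is the only number carried over): an uncorrected instrument change shows up immediately if the record is differenced against an independent one, and reads as real warming if nobody looks.

```python
# Minimal sketch, synthetic data: an uncorrected instrument change in 2001
# appears as a step when the record is differenced against an independent one.
import numpy as np

rng = np.random.default_rng(1)
years = np.arange(1990, 2011)
climate = 0.02 * (years - 1990) + rng.normal(0, 0.05, years.size)   # the "real" signal

independent = climate + rng.normal(0, 0.05, years.size)
satellite   = climate + rng.normal(0, 0.05, years.size)
satellite[years >= 2001] += 0.2          # bias introduced by the (unmodeled) orbit change

diff = satellite - independent
print(f"mean difference before 2001: {diff[years < 2001].mean():+.2f} C")
print(f"mean difference after  2001: {diff[years >= 2001].mean():+.2f} C")
# Unless someone checks the difference, the 0.2 C step reads as sudden warming.
```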

Just the abstract to that particular paper reveals how fragile the models are, being based on assumptions piled on top of assumptions, and unveiling a tendency to massage data.

"SST predictions are usually issued in terms of anomalies and standardized anomalies relative to a 30-yr normal: climatological mean (CM) and standard deviation (SD). The World Meteorological Organization (WMO) suggests updating the 30-yr normal every 10 yr."

How can a normal be updated--the data is the data, and its normal is its normal? This sentence implies that the data is somehow massaged every ten years or so. There may be legitimate reasons to do so, but any time you massage data, there have to be questions about the legitimacy of the alteration.
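For what it's worth, the "normal" in that quote is the base-period climatological mean and standard deviation, not the raw observations. But even so, moving the base period changes every reported anomaly. A minimal sketch, synthetic data only:

```python
# Minimal sketch, synthetic data: a "normal" is a 30-yr base-period mean and
# standard deviation. Updating the base period changes the reported anomaly
# for the very same raw observation.
import numpy as np

rng = np.random.default_rng(2)
years = np.arange(1941, 2011)
sst = 20.0 + 0.01 * (years - 1941) + rng.normal(0, 0.3, years.size)   # raw record

def standardized_anomaly(value, base_start, base_end):
    base = sst[(years >= base_start) & (years <= base_end)]
    cm, sd = base.mean(), base.std()          # climatological mean / standard deviation
    return (value - cm) / sd

obs_2005 = sst[years == 2005][0]
print(f"2005 anomaly vs 1951-1980 normal: {standardized_anomaly(obs_2005, 1951, 1980):+.2f} SD")
print(f"2005 anomaly vs 1971-2000 normal: {standardized_anomaly(obs_2005, 1971, 2000):+.2f} SD")
```

The same 2005 observation looks much less anomalous against the later, warmer base period, which is exactly why the choice of normal matters.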

"Using the extended reconstructed sea surface temperature (ERSST) on a 28 grid for 1854–2000 and the Hadley Centre Sea Ice and SST dataset (HadISST) on a 18 grid for 1870–1999, eleven 30-yr normals are calculated, and the interdecadal changes of seasonal CM, seasonal SD, and seasonal persistence (P) are discussed."

This says that data is being assembled from widely disparate sources, with different measurement techniques, and that some of it was collected with instrumentation that simply cannot be validated today (data from 1854?).

"Both PDO and NAO show a multidecadal oscillation that is consistent between ERSST and HadISST except that HadISST is biased toward warm in summer and cold in winter relative to ERSST."

Now we see that different data sets, ostensibly measuring the same population, disagree. And the fact that one data set exhibits a seasonal bias relative to the other (too warm in summer, too cold in winter) raises questions about the proper use of this data. One scientist may make a case that the more stable data set is in error and "correct" it to be more in line with the more volatile one; another may do the opposite. And personal bias will play a role in which way they go.
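A small sketch (synthetic data, with a hypothetical 0.15 C seasonal bias baked in) of how such a disagreement would be quantified--and why the numbers alone don't say which dataset should be "corrected" toward the other:

```python
# Minimal sketch, synthetic data: two records of the same quantity, where
# record B carries a seasonal bias (warm in summer, cold in winter) relative
# to record A. Averaging the monthly differences exposes the bias, but says
# nothing about which record is the one in error.
import numpy as np

rng = np.random.default_rng(3)
n_years = 30

for month in range(1, 13):
    cycle = 3.0 * np.cos(2 * np.pi * (month - 8) / 12)   # seasonal cycle peaking in August
    bias  = 0.15 * np.cos(2 * np.pi * (month - 8) / 12)  # B's bias, in phase with the cycle
    a = 15.0 + cycle + rng.normal(0, 0.2, n_years)
    b = 15.0 + cycle + bias + rng.normal(0, 0.2, n_years)
    print(f"month {month:2d}: mean(B - A) = {(b - a).mean():+.2f} C")
```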
