How To Prove Your Adjustments Are Correct

Some people have the mistaken impression that all you have to do to prove that the changes made to a temperature record are correct is point to a published paper that explains the method the changes are supposed to be made with.

Let me say that again: SUPPOSED to be made with.

The published papers on how to make corrections to temperature records are not proof of how a change was actually made by a computer program; they are nothing more than a blueprint the program is supposed to follow. There is no standard used worldwide; different agencies use different methods, and different methods can give you different results. An example of this is that NIWA uses the Auckland record in their “proof” that they are not fudging the data. At the same time NIWA is using that record, NASA GISS tosses it out of its analysis because it is an “urban” station they can’t correct, since it doesn’t meet the requirements laid down in their published papers. So for one agency that station is an excellent record that counts as proof; for another it is not usable and gets dumped, all based on published research.

This is not an isolated phenomenon. The way someone handles the data depends on where they work, and agencies tend to favor the methods they publish over others. For example, the temperature products from NASA GISS are based primarily on the work of Dr. James Hansen, the adjusted GHCN dataset on the work of Dr. Tom Peterson (this is about to change and will be discussed later), and the products of CRU are mostly based on the methods Dr. Phil Jones published. All these methods are different, all are published, all have strengths over the others and all have their weaknesses.

So when the latest news came out of New Zealand, that a lawyer for NIWA stated they couldn’t comply with their version of a FOI request “because they don’t have a record of when and how the changes were made,” it was a bit puzzling for a simple reason: if you want to prove the changes you make are correct and that no funny business is going on, just publish the code the computers run. Instead NIWA put out an Excel spreadsheet with a “7 station composite” to show their methods were correct. Again, that is not proof; you just showed the adjusted station data, not how the computers adjusted that data. Then we got the “well, the method is published in this paper” answer. Again, that is not proof; that is just the blueprint for what the computer is supposed to do. All it takes is for a programmer to make a small mistake and you get garbage out; it doesn’t take a grand conspiracy or deliberate action.
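To see how a correct blueprint can still produce a bad record, here is a minimal made-up sketch (invented numbers and station details, not any agency’s actual code): the “paper” says to add +0.5 °C to all readings before a documented 1970 station move, and a single flipped comparison in the program puts the adjustment on the wrong side of the move.

```python
# A made-up illustration of "paper correct, program wrong".
# The documented method: add +0.5 C to readings BEFORE a 1970 station move.

def adjust_per_paper(years, temps, move_year=1970, offset=0.5):
    """Correct implementation: the offset applies to years before the move."""
    return [t + offset if y < move_year else t for y, t in zip(years, temps)]

def adjust_with_bug(years, temps, move_year=1970, offset=0.5):
    """Buggy implementation: one flipped comparison puts the offset on the
    wrong side of the move, creating a spurious warming step."""
    return [t + offset if y >= move_year else t for y, t in zip(years, temps)]

years = list(range(1965, 1976))
temps = [14.0] * len(years)            # a flat, invented record

print(adjust_per_paper(years, temps))  # warm side is pre-1970, as documented
print(adjust_with_bug(years, temps))   # warm side is post-1970: garbage out
```

Both programs can point to the same published paper; only the code itself shows which one was actually run.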

So all NIWA had to do was release their code, let people run it and see how each step in the process matches the published paper, but they didn’t do that. Why? I don’t know, but the circular argument of “there is the paper and here is the output, so there is the proof” seems to be something that all the agencies that analyze the historical temperature record resort to… except one.

Was it CRU? Nope, just getting a station list out of them was worse than pulling teeth.

Was it the NCDC, which maintains the GHCN and USHCN datasets? Nope, but you can at least get the station lists from them, and not just ‘raw’ and ‘adjusted’ data: they provide some intermediate datasets, such as data with only the TOB (time of observation) adjustment applied.

The only agency I know of that has proven that the adjustments its computer code makes actually match the papers they are based on is NASA GISS, with their GISTemp program.

A couple of years ago GISS got into a row over how some temperature changes were made, which I won’t go into here, but one outcome was that GISS made the code they use, and the papers it is based on, freely available. You can download the code and papers from GISS at any time at this link:

http://data.giss.nasa.gov/gistemp/

So now the computer code is out there, and we skeptics can tear it apart and see how Dr. Hansen has his thumb on the scales, so to speak, and how he just makes willy-nilly adjustments, right? Right?

Nope. GISTemp has been independently verified to make the changes laid out by Dr. Hansen in his papers.

It is our opinion that the GISTEMP code performs substantially as documented in Hansen, J.E., and S. Lebedeff, 1987: Global trends of measured surface air temperature. J. Geophys. Res., 92, 13345-13372., the GISTEMP documentation, and other papers describing updates to the procedure.

Currently the ccc-gistemp code produces results that are almost identical to the GISS code. As we emulate the exact GISS algorithm more closely, our results get closer.

http://clearclimatecode.org/gistemp/

So there you go, that is how you prove that your temperature changes are what they are supposed to be. Release the code, let others inspect it and even correct and/or improve it (see this story: http://clearclimatecode.org/nasa-giss-wants-to-use-our-code/ ), and then, when the intermediate files match what the papers say they are supposed to be, the question of “funny changes being made by a black box” dies.
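As a rough illustration of what that kind of inspection looks like in practice, here is a hedged sketch of my own (the file names and the simple “year value” text format are assumptions for the example, not GISTemp’s actual layout): it compares an intermediate file produced by the released code against the same step reproduced independently and flags any year where they disagree.

```python
# A sketch of the check that releasing code makes possible: compare an
# intermediate series from the released program against an independent
# reproduction of the same processing step.
# Assumed file format: plain text lines of "year value", '#' for comments.

def load_series(path):
    """Read 'year value' pairs into a dict, skipping blanks and comments."""
    series = {}
    with open(path) as f:
        for line in f:
            if line.startswith("#") or not line.strip():
                continue
            year, value = line.split()[:2]
            series[int(year)] = float(value)
    return series

def compare(reference_path, reproduced_path, tolerance=0.01):
    """Report every year where the two intermediate series disagree."""
    ref = load_series(reference_path)
    rep = load_series(reproduced_path)
    mismatches = [
        (year, ref[year], rep.get(year))
        for year in sorted(ref)
        if year not in rep or abs(ref[year] - rep[year]) > tolerance
    ]
    for year, a, b in mismatches:
        print(f"{year}: released code gives {a}, reproduction gives {b}")
    return not mismatches

# Hypothetical usage once both outputs exist on disk:
# compare("gistemp_step1_output.txt", "my_step1_output.txt")
```

If every intermediate step matches within tolerance, the claim that the code does what the papers describe stops being an assertion and becomes something anyone can check.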

I’m not the only one who says this; here is an excerpt from an article in the Guardian:

One of the spinoffs from the emails and documents that were leaked from the Climate Research Unit at the University of East Anglia is the light that was shone on the role of program code in climate research. There is a particularly revealing set of “README” documents that were produced by a programmer at UEA apparently known as “Harry”. The documents indicate someone struggling with undocumented, baroque code and missing data – this, in something which forms part of one of the three major climate databases used by researchers throughout the world.

Many climate scientists have refused to publish their computer programs. I suggest that this is both unscientific behaviour and, equally importantly, ignores a major problem: that scientific software has got a poor reputation for error.

There is enough evidence for us to regard a lot of scientific software with worry. For example Professor Les Hatton, an international expert in software testing resident in the Universities of Kent and Kingston, carried out an extensive analysis of several million lines of scientific code. He showed that the software had an unacceptably high level of detectable inconsistencies.

http://www.guardian.co.uk/technology/2010/feb/05/science-climate-emails-code-release

Some agencies have already realized this, such as NCDC. Back in December, questions arose about the changes made to the temperature record of Darwin, Australia, which, according to the person doing the analysis, did not match the procedures outlined in the papers published by Dr. Tom Peterson. That person contacted NCDC and, surprisingly, Dr. Peterson responded himself just two days before Christmas, and many people thanked him for doing so during what must have been a very hectic time.

The email response was very illuminating and shows that NCDC is going to go one step further than GISS. Where GISS just released their code and left you to find the intermediate files and work out whether they match the papers, NCDC is going to release both the code and the intermediate data files. Here are some excerpts from that email, which was posted on WUWT:

 The homogeneity review paper explains the reasons behind adopting this complex reference series creation process. It did indeed maximize the utilization of neighboring station information. The downside was that there was a potential for a random walk to creep into the reference series. For example, if the nearest neighbor, the one with the highest correlation, had a fairly warm year in 1930, its first difference value for 1930 would likely be fairly high. The first difference value for 1931 would therefore likely be low as it probably was colder than that experienced in that very warm year preceding it. So the reference series would go up and then down again. The random walk comes in if the data for 1931 were missing. Then one gets the warming effect but not the cooling of the following year. The likelihood of a warm random walk and a cold random walk are equally possible. Based on the hundreds of reference series plots I looked at during my mid-1990s evaluation of this process, random walks seemed to be either non-existent or very minor. However, they remained a possibility and a concern.

Partly in response to this concern, over the course of many years, a team here at NCDC developed a new approach to make homogeneity adjustments that had several advantages over the old approaches. Rather than building reference series it does a complex series of pairwise comparisons. Rather than using an adjustment technique (paper sent) that saw every change as a step function (which as the homogeneity review paper indicates was pretty standard back in the mid-1990s) the new approach can also look at slight trend differences (e.g., those that might be expected to be caused by the growth of a tree to the west of a station increasingly shading the station site in the late afternoon and thereby cooling maximum temperature data). That work was done by Matt Menne, Claude Williams and Russ Vose with papers published this year in the Journal of Climate (homogeneity adjustments) and the Bulletin of the AMS (USHCN version 2 which uses this technique).

Everyone here at NCDC is very pleased with their work and the rigor they applied to developing and evaluating it. They are currently in the process of applying their adjustment procedure to GHCN. Preliminary evaluation appears very, very promising (though of course some very remote stations like St Helena Island (which has a large discontinuity in the middle of its long record due to moving downhill) will not be able to be adjusted using this approach). GHCN is also undergoing a major update with the addition of newly available data. We currently expect to release the new version of GHCN in February or March along with all the processing software and intermediate files which will dramatically increase the transparency of our process and make the job of people like you who evaluate and try to duplicate surface temperature data processing much easier.

We’re doing a lot of evaluation of our new approach to adjusting global temperature data to remove artificial biases but additional eyes are always welcome. So I would encourage you to consider doing additional GHCN evaluations when we release what we are now calling GHCN version 2.5 in, hopefully, February or March of 2010.

You can see the full email at this link: http://wattsupwiththat.com/2009/12/20/darwin-zero-before-and-after/

(Note: The email is in the comments section, down near the end.)
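To make Dr. Peterson’s random-walk point concrete, here is a toy sketch of my own (not NCDC’s code, with invented numbers): a reference series built from a neighbor’s first differences recovers a warm 1930 and the cooling that follows it, but if 1931 is missing the warming is kept and the cooling is lost, leaving a permanent warm offset in the reference series.

```python
# Toy illustration of the first-difference "random walk" described in the
# email: a missing year after a warm spike keeps the warming but drops the
# subsequent cooling, leaving the reference series permanently offset.

def reference_from_first_differences(series):
    """Cumulatively sum year-to-year differences, skipping any pair where
    either year is missing (None); that skip is where the walk creeps in."""
    years = sorted(series)
    ref = {years[0]: 0.0}
    level = 0.0
    for prev, curr in zip(years, years[1:]):
        if series[prev] is not None and series[curr] is not None:
            level += series[curr] - series[prev]
        # else: difference unavailable, level is carried forward unchanged
        ref[curr] = level
    return ref

neighbor = {1928: 14.0, 1929: 14.1, 1930: 15.2, 1931: 14.0, 1932: 14.1}
with_gap = {**neighbor, 1931: None}     # same neighbor, but 1931 is missing

print(reference_from_first_differences(neighbor))   # drops back near zero after warm 1930
print(reference_from_first_differences(with_gap))   # stays about +1.1 too warm: a random walk
```

The pairwise-comparison approach the email goes on to describe was developed partly to avoid building that kind of reference series in the first place.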

So there you go: if these agencies want to prove they are making the correct adjustments, just release the code and let people see for themselves.
