GHCN v3: A First Look

Back around Christmas of 2009, Dr. Tom Peterson of the NCDC let it be known, in an email to Willis Eschenbach, that the GHCN dataset was undergoing a revision. Willis posted what Dr. Peterson had to say in a comment on WUWT, and you can read the whole thing there.

However, I will copy the parts pertinent to this post here:

They are currently in the process of applying their adjustment procedure to GHCN. Preliminary evaluation appears very, very promising (though of course some very remote stations like St Helena Island (which has a large discontinuity in the middle of its long record due to moving downhill) will not be able to be adjusted using this approach). GHCN is also undergoing a major update with the addition of newly available data. We currently expect to release the new version of GHCN in February or March along with all the processing software and intermediate files which will dramatically increase the transparency of our process and make the job of people like you who evaluate and try to duplicate surface temperature data processing much easier.


We’re doing a lot of evaluation of our new approach to adjusting global temperature data to remove artificial biases but additional eyes are always welcome. So I would encourage you to consider doing additional GHCN evaluations when we release what we are now calling GHCN version 2.5 in, hopefully, February or March of 2010.

Well, as we now know, what they then called GHCN v2.5 was actually released as GHCN v3. We also know that Dr. Peterson was overly optimistic on the release date by over a year, since GHCN v3 didn't go "live" until May 2nd, 2011:

 Effective May 2, 2011, the Global Historical Climatology Network-Monthly (GHCN-M) version 3 dataset of monthly mean temperature will replace GHCN-M version 2 as the dataset for operational climate monitoring activities. Although GHCN-M version 2 will continue to be updated with recent observations until June 30, 2011, users are encouraged to begin using GHCN-M version 3.

Now, prior to this news release GHCN v3 was available as a beta version, and people started comparing the GHCN v3 beta against GHCN v2 adj; however, I held off, waiting for the live version before I started looking. There were a couple of reasons for this:

1. By waiting for them to declare GHCN v3 "live," I wanted to see how well they matched Dr. Peterson's email in the transparency department, i.e., would they really make it easy for the public to find the actual code, and would they have files for the intermediate steps?

2. I wanted to compare not just v3 adjusted to v2 adjusted; I also wanted to look at the Raw files and compare them, something the people comparing the beta version weren't doing.

On the transparency front, so far it is a dismal failure. When NCDC declared GHCN v3 "live" they didn't even have the Max and Min datasets available. On May 11th, when I downloaded the GHCN v3 QCA and QCU files, they were only available for TAVG (more commonly known in climate science as Tmean). Two days later they finally made the Tmax and Tmin datasets available:


Maximum (TMAX) and Minimum (TMIN) temperature data are now being posted to ftp. However, these data will probably only be updated weekly as opposed to daily for mean temperature (TAVG).

Also, so far I haven't found the code, and their idea of "showing" their work is a series of GIFs, such as this one, which can be seen at this link:

I also do not like how they are deviating from well-established terms and conventions that most people knowledgeable in the climate science debate are used to. Besides changing from Tmean to Tavg, they also no longer label their datasets as Raw and Adjusted. They now label the datasets Quality Controlled Adjusted (QCA) and Quality Controlled Unadjusted (QCU).

You can look through the whole GHCN v3 ftp site here and make up your own mind about whether they met Dr. Peterson's email statements on transparency.

Usually when comparing GHCN to another dataset, you look either at the grid products or station by station; for example, you would compare the NASA GISS adjusted data for any given station to the GHCN v2 Adj data for that station. Now, one thing that was noted about the GHCN v2 datasets was that even the adjusted dataset contained multiple records for a single station. For example, look up Moscow (63827612000) in the v2 adjusted dataset and you find multiple records listed under that station number, as seen by the duplicate flag. So I went looking for a station that wouldn't have many multiple locations for a given station number, and I lucked out on my second try: London/Gatwick (65103776000). In the GHCN v2 Raw dataset there are only two locations listed (it also has a relatively short timeframe to work with), and only one in the Adjusted dataset.
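For anyone who wants to do this kind of duplicate-hunting themselves, here is a minimal sketch of counting duplicates per station, assuming the documented v2.mean fixed-width layout (an 11-character station ID, a 1-character duplicate number, a 4-character year, then twelve 5-character monthly values in tenths of a degree C, with -9999 for missing). The sample values below are invented for illustration, not real Gatwick or Moscow data.

```python
from collections import defaultdict

def v2_line(station, dup, year, values):
    """Build one v2.mean-style record; values are in tenths of a degree C."""
    return f"{station}{dup}{year}" + "".join(f"{v:5d}" for v in values)

# Invented sample records: two duplicates for London/Gatwick,
# one for Moscow (real files have many more lines and years).
records = [
    v2_line("65103776000", "0", 1988, [45] * 12),
    v2_line("65103776000", "1", 1988, [46] * 12),
    v2_line("63827612000", "0", 1988, [-60] * 12),
]

def duplicates_by_station(lines):
    """Group the duplicate-number flags by 11-character station ID."""
    dups = defaultdict(set)
    for line in lines:
        station, dup = line[:11], line[11]
        dups[station].add(dup)
    return dups

dups = duplicates_by_station(records)
print(sorted(dups["65103776000"]))  # duplicate flags seen for Gatwick
```

Running the same grouping over the full v2.mean file is how you can spot stations like Moscow with many duplicates and stations like Gatwick with few.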

So I downloaded the GISS final adjusted station data for London/Gatwick, and here I show what it looks like under the old way of comparison:

Figure 1

Now, the first thing I noticed was that the GHCN v2 Adj dataset didn't use the second location's data found in the GHCN v2 Raw dataset, but GISS did: GISS combined the records in an intermediate step of their processing. So let's now compare GISS Adj to GHCN v2 Adj and GHCN v3 QCA.

Figure 2

When you make this comparison, you see that the new GHCN v3 QCA dataset must be using both locations for that station number, just like GISS, since it also has data all the way out to 1997. So it appears that, like GISS, NCDC is now stitching records together as part of the process of taking the raw, relatively unadjusted data and making a longer adjusted station dataset. Now, this wouldn't be a problem except for what I found in the next comparison, GHCN v2 Raw to GHCN v3 QCU:

Figure 3

Now, if the GHCN v3 QCU dataset were truly unadjusted, there is no way you would find one single station record running from 1961 to 1997, since no single location covered that time span. Also notice that the v3 so-called unadjusted data does not match the v2 Raw 00 location in the 1980s. The two station locations only overlapped between 1987 and 1991, so what is the reason to adjust the raw data between 1980 and 1987? The only thing that can be the cause is their "Quality Control" procedures.
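The kind of check behind Figure 3 can be sketched as follows: given two annual series for the same station, flag the years where the supposedly unadjusted v3 values depart from the v2 raw values. The numbers here are invented placeholders, not the actual Gatwick data.

```python
# Invented stand-in series mapping year -> annual mean (deg C).
v2_raw_00 = {1980: 10.1, 1981: 10.3, 1982: 10.0, 1983: 10.4}
v3_qcu    = {1980: 10.4, 1981: 10.6, 1982: 10.0, 1983: 10.4}

def differing_years(a, b, tol=0.05):
    """Years present in both series where values differ by more than tol."""
    return sorted(y for y in a.keys() & b.keys() if abs(a[y] - b[y]) > tol)

changed = differing_years(v2_raw_00, v3_qcu)
print(changed)  # years where the "unadjusted" v3 data departs from v2 raw
```

In this toy example the comparison flags 1980 and 1981; run against the real v2.mean and v3 QCU files, the same idea exposes the 1980-1987 discrepancies discussed above.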

So I went and looked at what NCDC had to say:

Removal of Station Duplicates: A unique feature of the GHCN-M version 2 dataset is the presence of duplicate station records for approximately one-third of its stations. The dataset contains 2,706 stations that have two or more separate sets of observations informally referred to as "duplicates." The term notwithstanding, the two or more duplicate mean temperature series attributed to a single station are, in fact, similar but not exact copies of each other. Duplicates occur because there are often multiple sources of temperature data for any given observing station. For some stations included in GHCN-M version 2, data attributed to a single station were provided in ten or more different databases. These various sources of data often overlap in time, and while the values between sources are generally similar, they are often not identical. The differences most commonly result from the many different ways in which monthly mean temperature can be calculated. In GHCN-M version 3, duplicates are combined into single station series based on a process whereby the longer duplicate time series were given higher [priority].

Changes to the Quality Control Process: The GHCN-M version 3 quality control checks can be grouped into three general categories: basic integrity, outlier, and spatial consistency. Once an observation fails a quality control check, the value is excluded from subsequent checks during that processing cycle. The quality control flags are included in the version 3 dataset for any value identified to be in error, providing information on the type of error associated with a value. The quality control flag is one of three types of metadata information included in the GHCN-M version 3 dataset. It is appended to each observation along with a measurement flag and a source flag. Details on the quality control, measurement, and source flags are available in the version 3 README file.
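The duplicate-merging idea NCDC describes, that longer duplicate series take precedence, can be sketched like this. NCDC's actual version 3 procedure is surely more involved; this only illustrates the stated precedence rule, and the values are invented.

```python
# Invented duplicate series for one station, mapping year -> annual mean.
dup0 = {1987: 10.2, 1988: 10.5, 1989: 10.1}               # shorter duplicate
dup1 = {1988: 10.6, 1989: 10.2, 1990: 10.4, 1991: 10.0}   # longer duplicate

def merge_duplicates(*dups):
    """Combine duplicate series into one; longer series win where they overlap."""
    merged = {}
    # Apply shortest first so later (longer) series overwrite overlapping years.
    for series in sorted(dups, key=len):
        merged.update(series)
    return merged

combined = merge_duplicates(dup0, dup1)
print(sorted(combined))  # one stitched series spanning 1987-1991
```

Note what the sketch makes obvious: in the overlap years the shorter duplicate's values are silently replaced, which is exactly the kind of change to the "unadjusted" data discussed above.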

To me, that description doesn't provide any reason to adjust the raw data for that station between 1987 and 1991 and call it unadjusted. I can see that they decided to move the stitching of the stations into the QC process under the spatial consistency heading; however, in doing that you are still adjusting the data. Also, there is nothing in the definition of the QC procedures that would explain the 1980 to 1987 data adjustment.

NCDC stitching the multiple records together, adjusting the data, and calling the result unadjusted is going to be a problem for GISS. GISS currently uses the GHCN v2 Raw dataset as the input to their computer analysis, as can be seen in this screencap:

Figure 4

With NCDC adjusting that data and stitching it together, GISS is left with only three options as I see them:

1. Try to figure out how NCDC fiddled with the data, undo it, and then run it through their analysis program.

2. Try to get NCDC to provide them with the true unadjusted dataset, i.e., not stitched together and not adjusted in any way.

3. Get out of the temperature analysis business.

If the second option is chosen and implemented without the public having access to that unaltered dataset, it would be a major step back from a transparency and trust perspective. Without that truly unaltered data, the public wouldn't be able to do the kind of analysis that eventually led to the email exchange between Willis Eschenbach and Dr. Peterson.

