Blood Red Oceans?
February 2, 2010
Yesterday, over on his site, EM Smith had this post:
Now this has stirred up some debate about what is going on, with some wanting to just scoff it off: well, who does a null check on this stuff, and so what, it doesn’t affect the regular maps. So after thinking on this stuff overnight and playing around with the GISS page some, I came to some conclusions.
1. There is a bug in the code used by the GISS computers for that site.
2. There is no set standard integer for ‘no data’ in temperature databases. In some, such as GHCN, the number is 999. If you go to another area of the GISS site, you can download data for a single station; when you do, you see that they use 999.9, not 9999, as the no-data number. Other times the number isn’t even positive, it’s negative. So something as simple as a missing decimal place or a missing – sign could screw things up if the code is looking for them: no – sign in front of the 9999 means real data to the code, and it passes it through.
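To illustrate the point (this is just a sketch in Python with made-up sentinel conventions, not GISS’s actual code): a reader that only recognizes one ‘no data’ marker will happily wave values from another convention through as real temperatures.

```python
# Hypothetical example: this reader only knows about the -9999 sentinel.
MISSING_SENTINELS = {-9999}

def is_missing(value):
    """Return True if the value is a recognized 'no data' marker."""
    return value in MISSING_SENTINELS

# Values using three different 'no data' conventions, plus one real reading:
readings = [12.3, 9999, 999.9, -9999]

valid = [v for v in readings if not is_missing(v)]
print(valid)  # 9999 and 999.9 slip through as "real" data
```

Only the -9999 gets filtered; the 9999 and 999.9 sail right past the check, exactly the kind of mismatch a missing – sign or decimal place would cause.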
3. This is not just a graphical bug; it is treating the numbers as real data, not as ‘no data’. You can logically figure this out from this simple fact: the average anomaly is over 5000, and the 9999 appears on the temperature scale. If it were just a graphical bug, where the code used red instead of grey as it is supposed to, the average anomaly would still read 0 and the temperature scale wouldn’t go to 9999. The 9999 is being used as data for those areas in the ‘Time Interval’ selection but is not being used in the ‘baseline’ selection (if it were used in both, the oceans would be white and the global average would be back to 0), and the code is using it to calculate the global average anomaly.
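Here is a toy example of why that matters for the average (again, hypothetical numbers, not GISS’s code): include the 9999 sentinel in the mean and the anomaly blows up into the thousands; mask it out and you get a sensible number near zero.

```python
# Hypothetical anomaly grid: mostly ocean cells holding the 9999 sentinel,
# plus a few land cells with real anomalies near zero.
cells = [9999.0] * 7 + [0.2, -0.1, 0.5]

# Sentinel treated as real data -- the mean is wildly inflated:
bad_mean = sum(cells) / len(cells)

# Sentinel masked out before averaging -- the mean is sane:
real = [c for c in cells if c != 9999.0]
good_mean = sum(real) / len(real)

print(round(bad_mean, 1))   # 6999.4 -- thousands, like the >5000 we saw
print(round(good_mean, 2))  # 0.2
```

That is the same signature as the maps: an average anomaly in the thousands only happens if the no-data cells are being summed in as if they were measurements.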
I played with the setup myself and did polar views that showed the entire planet bright red except for some dark red, and posted the links to them in the comments section over on EM’s site.
UPDATE: I went and just tested those links and they are now dead with a message from GISS:
Surface Temperature Analysis: Maps
Anomalies for any period with respect to itself is 0 by definition – no map needed
I wonder if GISS realizes that by setting the period to a null value and turning the infill down to 250 km, it was much easier to see the no-data areas, even while it was buggy. Also of note: there is no mention that the code that handled such a null-value test was buggy.
Bottom line: the code has a bug, it does use the 9999 as data in some cases, and GISS is now aware of this problem and has killed the links to the maps made by the bug; you can’t redo the maps, you just get the reminder message. GISS should have acknowledged that the maps were due to a bug and that they are looking at fixing the code, not just put up that message. It smacks too much of the “Hide the decline” attitude. So this leaves us with some questions:
1. Was the bug just confined to null-value maps? I tend to think so, since there are enough no-data areas, especially with no SST data added, to show a huge increase in the average anomaly if it wasn’t.
2. Are there more bugs in the code that do affect the regular maps? We don’t know at this point; however, once you find one bug in code that has been released, you have to actively start bug hunting again or get burnt (see Microsoft and buggy code after release).
3. Was this code used to make maps in official presentations? Again, we don’t know, but, as an example, was this code used in Dr. Hansen’s recent presentation? If the answer is yes, you have to keep this bug in mind when looking at his maps.