Monthly Archives: February 2010

GISTemp Trend Map Data Analysis

In my last post (which, if you haven’t read it yet, you really should, or you won’t get everything I’m saying here) I showed that, at least in the GISS Map Maker program, the 1880-2009 gridded trends have a couple of hidden blemishes. For most of the land area the “trends” are based on interpolated data, not observed data, throughout the time period. This applies either back around 1880, when there were very few stations outside of the US and Europe, or to the “Great Dying of Thermometers” starting around 1990. In some cases, like Central Africa and portions of central South America, the data is “infill”, or as I like to call it a SWAG, in both 2009 and 1880. That’s right: in those areas the entire “warming” trend for that grid point is a guess, with very little observable data from those areas over that almost 130 year time frame.

That’s fine and dandy for the areas where it is completely or mostly infill, but what about the areas where there is a lot of historical data to compare to?

You would think that the infill would have no impact on those grid cells, right? Why would they need to infill when they have actual observed readings?

Well, we can check that using the data provided by GISS from their Map Maker program. By turning the infill down to 250 km you get closer to the actual observed data for those areas. If a grid has actual observed data for the entire (or close to the entire) time period in question, there should be no influence from infill, and turning the radius back from 1200 km to 250 km should not change the value of the trend. Sure, there is still some infill, since it does say 250 km “smoothing”, but you would expect that to be a small portion of what is left, not for every grid box, or even a majority of them, to change.

Now, before I show the results of the comparison between the trends for 1200 km and 250 km infill, let’s lay out what we are talking about.

First, here is the gridded global map from the GISS site, as shown at this link:

http://data.giss.nasa.gov/cgi-bin/gistemp/do_nmap.py?year_last=2010&month_last=1&sat=4&sst=0&type=trends&mean_gen=0112&year1=1880&year2=2009&base1=1951&base2=1980&radius=1200&pol=reg

Now, from that page there is an option to get the data, and the following link is the data page for the map in the link above:

http://data.giss.nasa.gov/work/gistemp/NMAPS/tmp_GHCN_GISS_1200km_Trnd0112_1880_2009/GHCN_GISS_1200km_Trnd0112_1880_2009.txt

Now, when you are looking at the data you see a bunch of columns of numbers. The first two columns are the label GISS gives each box on the gridded map. The boxes start in the lower left hand corner of the map and fill left to right, and that repeats for each row going up the map. Each box is centered on the longitude and latitude given in the next two columns. The next column is the temperature trend GISS believes is the actual change in temperature for that grid box. A value of 9999.0000 means no data.
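To make that layout concrete, here is a minimal sketch (in Python) of reading one of these text files into a lookup table keyed by longitude and latitude. It assumes the five-column order described above (box i, box j, longitude, latitude, trend) and the 9999.0000 missing-data flag; the file name is the one from the data link above.

```python
# Minimal sketch: parse a GISS Map Maker gridded-trend text file.
# Assumes five whitespace-separated columns per row, in the order
# described above: box-i, box-j, longitude, latitude, trend.
# A trend of 9999.0000 means "no data".
MISSING = 9999.0

def read_trend_file(path):
    """Return {(lon, lat): trend}, leaving out cells flagged as missing."""
    trends = {}
    with open(path) as f:
        for line in f:
            parts = line.split()
            if len(parts) < 5:
                continue  # skip blank or header lines
            try:
                lon, lat, trend = float(parts[2]), float(parts[3]), float(parts[4])
            except ValueError:
                continue  # skip any non-numeric line
            if trend != MISSING:
                trends[(lon, lat)] = trend
    return trends

cells = read_trend_file("GHCN_GISS_1200km_Trnd0112_1880_2009.txt")
print(len(cells), "grid cells have a usable 1880-2009 trend at 1200 km infill")
```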

So this is how I handled the data:

Step one: copy the data from that page into a spreadsheet. I got rid of the grid labels since I didn’t need them, but kept the long/lat numbers since they acted as a way to line up the 1200 km and 250 km values. The first thing I noticed is that there are 16,200 individual grid boxes, and of those only 9,232 are used by the GISS land analysis. The other 6,968 are taken up by the Had SST anomaly data, which is not needed for this, so it is turned off and comes up 9999.0000. From there I turned the map setting down from 1200 km to 250 km infill and pulled up that data. I copied it into the spreadsheet and eliminated every grid where the two sets of data didn’t overlap, which cut the number of used grids down to 2,778. So GISS has observed data in the 1880-2009 trend for only about 17% of all grids, and only 30% of all land grids. Of these 2,778 grids you would not expect the majority to have their trends change by going from 1200 km to 250 km infill; however, the answer is that almost every grid trend changed.

Differences in Trend between 1200 km and 250 km Infill

What I did was take the data from the 250 km infill and subtract the data from the 1200 km infill; this gives you the difference between the two, in the correct direction of change. (I almost pulled a Mann when I first did it and had the thing inverted.) What you notice is that one very big area of change in the negative direction is at the far right end. The long/lat where those differences occur is a strip of land where northern Russia meets the Arctic Ocean, and what you see is that a 1.62 degree warming trend with 1200 km infill turns into a 0.61 degree cooling trend. In other areas you see the opposite: for example, there is a strip where the grids had 2.61 degrees of warming, and when the infill was ripped out it went up to 3.39 degrees of warming.
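For anyone who wants to reproduce the comparison without a spreadsheet, here is a hedged sketch of the same procedure in Python: keep only the grid cells that have a value in both files, then take 250 km minus 1200 km. The small file reader from the sketch above is repeated so this example stands alone, and the 250 km file name is a guess following the 1200 km naming pattern, so check it against your own download.

```python
# Sketch: compare GISS 1200 km vs 250 km infill trends (1880-2009).
# Assumes the five-column text format described earlier; the 250 km
# file name is assumed from the 1200 km naming pattern.
MISSING = 9999.0

def read_trend_file(path):
    """Return {(lon, lat): trend}, skipping cells flagged 9999.0000."""
    trends = {}
    with open(path) as f:
        for line in f:
            parts = line.split()
            if len(parts) < 5:
                continue
            try:
                lon, lat, trend = map(float, parts[2:5])
            except ValueError:
                continue
            if trend != MISSING:
                trends[(lon, lat)] = trend
    return trends

t1200 = read_trend_file("GHCN_GISS_1200km_Trnd0112_1880_2009.txt")
t250 = read_trend_file("GHCN_GISS_250km_Trnd0112_1880_2009.txt")

# Keep only cells with a valid trend in BOTH settings.
overlap = sorted(set(t1200) & set(t250))
print(len(overlap), "grid cells have a trend at both 1200 km and 250 km")

# Difference in the direction used above: 250 km minus 1200 km.
diffs = {cell: t250[cell] - t1200[cell] for cell in overlap}
changed = sum(1 for d in diffs.values() if abs(d) > 1e-6)
print(changed, "of those cells change when the infill is reduced")

# Largest swings in either direction.
coolest = min(diffs, key=diffs.get)
warmest = max(diffs, key=diffs.get)
print("biggest drop in trend:", coolest, round(diffs[coolest], 4))
print("biggest rise in trend:", warmest, round(diffs[warmest], 4))
```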

So the bottom line is that we do not know with any certainty what the real trend is for the globe because of GISS infilling.


Interesting GISTemp Trend Maps

There has been much talk across the blogosphere about what happens in GISTemp when station data “drops” out. Questions such as how it affects the gridded anomaly maps and whether it affects the trends are being asked, and people are attempting to answer them.

So far I have seen a few suggestions and attempts to prove or disprove what station drop out does. One suggestion was to prune the GHCN “raw” dataset of stations that don’t make it through the “Great Dying of Thermometers”, run the version of GISTemp published on the GISS website on the pruned data, and compare what it spits out to what you get when you run the full GHCN dataset. This is logical and should be the way it works, but according to EM Smith, who has a copy of GISTemp up and running, GISTemp gets cranky and crashes when you prune the dataset it uses.

That leaves us with various other attempts, including some that make up their own statistical method that is NOT the way GISTemp is supposed to operate (at least according to Dr. Hansen’s published papers, and I’m not talking about RomanM). Changing the way the stations are put together in their grid cells and then trying to see how loss of stations affects the output does not tell us how it affects GISTemp. If your method shows no difference, congratulations, publish it and maybe GISS will switch to your method. However, until such time we need to know how loss of data affects the GISS method, not yours, so you didn’t prove a damn thing about GISS.

Every one of these attempts is trying to simulate what GISTemp does in one way or another, instead of actually seeing what GISTemp really does. That was when I had a bright idea: why not see what GISTemp does by changing the settings on the GISTemp gridded map maker program on the GISS site?

This program allows you to change baselines, change the length of the series, choose whether or not to use SST anomalies, see an anomaly map or a trend map, and, critically for this, change the amount of infill from 1200 km to 250 km.

Why is the infill radius critical?

Because by taking the infill out you are simulating the loss of station data in GISTemp.

Keep in mind that at the 1200 km range you get a lot of “estimated” data thrown in, while at 250 km you are closer to (but not exactly at) seeing straight-up data from a thermometer. A while back I did a short animation called “March of the Thermometers”: GISS-generated anomaly maps at the 250 km infill level, one frame per 10 years. That lets you see how thermometers spread and contracted around the globe. This time around, however, we want to see not the spread or loss of thermometers but the change in trends for each grid cell over time. The only way GISS can do that is by comparing a temperature anomaly at one time to the temperature anomalies in the past for the same grid cell.

The contention is that in the 1950’s, 60’s and 70’s a grid cell’s anomaly was made up of x reporting stations that were combined; here in the 2000’s you no longer have x stations reporting in that grid, you have y, and if you still had x stations the anomaly value for that grid cell would change, and therefore the trend would change. This is then compounded by the GISTemp infill function, because in some grid cells you have actual observations from the 50’s and 60’s and now you have “estimated” data being compared to them, and that is not a good comparison.

 At least that was the conjecture.

By playing around with the GISS map maker program, and assuming it is an accurate reflection of the program GISS uses for their official gridded maps, I found something quite startling, to say the least.

First here is the set up: 

GISS land analysis selected 

Ocean: None  

Map Type: Trend 

Time Interval: 1880 to 2009 

Base Period: 1951 to 1980 

Smoothing Radius: 1200km 

Projection: Regular 
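These settings correspond directly to the query parameters in the do_nmap.py URL quoted earlier on this page (year1/year2 for the time interval, base1/base2 for the base period, radius for the smoothing, pol for the projection). As a rough sketch, and assuming those parameter names still hold, each figure’s request can be built like this:

```python
# Rough sketch: build the GISS Map Maker request for the settings above.
# Parameter names come from the do_nmap.py URL quoted earlier on this
# page; the sat=4 / sst=0 pairing for "GISS land analysis, Ocean: None"
# is an assumption based on that example URL.
from urllib.parse import urlencode

BASE = "http://data.giss.nasa.gov/cgi-bin/gistemp/do_nmap.py"

params = {
    "year_last": 2010, "month_last": 1,  # as in the example URL
    "sat": 4, "sst": 0,                  # GISS land analysis, Ocean: None
    "type": "trends",                    # Map Type: Trend
    "mean_gen": "0112",                  # Jan-Dec mean
    "year1": 1880, "year2": 2009,        # Time Interval
    "base1": 1951, "base2": 1980,        # Base Period
    "radius": 1200,                      # Smoothing Radius in km
    "pol": "reg",                        # Regular projection
}

print(BASE + "?" + urlencode(params))
# Change radius to 250 (and later year1 to 1950) to reproduce the
# other maps discussed below.
```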

That is what we are starting with, and here is what you get:

Figure 1

Notice that most of the land area is covered except for one area that we know has had observations since at least the 1950’s: Antarctica. So right off the bat we have a mystery, one I believe I solved shortly thereafter when I changed the Smoothing Radius (aka the infill) from 1200 km to 250 km and got this:

Figure 2

Holy Toledo, look at that: most of the land area went to “No Data” gray!

Now this raised the question of why it happened, and at first I didn’t know, but there was something awfully familiar about that map. Then it hit me: this 250 km “1880 – 2009 trend” map looked like one of the early-1900s anomaly maps. So I went and pulled them up until I got this one:

Figure 3

Notice this map is Anomaly, not Trend; it is on the same baseline but it is only for 1900 – 1901. Doesn’t the coverage area look mighty close to being the same?

This led me to my own hypothesis: GISTemp only makes a trend for a given cell if it has data, either observed or infilled, to compare the present (or the end of the selected period) against some predetermined time period in the past where GISTemp also has data.

If you go back and look at Figure 2, there is Antarctica again with no trend, even though we know there is data from the 1950’s for that area. The same applies to all of Central America, a lot of the missing area in South America, and other places on that map. To show this we will change the parameters to the following for Figure 4:

GISS land analysis selected

Ocean: None

Map Type: Trend

Time Interval: 1950 to 2009

Base Period: 1951 to 1980

Smoothing Radius: 1200km

Projection: Regular

Figure 4

Now what do we see? Areas that were gray in Figure 1 now have trends in them, including Antarctica, and all I did was change the start date from 1880 to 1950; everything else is the same. Now we switch to 250 km in Figure 5:

Figure 5

This time, unlike what happened when the start date was 1880, I didn’t lose that much; you don’t see half of the land area turn gray.

That can only lead to one conclusion: GISS will only show a gray “No Data” area on its gridded trend maps if it can’t infill from what it has both in the present and in the past. That is why there were so many more gray areas in the 1880 – 2009 250 km trend map: GISS had nothing to compare to in the 1880/1900 time range. That is also why Antarctica is blank on that map, even though there is data from 1950 to the present for that area.

So, bottom line: according to the 1880 – 2009 GISS 250 km trend map, the only grid cells that matter are the ones that have observations back in the 1880/1900 time frame or can be infilled from observations during that time. Everything else is not used.

Oh, by the way, for those wanting to know if a change in data will change the trend, the answer is yes. Go back to Figure 1 and look in the middle of the 1200 km infilled area of Russia. Do you see any white in there, or only oranges and reds? Now look at Figure 2 and you will see a section that is white where before it was orange. In that case more data (i.e. more infill) caused a warming bias in that grid cell. Can it happen in the other direction? Yep: take a look at Florida in Figure 2, where there is a band of yellow running through it, while it’s solid white (i.e. cooler) in Figure 1.

UPDATE: For those interested, you can download the gridded data in text form from the map pages on the GISS site. I did a quick check on two grid cells, at 73 N by 73 E and 73 N by 75 E: the trend for both of those cells at the 1880-2009 1200 km setting is a whopping 1.4509. When you change the setting to 250 km, the trend for those cells drops to -0.6114. In this case a loss of data causes cooling. On the other hand, look at the cell at 41 S by 145 E, which at 1200 km is 0.0405, and the cell at 41 S by 147 E, also at 1200 km, at 0.0319. When you switch to less data by going to 250 km, the trends jump up to 0.7877 and 0.4595 respectively, showing warming.
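As a quick arithmetic check, the changes for those spot-checked cells work out as follows (trend values copied from the update above):

```python
# Quick check of the spot-checked cells above: change in trend when
# going from 1200 km to 250 km infill (values quoted in the update).
cells = {
    "73N 73E and 73N 75E": (1.4509, -0.6114),  # (1200 km, 250 km)
    "41S 145E": (0.0405, 0.7877),
    "41S 147E": (0.0319, 0.4595),
}
for name, (t1200, t250) in cells.items():
    print(name, "->", round(t250 - t1200, 4))
```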

I have now shown that a change in data does change the output, that it can go either way, and that the change can be quite large: in the first example it was a decrease of roughly 2.1° with a loss of data, and in the second example there were increases of roughly 0.75° and 0.43° with a loss of data.

So GISTemp is affected by station dropout.

Temperature Analysis: It’s Climate Science’s Own Pepsi Challenge

One thing Phil Jones, the CRU and others like to point out is how well dataset X matches Y’s “independent” dataset. Well, that is at best a half-truth, half-lie statement.

They are not independent, since they all work from the same basic “raw” datasets. The CRU, GISS and NCDC did not set up their own thermometers all over the globe and take readings; they all rely on the same readings, from the same thermometers, for their analysis.

Where they are “independent” is in how they do their analysis. CRU claims they use the methods spelled out in the multiple Jones et al papers and the Brohan et al 2006 paper. NCDC, as of right now, bases their global analysis on the papers published by Dr. Petersen. GISS does their work based on the Hansen et al 1987 and Hansen et al 2001 papers.

The one I’m most familiar with is the Hansen 2001 paper, and in it Dr. Hansen spells out clearly that each of these analysis methods is different, each with its own inherent strengths and weaknesses.

Since they treat the data differently, they can and do reach different results. Then, to top it off, each country that supplies the raw data via CLIMAT reports does its own analysis of the data as well, using its own methods, which leads to an important question:

Are these various countries sending in true “raw” data, or data that they themselves have already worked on, such as with preliminary homogenization adjustments?

Who knows? NCDC can’t be 100% sure, because they are not the ones doing the readings; they just get what is sent to them, and if they are not sent the correct metadata they wouldn’t necessarily know whether they are getting “raw” or pre-processed data.

So in the end, when you look at the outputs from these various groups, most people are “picking” one over the others as being the “best” or “most accurate”. As an example of the differences between datasets, just head over to the Blackboard and follow Lucia’s posts on the monthly temperature anomalies. For example, HadCrut has the January anomaly at 0.47° C, GISS has it at 0.71° C and NCDC has it at 0.6° C.

So people seem to pick and choose which analysis they believe based on what they see in a gridded map or a line graph, not on the method by which the numbers were derived. They are placing faith that one method is “better and more accurate” than the others.

To hopefully show this, I’m taking part of the work I’m doing going through the Canadian records and putting up two comparative charts containing anomalies computed on the 1961-90 baseline. These two charts will each have three time series on them: one will be the GISS analysis, another the NCDC analysis, and the third will be the Canadian numbers. The catch is I won’t tell you on the graph which is which. Then I will show you, in a separate graph, all the different thermometer series used for each station in the GHCN “raw” data file, so you can see what they all started with.

Finally, at the end, I will reveal which is which and let people see how their own internal bias is working. Will they pick the same analysis for both graphs, or will they switch analyses based on other (visual) factors? Basically this is the old 1980’s Pepsi Challenge, where the person tried both Coke and Pepsi without knowing which was which and at the end was shown which “taste” they truly preferred.
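Since all three series in the charts below are anomalies on the 1961-90 baseline, here is a minimal sketch of that baseline arithmetic, assuming you already have a station’s annual (or monthly) mean temperatures. This is just the generic baseline calculation used to put the series on the same footing, not the GISS, NCDC or Canadian adjustment method, and the sample values are made up.

```python
# Minimal sketch: turn a station's annual mean temperatures into
# anomalies relative to the 1961-1990 baseline. Generic arithmetic
# only, not any agency's full method; sample values are invented.
def anomalies(series, base_start=1961, base_end=1990):
    """series: {year: mean_temp_C}; returns {year: anomaly_C}."""
    base = [t for y, t in series.items() if base_start <= y <= base_end]
    if not base:
        raise ValueError("no data in the baseline period")
    baseline = sum(base) / len(base)
    return {y: round(t - baseline, 2) for y, t in series.items()}

example = {1961: -17.8, 1975: -18.3, 1990: -17.2, 2005: -16.9}
print(anomalies(example))
```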

First let’s start with Alert, NWT, Canada, WMO # 71802:

Figure 1

I’m not giving you trend lines either; you pick the series from what you see there. Now I will give you this from the GHCN “raw” file:

Figure 2

Now I bet you noticed one feature from the “raw” data that corresponds very nicely with one of the adjusted anomaly series: that great big dip in 1973. But did you notice that all the analyses dismiss the huge increase in 1963, on the same thermometer, while keeping the 1973 dip? Also notice that the increase in 1990 seems to have vanished from the adjustments as well. I’ll let you think on how those two omissions might have affected the trend lines.

Now we move on to Eureka, NWT, Canada, WMO # 71917:

Figure 3

Oh, I may or may not have switched the analyses around between Figure 1 and Figure 3, so what was Choice A in Figure 1 may or may not be Choice A in Figure 3.

Now let’s look at the GHCN raw data for this WMO ID:

Figure 4

Now I bet you noticed that my comparison graph didn’t include the later years of the 2000’s, but the raw data shows that there is data in those years. The problem is that one of the analysis methods drops those years, so I had to cut off the others so they matched.

So make your picks as to which one you think is closest to the real temperature, and see if you picked the same analysis for both comparisons without knowing beforehand who did the analysis.

Here are the answers:

Figure 1 A = Canadian, B = GISS, C = NCDC

Figure 3 A = NCDC, B = Canada, C = GISS

Willis Eschenbach’s Reply to Dr. Curry

I am going to shamelessly grab Willis Eschenbach’s reply to Dr. Curry’s multi-blog article, because IMHO you won’t find a clearer refutation of what I call BS (Willis is more diplomatic than me; I also cut out his long synopsis of Dr. Curry’s letter).

Willis Eschenbach (13:50:31) :

[snipped the digest]

Having made such a digest, my next step is to try to condense it into an “elevator speech”. This is a very short statement of the essential principles of an idea. My elevator speech of Judith’s post would be this.

Climategate has destroyed the public trust in climate science. Initially skepticism was funded by big oil. Then a climate auditing movement sprang up. They were able to bring the climate establishment to its knees because people trusted them. Public and policy makers don’t understand the truth as presented by the IPCC. To rebuild trust, climate scientists need to better communicate their ideas to the public, particularly regarding uncertainty. The blogosphere can be valuable in this regard.

OK, now what’s wrong with this picture?

The biggest problem is in one of the core ideas. This is the claim that the problem is that climate scientists have not understood how to present their ideas to the public. Judith, I respect you greatly, but you have grabbed the wrong end of the stick. The problem is not how climate scientists have publicly presented their scientific results.

The problem is that 71.3% of what passes as peer reviewed science is simply junk science, as false as the percentage cited in this statement. In other words, the lack of trust is not a problem of perception. It is a problem of lack of substance. Results are routinely exaggerated. “Scientific papers” are larded with “may” and “might” and “could possibly”. Advocacy is a common thread in scientific papers. Codes and data are routinely concealed. A concerted effort is made to marginalize and censor opposing views.

And most disturbing, for years you and the other climate scientists have not said a word about this disgraceful situation. When Michael Mann had to be hauled in front of a congressional committee to force him to follow the simplest of scientific requirements, transparency, you guys were all wailing about how this was a huge insult to him. An insult to Mann? Get real. Mann is an insult to climate science, and you, Judith, didn’t say one word in public about that. Not that I’m singling you out. No one else stood up for climate science either. It turned my stomach to see the craven cowering of mainstream climate scientists.

The solution to that is not, as you suggest, to give scientists a wider voice or educate them in how to present their garbage to a wider audience.

The solution is for you to stop trying to pass off garbage as science. The solution is for you establishment climate scientists to police your own back yard. When Climategate broke, there was widespread outrage … well, widespread everywhere except in the climate science establishment. Other than a few lone voices, the silence was deafening. And you wonder why we don’t trust you? Because a whole bunch of you are guilty of scientific malfeasance, and the rest of you are complicit in the guilt by your silence.

And you still don’t seem to get it. You approvingly quote Ralph Cicerone about the importance of transparency … Cicerone?? That’s a sick joke.

You don’t get it. You think people made the FOI requests because we were concerned that the people who made the datasets were the people using them in the models. As the person who made the first FOI request to CRU, I assure you that is not true. I made the request to CRU because I was disgusted with Phil Jone’s reply to Warwick Hughes request for data. Jones famously said:

Why should I make the data available to you, when your aim is to try and find something wrong with it?

When I heard that, I was astounded. I thought, “Well, he’s gonna get his hand slapped hard by real scientists for that kind of anti-scientific statements”. So I waited for some mainstream climate scientist to speak out against that kind of scientific malfeasance … and waited … and waited. In fact, I’m still waiting. I registered my protest against this bastardisation of science by filing an FOI. When is one of you mainstream climate scientist going to speak out against this kind of malfeasance? It’s not too late to condemn what Jones said, he’s still in the news and pretending to be a scientist, when is someone going to take a principled stand?

But nobody wants to do that. Instead, you want to explain how trust has been broken, and figure out more effective communication strategies to repair the trust. You want a more effective strategy? Here’s one. Ask every climate scientist to grow a pair of huevos and get outraged in public about the abysmal practices of far, far too many mainstream climate scientists. Because the public is assuredly outraged, and you are all assuredly silent … and that is extremely damaging to you.

A perfect example is you saying above:

Such debate is alive and well in the blogosphere, but few mainstream climate researchers participate in the blogospheric debate. The climate researchers at realclimate.org were the pioneers in this …

For you to say this without also expressing grave concern about realclimate’s ruthless censorship of every opposing view is more of the same conspiracy of silence. Debate is not “alive and well” at realclimate as you say, that’s a crock. Realclimate continues to have an undeserved reputation that it is a scientific blog because you and other mainstream climate scientists are unwilling to bust them for their egregious flouting of scientific norms. When you stay silent about censorship like that, Judith, people will not trust you, nor should they. You have shown by your actions that you are perfectly OK with censoring opposing scientific views.

The key to restoring trust has nothing to do with communication. Steve McIntyre doesn’t inspire trust because he is a good communicator. He inspires trust because he follows the age-old practices of science — transparency and openness and honest reporting of results.

And until mainstream climate science follows his lead, I’ll let you in on a secret — I don’t want trust in climate science to be restored. I don’t want you learning better ways to propagandize for shoddy science. I don’t want you to figure out how to better inspire trust by hiding your unethical practices in new and innovative ways. I don’t want scientists learning to use clever words and communication tricks to get people to think that the wound is healed until it is actually healed. I don’t want you to use the blogosphere to spread your pernicious unsupported unscientific alarmism.

You think this is a problem of image, that climate science has a bad image. It is nothing of the sort. It is a problem of scientific malfeasance and complicity by silence. The public, it turns out, has a much better bullsh*t detector than the mainstream climate scientists do … or at least we’re willing to say so in public, while y’all cower in your holes with your heads down and never, never, never say a bad word about some other scientist’s bogus claims and wrong actions.

You want trust? Do good science, and publicly insist that other climate scientists do good science as well. It’s that simple. Do good science, and publicly call out the Manns and the Joneses and the Thompsons and the rest of the charlatans that you are currently protecting.

Once that is done, the rest will fall in line. And until then, I’m overjoyed that people don’t trust you. I see the lack of trust in mainstream climate science as a huge triumph for real science. Fix it by doing good science and by cleaning up your own backyard. Anything else is a coverup.

Judith, again, my congratulations on being willing to post your ideas in public. You are rara avis, and I respect you greatly for it.

w.

PS – a “monolithic climate denial machine”?? Puhleease, Judith, you’re talking to us folks who were there on the ground fighting the battle. Save that farrago for people who weren’t there, those who don’t know how it went down.

You can see Willis’ and many other replies over at WUWT:

http://wattsupwiththat.com/2010/02/24/on-the-credibility-of-climate-research-part-ii-towards-rebuilding-trust/

UPDATE: Anthony has posted a guest post by Willis dealing with Dr. Curry’s letter. Willis says it is an expansion of the comment I copied above. You can read Willis’ post here:

http://wattsupwiththat.com/2010/02/25/judith-i-love-ya-but-youre-way-wrong/

Dear Dr. Curry………..Bullsh%^

I was reading your piece, Dr. Curry, about loss of trust, and at first I thought you might just be slightly misguided in your understanding, until you uttered this absolute piece of BS:

Skeptical research published by academics provided fodder for the think tanks and advocacy groups, which were fed by money provided by the oil industry. This was all amplified by talk radio and cable news.

Dr. Curry, you, just like every other scientist, keep harping about “Big Oil” funding skeptics, even when you try to use it in a backhanded way, but you continue to ignore the fact that it is you scientists who are rolling in “Big Oil” money. Here are some inconvenient truths for you, Dr. Curry:

The Climatic Research Unit (CRU) of the University of East Anglia was founded by Big Oil.

The CRU was founded in 1971 as part of the university’s School of Environmental Sciences. The establishment of the Unit owed much to the support of Sir Graham Sutton, a former Director-General of the Meteorological Office, Lord Solly Zuckerman, an adviser to the University, and Professors Keith Clayton and Brian Funnel, Deans of the School of Environmental Sciences in 1971 and 1972.[4][5] Initial sponsors included British Petroleum, the Nuffield Foundation and Royal Dutch Shell.[5] The Rockefeller Foundation was another early benefactor, and the Wolfson Foundation gave the Unit its current building in 1986. 

http://en.wikipedia.org/wiki/Climatic_Research_Unit 

The CRU continued to seek funding from Big Oil and even let Big Oil companies set the research agenda:

Mick Kelly and Aeree Kim (CRU, ENV) met with Robert Kleiburg (Shell International’s climate change team) on July 4th primarily to discuss access to Shell information as part of Aeree’s PhD study (our initiative) and broader collaboration through postgrad. student project placements (their initiative), but Robert was also interested in plans for the Tyndall Centre (TC). What ensued was necessarily a rather speculative discussion with the following points emerging.

1.Shell International would give serious consideration to what I referred to in the meeting as a ‘strategic partnership’ with the TC, broadly equivalent to a ‘flagship alliance’ in the TC proposal. A strategic partnership would involve not only the provision of funding but some (limited but genuine) role in setting the research agenda etc.

2. Shell’s interest is not in basic science. Any work they support must have a clear and immediate relevance to ‘real-world’ activities. They are particularly interested in emissions trading and CDM.

3. Robert seemed to be more interested in supporting overseas (developing world) than home/EU studentships, presumably because of the credit abroad and their involvement in CDM. (It is just possible this impression was partially due to the focus on Aeree’s work in the overall discussion but I doubt it.) It seems likely that any support for studentships would be on a case by case basis according to the particular project in question.

4. Finally, we agreed that we would propose a topic to this year’s MSc intake as a placement with Shell and see if any student expressed interest. If this comes off we can run it under the TC banner if it would help.

I would suggest that Robert and his boss are invited to the TC launch at the very least (assuming it will be an invite type affair). Question is how can we and who should take this a step further. Maybe a meeting at Shell with business liaison person, Mike H if time and myself if time? I’d like to/am happy to stay involved through the next stage but then will probably have to back off.

We didn’t cover the new renewable energy foundation.

Mick Kelly

11 September 2000

http://junkscience.com/FOIA/documents/uea-tyndall-shell-memo.doc

Just in case you don’t know, the Rockefeller Foundation gets most of its money from the shares it owns in Exxon/Mobil, the great “boogeyman” that is supposed to be funding the skeptics.

So, as shown, from the very beginning in 1971 the CRU, the scientists working there, and their research were funded by BIG OIL, not the skeptics. Now that we have shown that it is the alarmists who are shilling “Big Oil’s” positions and policies that would make them even more money, let’s move on.

You see, Dr. Curry, the problem at this point isn’t loss of trust, since climate scientists lost that years ago, when as a collective group you buried your heads in the sand regarding the Wegman Report.

Dr. Wegman pointed out in his report, years ago, the core issues that showed up in Climategate. Climategate isn’t a discovery for most of us skeptics; it’s vindication that we were right. The time to regain that trust is long past; now your community has to atone. I will probably get flack for this, but the only comparison I can give for what needs to happen to climate science is what happened to Germany after WWII. For Germany it was de-nazification; for climate science it’s de-alarmisification.

So here, Dr. Curry, is what needs to be done, IMO:

1. Every single climate scientist that is part of the self-described “team” is to be barred from science for the rest of their lives.

2. Every scientist that called a skeptic a denier is barred from the climate sciences and from working with public funds.

3. Every climate scientist that stood by and watched this trainwreck happen, after they had been warned by Dr. Wegman in his report, needs to sign a written apology to be printed in the leading papers around the world.

Harsh? You bet, but just like after WWII this is what needs to be done. You see, Dr. Curry, scientists just like you kept silent and did nothing to reprimand Dr. Mann when his “Hockey Stick” was shown to be nothing more than a product of flawed statistics. You stood silent when Dr. Wegman pointed out that “peer review” in climate science was “pal review” instead. You stood silent when the “team” kept trying to resurrect the “Hockey Stick”. You stood silent when Dr. Mann and the “team” were repeatedly shown to be using data in improper ways.

Just like the German people stood by, watched what happened, and did nothing, the “mainstream” climate scientists did the same. So asking for forgiveness now is too little, too late; it’s time to pay the piper for the tune you let be played.

For those that want to read Dr. Curry’s article you can find it here:

http://noconsensus.wordpress.com/2010/02/24/discussion-of-trust/

or here:

http://wattsupwiththat.com/2010/02/24/on-the-credibility-of-climate-research-part-ii-towards-rebuilding-trust/

WMO Study On Hurricanes

The alarmists keep trotting out the claim that hurricanes are increasing, destroying more property and killing more people. They say this is occurring not just in the US but worldwide, and that it is all due to CO2 increases caused by man.

Well the WMO sponsored a new study that (again) checked this claim, and this study has been published in Nature Geoscience. This new study has found (again):

. . . we cannot at this time conclusively identify anthropogenic signals in past tropical cyclone data.

You can read all about this over on Dr. Roger Pielke Jr.’s blog, where he has been trying to get this point through people’s heads for the last couple of months:

http://rogerpielkejr.blogspot.com/2010/02/updated-wmo-consensus-perspective-on.html

See, not even the UN’s World Meteorological Organization backs this alarmist claim.

Revenge Of The Polar Bears!

This is one I found over on Small Dead Animals, and it is just too good to pass up.

So here we have this woman in Berlin going for a swim…….

In the Polar Bear enclosure at the Berlin Zoo! Now what could go wrong……..

 

Oh not much except the Polar Bear enclosure has Polar Bears in it!

Please note that no Polar Bears were harmed from chowing on an idiot.

http://www.guardian.co.uk/world/gallery/2009/apr/12/animals-germany?picture=345847068

A Hypothetical Cow “Gored” to Death

Hopefully this post will go a long way toward once and for all “Goring” to death the warmist hypothetical cow that explains the “Great Dying of Thermometers” in the GHCN dataset as nothing more than a lag in reporting.

Background first:

This all started with a post on EM Smith’s blog about how, since basically the 1990’s, there has been a huge drop off in the number of reporting stations used in GHCN.

http://chiefio.wordpress.com/2009/11/03/ghcn-the-global-analysis/

Now, GHCN started out back in the late 80’s and early 90’s as a project of the National Climatic Data Center. There they gathered historical data from around the world and compiled it in 1991 into the first iteration of GHCN. That would have taken a lot of work and time, since paper copies had to be transcribed into digital format. GHCN then went through a second revision in 1997 into what we now know as GHCN v2, which can be found here:

http://www.ncdc.noaa.gov/oa/climate/ghcn-monthly/index.php

This new update shouldn’t have taken as much time, since most of the transcribing into digital format was already done. Only newly discovered older records would have required that process, and all new data (such as the monthly updates) comes in as digital data, so adding it should be a simple process if you have a competent programmer.

So when EM Smith found that the number of reporting stations in the GHCN dataset went down from a high of roughly 6,000 to 7,000 stations to around 1,500 stations by 2008, something didn’t sit right, and he looked at which ones went “missing”.

What he found was that the thermometers at higher latitudes and elevations were the ones dropped. It is well known that the closer you get to the poles the lower your temperatures are, and the same thing applies as you get further up in elevation from sea level. This led to a hypothesis on his part: since these lower-temperature thermometers were included in the “baseline” period but not incorporated in the later periods, their loss would induce a bias into the trend, artificially causing the later temperature anomalies to be higher than they should be, and thus giving a false trend.
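As a toy illustration only (not a claim about how GISTemp actually builds its grids), the mechanism the hypothesis describes looks like this: if the later-period figure for a grid were built without the cold stations that were present during the baseline, the later value rises even with no real temperature change. Whether GISTemp’s anomaly-plus-infill method actually behaves this way is exactly what the hypothesis puts in question.

```python
# Toy illustration of the hypothesis above, NOT of GISTemp itself:
# drop the coldest station after the baseline period and the simple
# average rises even though no station actually warmed.
baseline_stations = [-5.0, 2.0, 11.0]  # cold, mid, warm station means
later_stations = [2.0, 11.0]           # the cold station has "died"

baseline_avg = sum(baseline_stations) / len(baseline_stations)  # about 2.7
later_avg = sum(later_stations) / len(later_stations)           # 6.5

print("apparent warming from dropout alone:", round(later_avg - baseline_avg, 2))
```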

Now, before this hypothesis had even been tested, warmists rushed out with a few hypothetical cows to explain the “missing” thermometers. One of these is that the drop off is because of a lag in reporting. In some cases this does make sense, such as areas where there is political unrest, like Central Africa or Iraq, which you can see dropping out in my animation “March of the Thermometers”. However, that doesn’t in any way, shape or form explain Canada.

On the face of it, the excuse of a reporting lag for Canada is farcical, since we all know that Canadians have access to every piece of modern technology we here in the US have. To try to say that remote stations in the rugged Canadian wilderness need to send their reports in on paper, because of a lack of technology, is insulting not just to my intelligence but to the entire country of Canada.

To make it simple for the warmists: even people living in the back hills of the Appalachians, known as “hillbillies” and “rednecks”, know you can get Internet access via satellite, and I’m sure the Canadian government knows this too. So all they need is a generator or other power supply (it can even be a solar panel) and a laptop with a satellite Internet hookup, and even the most remote place on earth can report in virtually “real” time.

To give a little of my background, I was one of the technicians that operated the first tele-radiology systems on the planet when I deployed with Fleet Hospital Six. Via landline or our backup sat-phone we could send X-rays taken from our version of a MASH back to Bethesda Naval Hospital for experts to look at, and they could talk to the docs where we were via teleconferencing. All we did was scan the X-ray into a laptop and use that newfangled thing called the Internet. This was all done back in 1994, and I’m sure the Canadians have figured out the basics of it by now. So this isn’t rocket science, people; it can be done with stuff you find at Radio Shack.

However, that piece of logic is probably too much for warmists to understand, so let’s go with what the WMO has to say about this. You see, the WMO set up the Global Telecommunications System (GTS) so that even the most remote and backwards countries could report in with a monthly CLIMAT report. So let’s go to the WMO guide on operations and see what it says:

2.6.2 Logging and reporting of observations

Immediately after taking an observation at a manual station, the observer must enter the data into a logbook, journal, or register that is kept at the station for this purpose. Alternatively, the observation may be entered or transcribed immediately into a computer or transmission terminal and a database. Legislation or legal entities (such as courts of law) in some countries may require a paper record or a printout of the original entry to be retained for use as evidence in legal cases, or may have difficulty accepting database generated information. The observer must ensure that a complete and accurate record has been made of the observation. At a specified frequency (ranging from immediately to once a month), depending on the requirements of the NMHS, data must be transferred from the station record (including a computer database) to a specific report form for transmittal, either by mail or electronically, to a central office.

Climatological station personnel must ensure that there is a correct copy of the pertinent information in the report form. In the case of paper records, the necessities for good, clear handwriting, “clean” journals and report forms should be emphasized. Usually, more information, perhaps pertaining to unusual weather phenomena and occurrences, is entered in the local record than is required by the central office. The on-station record must be retained and readily accessible so that the station personnel can respond to any inquiries made by the central office regarding possible errors or omissions in the report form. Some services request observers to send logbooks to the national climate center for permanent archiving.

Some national climate centers will require the station personnel to calculate and insert monthly totals and means of precipitation and temperature so that the data may be more easily checked at the section or central office. In addition, either the climate center or observer should encode data for the CLIMAT messages (WMO/TD-No. 1188), if appropriate. WMO has software to encode the data. The observer should note in the station logbook and on the report forms the nature and times of occurrence of any damage to or failure of instruments, maintenance activities, and any change in equipment or exposure of the station, since such events might significantly affect the observed data and thus the climatological record. Where appropriate, instructions should be provided for transmitting observations electronically. If mail is the method of transmission, instructions for mailing should be provided to the station as well as pre-addressed, stamped envelopes for sending the report forms to the central climate office.

  Page 46/47 of http://www.wmo.int/pages/prog/wcp/documents/Guide2.pdf

Now, I know that mail is slow, but I doubt it takes up to 18 years for observations taken in the early 90’s to reach NCDC. Also notice that the WMO provides software for proper transmission of CLIMAT reports.

Now that should kill the hypothetical cow of “late reporting” right there, but let’s keep looking:

3.3 Climate data management

Climatological data are most useful if they are edited, quality controlled, and stored in a national archive or climate center and made readily accessible in easy-to-use forms. Although technological innovations are occurring at a rapid pace, many climatological records held by NMHS are still in non-digital form. These records must be managed along with the increasing quantity of digital records. A Climate Data Management System (CDMS) is a set of tools and procedures that allows all data relevant to climate studies to be properly stored and managed.

The primary goals of database management are to maintain the integrity of the database at all times, and to ensure that the database contains all the data and metadata needed to meet the requirements for which it was established, both now and into the future. Database management systems have revolutionized climate data management by allowing efficient storage, access, conversion, and update for many types of data, and by enhancing security of the data.

A major step forward in climate database management occurred with the World Climate Data and Monitoring Programme (WCDMP) Climate Computing (CLICOM) project in 1985. This project led to the installation of climate database software on personal computers, thus providing NMHS in even the smallest of countries with the capability of efficiently managing their climate records. The project also provided the foundation for demonstrable improvements in climate services, applications, and research. In the late 1990s, the WCDMP initiated a CDMS project to take advantage of the latest technologies to meet the varied and growing data management needs of WMO Members. Aside from advances in database technologies such as relational databases, query languages, and links with Geographical Information Systems, more efficient data capture was made possible with the increase in AWS, electronic field books, the Internet, and other advances in technology.

Page 63 http://www.wmo.int/pages/prog/wcp/documents/Guide2.pdf

So here we see that the WMO mandated, starting in 1985, that records were to be kept in digital form. Now, I’m sure Canada could have managed that feat in the last 25 years, let alone the “smallest countries” the WMO mentions. Then, in the late 90’s, the WMO initiated a project that uses such newfangled things as the Internet and electronic field books for more efficient data capture. I’m sure Canada could have implemented that system in the last 10 years.

Now this hypothetical cow of “late reporting”, or a lag, is looking more and more dead, but let’s soldier on:

3.5 Exchange of climatic data

Exchange of data is essential for climatology. For states that are members of WMO the obligation to share data and metadata with other members, and the conditions under which these may be passed to third parties is covered under WMO Resolution 40 (Cg-XIII) for meteorological data, WMO Resolution 25 (Cg-XIV) for hydrological data, and the Intergovernmental Oceanographic Commission Resolution XXII-6 for oceanographic data. The Resolutions embody the concepts of “essential” and “additional” data, with a specification of a minimum set of data that should be made available in a non-discriminatory manner and at a charge of no more than the cost of reproduction and delivery, without charge for the data and products themselves. Members may decide to declare as “essential” more than the minimum set. The use of agreed upon international standard formats for data exchange is critical.

Beyond CLIMAT and related messages (see section 4.8.7), members are also asked to provide additional data and products that are required to sustain WMO programmes at the global, regional, and national levels and to assist other Members in providing meteorological and climatological services in their countries. Members supplying such additional data and products may place conditions on their reexport. Research and educational communities should be provided with free and unrestricted access to all data and products exchanged under the auspices of WMO for their noncommercial activities.

Members of WMO volunteer subsets of their stations to be parts of various networks, including the GCOS GUAN, the GCOS GSN, the Regional Basic Synoptic Network, and the Regional Basic Climatological Network. Nomination of stations in these networks implies an obligation to share the data internationally.

Data are also shared through International Council for Science World Data Centers (WDCs). The WDC system works to guarantee access to solar, geophysical, and related environmental data. It serves the whole scientific community by assembling, scrutinizing, organizing, and disseminating data and information. WDCs collect, document, and archive measurements and the associated metadata from stations worldwide and make these data freely available to the scientific community. In some cases WDCs also provide additional products including data analyses, maps of data distributions, and data summaries. There are climate related International Council for Science WDCs covering Meteorology, Paleoclimatology, Oceanography, Atmospheric Trace Gases, Glaciology, Soils, Marine Geology and Geophysics, Sunspots, Solar activity, SolarTerrestrial Physics, Airglow, Aurora, Cosmic Rays, as well as for other disciplines.

WMO is actively involved in the provision of data to a number of these International Council for Science WDCs, and there are a number of associated centers operated directly through WMO. The WMO centers include Ozone and Ultraviolet Radiation, Greenhouse Gases, Aerosols, Aerosol Optical Depth, Radiation, and Precipitation Chemistry. There are differences in data access policy for International Council for Science and WMO centers. International Council for Science data centers will exchange data among themselves without charge and will provide data to scientists in any country free of charge. WMO data centers must abide by data exchange Resolutions 40 and 25 that allow for some data or products to be placed in the WDCs with conditions attached to their use.

There are many other centers beyond the International Council for Science WDCs that operate under cooperative agreements with WMO or with individual NMHS. These centers include the Global Precipitation Climatology Center and Global Runoff Data Center (Germany); Australian National Climate Center; Canadian Climate and Water Information Data Center; Hadley Center (UK); and in the USA, Lamont-Doherty Earth Observatory of Columbia University, National Climatic Data Center, National Oceanographic Data Center, National Geophysical Data Center, NASA Goddard Distributed Active Archive Center, Tropical Pacific Ocean Observing Array, and University Corporation for Atmospheric Research.

Exchange of digital data is simple for many members because of the range of computer communications systems available. The Global Telecommunication System is the meteorological communication system with connections to virtually all countries of the world. As an operational system with a critical role in global weather forecasting, it provides reliable communication services, albeit sometimes with low bandwidth. Like the Internet, the Global Telecommunication System is based on a confederation of interconnected networks. However, as a closed system, it is free from the security breaches that often plague the Internet. Open communication linkages such as the Internet should be protected by the best available security software systems to minimize the danger of unwanted access and file manipulation or corruption.

It is highly unlikely that archived formats used for climatological data by one country would be the same as those used by another. The format documentation describing the data organization, element types, units, and any other pertinent information should accompany the data. In addition, if the digital data are compacted or in a special nontext format, it is extremely useful for the contributing archive center to provide “read” routines to accompany digital data requested from an archive.

International data exchange agreements allow for the global compilation of publications such as Climatic Normals, World Weather Records, and Monthly Climatic Data for the World. Bilateral or multilateral agreements are also important in creating and exchanging long-term data sets, such as the Global Historical Climate Network, Comprehensive Aerological Reference, and Comprehensive Ocean-Atmosphere Data Sets compiled by the United States and the Hadley Centre global observations data sets compiled by the United Kingdom. These data sets are generally provided to research centers.

The current WMO information systems have been developed to meet a diverse set of needs for many different programmes and Commissions. The multiplicity of systems has resulted in incompatibilities, inefficiencies, duplication of effort, and higher overall costs for Members. An alternative approach planned to improve efficiency of the transfer of data and information among countries is the WMO Information System. It is envisioned that the WMO Information System will be used for the collection and sharing of information for all WMO and related international programmes. Nonmeteorological and nonclimatic environmental and geophysical data such as ecological, earthquake and tsunami data could be included. The WMO Information System vision provides guidance for the orderly evolution of existing systems into an integrated system that efficiently meets the international environmental information requirements of Members.

The WMO Information System will provide an integrated approach to routine collection and automated dissemination of observed data and products, timely delivery of data and products, and requests for data and products. It should be reliable, cost effective, and affordable for developing as well as developed Members; technologically sustainable and appropriate to local expertise; modular; and scalable, flexible, and extensible. It should be able to adjust to changing requirements, allow dissemination of products from diverse data sources, and allow participants to collaborate at levels appropriate to their responsibilities and budgetary resources. The WMO Information System should also support different user groups and access policies such as WMO Resolutions 40 and 25, data as well as network security, and integration of diverse data sets.

 

Pages 79-82 http://www.wmo.int/pages/prog/wcp/documents/Guide2.pdf

Now this section is very important, and I highlighted the most important parts of it (some of this might already be emphasized in the WMO document).

Guess who is the WDC responsible for maintaining the temperature archives that the WMO tells its member nations to send their data to?

World Data Center (WDC) for Meteorology, Asheville is one component of a global network of discipline subcenters that facilitate international exchange of scientific data. Originally established during the International Geophysical Year (IGY) of 1957, the World Data Center System now functions under the guidance of the International Council of Scientific Unions (ICSU).

The WDC for Meteorology, Asheville is maintained by the U.S. Department of Commerce, National Oceanic and Atmospheric Administration (NOAA) and is collocated and operated by the National Climatic Data Center (NCDC).

In accordance with the principles set forth by ICSU, WDC for Meteorology, Asheville acquires, catalogues, and archives data and makes them available to requesters in the international scientific community. Data are exchanged with counterparts, WDC for Meteorology, Obninsk and WDC for Meteorology, Beijing as necessary to improve access. Special research data sets prepared under international programs such as the IGY, World Climate Program (WCP), Global Atmospheric Research Program (GARP), etc., are archived and made available to the research community. All data and special data sets contributed to the WDC are available to scientific investigators without restriction.

http://www.ncdc.noaa.gov/oa/wdc/index.php

That’s right: the National Climatic Data Center (NCDC) in Asheville, NC. That is who is supposed to get every CLIMAT report sent over the GTS.

Why?

One reason, as mentioned by the WMO, is the GHCN dataset.

Various data sets and data products from international programs and/or experiments, including meteorological and nuclear radiation data for International Geophysical Year (IGY)(see IGY Annuals, Vol.26); meteorological data and data products from Global Atmospheric Research Program, World Climate Research Program, World Climate Data and Monitoring Program; and data (including data publications) exchanged with the WDC by participating countries. Quality control is performed and documentation prepared by designated experiment centers or contributors before submission to WDC.

Global Historical Climate Network (GHCN) dataset. GHCN is a comprehensive global baseline climate data set comprised of land surface station observations of temperature, precipitation, and pressure. All GHCN data are on a monthly basis with the earliest record dating from 1697.

Is it starting to dawn on the warmists now that all CLIMAT records are available to the makers of the GHCN dataset, and furthermore that one of their primary jobs is to archive all the temperature data in the world?

There is no such thing as a “lag” in reporting the data from Canada to the NCDC: thanks to the GTS, once a CLIMAT report is transmitted, NCDC has access to it.

However, let’s make sure this hypothetical cow gets turned completely into hypothetical hamburger. You see, there is another organization getting the same CLIMAT reports that NCDC is supposed to be getting. That organization is the US Air Force, and they combine the CLIMAT reports with weather data from their own AWN system and give the QCed dataset to NCDC.

National Climatic Data Center

DATA DOCUMENTATION  FOR

DATA SET 9950 (DSI-9950)

DATSAV2 SURFACE

January 6, 2003

Abstract: DATSAV2 is the official climatological database for surface observations. The database is composed of worldwide surface weather observations from about 10,000 currently active stations, collected and stored from sources such as the US Air Force’s Automated Weather Network (AWN) and the WMO’s  Global Telecommunications System (GTS). Most collected observations are decoded at the Air Force Weather Agency (AFWA) formerly known as the Air Force Global Weather Central (AFGWC) at Offutt AFB, Nebraska, and then sent electronically to the USAF Combat Climatology Center (AFCCC), collocated with NCDC in the Federal Climate Complex in Asheville, NC. AFCCC builds the final database through decode, validation, and quality control software. All data are stored in a single ASCII format. The database is used in climatological applications by numerous DoD and civilian customers.

AFCCC sorts the observations into station-date-time order, validates each station number against the Air Weather Service Master Station Catalog (AWSMSC), runs several quality control programs, and then merges and sorts the data further into monthly and yearly station-ordered files. AFCCC then provides the data to the collocated National Climatic Data Center (NCDC).


http://www1.ncdc.noaa.gov/pub/data/documentlibrary/tddoc/td9950.pdf
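To make the processing chain in that abstract a little more concrete, here is a minimal sketch (not AFCCC’s actual software) of sorting raw surface observations into station-date-time order and rolling them up into monthly, station-ordered values. The input file name, column names, and the use of a simple monthly mean are my own assumptions for illustration.

# Minimal sketch of a station-date-time sort and monthly station-ordered merge.
# Assumes a hypothetical CSV "observations.csv" with columns:
#   station_id, timestamp (ISO 8601), temp_c
import csv
from collections import defaultdict
from datetime import datetime

rows = []
with open("observations.csv", newline="") as f:
    for rec in csv.DictReader(f):
        rec["timestamp"] = datetime.fromisoformat(rec["timestamp"])
        rec["temp_c"] = float(rec["temp_c"])
        rows.append(rec)

# Sort into station-date-time order, as in the DATSAV2 build description
rows.sort(key=lambda r: (r["station_id"], r["timestamp"]))

# Merge into monthly station-ordered aggregates (here: simple monthly means)
monthly = defaultdict(list)
for r in rows:
    key = (r["station_id"], r["timestamp"].year, r["timestamp"].month)
    monthly[key].append(r["temp_c"])

for (station, year, month), temps in sorted(monthly.items()):
    print(station, year, month, round(sum(temps) / len(temps), 2))

The real DATSAV2 build obviously does far more (validation against the AWSMSC catalog, quality-control checks, many more weather elements), but the sort-then-merge idea is the same.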

In 2003, while GHCN was losing stations, the Air Force somehow came up with 10,000 active reporting stations, and according to NASA the number today is up to 13,000:

Contains worldwide surface observations (synoptic, airways, METAR, synoptic ship) for about 13,000 stations. All weather elements transmitted are retained; in some cases, computed/derived values are incorporated into the record. Also available are “station files” (individual station data sets for selected stations) that have … received more quality control. Elements reported are: Wind direction, Snowfall and snow depth data, Wind speed, Runway data, Barometric pressures, Hail data, Pressure tendency & change, Sunshine data, Dry bulb temperature, ground temperature and conditions, Dew point temperature, Maximum and minimum temperatures, Total sky cover, Ship data, Visibility, Sea surface temperature, Past and present weather, Wave data, Cloud layer data, Swell data, Ceiling, Ship ice reports, Precipitation data.

The DATSAV2 Surface Data is also available from NOAA/NESDIS/NCDC (National Climatic Data Center) in Asheville, NC.

http://gcmd.nasa.gov/records/GCMD_USAFETAC_SFFMG.html

Now we see that this dataset has anywhere from 10,000 to 13,000 active stations. GHCN has 1,500 active stations. I can tell you straight out the US military does not operate 8,500 to 11,500 bases around the world, so most of the data is coming over the WMO’s GTS, which the NCDC is supposed to receive and archive.

So how is it the Air Force is able to find and use these stations to make what NCDC calls the “official climatological database”, but NCDC can’t seem to include them in GHCN, even after the Air Force has given the data to them?

So, as shown, the technological tools are there, Canada has the resources to use those tools, and the NCDC is the designated world archive for the world’s temperatures. Yet NCDC can’t seem to find active stations outside of the 1,500 they have in GHCN, while at the same time the US Air Force is able to find over 10,000 active stations, update its dataset every three months, and even share that data with NCDC. NCDC still can’t get that data into GHCN after most of the work has been done for them.
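To put that station-count comparison on a concrete footing, here is a minimal sketch of how one might tally the station identifiers in a GHCN-style inventory against a DATSAV2/ISD-style station list. The file names, the one-ID-per-line layout, and the assumption that both lists share a common identifier scheme are simplifications for illustration only; the real inventories use different formats and numbering systems.

# Minimal sketch: count stations in two hypothetical inventory files and
# report how many IDs appear in the DATSAV2-style list but not in GHCN.
def station_ids(path):
    """Return the set of station IDs (first whitespace-delimited field per line)."""
    with open(path) as f:
        return {line.split()[0] for line in f if line.strip()}

ghcn = station_ids("ghcn_inventory.txt")       # hypothetical GHCN-style inventory
datsav2 = station_ids("datsav2_stations.txt")  # hypothetical DATSAV2/ISD-style list

print("GHCN stations:   ", len(ghcn))
print("DATSAV2 stations:", len(datsav2))
print("In DATSAV2 but not in GHCN:", len(datsav2 - ghcn))

Even a rough count like this makes the gap between the two station lists visible at a glance.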

Remember The Dying Lockerbie Bomber?

Remember the Lockerbie Bomber that was released because he had only 3 months to live? The same terrorist that was so close to death’s door he wasn’t a threat to anyone anymore? The same guy that got a celebration from the Libyan Government after they promised Obama they wouldn’t? The one released on compassionate grounds, supposedly, and not because Libya blackmailed the UK?

Well guess what, people: he ain’t dead, he isn’t on chemo anymore, and his Libyan doctors say he isn’t getting any closer to dying. This is in an article in the Daily Telegraph, a UK paper. According to them this scumbag is living in luxury in Tripoli, and it doesn’t look like he is going to die any time soon.

http://www.telegraph.co.uk/news/worldnews/africaandindianocean/libya/7279123/Lockerbie-bomber-Megrahi-living-in-luxury-villa-six-months-after-being-at-deaths-door.html

After reading the article this is what I conclude:

The doctor, who was paid for by the Libyan Government, lied not only about how long this guy had to live, but quite possibly about whether the guy had terminal cancer at all. He was instructed by the Libyans to say he had only 3 months so he qualified for release.

As to the reason he was released: compassionate my ASS. The UK sold out to Libya over oil, plain and simple. If they had any compassion at all it would have been for the Pan Am 103 families, and they could have shown it right after this piece of excrement was convicted by taking him outside, putting 2 in the back of his head, and burying him in an unmarked grave. Instead they gave this guy a “life” sentence. I am thinking about emailing every one of these putzes that freed him with the definition of what a “life” sentence means. It wouldn’t be a long email, and it would probably go something like this:

Sir or Madam:

As demonstrated in the Lockerbie Bomber case, you have a flawed understanding of what a “life” sentence means. So that an episode like this never happens again, I ask that you memorize the following sentence: A life sentence means the convict dies inside the prison and only leaves AFTER death.

For some reason I don’t think that is enough so I have a second idea.

To release a scumbag, murdering terrorist on compassionate grounds because he or she is dying, three conditions must first be met:

1. A doctor or doctors who work for the families of the victims decide whether the scumbag terrorist meets compassionate grounds, not the doctor paid for by the scumbag’s government.

2. The closest living relative of the scumbag must take his or her place in prison.

3. The idiot politician that has more compassion for the scumbag terrorist than for his or her victims must also take his/her place in prison.

The relative and the politician will remain in prison until one of two conditions is met:

1. The politician and the relative both die, thereby fulfilling the “life” part of the original sentence.

2. The released scumbag dies and is brought back to the country he/she was imprisoned in for an autopsy. If the body isn’t brought back, or if a substitute body is used, then the relative and the politician must stay in prison until guideline 1 is met.

If that were implemented, I guarantee you will never see another terrorist released ever again.

As to the UK and Scottish politicians and that doctor, I can only wish there is some type of class-action case the families of the victims can bring against them, since Obama ain’t going to do squat.

Miracle On Ice: 30 Years Later

It has been 30 years since a group of young college kids beat the most dominant hockey team in the world, and arguably one of the most dominant teams in any sport, on their way to winning the 1980 Gold Medal. What is more striking is that we are now into a second generation that does not know, deep down, how important that game really was.

I was 13 years old during those Olympics and was part of the generation that grew up with the political defeat of America’s willpower in Vietnam. In the early 1970s the US gave up on its ally in South Vietnam and left, while at the same time our President acted like a petty crook by having people break into an office of his political opponents (Watergate). Then in 1975 we watched on TV the fall of Saigon: people begging us not to leave them behind, helicopters flying people off the roof of the embassy and landing on aircraft carriers, and once a helicopter was emptied and no more flights were needed, it was pushed over the side to make more room. Then the man who replaced a disgraced President became best known for being a buffoon, a pratfall waiting to happen. During this time we had one economic crisis after another, with oil embargoes in 1973 and again in 1979.

1979 could be described as one of the nation’s darkest hours, or at least it seemed that way. What I just described was bad enough, but things were about to get worse. We had elected a President who thought he could finally bring peace between the US and the Soviet Union, and with that, peace in the Middle East as well. After initial success in the Middle East, bringing peace between Egypt and Israel, things started falling apart. In November of 1979 the US Embassy in Iran was overrun and its staff taken hostage; then in December the USSR invaded Afghanistan, basically thumbing their nose at the US. It was also that December that Chrysler, one of the Big 3 automakers, had to be bailed out by the Government. As that was going on, the steel industry was turning into a rust belt as US companies increasingly found themselves trying to compete with obsolete plants and equipment.

In international sports things weren’t that great either. Where communist countries basically paid their athletes (they claimed they were in the armed forces and other such scams), the US used college athletes. In basketball at the 1972 Olympics, besides the terrorist attack that killed the Israeli athletes, the US Men’s Basketball team was basically cheated out of the Gold Medal. In gymnastics you saw the dominance of Soviet gymnasts such as Olga Korbut. On the Winter Olympic side the bright spot for the US has always been the women’s figure skaters, but that was it. Alpine skiing was dominated then, as now, basically by the Swiss and the Austrians, and Nordic skiing by Norway and Sweden. In bobsled and luge the US was always an also-ran.

Hockey was the worst, not just for the US but also for Canada. For the two countries that founded the sport and where the original NHL teams formed, hockey became an embarrassment. In 1960 the US won the Gold Medal in hockey, but from that point forward was looked on as a joke. The problem was that the best young Canadian and US players basically skipped college play and went straight into the NHL, and because professionals couldn’t play by rule, Canada didn’t compete. So starting with the 1964 Olympics the Soviets won the Gold Medal. However, that was just the beginning of the story: in 1972 NHL players took part in what was called the Summit Series against the Soviet “amateurs”.

The Series was played at a time when only amateurs were allowed to play in the Olympic Games. The Soviet players, who had Olympic experience, were amateurs by strict definition only, as they were elite players playing hockey full-time in their native country. Some were given other titular professions (e.g. army soldiers playing full-time for the Central Red Army hockey team) to maintain amateur status for Olympic eligibility. Team Canada featured the country’s best professional NHLers, who by virtue of this status were ineligible for Olympic competition. For this reason, Canada had ceased competing in the IIHF World Championships and Winter Olympics after 1969.

At the time, the National Hockey League, and also its best players, consisted largely of Canadians and was considered to be where the best hockey players played. The public consensus of hockey pundits and fans in North America was that other countries, the Soviets in this case, were simply no match for Canada’s best. The Soviets were not expected to even give the Canadians a challenge, and Canada was going into this series expected to win eight games to zero.

 http://en.wikipedia.org/wiki/Summit_Series

Canada barely won the series 4-3-1, but it was not a friendly series: Canadian player and NHL star Bobby Clarke broke Valeri Kharlamov’s ankle with a two-handed slash. It also showed how good the Soviets were.

In the middle years of the ’70s there was the “Super Series”.

In 1976 the Soviet Wings went 3-1, outscoring their opponents 13 to 7 in the games they won. If it hadn’t been for the lopsided 12 to 6 loss to the Sabres, things would have been worse.

Also in ’76 you had the Central Red Army team going 2-1-1 against NHL teams, losing only to the defending Stanley Cup Champion Flyers and tying Montreal, while outscoring their opponents 16 to 12.

In ’78 the Moscow Spartak won a series against NHL teams 3 games to 2 and outscored the NHL 14 goals to 12.

In ’79 the Wings again played NHL teams, won the series 2-1-1, and outscored the NHL 21 to 16.

In 1980, just 3 days before the Olympics, the Soviet national team crushed Team USA in an exhibition game. To show how bad the odds were going in: the Soviets’ record over the last four Olympics was 27-1-1, and they outscored their opponents 175-44. Winning the Gold Medal wasn’t the goal in most people’s minds; it was just not looking like a joke.

So when they beat the Soviets on their way to the Gold Medal, it truly was a miracle, and it gave a nation new hope.