View 600 December 7 - 13, 2009


Friday, December 11, 2009

Data and Climate Science

First, if you have any interest in the climate debate, you must read the careful analysis of the data from Darwin, Australia that we referred to last evening. I have studied this in some detail since Joanne recommended it, and it is important: not because it is a "smoking gun" demonstrating evil on the part of the climate analyzers, but because it raises questions that must be answered before the world spends trillions on remedies to climate change.

The analysis shows that the primary raw data show one trend; the adjusted "harmonized" data that were input into the models used to predict climate change show quite another. Now this may be a very reasonable adjustment -- but that adjustment has to be open, aboveboard, and justified. So far we have not seen any such explanations, and it has all been done in house, not openly.

The analysis also shows the difficulties of finding a temperature. Think on it. Suppose all of us put thermometers outside our houses somewhere. Today we can get electronic thermometers, or we can stay with a column of mercury with numbers painted on it. Either way we need a calibration: where do we paint the numbers? A mercury thermometer isn't self-calibrating as a mercury manometer would be. Millimeters or inches of mercury is itself a primary measure of pressure; but how high a sliver of mercury rises in a small tube as a function of temperature needs comparison with some known temperatures before you paint the numbers on. This is easy enough to do, but it must be done; keep this in the back of your mind as we look into the problem of determining just what the temperature is out on my balcony.
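For the curious, here is what that painting-the-numbers step amounts to; a minimal sketch, assuming the classic two fixed points (an ice bath and boiling water at sea level), with made-up column heights:

```python
# Two-point linear calibration of a mercury thermometer.
# Assumed fixed points: an ice bath (0 C) and boiling water (100 C at
# sea level). The column heights in millimeters are invented numbers.

h_ice = 12.0    # column height (mm) observed in the ice bath
h_boil = 87.0   # column height (mm) observed in boiling water

def height_to_temp(h_mm: float) -> float:
    """Interpolate linearly between the two fixed points."""
    return 100.0 * (h_mm - h_ice) / (h_boil - h_ice)

# Where to paint the marks: e.g. the 20 C mark goes at this height.
h_20 = h_ice + (h_boil - h_ice) * 20.0 / 100.0
print(f"paint '20' at {h_20:.1f} mm")
print(f"a 30.0 mm column reads {height_to_temp(30.0):.1f} C")
```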

Having found a calibrated thermometer that I can read to some given accuracy, where do I put it? I gather there is now some agreement on the structure for housing the thermometers, but certainly that wasn't standard in 1890 or 1900. I am not sure when the housing was standardized. But since we're not going to try to project the temperature on the balcony back to past times, it doesn't matter just now; just that I use the standard housing.

Now, when do I take the measurements? It's certainly cooler at 8 AM than at 4 PM. How many measurements do I take? Since my goal is a single measurement for "how hot is it on your balcony today?" and to project that into data on "How hot was it on your balcony this year?", we have to agree on just what the "daily" temperature means: an average of how many measurements per day? Given that I don't intend to spend my life on this project, I am not likely to take measurements every hour. Perhaps 2 a day? At Noon and at Midnight? Is that enough? But when I look on line for digital recording thermometers I find I can afford a gizmo that samples the temperature continuously, and I can let it do a daily average. What formula it uses isn't apparent but it's consistent. Suppose I use that: it's going to give something different from what I got back when I was eyeballing a mercury thermometer at Noon and Midnight. If I want some continuity in my temperature measurements, I have to figure out how to translate the automatic thermometer data to indicate what I would have got had I been using a mercury thermometer twice a day. I need to "harmonize" the data, and that, I presume, is what is happening in the Australian data recalculations: but whatever they were doing in Darwin, it needs to be made explicit. What data were changed, by how much, and for what reasons?
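To see why the two methods can't simply be spliced together, here is a toy sketch; the asymmetric daily cycle is an assumption for illustration, not real data, but it shows the noon-and-midnight average and the continuous average of the very same day disagreeing, which is exactly the gap a "harmonization" adjustment has to bridge, and has to document:

```python
import math

# Toy daily temperature cycle with an asymmetric shape: coolest before
# dawn, warmest mid-afternoon. Shape and numbers are assumptions for
# illustration, not real data.
def temp_at(hour: float) -> float:
    return (15.0
            + 8.0 * math.sin(2 * math.pi * (hour - 10.0) / 24.0)
            + 2.0 * math.sin(4 * math.pi * (hour - 10.0) / 24.0))

# "Old" daily figure: eyeball the mercury at noon and at midnight.
two_reading_mean = (temp_at(12.0) + temp_at(0.0)) / 2.0

# "New" daily figure: the recording gizmo samples hourly and averages.
hourly_mean = sum(temp_at(h) for h in range(24)) / 24.0

print(f"noon+midnight mean: {two_reading_mean:.2f} C")   # ~16.7 C
print(f"hourly mean:        {hourly_mean:.2f} C")        # 15.0 C
print(f"adjustment needed:  {hourly_mean - two_reading_mean:.2f} C")
```

Same balcony, same day, different numbers; whatever offset you apply to splice the two records together is an adjustment that ought to be written down.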

Of course we're still not through. Eventually I come up with some definition of just what was the temperature on my balcony for any given day, and I suppose I could just take those daily figures and average them for my estimate of the annual temperature. Does anyone have an estimate of what confidence I should have in that figure? Given that I could calculate my "annual temperature on the balcony" in a lot of ways, which one do I rely on? And if I come up with a "better" way to do that, how do I compare the new and better data with the older data? To what accuracy do we have confidence in this? A tenth of a degree? Half a degree? A degree?
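Here is the naive version of that annual calculation, with the textbook standard-error estimate. The simulated daily values are an assumption, and the error bar it prints is optimistic, since consecutive days are correlated and the formula pretends they aren't, which is rather the point:

```python
import math
import random

random.seed(1)

# Simulated daily-mean temperatures for one year: an assumed seasonal
# cycle plus noise. Real data would come from the balcony log.
daily = [15.0 + 10.0 * math.sin(2 * math.pi * d / 365.0)
         + random.gauss(0.0, 2.0) for d in range(365)]

n = len(daily)
annual_mean = sum(daily) / n
variance = sum((t - annual_mean) ** 2 for t in daily) / (n - 1)
std_error = math.sqrt(variance / n)  # naive: assumes independent days

print(f"annual mean: {annual_mean:.2f} C +/- {std_error:.2f} C (naive)")
# Days are autocorrelated, so the honest error bar is wider than this.
```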

And now I want "the daily temperature in Studio City." So long as I only have the data from my balcony I would suppose that one will have to do: I can compare it to the figure(s) I get from the newspapers or on the weather channel. They're different. Now how do the newspapers and TV reporters get "the temperature in Studio City"? Generally they take a single measure, I think in Studio City from somewhere on the CBS lot. Why is theirs better than mine? Now to get the temperature in Los Angeles, we can either take a measure from somewhere downtown and use that, or we can average temperatures taken in Studio City, and Van Nuys, and Venice Beach, and San Pedro, and Brentwood, and -- well, you get the idea. Do we weight them all the same and just average them? Are they all taken the same way, with standard housing of a thermometer and measures taken at the same time every day? Suppose the temperature from one of my points, say Brentwood, is just a lot different from all the others, and I go look and find that someone in the past couple of months has put in an air conditioner whose exhaust fan blows across my temperature measurement station. Do I remove the Brentwood data points? For how long? Do I take them out all the way back to their beginning, or only since they started looking odd?
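A sketch of the bookkeeping, with invented stations, readings, and weights: area-weight the stations, then flag any that wander too far from the median of the rest. The code can flag Brentwood; it cannot decide how far back to throw the data out. That remains a human judgment, and it ought to be a documented one:

```python
# Hypothetical same-day readings (C) from stations around Los Angeles.
# Names, values, and weights are invented for illustration.
readings = {
    "Studio City":  21.3,
    "Van Nuys":     22.1,
    "Venice Beach": 18.9,
    "San Pedro":    19.4,
    "Brentwood":    27.8,   # suspiciously warm: new AC exhaust nearby?
}
# Weight each station by the area it is taken to represent (assumed equal).
weights = {name: 1.0 for name in readings}

def weighted_mean(vals, wts):
    total_w = sum(wts[k] for k in vals)
    return sum(vals[k] * wts[k] for k in vals) / total_w

def flag_outliers(vals, threshold=3.0):
    """Flag stations more than `threshold` degrees from the median."""
    ordered = sorted(vals.values())
    median = ordered[len(ordered) // 2]
    return [k for k, v in vals.items() if abs(v - median) > threshold]

print("LA mean, all stations:", round(weighted_mean(readings, weights), 2))
for station in flag_outliers(readings):
    print("flagged:", station)  # someone still has to decide what to do
```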

And of course if I want a national temperature it gets worse, and to get the temperature of the Earth I have to decide a bunch of other stuff, including temperatures at various altitudes and various depths in the sea. But of course most of my sea temperatures from 1950 on were taken by putting a thermometer in a bucket and hauling it up and down on a line. That's the best data I have: but how confident can I be in its accuracy? To a half degree? Surely no better than that. Perhaps to a degree?

The Australia analysis goes further. It shows that all of Australia has something like 30 input stations, and none of them have been continuous since 1900.

For the Earth: "Surface Area: Land area, about 148,300,000 sq km, or about 30% of total surface area; water area, about 361,800,000 sq km, or about 70% of total surface area."

Australia: 2,968,000 sq. miles (7,687,000 sq. km), or 5.2% of the land area.

So: 5% of the Earth's land temperature is determined by 50 (actually it's more like 30, but call it 50) thermometers reporting daily. 0.05X = 50, so we have about 1,000 thermometers to determine the Earth's land temperature. Since the land is 30% of the Earth's surface, 0.30X = 1,000 and we have about 3,333 thermometers to determine the entire temperature of the Earth. (I doubt we have that many, but it'll do for this.) That means, if each reports hourly, 3,333 data points every hour, or about 29,200,000 data points a year. At 8 bytes per data point we're talking about a quarter of a gigabyte of data per year; meaning that everyone reading this has the capacity to store that much data, and probably the computing power to do daily averages and print out trend curves. It's too late to do that for past years, but I propose that given the enormous economic importance of climate trends, the IPCC should publish all the raw data: uncorrected, not homogenized, just the numbers you'd get if you went out on the porch and read the thermometer (or dropped your thermocouple over the side of a boat, or whatever it is they do to get the numbers); and also publish the corresponding "corrected" or "homogenized" numbers that are fed into the models. That's publishing well under a gigabyte of data per year, a couple of megabytes a day at most. Let everyone on earth look at the data, and do things like calculate differences between raw and corrected data. We can all look at the trends and differences.
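The envelope arithmetic, written out so anyone can rerun it or quarrel with the assumptions (the 50-station figure and 8 bytes per reading are the rough guesses from above):

```python
# Back-of-the-envelope arithmetic from the paragraph above.
stations_australia = 50          # rough; the analysis suggests ~30
australia_land_share = 0.05      # Australia ~5% of Earth's land area
land_share_of_surface = 0.30     # land ~30% of Earth's surface

land_stations = stations_australia / australia_land_share      # ~1,000
global_stations = land_stations / land_share_of_surface        # ~3,333

points_per_year = global_stations * 24 * 365     # hourly readings
bytes_per_year = points_per_year * 8             # 8 bytes per reading

print(f"stations, land only:   {land_stations:,.0f}")
print(f"stations, whole Earth: {global_stations:,.0f}")
print(f"data points per year:  {points_per_year:,.0f}")
print(f"raw data per year:     {bytes_per_year / 1e6:,.0f} MB")
```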

Given the trillions at stake the costs of doing this are trivial. I doubt that it will be done, but shouldn't it be?

=======================

IPCC says warmest October on record

As to why I would like to see the data published for all to see, try this:

http://www.telegraph.co.uk/comment/columnists/christopherbooker/3563532/The-world-has-never-seen-such-freezing-heat.html

When NASA Goddard says that last October was the warmest October in human history, it would be useful to have some instant checks on it. It turned out that Russia had given some erroneous reports that changed the trend.

I find it interesting that the climate models apparently weight temperatures taken in Russia rather heavily. All of Asia is 30% of the land mass, so the area reported by Russia is what, half that? Call it 15% of the land mass; 15% of 30% is 4.5% of the data points, meaning 109,500 of the roughly 2,433,333 data points per month we estimated as going into the average. Not a lot of data to manipulate, and one does wonder if anyone looks at the results: few of us would have thought that last October was all that warm. It didn't seem that warm to me, and my impression from the radio and TV was that it was actually pretty cold. Why would the 4.5%, or even 15%, of world data points taken from Russia be enough to take October from pretty cold to the hottest October in human history? It doesn't seem reasonable to me.
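The same envelope, one step further, using the rough shares assumed above:

```python
# Rough share of global readings coming from Russia, per the
# assumed figures above (half of Asia's 30% of the land mass).
asia_share_of_land = 0.30
russia_share_of_land = asia_share_of_land / 2     # call Russia half of Asia
land_share_of_surface = 0.30

russia_share_of_points = russia_share_of_land * land_share_of_surface  # 4.5%
points_per_month = 29_200_000 / 12                # from the earlier estimate

print(f"Russia's share of data points: {russia_share_of_points:.1%}")
print(f"Russian points per month: "
      f"{russia_share_of_points * points_per_month:,.0f}")
```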

But Hansen and NASA Goddard thought it was warm, and the Chairman of the IPCC, who speaks for the science consensus, didn't see anything odd about the data; he was quick to use the "warmest October" in his speech opening the Copenhagen conference.

With Copenhagen opening to that keynote, is it any wonder I think we need more openness with the data? It wouldn't cost all that much to publish, and we all have computers good enough to examine it once it is published.

Or is that what the IPCC is afraid of?
