View 636 August 16 - 22, 2010

Friday, August 20, 2010

There are a couple of items brought to my attention by mail that seem very much worth looking into:

Cyberattacks via automobile worms might cripple traffic flow...

Jerry,

Stumbled across this quite by chance this morning. Plausible.

http://earlywarn.blogspot.com/2010/08/city-crippler-car-worms.html

-John G. Hackett

It's worse than it appears on first look. All our modern "smart cars" are vulnerable to wireless attacks that could stick the accelerator at full on, or lock the brakes, or disable all the cars on the road at rush hour. I don't know how vulnerable our military vehicles are to this sort of thing. I wonder how long it will be before the first anti-auto electronic terrorist attack.

====================

The other item was from Mike Zawitowski, and it reminds me of something we have written about before.

<http://www.sciencenews.org/view/feature/id/57091/title/Odds_are,_its_wrong>

When scientists make mistakes using statistics

And it is quite frightening. It says, among other things:

"There is increasing concern," declared epidemiologist John Ioannidis in a highly cited 2005 paper in PLoS Medicine, "that in modern research, false findings may be the majority or even the vast majority of published research claims."

Ioannidis claimed to prove that more than half of published findings are false, but his analysis came under fire for statistical shortcomings of its own. "It may be true, but he didn't prove it," says biostatistician Steven Goodman of the Johns Hopkins University School of Public Health. On the other hand, says Goodman, the basic message stands. "There are more false claims made in the medical literature than anybody appreciates," he says. "There's no question about that."

Nobody contends that all of science is wrong, or that it hasn't compiled an impressive array of truths about the natural world. Still, any single scientific study alone is quite likely to be incorrect, thanks largely to the fact that the standard statistical system for drawing conclusions is, in essence, illogical. "A lot of scientists don't understand statistics," says Goodman. "And they don't understand statistics because the statistics don't make sense."

The misuse -- indeed culpable misunderstanding -- of statistical analysis applies even more to the "social sciences," which are shot through with "significant" findings that can lead to far-reaching consequences but on analysis prove neither repeatable nor reliable, and are more likely to have arisen by chance than from any actual real-world mechanism. One of the big problems with analyzing the "HIV = AIDS" hypothesis came from the nature of the evidence.
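To see how easily "significant" findings arise by chance, here is a quick toy simulation; the base rate, effect size, and sample size below are illustrative assumptions of mine, nothing more. Simulate many small two-group studies in which only a minority of the hypotheses being tested are actually true, and count how many of the "significant" results are false alarms:

    # Toy simulation of small two-group studies judged at p < 0.05.
    # All parameters are illustrative assumptions.
    import numpy as np
    from scipy.stats import ttest_ind

    rng = np.random.default_rng(0)
    n_studies = 10_000   # number of simulated studies
    prior_true = 0.10    # assume only 1 in 10 tested hypotheses is real
    effect = 0.5         # true effect size, in standard deviations
    n = 30               # subjects per group, typical of small studies

    hits = false_hits = 0
    for _ in range(n_studies):
        real = rng.random() < prior_true
        a = rng.normal(0.0, 1.0, n)
        b = rng.normal(effect if real else 0.0, 1.0, n)
        _, p = ttest_ind(a, b)
        if p < 0.05:                 # the conventional "significance" bar
            hits += 1
            false_hits += not real

    print(f"'significant' findings: {hits}")
    print(f"fraction that are false: {false_hits / hits:.2f}")

With those assumptions, roughly half of the "significant" findings are false -- and nothing in the machinery is being abused. No p-hacking, no publication bias; the low base rate of true hypotheses does all the damage by itself.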

When I was in graduate school in psychology we were required to take a statistics course given by the Psychology Department. It was the terror of many students, particularly the clinical psychology students. It used a textbook written by the professor who taught it, and was in fact a far better and more meaningful course than those taught in most university social science departments. It was also fairly trivial, largely consisting of learning cookbook techniques, including tricks that could be used on a Monroe Calculator so that you could get a number of required calculations done with one entry. (Ah, if we had only had small computers in those days!) There were lectures on "the null hypothesis" and the like, but there was almost no discussion of how probability works (or indeed whether it works at all). Those who wondered about all this were told they could go read L. J. Savage's The Foundations of Statistics. Few got past the first chapter. (You can find substantial parts of the book at Google Books if you want a sample.)

In my case I wanted to study with Paul Horst, and he required his students to go to the Math Department for advanced math courses. That turned out to be the best thing that ever happened to me, since it required me to learn all the mathematical tools of Operations Research (including, of course, probability theory and linear algebra), but that's another story. Even in the math department, though, the probability courses were more concerned with proving theorems than with understanding the relationship between data and reality. Fortunately I was eventually able to get through Savage, and that led to Bayes, and eventually to a realization that it's very difficult to prove general rules when you don't have a full understanding of the mechanisms involved. That is: scientific method consists of formulating falsifiable hypotheses. The problem is, unless you know (or are testing) the exact mechanisms involved, how do you know that an hypothesis has been falsified? The data are often inexact, and are sometimes based on inferences from other inexact data. A clear example here is Global Warming: what data would we need to falsify the hypothesis? But that is probably simple compared to some social science theories.

Now of course much "scientific" theory is never subjected to the test of falsification to begin with; and now suppose that a good half of the "findings" of science are probably false.
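The arithmetic behind that "half" is short enough to write down. Again, the numbers here are illustrative assumptions of mine, not Ioannidis's figures:

    # Closed-form version of the base-rate argument (illustrative numbers).
    alpha = 0.05    # conventional false-positive rate
    power = 0.50    # plausible power for a small study
    prior = 0.10    # assume 1 in 10 tested hypotheses is actually true

    true_pos = prior * power           # real effects correctly detected
    false_pos = (1 - prior) * alpha    # null effects crossing p < 0.05
    print(f"false findings: {false_pos / (true_pos + false_pos):.0%}")
    # Prints 47%: nearly half of all positive results are false, before
    # publication bias or multiple comparisons make things any worse.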

Clearly, the better we understand a phenomenon, the narrower a hypothesis we can make, and thus the more refined our definition of the data. Equally clearly, the better the quality of our data, the more closely we can test the falsity of our hypotheses.

I'll stop rambling now. The important point here is that a lot of "science" isn't very good, because the hypotheses being tested don't make sense, and attempting to resolve the problems in experimental design when the data are fuzzy will make your head ache. If you want to know more on that, start with the Wiki entry on likelihood http://en.wikipedia.org/wiki/Likelihood_principle and read it (just assume the math is correct; you don't really need to be able to follow the equations), and see what happens when you're trying to measure simple voltage fluctuations, all of which are below 100 volts, with a 100 volt meter...
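As for the meter: here is a toy sketch of the trouble as I read the example, with my own illustrative numbers. An instrument that pegs at full scale silently censors the data, and naive statistics computed from its readings come out biased:

    # Toy censoring sketch; all numbers are illustrative assumptions.
    import numpy as np

    rng = np.random.default_rng(0)
    true_volts = rng.normal(98.0, 5.0, 100_000)  # the real fluctuations
    readings = np.minimum(true_volts, 100.0)     # the meter pegs at 100 V

    print(f"true mean/sd:  {true_volts.mean():.2f} / {true_volts.std():.2f}")
    print(f"meter mean/sd: {readings.mean():.2f} / {readings.std():.2f}")
    # The meter understates both the mean and the spread. The honest fix
    # is to treat pegged readings as "at least 100 V" (censored data)
    # rather than as exact observations -- which is exactly where
    # likelihood-based methods earn their keep.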

=================

We have become a nation of taboos. Dr. Laura is leaving her program because she pronounced the dread N-word on the air -- not as an epithet directed toward some human being, which would at least be rude and would have implications, but simply by pronouncing it. Jennifer Aniston used the word "retard" on the air, and now she must be hounded into submission. You can't say "stupid" even in casual conversation. I suppose no one should ever read Charles Erskine Scott Wood's Heavenly Discourse, because one of the chapters concerns what happens when God becomes disgusted and decrees that the stupid shall not enter Heaven. So must that book be banned. Of course it was banned in its day for being blasphemous, and the very people who would ban it now stood up in its defense then.

Saying those awful words is hurtful, and mean, and anyone who says 'retard' must be destroyed, and we know all this because we have the statistics on the number of people who are harmed by people saying things like that, and it's all science, and after all there are no stupid people, and no one is retarded. This is America where all the children are above average. And everyone is entitled to be protected from being offended. It's offensive for people even to know words like retard and stupid.

"Oyez! Oyez! Oyez! All persons having business before the Honorable, the Supreme Court of the United States, are admonished to draw near and give their attention, for the Court is now sitting. God save the United States and this Honorable Court!" How long will the Marshall be allowed to continue that insidious practice?

==============

DH asks, "So what is a 'disorder' or 'syndrome'?"

That's easy - it's anything that an insurance company will pay a "licensed professional" to "treat".

Any wonder that they breed between releases of the DSM?

David Smith

==============
