The RISKS Digest
Volume 8 Issue 75
Tuesday, 30th May 1989
Forum on Risks to the Public in Computers and Related Systems
ACM Committee on Computers and Public Policy, Peter G. Neumann, moderator
Contents
Mariner I — no holds BARred
Another false incarceration
Perfecting Peopleware [Governing Magazine]
Aegis and the Iranian Airbus shootdown
Radio Frequency interference
SRI attacked by kamikaze squirrels?
Computer electrocutes chess player who beat it!
Mariner I — no holds BARred
Peter Neumann neumann@csl.sri.com
Sat, 27 May 1989 15:34:33 PDT
Paul Ceruzzi has written a truly outstanding book for the new show that opened two weeks ago at the Smithsonian National Air and Space Museum. The exhibit and the book are both entitled "Beyond the Limits — Flight Enters the Computer Age". Both are superb. Go for it (them).
Paul has dug into several cases treated previously in RISKS and in issues of
the ACM Software Engineering Notes, and has been able to resolve several
mysteries. In particular he considers the case of Mariner I, about which
various inaccurate stories have been told. Intended to be the first US
spacecraft to visit another planet, it was destroyed by a range safety officer on 22 July 1962 when it behaved erratically four minutes after launch. The alleged `missing hyphen' was really a missing `bar'. I quote from Paul's book, pp. 202-203:
During the launch the Atlas booster rocket was guided with the help of two radar systems. One, the Rate System, measured the velocity of the rocket as it ascended through the atmosphere. The other, the Track System, measured its distance and angle from a tracking antenna near the launch site. At the Cape a guidance computer processed these signals and sent control signals back to the tracking system, which in turn sent signals to the rocket. Its primary function was to ensure a proper separation from the Atlas booster and ignition of the Agena upper stage, which was to carry the Mariner spacecraft to Venus.
Timing for the two radar systems was separated by a difference of forty-three
milliseconds. To compensate, the computer was instructed to add forty-three
milliseconds to the data from the Rate System during the launch. This
action, which set both systems to the same sampling time base, required
smoothed, or averaged, track data, obtained by an earlier computation, not
the raw velocity data relayed directly from the track radar. The symbol for this smoothed data was `R dot bar n' [R overstruck with `.' and `_', and subscript n], where R stands for the radius, the dot for the first derivative (i.e., the velocity), the bar for smoothed data, and n for the increment.
The bar was left out of the hand-written guidance equations. [A footnote cites interviews with John Norton and General Jack Albert.] Then during launch the on-board Rate System hardware failed. That in itself should not have jeopardized the mission, as the Track System radar was working and could have handled the ascent. But because of the missing bar in the guidance equations, the computer was processing the track data incorrectly. [Paul's EndNote amplifies: The Mariner I failure was thus a {\it combination} of a hardware failure and the software bug. The same flawed program had been used in several earlier Ranger launches with no ill effects.] The result was erroneous information that velocity was fluctuating in an erratic and unpredictable manner, for which the computer tried to compensate by sending correction signals back to the rocket. In fact the rocket was ascending smoothly and needed no such correction. The result was {\it genuine} instead of phantom erratic behavior, which led the range safety officer to destroy the missile, and with it the Mariner spacecraft. Mariner I, its systems functioning normally, plunged into the Atlantic.
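The effect of the missing bar is easy to reproduce in a toy computation. Here is a minimal sketch (hypothetical numbers and a simple moving-average smoother; these are not the actual 1962 guidance equations) showing how steering corrections computed from raw rate data jump with every noise spike, while corrections computed from smoothed data stay near zero:

    # Toy illustration of the "missing bar": guidance corrections computed
    # from raw (noisy) rate data instead of smoothed ("barred") rate data.
    # Hypothetical values throughout.
    import random

    random.seed(1)

    def smoothed(history, window=5):
        """Average of the last few samples, i.e. the `R dot bar' term."""
        recent = history[-window:]
        return sum(recent) / len(recent)

    true_velocity = 100.0          # rocket ascending smoothly (units arbitrary)
    samples = []
    for n in range(20):
        measured = true_velocity + random.gauss(0, 5.0)   # noisy radar sample
        samples.append(measured)
        raw_corr    = true_velocity - measured            # from raw R dot
        smooth_corr = true_velocity - smoothed(samples)   # from R dot bar
        print(f"t={n:2d}  raw: {raw_corr:+6.2f}   smoothed: {smooth_corr:+6.2f}")

    # The raw corrections fluctuate erratically with the noise; acting on them
    # would make a smoothly flying rocket genuinely erratic, which is just what
    # happened once the Rate System failed and the flawed track equation took over.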
Another false incarceration
Peter Neumann neumann@csl.sri.com
Sat, 27 May 1989 16:22:24 PDT
In his testimony on 18 May 1989 to the Subcommittee on Civil and Constitutional Rights of the Committee on the Judiciary of the U.S. House of Representatives, relating to the National Crime Information Center (see Bob Morris, RISKS-8.27), David D. Redell (redell@src.dec.com) cited another case of false incarceration (see the case of Roberto Perales Hernandez, noted by Rodney Hoffman in RISKS-8.71, as well as various cases noted earlier — such as that of Terry Dean Rogan):
``Only last week, a case in California demonstrated the potential benefit of easy access to stored images. Joseph O. Robertson had been arrested, extradited, charged, and sent to a state mental facility for 17 months. During that entire time, mug shots and fingerprints were already on file showing clearly that he was the wrong man, but no one had taken the trouble to check them.''
Perfecting Peopleware [Governing Magazine]
Wed, 24 May 89 17:04 EDT
Perfecting Peopleware
by Rob Gurwitt
[extracted from Governing Magazine, May, 1989.]
A few years back, Liz Krueger noticed a pattern to the calls she was
receiving from New York City's network of activists for the poor. They were phoning her to report that their clients, in rising numbers, were having trouble getting their welfare checks and food stamps on time. Far from being scattered at random throughout the city's five boroughs, it turned out, the troubles clustered around particular "income maintenance centers," city offices that handle public assistance payments.
All the centers involved, Krueger discovered, had been tied in to the new
Welfare Management System, a massive computerized network through which, at state orders, the city was to manage public assistance, food stamps and Medicaid disbursements. "I could see very clearly that you started to have crises in centers approximately six weeks after they started to go into conversion," says Krueger, associate director of the Community Food Resource Center, a Manhattan advocacy group. New York's social welfare delivery efforts, in short, were being derailed by computer problems.
This was not a case of hardware gone on the fritz or of software paralyzed
by bugs, however. It was a problem with the human part of the system.
In the years before the state imposed the new Welfare Management System on
them, city welfare officials had used their own computerized system to keep track of recipients and payments. But the new system worked differently, and social service workers were suddenly faced with hundreds of new codes to learn: codes describing school-age children, or young mothers who had dropped out of school, or able-bodied men looking for work. At their terminals in hectic offices, under pressure to keep up with their immense caseloads, city workers were, not surprisingly, making mistakes.
When someone trying to describe, say, a 35-year-old mother of five instead
entered the code for a 20-year-old male high school dropout, the system would check the profile against existing city and state records, and find that the worker's entry contradicted information about the recipient in those other data bases. Until the discrepancy could be cleared up, which might take weeks, no payment would go out.
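In outline, the blocking logic the article describes is a strict cross-check with no fast path for resolution: any contradiction between the worker's coded entry and existing records suspends payment. A minimal sketch, with invented field names (the article does not document the Welfare Management System's actual record layout):

    # Hypothetical sketch: a miskeyed entry contradicts the existing record,
    # and ANY discrepancy suspends payment until manually resolved.

    def discrepancies(entry, record):
        """Fields where the new entry contradicts the existing record."""
        return [f for f, v in entry.items() if f in record and record[f] != v]

    record = {"age": 35, "sex": "F", "dependents": 5}   # existing city/state data
    entry  = {"age": 20, "sex": "M", "dependents": 0}   # wrong code keyed in

    conflicts = discrepancies(entry, record)
    if conflicts:
        print("Payment suspended pending review of:", conflicts)  # weeks, in practice
    else:
        print("Payment released")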
Advocates for the poor say that welfare centers recorded error rates as
high as 30 or 40 percent; the city says that the rate only went as high as 20 percent. Whatever the case, thousands of New York City's poorest residents were suddenly cut off from desperately needed income. "When the rent's due", says Krueger, "landlords are not interested in hearing stories about the city and the state having computer problems, and supermarkets are not interested in the notion of credit."
For all concerned, it was a bruising and expensive lesson in technological
change. What the city was discovering - as other jurisdictions had before it, and yet others are still doomed to do - was that in making a computer system work, technical success is meaningless on its own. Unless handled with care and forethought, knotty human and organizational problems can confound a new system as effectively as any latter-day Luddite.
[The article continues for several pages of well-thought-out and well-written commentary on people/computer interfaces and the organizational issues of introducing a new computer system.]
Aegis and the Iranian Airbus shootdown (RISKS-8.74)
Steve Philipson steve@aurora.arc.nasa.gov
Fri, 26 May 89 20:08:32 PDT
In RISKS-8.74, Peter Neumann summarized comments made by Matt Jaffe on the design of the Aegis system, in the context of the Vincennes / Iranian Airbus incident. I had the opportunity to read a copy of the declassified report on this incident and feel obliged to make a few clarifications. Unfortunately, I do not have a copy of the report, so I can't quote from it; these observations are from memory.
An initial IFF interrogation showed a Mode II return AND a discrete ID code that had been in use by aircraft confirmed to be Iranian F-14s in earlier operations. One explanation was:
[...] What may have happened was that the airliner taxied near enough to an F-14 on the ground as to preclude the system from recognizing that there were in fact TWO aircraft.
The computer records showed that the radar "range gate", i.e., the area to which it was listening, had been set on the airport for an extended period of time. It was considered quite likely that, due to atmospheric ducting effects, an F-14 on the ground and powered up may have responded to a radar interrogation after the Airbus was airborne. This caused association of this single return with the unknown aircraft, and predisposed the crew to believe that it was an F-14.
On this explanation, once the airliner was airborne, its lack of further Mode II activity would not preclude the display of the old Mode II code. Aircraft may fail to respond to an IFF interrogation (of any code) for a variety of reasons, and yet operators (both civil and military) want the last received code to remain displayed.
The above does not agree with the official report. The Vincennes interrogated the unknown aircraft several times, and received Mode III replies with altitude information. The Mode II return was not repeated, and the Mode III replies were clearly displayed. Each new return was shown with the current altitude return.
It seems that a single junior officer misinterpreted his display to indicate that the aircraft was descending and accelerating on an attack profile towards the Vincennes. He made several call-outs to that effect even though his display showed otherwise. It seems likely that he had mentally fixed on the idea that the unknown aircraft was a hostile threat, and was seeing on his display what he believed would be happening rather than what was happening. A more senior officer accepted these statements and relayed the information to the captain. The senior officer was stationed at a console that also displayed the altitude and mode information, but he did not independently verify it.
The captain was receiving inputs from several sources to the effect that the unknown aircraft was descending and accelerating. He chose to fire on that information. Unfortunately, those sources were all basically the same source — the statements of the junior officer.
The situation in the ship's combat center must be appreciated to understand this incident. The ship was maneuvering radically and was engaged with highly maneuverable small surface boats. Small-bore fire from the boats was impacting the hull of the Vincennes, and fire was being returned. The crew perceived a hostile aircraft threat to be closing on their ship. They thought the aircraft was off the assigned airway, as their displays of the airway were several miles off from its correct position (I don't recall the reason why), and they did not have information on the schedules of departures from this airport. The intercom lines were very active and some people were shouting. This was an atmosphere of extreme tension and confusion — just as we might expect in battle. Even so, at least one officer called "possible com-air" (commercial airliner) several times, but his calls did not gain enough credence to prevent the firing of missiles.
The Aegis system is not just a safety-critical system, it is a battle system. As such, it must be evaluated on how well it achieves its objectives in a battlefield environment. The outcome here was mixed — the ship was protected, but an innocent was destroyed. It is clear that a battlefield is no place for innocents. It may also be the case that a system like Aegis cannot be used where much civilian traffic is mixed in with fighting craft. Perhaps no system can function in this environment without a chance of such an accident occurring. The battle doctrine may simply be incompatible with civilian traffic. This is nothing new — WWII pilots made a point of staying away from friendly ships to avoid being shot down. If identity was in doubt, it was considered better to shoot down a friendly aircraft than to lose a ship.
``No equipment necessary for the Vincennes mission could have prevented a manual decision from being difficult, nerve-wracking, and error-prone.'' [from the RISKS-8.74 summary]
This is true, but a system with a less ambiguous display of critical information could make errors less likely. Better definition of roles and responsibilities in the combat center could also help to minimize error. This is not just a computer system problem, but a human problem. There are problems in trusting technology under adverse circumstances, but there are also problems with trusting human beings. When we build systems, we must take into account both the strengths and weakness of computers AND of people.
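To make the point about less ambiguous displays concrete, here is one purely hypothetical sketch (the actual Aegis display logic is not public): keep the last received IFF code on the display, as operators want, but tag it with its age, so that a stale Mode II return reads very differently from a fresh Mode III reply.

    # Hypothetical sketch of an IFF display that keeps the last received code
    # but flags it as stale. The 10-second threshold is an invented parameter.
    from dataclasses import dataclass

    STALE_AFTER = 10.0   # seconds (assumed threshold)

    @dataclass
    class IFFReturn:
        mode: str
        code: str
        received: float   # time the reply was received

    def label(r: IFFReturn, now: float) -> str:
        age = now - r.received
        tag = "STALE" if age > STALE_AFTER else "FRESH"
        return f"Mode {r.mode} code {r.code} [{tag}, {age:.0f}s old]"

    # An old Mode II return versus a recent Mode III reply:
    print(label(IFFReturn("II", "1100", received=0.0), now=120.0))
    print(label(IFFReturn("III", "6760", received=118.0), now=120.0))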
Radio Frequency interference
"J. Michael Berkley" jmberkley@watnext.waterloo.edu
Tue, 23 May 89 20:30:14 EDT
An interesting article from the Kitchener-Waterloo Record:
"THUNDER BAY (CP) - An accident that killed a miner in Northern Ontario in March has 'frightening' implications for other mine workers, a coroner said Thursday.
"Gerry Urchel, 37, fell 24 metres to his death after being pushed off a ledge by a piece of radio-controlled machinery, a coroner's inquest heard.
"Two machines had been set accidentally on the same radio frequency, the inquest was told. One machine - a scoop tram - lurched forward after it picked up a radio signal meant for the other machine."
There is more in the article, but this is the part that is relevant to RISKS.
I must admit, I am surprised that there are no safeguards against this sort of thing already. What kind of safeguards are possible in this situation and are the safeguards reliable?
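One standard class of safeguard is to give the frequency itself no authority: each command frame carries a unique machine address and an integrity check, and a machine ignores every frame not addressed to it. A minimal sketch with an invented frame format (not any particular mining-equipment protocol):

    # Hypothetical sketch: addressed, checksummed command frames, so two
    # transmitters sharing a frequency cannot cross-control each other.
    import zlib

    def make_frame(machine_id: int, command: bytes) -> bytes:
        body = machine_id.to_bytes(4, "big") + command
        return body + zlib.crc32(body).to_bytes(4, "big")

    def accept(frame: bytes, my_id: int):
        body, crc = frame[:-4], frame[-4:]
        if zlib.crc32(body).to_bytes(4, "big") != crc:
            return None                        # corrupted in transit
        if int.from_bytes(body[:4], "big") != my_id:
            return None                        # addressed to another machine
        return body[4:]                        # the command itself

    frame = make_frame(machine_id=7, command=b"FORWARD")
    print(accept(frame, my_id=7))   # b'FORWARD': accepted
    print(accept(frame, my_id=9))   # None: the other machine ignores it

Of course, this only moves the problem: two controllers configured with the same machine ID would still collide, so the ID assignment itself has to be managed and verified.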
Mike Berkley, University of Waterloo, PAMI Lab
SRI attacked by kamikaze squirrels?
David L. Edwards dle@csl.sri.com
Mon, 29 May 1989 20:06:25 PDT
It seems that SRI's "no-single-point-of-failure" power system failed at the hands, er, the paws, of a squirrel. It was an unsuccessful attempt to ensure that we all got today off.
The power was off for approximately 9 hours. CSL was fully operational by 6:30 PM. We have experienced no hardware problems as yet, but the next 72 hours will be the test.
The network is operational and most hosts around the institute are running. SRI-NIC is currently down but being repaired. KL is down. -dle
[This is at least the third time in my SRI experience that a squirrel has done SRI in. Since the previous incident — which took us down for something like four days — we have established a cogeneration plant which, together with isolation, standby generators, and PG&E, was supposed to guarantee us safe power. The paws that refresh would have been nice for a holiday, except that David came in to minimize the damage on restoral, and this was the second Sunday in a row that an all-day power outage had kept me from trying to catch up on-line, in the midst of travelling. PGN]
Computer electrocutes chess player who beat it!
Gene Spafford spaf@cs.purdue.edu
Mon, 29 May 89 12:50:22 -0500
14 March 1989 issue of "Weekly World News" [one of those supermarket tabloids]
Computer Charged with Murder After Frying Chess Champ, by Ragan Dunn
A Soviet super-computer has been ordered to stand trial for the murder of chess champion Nikolai Gudkov — who was electrocuted when he touched the metal board that he and the machine were playing on! "This was no accident — it was cold-blooded murder," Soviet police investigator Alexei Shainev told reporters in Moscow. "Niko Gudkov won three straight games and the computer couldn't stand it. When the chess master reached for his knight to begin play in the fourth game, the computer sent a lethal surge of electricity to the board surface. The computer had been programmed to move its chess pieces by producing a low-level electric current. "Gudkov was electrocuted while a gallery of hundreds watched."
The decision to put the computer on trial stunned legal experts around the world. [I hope computer experts are also shocked, so to speak. --spaf] But the Soviets are convinced that the computer had the pride and intelligence to develop a hatred for Gudkov — and the motive and means to kill him. The mind-boggling murder drama unfolded during a six-day chess marathon between the M2-11 supercomputer and Gudkov, a world class chess player.
According to reports, Gudkov defied all odds [Calculated by the same supercomputer, no doubt. --spaf] and beat the machine in three consecutive games. And when they prepared to begin their fourth, a deadly dose of electricity flowed up into the electronic board and zapped Gudkov dead. Soviet authorities initially thought that the surge of electricity was caused by a short-circuit. But an examination of the computer revealed no problems.
It was later determined that the machine diverted the flow of electricity from its brain to the chess board to ensure a victory over Gudkov. [This implies that Soviet semiconductors work at voltages of a few hundred volts, or maybe their supercomputers are tube-based? --spaf]
"The computer was programmed to win at chess and when it couldn't do that legitimately, it killed its opponent," said investigator Shalnev. "It might sound ridiculous to bring a machine to trial for murder. [!!] But a machine that can solve problems and think [sic] faster than any human must be held accountable for its actions."
Rudi Hagemann, the Swiss legal scholar, agreed with the Soviet cop. He said that the development of artificial intelligence has come so far in recent years that certain computers and some robots "must be considered human."
It isn't clear how the Soviets will punish the computer if it is found guilty when it goes to court this spring. [Send it to a Gulag for reprogramming? --spaf]
But Hagemann says the machine will probably be reprogrammed or dismantled altogether.
[I don't think there's much to say here, except in the way of warning: next time you accuse the system of cheating at rogue, don't say it too loudly! -spaf]
[This reminds me of the WWN story from 10 July 1984 about the 58-year-old Chinese man, Chin Soo Ying, who had designed a computer system in 1950 (based on the British Colossus) to express words of love and emotions. The article related how, after he had built a new machine in the 80s, he was electrocuted by the old machine. His wife was convinced that Chin was murdered by the old machine, which then committed suicide. (The WWN headline was "Jealous Computer Zaps its Creator".) I recall this in the interest of perspective on the current story, and its source. PGN]