Date: Fri 22 Jul 1988 00:06-EDT
From: AIList Moderator Nick Papadakis
Reply-To: AIList@mc.lcs.mit.edu
Us-Mail: MIT Mail Stop 38-390, Cambridge MA 02139
Phone: (617) 253-2737
Subject: AIList Digest V8 #21
To: AIList@mc.lcs.mit.edu
Status: R

AIList Digest            Friday, 22 Jul 1988      Volume 8 : Issue 21

Today's Topics:

  Does AI kill? -- Fourth in a series ...

----------------------------------------------------------------------

Date: 19 Jul 88 16:47:02 GMT
From: mcvax!ukc!etive!aiva!ken@uunet.uu.net  (Ken Johnson)
Subject: Re: does AI kill?

In article <12400014@iuvax> smythe@iuvax.cs.indiana.edu writes:

> I don't know which event involved the
> Sheffield, but there was no misidentification in either case.

I had also heard the story that the Exocet, having been sold to the
Argentinians by the Europeans, was not identified as a hostile
missile.  Subsequently the computers were `reprogrammed' (media talk
for giving them a wee bit of new data).  Presumably if you sell arms
to your enemies this is what you must expect.

-- 
------------------------------------------------------------------------------
From Ken Johnson, AI Applications Institute, The University, EDINBURGH
Phone 031-225 4464 ext 212       Email k.johnson@ed.ac.uk

------------------------------

Date: 20 Jul 88 06:31:00 EDT
From: "CUGINI, JOHN"
Reply-to: "CUGINI, JOHN"
Subject: Does AI kill?

[ please excuse if this is repetition - our mailer has been berserk
lately ]

Two points I haven't seen made so far...

1. Are AI systems to be held to a standard of perfection?  I don't
know of *any* kind of system, constructed by humans, that doesn't
fail - airplanes crash, walkways collapse, nuclear power plants
explode.  And, yes, people die ... so the issue isn't whether
AI/computer systems will ever fail, causing loss of life - they will,
be assured.  But so would the non-computer systems which are the
alternative.  Moreover, there will be instances (maybe the Vincennes
was one, maybe not) in which a non-AI system would've made a better
choice.  Big deal.  The serious question is whether, on average, the
performance of a system will be enhanced (wrt whatever criteria you
like - saving lives, etc.) or degraded by the use of AI components.
The Vincennes critics should make the case (if they can) that the
AEGIS system caused the shoot-down, that it wouldn't have occurred
otherwise, and that AEGIS has no compensating effects (maybe it's
already saved 291 lives by deterring Iranian attacks on American
ships...).

2. For what it's worth, I think it's a cheap shot to use this incident
as an excuse to hold forth on one's political views (on AIList).  The
AI-component debate is proper, but I'd urge some self-restraint before
we start pontificating on SDI, etc.  The "larger lesson" of the
Vincennes, like many other "lessons" of recent history, seems to
depend much more on one's prior political views than on any
unambiguous interpretation of events.

John Cugini

------------------------------
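Cugini's "on average" question is, at bottom, an expected-value
comparison.  A minimal sketch in Python of what that comparison looks
like; every number below is an invented placeholder, not data about
AEGIS or any real system:

    # Toy expected-value comparison: automated vs. non-automated system.
    # All figures are invented placeholders, for illustration only.

    def expected_losses(p_error, losses_per_error, encounters):
        """Expected lives lost across a number of encounters."""
        return p_error * losses_per_error * encounters

    # Hypothetical error rates; the whole argument turns on which of
    # these is actually larger in practice.
    with_ai    = expected_losses(p_error=0.001, losses_per_error=300,
                                 encounters=1000)
    without_ai = expected_losses(p_error=0.004, losses_per_error=300,
                                 encounters=1000)

    print("expected losses with AI component:    %.0f" % with_ai)
    print("expected losses without AI component: %.0f" % without_ai)

The point of the sketch is only that a single failure, however
terrible, does not by itself settle which of the two numbers is
smaller.

------------------------------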
Date: Wed, 20 Jul 88 10:02 EST
From:
Subject: Vincennes and AI

Let's examine the faults of the recent airbus downing in the Gulf.  A
commercial airbus, supposedly carrying innocent civilians, was shot
down by an Aegis cruiser.  Evidently there is reason to suspect that
the airbus did not carry proper IFF transponder identification.  The
Aegis was not able to properly identify the aircraft as friendly and,
since it had recently been involved in a skirmish, shot the aircraft
down.

Did the Navy realize that a commercial airbus with incorrect IFF
transponders could be shot down in the Persian Gulf?  If not, then
there was a severe problem in the Navy's understanding of how their
equipment operates.  From what I have heard, the Aegis is designed to
shoot down large numbers of unfriendly aircraft; it is not necessarily
designed to avoid shooting down nearby unidentified aircraft.  I don't
see how the Navy could have placed such a killing system in the Gulf
without realizing that there would be danger to civilian aircraft in
such confined waters.

Furthermore, it has often been noted in the literature that in wars
innocent civilians are killed.  Allow me to define war as a state of
hostility between states which allows at least one of those states to
injure some part of the other state's population.  It would be nice if
we could have nice clean wars, but unfortunately they do not exist.
But I have strayed off the subject.

I find this incident a possible example of misunderstanding of how a
system works in a situation - if this incident did indeed surprise
anyone in the Pentagon.  The systems in the Persian Gulf are going to
have a hard time identifying friend and foe, and there may indeed be
increased use of commercial transponders on military aircraft in the
region if this is possible.  --As we noted in Vietnam, even "real
intelligence" cannot always discover who is friend and who is foe.

Thomas Edwards
ins_atge@jhuvms

------------------------------
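The identification failure Edwards describes comes down to a small
decision rule with a weak default.  A minimal sketch of such a rule in
Python; the modes, thresholds, and fallback logic are all hypothetical
simplifications, not a description of AEGIS or of real IFF procedure:

    # Hypothetical identification rule, for illustration only.  Real
    # IFF modes, codes, and doctrine are far more involved than this.

    FRIENDLY_MODES = {"mode4"}   # hypothetical military crypto mode
    CIVIL_MODES    = {"mode3"}   # hypothetical civil transponder mode

    def classify(iff_reply, closing, descending, range_nm):
        """Classify a radar contact from its IFF reply and kinematics."""
        if iff_reply in FRIENDLY_MODES:
            return "friend"
        if iff_reply in CIVIL_MODES:
            return "assumed civil"
        # No usable reply: fall back on behavior.  This default is the
        # danger Edwards points at - an airliner with a missing or
        # garbled transponder reply gets judged on flight path alone.
        if closing and descending and range_nm < 20.0:
            return "assumed hostile"
        return "unknown"

    # A contact with no valid reply, closing and descending at short range:
    print(classify(iff_reply=None, closing=True, descending=True,
                   range_nm=9.0))   # -> assumed hostile

------------------------------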
Date: 20 Jul 88 22:38:03 GMT
From: smithj@marlin.nosc.mil  (James Smith)
Subject: Re: does AI kill?

In article <1376@daisy.UUCP>, klee@daisy.UUCP (Ken Lee) writes:

> Computer-generated mistakes aboard the USS Vincennes may lie at the root
> of the downing of Iran Air Flight 655 last week, according to senior
> military officials being briefed on the disaster.
> ...
> The officials said Rogers believed the machines - which wrongly
> identified the approaching plane as hostile - and fired two missiles at
> the passenger plane, knocking it out of the sky over the Strait of
> Hormuz.
> ...
> Some obvious questions right now are:
> 1. is AI theory useful for meaningful real world applications?
> 2. is AI engineering capable of generating reliable applications?
> 3. should AI be used for life-and-death applications like this?
>The blame lies entirely on the captain of the
>USS Vincennes and his superiors for using the system in a zone where
>commercial flights are flying.

In all of this debating over whether or not AEGIS should be used in
the Gulf, and whether or not Captain Rogers erred in his decision to
shoot at IAF 655, one crucial point has been overlooked - there is
_no other_ combat direction system (in our, or any other, navy) which
can even begin to cope with the volume of information that is
efficiently processed and displayed by AEGIS.  The commanding officer
of a pre-AEGIS ship would have had far less time from target detection
to shoot decision; he would also have had less precise radar track and
IFF information, and would have had to make the shoot/no-shoot
decision at a greater range than Captain Rogers did.

This is not a problem of applying AI but, rather, a problem requiring
an immediate life-or-death decision made, not in the laboratory or the
office, but in what Clausewitz referred to as the *fog of war*.

Jim Smith
UUCP: smithj!marlin!nosc
DDN: smithj@marlin.nosc.mil

If we weren't crazy, we'd all go insane - Jimmy Buffett

------------------------------

Date: 21 Jul 88 01:14:44 GMT
From: uvaarpa!virginia!uvacs!cfh6r@umd5.umd.edu  (Carl F. Huber)
Subject: Re: does AI kill?

In article <470@proxftl.UUCP> tomh@proxftl.UUCP (Tom Holroyd) writes:
> ...
>Now, before y'all start firing cruise missiles at me, I am *NOT*, I repeat
>NOT praising the system that killed 280 people.

Let's be careful.  That system didn't kill anyone.

------------------------------

Date: 21 Jul 88 11:30:12 GMT
From: amelia!prandtl.nas.nasa.gov!msf@ames.arpa  (Michael S. Fischbein)
Subject: Re: does AI kill?

In article <1054@marlin.NOSC.MIL> James Smith writes:
>In all of this debating over whether or not AEGIS should be used in
>the Gulf, and whether or not Captain Rogers erred in his decision to
>shoot at IAF 655, one crucial point has been overlooked - there is
>_no other_ combat direction system (in our, or any other, navy) which
>can even begin to cope with the volume of information that is
>efficiently processed and displayed by AEGIS.

Nonsense.  The Navy Tactical Data System (NTDS), the pre-AEGIS
computerized sensor and fire-control system, had very similar
capabilities as far as tracking and individual operators' displays go.
AEGIS supports the large `wallboard' displays, which were not
supported by NTDS until recently; budget constraints have prevented
retrofitting NTDS ships with the large screens.  AEGIS is present only
on the Ticonderoga class AAW cruisers with the SPY-1 radar; other
primary AAW combatants use NTDS and other radar systems, such as the
SPS-48E, and will continue to do so.

> The commanding officer of a pre-AEGIS ship would
>have had far less time from target detection to shoot decision; he
>would also have had less precise radar track and IFF information, and

IFF has little or nothing to do with AEGIS; I would appreciate any
reference that compares the SPY-1 to the SPS-48 (current models of
each) and shows significantly greater precision in either.  The SPY-1
is faster, as it doesn't rotate, but from my (admittedly slightly
out-of-date) personal experience, they offer comparable performance.

>would have had to make the shoot/no-shoot decision at a greater range
>than Captain Rogers did.
>
>This is not a problem of applying AI but, rather, a problem requiring
>an immediate life-or-death decision made, not in the laboratory or
>the office, but in what Clausewitz referred to as the *fog of war*.

Absolutely true.  If you haven't been there, people, try to think
about what it was like on the ship.

		mike

Michael Fischbein		msf@ames-nas.nas.nasa.gov
				...!seismo!decuac!csmunix!icase!msf
These are my opinions and not necessarily official views of any
organization.

------------------------------

Date: 21 Jul 88 17:24:59 GMT
From: lakesys!tomk@csd1.milw.wisc.edu  (Tom Kopp)
Subject: AI...Shoot/No Shoot

It seems a lot of this argument about the AI systems (or whatever you
wish to call them - information sorters, AI routines, whatever) stems
from how the Captain acted upon their output in the situation.

Someone brought up the phrase "Shoot/No Shoot", and that reminded me
that police officers in many areas go through a special shoot/no-shoot
test regarding the use of their firearms.  Does anybody know if Naval
command candidates go through similar testing on a simulated ship?

Looked at in this light, I can't see where he had any choice BUT to
shoot, based upon what his computers were telling him.  If it were
indeed a loaded passenger jet with civilians on board, then I, of
course, regret the action, but that doesn't change the situation.  He
couldn't possibly get a visual fix on the target, his computers were
warning him of a threat, and an unidentified aircraft on a 100% direct
course to his ship was descending at an angle that would very soon
bring it into attack position.

I still don't understand WHY the computers misled the crew as to the
type of aircraft, especially at that range.  I know that the military
has heavily tested some proposed radar gear for the Tomcat (and
possibly other planes) that is capable of target identification based
upon the radar signature, and having the target head-on only helps.
It counts the number of blades on the turbofan, and thus knows what
kind of engine it is, and thus narrows the possibility down to a very
few aircraft, often even to one.  Either this is not installed on the
AEGIS ships, or it was malfunctioning, or it may not even be completed
yet.  I read about it in Aviation Week & Space Technology or something
a year or so ago... forgot just where I saw it...

------------------------------
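The radar gear Kopp half-remembers sounds like non-cooperative target
recognition by jet engine modulation: the return from a head-on
contact is modulated by the rotating compressor blades, and the blade
count narrows the engine type, which in turn narrows the airframe.  A
minimal Python sketch of the lookup step; the table entries are
invented placeholders, since real signature libraries are classified:

    # Sketch of blade-count identification (jet engine modulation).
    # The tables are invented; real blade counts and engine-to-airframe
    # pairings would come from a signature library.

    ENGINES_BY_BLADE_COUNT = {
        33: ["engine-A"],                # hypothetical
        38: ["engine-B"],                # hypothetical
    }

    AIRCRAFT_BY_ENGINE = {
        "engine-A": ["fighter-X"],
        "engine-B": ["airliner-Y", "tanker-Z"],
    }

    def identify(blade_count):
        """Narrow a head-on radar contact to candidate aircraft types."""
        candidates = []
        for engine in ENGINES_BY_BLADE_COUNT.get(blade_count, []):
            candidates.extend(AIRCRAFT_BY_ENGINE.get(engine, []))
        return candidates or ["unknown"]

    print(identify(38))    # -> ['airliner-Y', 'tanker-Z']

Note the residual ambiguity when different airframes share an engine:
the method narrows the possibilities, but does not always decide.

------------------------------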
Date: Thu 21 Jul 88 11:51:54-PDT
From: Conrad Bock
Subject: Re: Does AI kill?

Overautomation can kill if the task is to decide when to kill.  We
have all experienced programs that try to be too smart.  Both industry
and military institutions would like to take the human out of the loop
(more control for the people at the top); hence overautomation.  I
believe this is the critique of Dreyfus and Dreyfus.

Conrad Bock

------------------------------

End of AIList Digest
********************