Date: Mon 18 Jul 1988 00:20-EDT
From: AIList Moderator Nick Papadakis
Reply-To: AIList@mc.lcs.mit.edu
Us-Mail: MIT Mail Stop 38-390, Cambridge MA 02139
Phone: (617) 253-2737
Subject: AIList Digest V8 #12
To: AIList@mc.lcs.mit.edu
Status: R

AIList Digest            Monday, 18 Jul 1988      Volume 8 : Issue 12

Today's Topics:

  does AI kill? - Responses to the Vincennes downing of an Iranian Jetliner

----------------------------------------------------------------------

Date: 12 Jul 88 22:21:17 GMT
From: amdahl!nsc!daisy!klee@ames.arpa (Ken Lee)
Subject: does AI kill?

This appeared in my local paper yesterday.  I think it raises some
serious ethical questions for the artificial intelligence R&D community.

------------------------------

COMPUTERS SUSPECTED OF WRONG CONCLUSION
from Washington Post, July 11, 1988

Computer-generated mistakes aboard the USS Vincennes may lie at the root
of the downing of Iran Air Flight 655 last week, according to senior
military officials being briefed on the disaster.

If this is the case, it raises the possibility that the 280 Iranian
passengers and crew may have been the first known victims of "artificial
intelligence," the technique of letting machines go beyond monitoring to
actually making deductions and recommendations to humans.

The cruiser's high-tech radar system, receivers and computers - known as
the Aegis battle management system - not only can tell the skipper what
is out there in the sky or water beyond his eyesight but also can deduce
for him whether the unseen object is friend or foe and say so in words
displayed on the console.

This time, said the military officials, the computers' programming could
not deal with the ambiguities of the airline flight and made the wrong
deduction, reached the wrong conclusion and recommended the wrong
solution to the skipper of the Vincennes, Capt. Will Rogers III.

The officials said Rogers believed the machines - which wrongly
identified the approaching plane as hostile - and fired two missiles at
the passenger plane, knocking it out of the sky over the Strait of
Hormuz.

------------------------------

The article continues with evidence and logic the AI should have used to
conclude that the plane was not hostile.

Some obvious questions right now are:
  1. is AI theory useful for meaningful real world applications?
  2. is AI engineering capable of generating reliable applications?
  3. should AI be used for life-and-death applications like this?

Comments?

Ken
--
uucp: {ames!atari, ucbvax!imagen, nsc, pyramid, uunet}!daisy!klee
arpanet: atari!daisy!klee@ames.arc.nasa.gov

STOP CONTRA AID - BOYCOTT COCAINE

------------------------------

Date: 13 Jul 88 15:23:00 GMT
From: uxe.cso.uiuc.edu!morgan@uxc.cso.uiuc.edu
Subject: Re: does AI kill?

I think these questions are frivolous.  First of all, there is nothing
in the article that says AI was involved.  Second, even if there were,
the responsibility for using the information and firing the missile is
the captain's.  The worst you could say is that some humans may have
oversold the captain, and maybe the whole Navy, on the reliability of
the information the system provides.  That might turn out historically
to be related to the penchant of some people in AI for grandiose
exaggeration.  But that's a fact about human scientists.

And if you follow the reasoning behind these questions consistently,
you can find plenty of historical evidence to justify substituting
'engineering' for 'AI' in the three questions at the end.  I take that
to suggest that the reasoning is faulty.
Clearly the responsibility for the killing of the people in the Iranian
airliner falls on human Americans, not on some computer.  At the same
time, one might plausibly interpret the Post article as a good argument
against any scheme that removes human judgment from the decision
process, like Reagan's lunatic fantasies of SDI.

------------------------------

Date: 13 Jul 88 19:18:47 GMT
From: ucsdhub!hp-sdd!ncr-sd!ncrlnk!rd1632!king@ucsd.edu (James King)
Subject: RE: AI and Killing (Long and unstructured)

Does AI kill?  Some of my opinions:

In my understanding of the Aegis system there is NO direct connection
between the fire control and the detection systems.  The indirect
connection between these two modules is now (and hopefully will be for
some time) a human being.  A knowledge-based system interface to the
sensor units on the Vincennes was not responsible for killing anyone.
It was a human's role to decide, based on multiple sensors and decision
paths (experience), whether or not to fire on the jet.

The computer system onboard the Vincennes was responsible for
evaluating sensor readings and providing recommendations as to what
each reading was and possibly what action to take.  Is this
fundamentally any different from employing a trained human to interpret
radar signals and asking for his or her opinion on a situation?

There are two answers to the above.  One is that there is little
difference in the input or the output, whether it comes from a human or
from the computer system.  Two, the difference IS in how each arrives
at the output.  The computer is limited by the rules in its programming
and by the fact that it may not possess a complete understanding of the
situation (i.e., area tensions, ongoing stress in battle situations,
prior situational experience that includes "gut" reactions or
intuition, etc.).  Of course speed of execution is critical, but that's
another topic.  The human can rely on emotions, peripheral knowledge,
intuition, etc. to make a decision or a recommendation.  But the point
is that each is concluding something about the sensor output and
reporting it to the commander.  Period.

I feel that the system contributed to the terrible mistake by providing
an inaccurate interpretation of the sensor signals and making an
incorrect recommendation to the commander - BUT it was still a human
decision that caused the firing.  BTW, I do not (at present) condemn
the commander for the decision he made.  As best I know (and as best he
knew), the ship's captain made the best decision he could under the
circumstances (danger to his ship) and through interpretation of his
sensor evaluations (Aegis being one source).  But I disagree with
placing oneself in the position of killing another.

Our problems (besides having our fingers on the trigger) stem from the
facts that we have to have a trigger in the first place, and that we
have to rely on sensory input - where certain sensory inputs may be in
error - to make a decision.  The "system" is at fault, not just one
component of that system.  To correct the system we fix the components.
So suppose we make the Aegis system "smarter," so that it contains
knowledge about the pattern an Airbus presents as a cross section (or
signature) to a radar sensor - what then?  We have refined the system
somewhat.  Do we keep refining these systems continuously so they grow
through experience as we do?  Oh no!  We've started another discussion
topic.
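To make the "recommendation, not decision" split concrete, here is a
tiny sketch of the kind of rule-based classifier I have in mind.  The
rules, field names, and thresholds below are invented purely for
illustration - they have nothing to do with the real Aegis software -
and the point is only that the program stops at a recommendation while
a human retains the firing decision:

    # Illustrative only: an invented rule-based track classifier.  The
    # rules, thresholds, and field names are hypothetical and do not
    # describe the actual Aegis system.  The function returns a
    # recommendation string -- it never fires anything.

    def classify_track(track):
        """Map a radar/transponder track to a recommendation for the operator."""
        # Rule 1: a civilian transponder code argues for a friendly label.
        if track.get("transponder") == "civilian":
            return "ASSUME FRIENDLY: civilian transponder code"
        # Rule 2: a fast, descending, closing track looks like an attack profile.
        if track.get("altitude_trend") == "descending" and \
           track.get("closing_speed_kts", 0) > 400:
            return "POSSIBLE HOSTILE: attack-like profile"
        # Rule 3: otherwise, report ambiguity instead of forcing a label.
        return "UNKNOWN: insufficient data - human judgment required"

    # The human in the loop reads the recommendation and makes the decision.
    print(classify_track({"altitude_trend": "descending", "closing_speed_kts": 450}))

Making such a system "smarter" about an Airbus signature just means
adding another rule of the same kind; the decision authority stays with
the operator.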
--------------------------------------------------------------------------

A couple of thoughts on Mr. Lee's questions:

- Whether AI can produce real applications:

  - The conventional/commercial view of AI is that of expert systems.
    Expert systems in and of themselves are just software systems,
    programmed in a computer language like any other software system.
    Some pseudo-differences are based on how we view the development of
    this software.  We are now more aware of the need for embedding an
    expert's knowledge in these systems and providing automated methods
    for "reasoning" through this information.

  - With that awareness, people ARE developing systems that use expert
    knowledge in very focused domains (e.g., fault diagnosis).

  - The same problem exists in a nuclear plant.  Did the fault
    diagnosis system properly diagnose the problem and report it?  More
    importantly, did the operator act/reason on the diagnosis properly?

  - Where would the blame end?  Is it the expert system's fault, or the
    sensor's fault - or is it human error?

  - Systems that learn and reason beyond their originally programmed
    functionality have not been developed/deployed, but we'll see ...

- Whether AI can be used in life-and-death situations:

  - If you were in a situation in which a decision had to be made
    within seconds, i.e., most life-and-death situations, would you:
      1. Rely on the toss of a coin?
      2. Make a "shot in the dark" decision?
      3. Make a quickly reasoned decision based on two or three
         inferences in your mind?  OR
      4. Use the decision of a computer (if it had knowledge of the
         situation's domain and could perform thousands of logical
         inferences per second)?

  - One gets you even odds.  Two gets you a random number for the odds.
    Three gives you slightly better odds based on minimal decision
    making.  And four provides you with a recommendation based on the
    knowledge of perhaps a whole set of experts, delivered with the
    speed of computer processing.

  - If you're an emotional person you probably pick two.  Maybe if you
    have a quick, "accessible" hunch you pick three.  But if you're a
    logical, disciplined person you would go with the option that has
    the greatest backing, which is four (plus a combination of one
    through three if the commander is experienced!).

  - Which one characterizes a commander in the Navy?

  - Personally, I'm not sure which I'd take.

Jim King
NCR Corporation
j.a.king@dayton.ncr.com

Remember, these ideas do not represent the position or ideas of my
employer.

------------------------------

Date: 13 Jul 88 22:07:19 GMT
From: ssc-vax!ted@beaver.cs.washington.edu (Ted Jardine)
Subject: Re: does AI kill?

In article <1376@daisy.UUCP>, klee@daisy.UUCP (Ken Lee) writes:
> This appeared in my local paper yesterday.  I think it raises some
> serious ethical questions for the artificial intelligence R&D community.
> ... [Washington Post article about Aegis battle management computer
> system being held responsible for providing erroneous conclusions
> which led to the downing of the Iranian airliner -- omitted]
>
> The article continues with evidence and logic the AI should have used to
> conclude that the plane was not hostile.
>
> Some obvious questions right now are:
> 1. is AI theory useful for meaningful real world applications?
> 2. is AI engineering capable of generating reliable applications?
> 3. should AI be used for life-and-death applications like this?
> Comments?

First, to claim that the Aegis Battle Management system has an AI
component is patently ridiculous.
I'm not suggesting that this is Ken's claim, but it does appear to be
the claim of the Washington Post article.  What Aegis is capable of
doing is deriving conclusions based on an analysis of the radar sensor
information.  Its conclusions, while I wouldn't consider them AI, may
be so considered by someone else, but without a firm basis.

Let's first agree that a mistake was made.  And let's also agree that
innocent lives were lost, and tragically so.  The key issue here, I
believe, is that tentative conclusions were generated based on partial
information.  Nothing but good design and good usage will prevent this
from occurring, regardless of whether the system is AI or not.  I
believe that a properly created AI system would have made visible both
the conclusion and its tentative nature.

I believe that AI systems can be built for meaningful real-world
applications.  But there is a very real pitfall waiting along the road
to producing such a system.  It's the pitfall that permits us to invest
some twenty years of time and many thousands of dollars (hundreds of
thousands?) in the training and education of a person with a doctorate
in a scientific or engineering discipline, but not to permit a similar
investment in the creation of the knowledge base for an AI system.
Most of the people I have spoken to want AI and are convinced that they
need AI, but when I say that it costs money, time and effort just like
anything else, the backpedaling speed goes from 0 to 60 mph in less
time than a Porsche or Maserati!

I think we need to be concerned about this issue.  But I hope we can
avoid dumping AI technology down the drain because it won't give us
GOOD answers unless we INVEST some sweat and some money.  (Down off the
soap box, boy!  :-)

TJ {With Amazing Grace} The Piper
aka Ted Jardine CFI-ASME/I
Usenet: ...uw-beaver!ssc-vax!ted
Internet: ted@boeing.com

------------------------------

Date: 14 Jul 88 00:06:10 GMT
From: nyser!cmx!billo@itsgw.rpi.edu (Bill O)
Subject: Re: does AI kill?

In article <1376@daisy.UUCP> klee@daisy.UUCP (Ken Lee) writes:
>Computer-generated mistakes aboard the USS Vincennes may lie at the root
>...
>If this is the case, it raises the possibility that the 280 Iranian
>passengers and crew may have been the first known victims of "artificial
>intelligence," the technique of letting machines go beyond monitoring to
>actually making deductions and recommendations to humans.
>...
>is out there in the sky or water beyond his eyesight but also can deduce
>for him whether the unseen object is friend or foe and say so in words
>displayed on the console.

This is an interesting question -- it touches on the whole issue of
accountability in the use of AI.  If an AI system makes a
recommendation that is followed (by humans or machines), and the end
result is that humans get hurt (physically, financially,
psychologically, or whatever), who is accountable:

  1) the human(s) who wrote the AI system
  2) the human(s) who followed the injurious recommendation (or who, by
     inaction, allowed machines to follow the recommendation)
  3) (perhaps absurdly) the AI program itself, considered as a
     responsible entity (in which case I guess it will have to be
     "executed" -- pun intended)
  4) no one

However, as interesting as this question is (and I'm not sure where the
discussion of it belongs), let's not jump to the conclusion that AI was
involved in the Vincennes incident.
We cannot assume that the Post's writers know what AI is, and the "top
military officials" may also be confused, or may have ulterior motives
for blaming AI.  Maybe AI is involved, maybe it isn't.  For instance, a
system that simply matches radar image size and/or characteristics --
probably coupled with information from the IFF (identification, friend
or foe) transponder signal -- to the words friend or foe "printed on
the screen" is very likely not AI by most definitions.  Perhaps the
Iranians were the first victims of "table look-up" (although I have my
doubts about "first").

Does anyone out there know about Aegis -- does it use AI?  (Alas, this
is probably classified.)

Bill O'Farrell, Northeast Parallel Architectures Center at Syracuse
University (billo@cmx.npac.syr.edu)

------------------------------

Date: 14 Jul 88 06:32:58 GMT
From: portal!cup.portal.com!tony_mak_makonnen@uunet.uu.net
Subject: Re: does AI kill?

No, AI does not kill, but AI people do.  The very people who can write
computer programs for you should be the last to decide how, when, and
how much the computer should compute.  The reductive process coexists
in the human brain with the synthetic process.  The human in the loop
could override a long computation by bringing in factors that could not
practically be foreseen: "Why did the Dubai tower say ...?"  "Why is
the other cruiser reporting a different altitude?"  These small doubts
from all over the environment could trigger experiences in the brain
which could countermand the neat, calculated decision.  Ultimately the
computer is equipped with a sensory system that is very poor compared
to the human brain's.  From an extremely small slice of what is out
there, we expect a real-life conclusion.

------------------------------

Date: 14 Jul 88 14:17:52 GMT
From: uflorida!novavax!proxftl!tomh@gatech.edu (Tom Holroyd)
Subject: Re: does AI kill?

In article <1376@daisy.UUCP>, klee@daisy.UUCP (Ken Lee) writes:
> Computer-generated mistakes aboard the USS Vincennes may lie at the root
> of the downing of Iran Air Flight 655 last week, according to senior
> military officials being briefed on the disaster.
> ...
> The officials said Rogers believed the machines - which wrongly
> identified the approaching plane as hostile - and fired two missiles at
> the passenger plane, knocking it out of the sky over the Strait of
> Hormuz.
> ...
> Some obvious questions right now are:
> 1. is AI theory useful for meaningful real world applications?
> 2. is AI engineering capable of generating reliable applications?
> 3. should AI be used for life-and-death applications like this?

Let's face it.  That radar system *was* designed to kill.  It was only
doing its job.  In a real combat situation, you can't afford to make
the mistake of *not* shooting down the enemy, so you err on the side of
shooting down friends.  War zones are dangerous places.

Now, before y'all start firing cruise missiles at me, I am *NOT*, I
repeat NOT, praising the system that killed 280 people.  What I am
saying is that it wasn't the fault of the computer program that
incorrectly identified the airliner as hostile.  The blame lies
entirely with the captain of the USS Vincennes and his superiors, for
using the system in a zone where commercial flights operate.

The question is not whether AI should be used for life-and-death
applications, but whether it should be switched on in a situation like
that.  In my opinion.

P.S.  And it could have been done on purpose, by one side or the other.

Tom Holroyd
UUCP: {uflorida,uunet}!novavax!proxftl!tomh
The white knight is talking backwards.
------------------------------

Date: 14 Jul 88 15:20:22 GMT
From: ockerbloom-john@yale-zoo.arpa (John A. Ockerbloom)
Subject: Re: AI and Life & Death Situations

In article <496@rd1632.Dayton.NCR.COM> James King writes:
>- Whether AI can be used in life-and-death situations:
>
>  - If you were in a situation in which a decision had to be made
>    within seconds, i.e., most life-and-death situations, would you:
>      1. Rely on the toss of a coin?
>      2. Make a "shot in the dark" decision?
>      3. Make a quickly reasoned decision based on two or three
>         inferences in your mind?  OR
>      4. Use the decision of a computer (if it had knowledge of the
>         situation's domain and could perform thousands of logical
>         inferences per second)?
>  - One gets you even odds.  Two gets you a random number for the odds.
>    Three gives you slightly better odds based on minimal decision
>    making.  And four provides you with a recommendation based on the
>    knowledge of perhaps a whole set of experts, delivered with the
>    speed of computer processing.
>  - If you're an emotional person you probably pick two.  Maybe if you
>    have a quick, "accessible" hunch you pick three.  But if you're a
>    logical, disciplined person you would go with the option that has
>    the greatest backing, which is four (plus a combination of one
>    through three if the commander is experienced!).

I don't think this is a full description of the choices.  If you indeed
have a great deal of expertise in the area, you will have a very large
set of explicit and implicit inferences to work from, fine-tuned over
years of experience.  You will also have a good idea of the relative
importance of different facts and rules, and can thereby find the
relevant decision paths very quickly.  In short, your mental decision
would be based on "deep" knowledge of the situation, and not just on
"two or three inferences."

Marketing hype aside, it is very difficult to get a computer program to
learn from experience and pick out the relevant details of a complex
problem.  It's much easier just to give it a "shallow" form of
knowledge in a set of inference rules.

In a high-pressure situation, I would not have time to find out *how* a
given computer program arrived at a decision, unless I was very
familiar with its workings to begin with.  So if I were experienced
myself, I'd trust my own judgment over the program's in a life-or-death
scenario.

John Ockerbloom
------------------------------------------------------------------------------
ockerbloom@cs.yale.EDU       ...!{harvard,cmcl2,decvax}!yale!ockerbloom
ockerbloom@yalecs.BITNET     Box 5323 Yale Station, New Haven, CT 06520

------------------------------

Date: 14 Jul 88 15:27:56 GMT
From: rti!bdrc!jcl@mcnc.org (John C. Lusth)
Subject: Re: does AI kill?

In article <1376@daisy.UUCP> klee@daisy.UUCP (Ken Lee) writes: