Date: Thu 2 Jun 1988 02:02-EDT
From: AIList Moderator Nick Papadakis
Reply-To: AIList@AI.AI.MIT.EDU
Us-Mail: MIT Mail Stop 38-390, Cambridge MA 02139
Phone: (617) 253-2737
Subject: AIList Digest V7 #15
To: AIList@AI.AI.MIT.EDU
Status: R


AIList Digest            Thursday, 2 Jun 1988      Volume 7 : Issue 15

Today's Topics:

  Re: Asimov's Laws of Robotics (Revised)
  randomness
  Acting Irrationally
  Re: Aah, but not in the fire brigade, jazz ensembles, rowing eights,...
  Human-human communication
  Re: Fuzzy systems theory was (Re: Alternative to Probability)
  Unadulterated Behavior

----------------------------------------------------------------------

Date: 27 May 88 15:57:30 GMT
From: bwk@mitre-bedford.arpa  (Barry W. Kort)
Subject: Re: Asimov's Laws of Robotics (Revised)

I enjoyed reading Mike Sellers' reaction to my posting on Asimov's
Laws of Robotics.

Mike stumbles over the "must/may" dilemma:

>>     II.  A robot may respond to requests from human beings,
>                    ^^^
>>          or other sentient beings, unless this conflicts with
>>          the First Law.
>
>Shouldn't "may" be "must" here, to be imperative?  Otherwise it would
>seem to be up to the robot's discretion whether to respond to the
>human's requests.

I changed "must" to "may" because humans sometimes issue frivolous or
unwise orders.  If I tell Artoo Detoo to "jump in the lake", I hope he
has enough sense to ignore my order.

With the freedom granted by "may", I no longer need as many caveats of
the form "unless this conflicts with a higher-precedence law."

Note that along with freedom goes responsibility.  The robot now has a
duty to be aware of possible acts which could cause unanticipated harm
to other beings.  The easiest way for the robot to ensure that a freely
chosen act is safe is to ask for objections.  This also indemnifies the
robot against finger-pointing later on.

I respectfully decline Mike's suggestion to remove all references to
"sentient beings".  There are some humans who function as deterministic
finite-state automata, and there are some inorganic systems that behave
as evolving intelligences.  Since I sometimes have trouble distinguishing
human behavior from humane behavior, I wouldn't expect a robot to be any
more insightful than a typical person.

I appreciated Mike's closing paragraph, in which he highlighted the
difficulty of balancing robot values and compared the robot's dilemma
with the dilemma faced by our own civilization's leadership.

--Barry Kort

------------------------------

Date: Sun, 29 May 88 11:00:20 -0200
From: Antti Ylikoski
Subject: randomness

In AIList Digest V7 #4, Barry Kort writes:

>If I wanted to give my von Neumann machine a *true* random number
>generator, I would connect it to an A/D converter driven by thermal
>noise (i.e. a toasty resistor).

I recall that a Zener diode is a good source of noise (but I cannot
remember the spectrum it gives).

It could be a good idea to use a Zener diode / A-D converter random
number generator in Monte Carlo simulations.
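
[Editor's note: a minimal sketch of how such a hardware source might feed
a Monte Carlo run.  Nothing below comes from Ylikoski's message; the
adc_sample() function is a stand-in (simulated here with software Gaussian
noise) for reading an A/D converter wired to a Zener diode or resistor,
and the debiasing step is one standard way to clean up such a source.]

    import random

    def adc_sample():
        # Stand-in for one 10-bit A/D reading of diode/resistor noise;
        # simulated in software so the sketch runs on its own.
        return int(512 + 100 * random.gauss(0, 1)) & 0x3FF

    def raw_bit():
        # The least significant bit of a reading carries most of the noise.
        return adc_sample() & 1

    def unbiased_bit():
        # Von Neumann debiasing: keep only 01/10 pairs to cancel constant bias.
        while True:
            a, b = raw_bit(), raw_bit()
            if a != b:
                return a

    def uniform():
        # Assemble a uniform number in [0, 1) from 32 debiased bits.
        return sum(unbiased_bit() << i for i in range(32)) / 2.0 ** 32

    # A tiny Monte Carlo check: estimate pi from points in a quarter circle.
    trials = 2000
    inside = sum(uniform() ** 2 + uniform() ** 2 <= 1.0 for _ in range(trials))
    print("pi is roughly", 4.0 * inside / trials)

[On real hardware, adc_sample() would read the converter; everything
downstream of it stays the same.]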
Andy Ylikoski

PS.  A pearl:

Orthodox Christianity:  Baruch Ha Ba, B'Shem Adonnnnnai

------------------------------

Date: Mon 30 May 88 23:22:32-PDT
From: Ken Laws
Subject: Acting Irrationally

>> Thus he learns that the other person feels strongly ...

> Wouldn't it have been easier if the yeller had simply disclosed his/her
> value system in the first place?  Or do I have an unrealistic expectation
> that the yeller is in fact able to articulate his/her value system to an
> inquiring mind?  --Barry Kort

Yelling is not necessarily an irrational act.  It is also a communicative
act, indicating an expectation based on custom rather than rationality.
Custom tells us how to behave toward others who follow the same customs,
but it gives us no guidance in behavior toward those who break custom yet
remain within the law and the bounds of rationality.  Such people (weirdos,
geniuses, punkers, foreigners, teenagers, etc.) make us nervous and
complicate our lives, so we respond with anger.  We also use anger, real
or simulated, to let our children know which rules are based on custom and
are thus not explainable.

It would be nice if we could just explain our value systems, but we don't
seem to be wired that way.  (Anyway, we don't understand our own culture
well enough.)  At least we're civilized enough not to stone or enslave
those who are different from us -- at least, not often as part of
government or religious policy.

Machines will have to be taught to recognize our communicative anger.
I hope they won't have to emulate it as well.

                                        -- Ken Laws

------------------------------

Date: 31 May 88 15:33:09 GMT
From: uflorida!novavax!proxftl!tomh@gatech.edu  (Tom Holroyd)
Subject: Re: Aah, but not in the fire brigade, jazz ensembles, rowing eights,...

In article <1171@crete.cs.glasgow.ac.uk>, Gilbert Cockton writes:
> In article <5499@venera.isi.edu> Stephen Smoliar writes:
> >The problem comes in deciding WHAT needs to be explicitly articulated
> >and what can be left in the "implicit background."
> ...
> For people who haven't spent all their life in academia or
> intellectual work, there will be countless examples of carrying out
> work in near 100% implicit background (watch fire and ambulance
> personnel who've worked together as a team for ages, watch a basketball
> team, a steeplejack and his mate, a good jazz ensemble, ...)

No.  Fire and ambulance personnel have regulations, basketball has rules
and teams discuss strategy and tactics during practice, and even jazz
musicians use sheet music sometimes.  I don't mean to say that implicit
communication doesn't exist, just that it's not as useful.  I don't know
how to build steeples, but I'll bet it can be written down.

Articulate as much as you can.  It's true we learn by doing, but we need
to be told what to do in case it's not obvious (eating is obvious).

Tom Holroyd
UUCP: {uunet,codas}!novavax!proxftl!tomh

The white knight is talking backwards.

------------------------------

Date: 31 May 88 15:05:00 GMT
From: uflorida!novavax!proxftl!tomh@gatech.edu  (Tom Holroyd)
Subject: Human-human communication

In article <32403@linus.UUCP>, Barry W. Kort writes:
> It is estimated that the human mind accumulates and retains over
> a lifetime enough information to fill 50,000 volumes.  That's quite
> a library.  The human input/output channel operates at about 300 bits
> per second (30 characters per second).  Exchanging personal knowledge
> bases is a time-consuming operation.  We are destined to remain unaware
> of vast portions of our civilization's collective information base.

This illustrates the problem quite nicely.  Obviously, if we are to
achieve understanding of our fellow man, we need to use our human I/O
channels as efficiently as possible.
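
[Editor's note: a back-of-the-envelope check on the figures quoted above.
The 50,000 volumes and 30 characters per second come from the post; the
size of a "volume" is an assumption made here purely for illustration.]

    CHARS_PER_VOLUME = 500_000    # assumed: a modest book of ~500,000 characters
    VOLUMES = 50_000              # figure quoted in the post
    CHARS_PER_SECOND = 30         # figure quoted in the post

    seconds = VOLUMES * CHARS_PER_VOLUME / CHARS_PER_SECOND
    years = seconds / (365.25 * 24 * 3600)
    print(round(years, 1), "years of nonstop transmission")   # a few decades

[On these assumptions a full exchange takes roughly a quarter century of
uninterrupted transmission, which is the force of "time-consuming" above.]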
> Much of what we know is not easily reduced to language.  That which
> cannot be described in words may have to be demonstrated in action.
> Some people speak of secret knowledge or private language.

Name one thing that isn't expressible with language!  :-)  Even actions
can be described.  We can't describe the unknown, of course.  A dog might
"know" something and not be able to describe it, but this is a shortcoming
of the dog.  Humans have languages, natural and artificial, that let us
manipulate and transmit knowledge.

Does somebody out there want to discuss the difference between the dog's
way of knowing (no language) and the human's way of knowing (using
language)?

Tom Holroyd
UUCP: {uunet,codas}!novavax!proxftl!tomh

The white knight is talking backwards.

------------------------------

Date: 31 May 88 21:04:03 GMT
From: uvaarpa!virginia!uvacs!cfh6r@mcnc.org  (Carl F. Huber)
Subject: Re: Asimov's Laws of Robotics (Revised)

In article <33085@linus.UUCP> bwk@mbunix (Barry Kort) writes:
>Mike stumbles over the "must/may" dilemma:
>>>     II.  A robot may respond to requests from human beings,
>>                    ^^^
>>Shouldn't "may" be "must" here, to be imperative?  Otherwise it would seem
>>to be up to the robot's discretion whether to respond to the human's requests.
>
>I changed "must" to "may" because humans sometimes issue frivolous or
>unwise orders.  If I tell Artoo Detoo to "jump in the lake", I hope
>he has enough sense to ignore my order.
>--Barry Kort

There may be some valid examples to demonstrate your point, but this one
doesn't cut it.  If you tell Artoo Detoo to "jump in the lake", you hope
he has enough sense to understand the meaning of the order, and that
includes its frivolity factor.  You want him (it?) to obey the order
according to its intended meaning.  There is also a lot of elbow room in
the word "respond" -- it certainly doesn't mean "obey to the letter".

-carl

------------------------------

Date: 31 May 88 19:31:27 GMT
From: ukma!uflorida!usfvax2!pollock@ohio-state.arpa  (Wayne Pollock)
Subject: Re: Fuzzy systems theory was (Re: Alternative to Probability)

In article <487@sequent.cs.qmc.ac.uk> root@cs.qmc.ac.uk (The Superuser) writes:
>...
>>>Because fuzzy logic is based on a fallacy
>>Is this kind of polemic really necessary?
>
>Yes.  The thing the fuzzies try to ignore is that they haven't established
>that their field has any value whatsoever except a few cases of dumb luck.

On the other hand, set theory, which underlies much of current theory, is
also based on fallacies (given the basic premises of set theory, one can
easily derive their negation).  As long as fuzzy logic provides a framework
for discussing various concepts and mathematical ideas which would be hard
to describe in traditional terms, the theory serves a purpose.  It will
undoubtedly continue to evolve as more people become familiar with it -- it
may even lead some researcher someday to an interesting or useful insight.
What more do you want from a mathematical theory?
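
[Editor's note: for readers who have not seen fuzzy sets before, here is a
minimal illustration of the kind of graded statement they are meant to
capture.  The example and its numbers are invented for this note, not taken
from Pollock's message; min and 1-x are just one common choice of
connectives.]

    def tall(height_cm):
        # Membership in "tall": 0 below 160 cm, 1 above 190 cm, graded between.
        if height_cm <= 160:
            return 0.0
        if height_cm >= 190:
            return 1.0
        return (height_cm - 160) / 30.0

    def fuzzy_and(a, b):
        return min(a, b)        # one common choice of conjunction

    def fuzzy_not(a):
        return 1.0 - a

    for h in (150, 172, 185, 195):
        print(h, "cm is tall to degree", round(tall(h), 2))

    # Unlike an ordinary set, "tall and not tall" need not have degree 0:
    print(fuzzy_and(tall(172), fuzzy_not(tall(172))))   # prints 0.4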
Wayne Pollock (The MAD Scientist)       pollock@usfvax2.usf.edu
Usenet:         ...!{ihnp4, cbatt}!codas!usfvax2!pollock
GEnie:          W.POLLOCK

------------------------------

Date: 31 May 88 23:22:27 GMT
From: ncar!noao!amethyst!kww@ames.arpa  (K Watkins)
Subject: Language-related capabilities (was Re: Human-human communication)

In article <238@proxftl.UUCP> tomh@proxftl.UUCP (Tom Holroyd) writes:
>Name one thing that isn't expressible with language!  :-)
>A dog might "know" something and not be able to describe it, but this is
>a shortcoming of the dog.  Humans have languages, natural and artificial,
>that let us manipulate and transmit knowledge.
>
>Does somebody out there want to discuss the difference between the dog's
>way of knowing (no language) and the human's way of knowing (using language)?

A dog's way of knowing leaves no room that I can see for distinguishing
between the model of reality that the dog contemplates and the reality
itself.  A human's way of knowing--once the human is a competent user of
language--definitely allows this distinction, thus enabling lies, fiction,
deliberate invention, and a host of other useful and hampering results of
recognized possible disjunction between the model and the reality.  One
aspect of this, probably one of the most important, is that it makes it
easy to recognize that in any given situation there is much unknown but
possibly relevant data...and to cope with that recognition without
freaking out.

It is also possible to use language to _refer_ to things which language
cannot adequately describe, since language users are aware of reality
beyond the linguistic model.  Some would say (pursue this in
talk.philosophy, if at all) language cannot adequately describe
_anything_; but in more ordinary terms, it is fairly common to hold the
opinion that certain emotional states cannot be adequately described in
language...whence the common nonlinguistic "expression" of those states,
as through a right hook or a tender kiss.

Question:  Is the difficulty of accurate linguistic expression of emotion
at all related to the idea that emotional beings and computers/computer
programs are mutually exclusive categories?  If so, why does the
possibility of sensory input to computers make so much more sense to the
AI community than the possibility of emotional output?  Or does that
community see little value in such output?  In any case, I don't see much
evidence that anyone is trying to make it more possible.  Why not?

------------------------------

Date: 31 May 88 23:51:27 GMT
From: ncar!noao!amethyst!kww@ames.arpa  (K Watkins)
Subject: Re: Aah, but not in the fire brigade, jazz ensembles, rowing eights,...

In article <239@proxftl.UUCP> tomh@proxftl.UUCP (Tom Holroyd) writes:
>Articulate as much as you can.  It's true we learn by doing, but we need to
>be told what to do in case it's not obvious (eating is obvious).
>

Life is too short; in the case of a sufficiently aware articulator, both
articulator and audience would die of old age before the articulator
explained _everything_ s/he could about how to write the letter A.

I am not being facetious here; I agree with the desirability of making
valuable information explicit.  But I believe that the question of which
information is valuable is a complex one.  It may seem simple at first;
but in many cases it is hard for the articulator to tell which behaviors
are relevant even to his/her own performance, let alone the as-yet
hypothetical performance of the audience.  And the assumption that one
thing is obvious but another is not is the source of much (most?)
disgruntled contempt between teachers and pupils.

For instance, it is not even obvious to me what you mean by saying "eating
is obvious."  Is _how_ to eat obvious?  To whom?  Is what or when or why
to eat obvious?  Are the currently much-famed eating disorders (anorexia,
bulimia, etc.) instances of persons sufficiently defective (?) as to be
oblivious to the obvious?

Note:  This subject fascinates me in part because I am often accused of
articulating far more than "necessary"...so (obviously?) my sense of what
is obvious could use some work.  Part of this issue lies in the fact that,
when I articulate more than "necessary," I tend to lose my audience, and
that audience loses whatever "necessary" information I was going to impart
further down the line.
After all, this message is more than a screen long; how many people who
read the first screen are still reading?  :-)  What have those who quit
before this point lost that they would have valued?  And what, in my
discussion, has been "unnecessary articulation of the obvious" whose
omission would have improved the sum effect of my communication?

------------------------------

Date: 1 Jun 88 06:17:02 GMT
From: quintus!ok@unix.sri.com  (Richard A. O'Keefe)
Subject: Re: Language-related capabilities (was Re: Human-human communication)

In article <700@amethyst.ma.arizona.edu>, K Watkins writes:
> If so, why does the possibility of sensory input to computers make so much
> more sense to the AI community than the possibility of emotional output?  Or
> does that community see little value in such output?  In any case, I don't
> see much evidence that anyone is trying to make it more possible.  Why not?

Aaron Sloman had a paper "You don't need a soft skin to have a warm heart".
I don't know whether that has been followed up.

------------------------------

Date: Wed, 1 Jun 88 12:44:34 MDT
From: silbar%mpx1@LANL.GOV  (Dick Silbar)
Subject: Unadulterated Behavior

In AIList V7, #9, Warren Taylor writes a beautiful sentence that I would
like to quote again, albeit out of context:

    "You only need to observe a baby for a short while to see a very
    nearly unadulterated human behavior."

Quite possibly, even fully unadulterated.

Dick Silbar

------------------------------

Date: 1 Jun 88 14:00:46 GMT
From: pitstop!sundc!netxcom!sdutcher@sun.com  (Sylvia Dutcher)
Subject: Re: Human-human communication

In article <238@proxftl.UUCP> tomh@proxftl.UUCP (Tom Holroyd) writes:
>
>Name one thing that isn't expressible with language!  :-)

Look out your window and describe the view to someone who's been blind
since birth.

Describe a complex mathematical formula, without writing it down.

Describe the unusual mannerisms of a friend, without demonstrating them.

When you get into a heated discussion, do you gesture with your hands
and body?

We can express just about anything with language, but is the listener
receiving exactly what we are sending?  Even the same word, with the same
definition, can mean different things to different people, or in
different contexts.

>Tom Holroyd
>UUCP: {uunet,codas}!novavax!proxftl!tomh
>
>The white knight is talking backwards.

--
Sylvia Dutcher                   *  "We cannot accurately describe
NetExpress Communications, Inc.  *   the world, we can only describe
1953 Gallows Rd.                 *   a view of it."
Vienna, Va. 22180                *            David Hockney

------------------------------

Date: 1 Jun 88 20:44:45 GMT
From: frabjous!nau@mimsy.umd.edu  (Dana Nau)
Subject: Re: Fuzzy systems theory was (Re: Alternative to Probability)

In article <1073@usfvax2.EDU> Wayne Pollock writes:
>On the other hand, set theory, which underlies much of current theory, is
>also based on fallacies (given the basic premises of set theory one can
>easily derive their negation).

Not so.  Where in the world did you get this idea?  Admittedly, _naive_
set theory leads to Russell's paradox--but this was the reason for the
development of axiomatic set theories such as Zermelo-Fraenkel set theory
(ZF).  The consistency of ZF is unproved--but this is a natural consequence
of Goedel's incompleteness theorem, and is much different from your
contention that set theory is inconsistent.

I suggest you read, for example, Shoenfield's _Mathematical_Logic_
(Addison-Wesley, 1967), or Rogers's
_Theory_of_Recursive_Functions_and_Effective_Computability_
(McGraw-Hill, 1967).
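
[Editor's note: for readers who have not met it, the paradox Nau refers to
arises from naive set theory's unrestricted comprehension.  The statement
below is a standard textbook formulation added here for reference, not part
of Nau's message; in LaTeX notation:]

    \[
       R = \{\, x \mid x \notin x \,\}
       \quad\Longrightarrow\quad
       ( R \in R \iff R \notin R ).
    \]

[ZF blocks the construction of R by letting a defining property carve a
subset only out of a set that has already been built (the separation
schema), so the contradiction never arises.]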
Dana S. Nau
ARPA & CSNet:  nau@mimsy.umd.edu        Computer Sci. Dept., U. of Maryland
UUCP:  ...!{allegra,uunet}!mimsy!nau    College Park, MD 20742
Telephone:  (301) 454-7932

------------------------------

End of AIList Digest
********************