From csnet_gateway Fri Oct 24 02:17:22 1986 Date: Fri, 24 Oct 86 02:17:08 edt From: csnet_gateway (LAWS@SRI-STRIPE.ARPA) To: ailist@sri-stripe.arpa Subject: AIList Digest V4 #230 Status: R AIList Digest Thursday, 23 Oct 1986 Volume 4 : Issue 230 Today's Topics: Philosophy - Searle, Turing, Symbols, Categories & Reflexes as a Test of Self ---------------------------------------------------------------------- Date: 19 Oct 86 02:30:24 GMT From: rutgers!princeton!mind!harnad@titan.arc.nasa.gov (Stevan Harnad) Subject: Re: Searle, Turing, Symbols, Categories greid@adobe.UUCP (Glenn Reid) writes: > [C]oncocting a universal Turing test is sort of useless... There > have been countless monsters on TV...[with] varying degrees of > human-ness...Some...very difficult to detect as being non-human. > However, given enough time, we will eventually notice that they > don't sleep, or that they drink motor oil... The objective of the turing test is to judge whether the candidate has a mind, not whether it is human or drinks motor oil. We must accordingly consult our intuitions as to what differences are and are not relevant to such a judgment. [Higher animals, for example, have no trouble at all passing (the animal version of) the turing test as far as I'm concerned. Why should aliens, monsters or robots, if they have what it takes in the relevant respects? As I have argued before, turing-testing for relevant likeness is really our only way of contending with the "other-minds" problem.] > [T]here are lots of human beings who would not pass the Turing > test [because of brain damage, etc.]. And some of them may not have minds. But we give them the benefit of the doubt for humanitarian reasons anyway. Stevan Harnad (princeton!mind!harnad) ------------------------------ Date: 19 Oct 86 14:59:49 GMT From: clyde!watmath!watnot!watdragon!rggoebel@caip.rutgers.edu (Randy Goebel LPAIG) Subject: Re: Searle, Turing, Symbols, Categories Stevan Harnad writes: > ...The objective of the turing test is to judge whether the candidate > has a mind, not whether it is human or drinks motor oil. This stuff is getting silly. I doubt that it is possible to test whether something has a mind, unless you provide a definition of what you believe a mind is. Turing's test wasn't a test for whether or not some artificial or natural entity had a mind. It was his prescription for an evaluation of intelligence. ------------------------------ Date: 20 Oct 86 14:59:30 GMT From: rutgers!princeton!mind!harnad@Zarathustra.Think.COM (Stevan Harnad) Subject: Re: Searle, Turing, Symbols, Categories rggoebel@watdragon.UUCP (Randy Goebel LPAIG) replies: > I doubt that it is possible to test whether something has a mind, > unless you provide a definition of what you believe a mind is. > Turing's test wasn't a test for whether or not some artificial > or natural entity had a mind. It was his prescription for an > evaluation of intelligence. And what do you think "having intelligence" is? Turing's criterion effectively made it: having performance capacity that is indistinguishable from human performance capacity. And that's all "having a mind" amounts to (by this objective criterion). There's no "definition" in any of this, by the way. We'll have definitions AFTER we have the functional answers about what sorts of devices can and cannot do what sorts of things, and how and why.
For the time being all you have is a positive phenomenon -- having a mind, having intelligence -- and an objective and intuitive criterion for inferring its presence in any other case than one's own. (In your own case you presumably know what it's like to have-a-mind/have-intelligence on subjective grounds.) Stevan Harnad princeton!mind!harnad ------------------------------ Date: 21 Oct 86 20:53:49 GMT From: uwslh!lishka@rsch.wisc.edu (a) Subject: Re: Searle, Turing, Symbols, Categories In article <5@mind.UUCP> harnad@mind.UUCP (Stevan Harnad) writes: > >rggoebel@watdragon.UUCP (Randy Goebel LPAIG) replies: > >> I doubt that it is possible to test whether something has a mind, >> unless you provide a definition of what you believe a mind is. >> Turing's test wasn't a test for whether or not some artificial >> or natural entity had a mind. It was his prescription for an >> evaluation of intelligence. > >And what do you think "having intelligence" is? Turing's criterion >effectively made it: having performance capacity that is indistinguishable >from human performance capacity. And that's all "having a mind" >amounts to (by this objective criterion). There's no "definition" in >any of this, by the way. We'll have definitions AFTER we have the >functional answers about what sorts of devices can and cannot do what >sorts of things, and how and why. For the time being all you have is a >positive phenomenon -- having a mind, having intelligence -- and >an objective and intuitive criterion for inferring its presence in any >other case than one's own. (In your own case you presumably know what >it's like to have-a-mind/have-intelligence on subjective grounds.) > >Stevan Harnad How does one go about testing for something when one does not know what that something is? My basic problem with all this is the two keywords 'mind' and 'intelligence'. I don't think that what S. Harnad is talking about when referring to 'mind' and 'intelligence' is what I believe 'mind' and 'intelligence' are, and I presume others are having this problem (see first article above). I think a fair example is trying to 'test' for UFO's. How does one do this if (a) we don't know what they are and (b) we don't really know if they exist (is it the same thing with magnetic monopoles?). What are we really testing for in the case of UFO's? I think this answer is a little more clear than for 'mind', because people generally seem to have an idea of what a UFO is (an Unidentified Flying Object). Therefore, the minute we come across something really strange that falls from the sky and can in no way be identified we label it a UFO (and then try to explain it somehow). However, until this happens (and whether this has already happened depends on what you believe) we can't test specifically for UFO's [at least from how I look at it]. How then does one test for 'mind' or 'intelligence'? These definitions are even less clear. Ask a particular scientist what he thinks is 'mind' and 'intelligence', and then ask another. Chances are that their definitions will be different. Now ask a Christian and a Buddhist. These answers will be even more different. However, I don't think any one will be more valid than the other. Now, if one is to define 'mind' before testing for it, then everyone will have a pretty good idea of what he is testing for. But if one refuses to define it, there are going to be a h*ll of a lot of arguments (as it seems there already have been in this discussion). The same works for intelligence.
I honestly don't see how one can apply the Total Turing Test, because the minute one finds a fault, the test has failed. In fact, even if the person who created the 'robot' realizes somehow that his creation is different, then for me the test fails. But this has all been discussed before. However, trying to use 'intelligence' or having a 'mind' as one of the criteria for this test when one expects to arrive at a useful definition "along the way" seems to be sort of silly (from my point of view). I speak only for myself. I do think, though, that the above reasons have contributed to what has become more a fight of basic beliefs than anything else. I will also add my vote that this discussion move away from 'the Total Turing Test' and continue on to something a little less "talked into the dirt". Chris Lishka Wisconsin State Lab of Hygiene [qualifier: nothing above reflects the views of my employers, although my pets may be in agreement with these views] ------------------------------ Date: 22 Oct 86 04:29:21 GMT From: rutgers!princeton!mind!harnad@titan.arc.nasa.gov (Stevan Harnad) Subject: Re: Searle, Turing, Symbols, Categories lishka@uwslh.UUCP (Chris Lishka) asks: > How does one go about testing for something when one does not know > what that something is? My basic problem with all this > [discussion about the Total Turing Test] is the two > keywords 'mind' and 'intelligence'. I don't think that what S. Harnad > is talking about when referring to 'mind' and 'intelligence' is what > I believe 'mind' and 'intelligence' are, and I presume others are > having this problem... You bet others are having this problem. It's called the "other minds" problem: How can you know whether anyone/anything else but you has a mind? > Now, if one is to define 'mind' before testing for it, then > everyone will have a pretty good idea of what he is testing for. What makes people think that the other-minds problem will be solved or simplified by definitions? Do you need a definition to know whether YOU have a mind or intelligence? Well then take the (undefined) phenomenon that you know is true of you to be what you're trying to ascertain about robots (and other people). What's at issue here is not the "definition" of what that phenomenon is, but whether the Total Turing Test is the appropriate criterion for inferring its presence in entities other than yourself. [I don't believe, by the way, that empirical science or even mathematics proceeds "definition-first." First you test for the presence and boundary conditions of a phenomenon (or, in mathematics, you test whether a conjecture is true), then you construct and test a causal explanation (or, in mathematics, you do a formal proof), THEN you provide a definition, which usually depends heavily on the nature of the explanatory theory (or proof) you've come up with.] Stevan Harnad princeton!mind!harnad ------------------------------ Date: 20 Oct 86 18:00:11 GMT From: ubc-vision!ubc-cs!andrews@BEAVER.CS.WASHINGTON.EDU Subject: Re: A pure conjecture on the nature of the self In article <11786@glacier.ARPA> jbn@glacier.ARPA (John B. Nagle) writes: >... The reflexes behind tickling >seem to be connected to something that has a good way of deciding >what is self and what isn't. I would suspect it has more to do with "predictability" -- you can predict, in some sense, where you feel tickling, therefore you don't feel it in the same way.
It's similar to the blinking "reflex" to a looming object; if the looming object is someone else's hand you blink, if it's your hand you don't. The predictability may come from a sense of self, but I think it's more likely to come from the fact that you're fully aware of what is going to happen next when it's your own movements giving the stimulus. --Jamie. ...!seismo!ubc-vision!ubc-cs!andrews "Now it's dark" ------------------------------ End of AIList Digest ******************** From csnet_gateway Sun Oct 26 01:10:04 1986 Date: Sun, 26 Oct 86 01:09:57 est From: csnet_gateway (LAWS@SRI-STRIPE.ARPA) To: ailist@sri-stripe.arpa Subject: AIList Digest V4 #231 Status: R AIList Digest Thursday, 23 Oct 1986 Volume 4 : Issue 231 Today's Topics: Queries - Clinical Neuropsychological Assessment & Robot Snooker-Player & HITECH Chess Machine & OOP in AI & PROLOG on IBM MVS & Computing in Publishing & Analog/Digital Distinction & Turing on Stage & Criteria for Expert System Applications ---------------------------------------------------------------------- Date: 19 Oct 86 22:40:12 GMT From: gknight@ngp.utexas.edu Subject: Clinical neuropsychological assessment I'm renewing an inquiry I made several weeks ago. I appreciate all the responses I received -- and those of you who did reply don't have to do so again, obviously. But if there is anyone out there who didn't see or didn't respond to my earlier posting . . . I'm working on (1) a literature review of computer aided or automated neuropsychological assessment systems, and (2) development of an expert system for clinical neuropsychological assessment. I would like to hear from anyone who can give me references, descriptions of work in progress, etc., concerning either subject. Many thanks, -- Gary Knight, 3604 Pinnacle Road, Austin, TX 78746 (512/328-2480). Biopsychology Program, Univ. of Texas at Austin. "There is nothing better in life than to have a goal and be working toward it." -- Goethe. ------------------------------ Date: 20 Oct 86 09:13:41 EDT (Monday) From: MJackson.Wbst@Xerox.COM Subject: Robot Snooker-player Over the weekend I caught part of a brief report on this on Cable News Headlines. They showed a large robot arm making a number of impressive shots, and indicated that the software did shot selection as well. Apparently this work was done somewhere in Great Britain. Can someone provide more detail? Mark ------------------------------ Date: Mon 20 Oct 86 14:27:03-CDT From: Larry Van Sickle Reply-to: CS.VANSICKLE@R20.UTEXAS.EDU Subject: Need reference on HITECH chess machine Can anyone give me a reference that describes CMU's HITECH chess machine/program in some detail? A search of standard AI journals has failed to find one. Thanks, Larry Van Sickle cs.vansickle@r20.utexas.edu Computer Sciences Department, U of Texas at Austin ------------------------------ Date: 20 Oct 86 11:23 PDT From: Stern.pasa@Xerox.COM Subject: Is there OOP in AI? I just looked at the OOPSLA 86 (Object Oriented Programming Systems and LAnguages) proceedings and found no mention of objects as used for AI. Much surprised, I have since been told that the referees explicitly excluded AI references, saying there are AI conferences for that sort of thing. Going back to the AAAI 86 proceedings, there were no papers on the use of OOP in AI. Since then, I have found some references in F. Bancilhon's paper in SIGMOD record 9/86 to some Japanese papers I need to lay hands on.
Am I missing any large body of current work here in the states on OOP and AI? Josh ------------------------------ Date: Mon, 20 Oct 86 15:08:49 PLT From: George Cross Subject: PROLOG on IBM MVS Hi, I would appreciate knowing of any Prolog implementations on IBM mainframes that run under MVS (*not* VM). Thanks. ---- George - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - George R. Cross cross@wsu.CSNET Computer Science Department cross%wsu@csnet-relay.ARPA Washington State University faccross@wsuvm1.BITNET Pullman, WA 99164-1210 (509)-335-6319/6636 ------------------------------ Date: 18 Oct 86 09:10:45 GMT From: mcvax!ukc!its63b!epistemi!rda@seismo.css.gov (Robert Dale) Subject: Info on Computing in Publishing Wanted I'd be grateful for any leads on computing in publishing -- references to the literature or products, primarily. I'm not, in the first instance, interested in desktop publishing -- rather, I'm looking for stuff in book, journal, magazine and newspaper publishing -- although pointers to any up-to-date summary articles of what's going on in desktop publishing would be useful. In particular, I'd be interested to hear of any AI-related happenings in the publishing area. I'll summarise any responses I get and repost. Thanks in advance. -- Robert Dale University of Edinburgh, Centre for Cognitive Science, 2 Buccleuch Place, Edinburgh, EH8 9LW, Scotland. UUCP: ...!ukc!cstvax!epistemi!rda JANET: rda@uk.ac.ed.epistemi ------------------------------ Date: 21 Oct 86 13:33:35 GMT From: rutgers!princeton!mind!harnad@lll-crg.arpa (Stevan Harnad) Subject: The Analog/Digital Distinction: Soliciting Definitions I'd like to test whether there is a coherent formulation of the analog/digital distinction out there. I suspect that the results will be surprising. Engineers and computer scientists seem to feel that they have a suitable working definition of the distinction, whereas philosophers have argued that the distinction may not be tenable at all. Cognitive scientists are especially interested because they are concerned with analog vs. nonanalog representations. And neuroscientists are interested in analog and nonanalog processes in the nervous system. I have some ideas, but I'll save them until I sample some of what the Net nets. The ground-rules are these: Try to propose a clear and objective definition of the analog/digital distinction that is not arbitrary, relative, or a matter of degree, and that does not lose in the limit the intuitive distinction it was intended to capture. One prima facie non-starter: "continuous" vs. "discrete" physical processes. Stevan Harnad (princeton!mind!harnad) ------------------------------ Date: 22 Oct 86 12:47:26 PDT (Wednesday) From: Hoffman.es@Xerox.COM Subject: Turing on stage Opening this week in one of London's West End theatres is the play, "Breaking The Code" by Hugh Whitemore, starring Derek Jacobi as Alan Turing. The play is based on Andrew Hodges' biography, 'Alan Turing: The Enigma'. I don't know how much the play covers after the World War II years. I'd be interested in any reviews. Send to me directly. If there is interest, I'll summarize for AIList. -- Rodney Hoffman ------------------------------ Date: Fri, 17 Oct 86 15:02 CDT From: PADIN%FNALCDF.BITNET@WISCVM.WISC.EDU Subject: AT FERMILAB--ES OR NOT, THAT IS THE QUESTION. My interest in AI was piqued by a blurb on EXPERT SYSTEMS which I read in the DEC PROFESSIONAL. I immediately saw the possible use of EXPERT SYSTEMS in my work here at FERMILAB.
However, in reading more about the development of an ES, it appears to be a very long process and useful only under certain circumstances as outlined by Waterman in his book 'A Guide to Expert Systems'. He states "Consider expert systems only if expert system development is possible, justified, and appropriate." By 'possible' he means if [ (task does not require common sense) & (task requires only cognitive skills) & (experts can articulate their methods) & (genuine experts exist) & (experts agree on solutions) & (task is not too difficult) & (task is not poorly understood) ] then [ expert system development is POSSIBLE ] By 'justified' he means if [ (task solution has a high payoff) or (human expertise being lost) or (human expertise scarce) or (expertise needed in many locations) or (expertise needed in hostile environment) ] then [ expert system development is JUSTIFIED ] By 'appropriate' he means if [ (task requires symbol manipulation) & (task requires heuristic solutions) & (task is not too easy) & (task has practical value) & (task is of manageable size) ] then [ expert system approach is APPROPRIATE ] As OPERATORS at FERMILAB we take the Protons that are extracted from our MAIN RING and maneuver them to experimental targets. There are several areas in which I see the possible application of ES in our work. 1) troubleshooting help -- we are responsible for maintaining a multitude of systems: water, cryogenic, computer, CAMAC, electrical, safety interlock, and more. quick solutions to problems save money, time, and maximize data flux to experiments. 2) operator training -- we have both rapid turnover and a long training season, i.e., it takes at least a year for an operator to be trained. thus, we need a large knowledge base and a sophisticated simulator/tutorial. 3) data acquisition -- we monitor large amounts of status data and must have out-of-bounds alarms for many devices. our alarm displays need to be centralized and smart so that they display actual problems. 4) control system -- we control the path which the Protons take by controlling the magnetic field strengths of magnets through which the Protons travel. 'TUNING' a BEAM LINE (targeting protons onto experimental apparatus) is an art and as such is subject to the frailty of human judgement. proper tuning is mandatory because it increases data flux to experiments, minimizes radiation intensities, and reduces equipment damage. ? Are Waterman's criteria reasonable ones on which to make a decision about pursuing ES application? ? I've read that the creation of an ES would take about 5 man-years; does that sound right? ? If an ES is recommended, what would be the next step? Do I simply call a representative of some AI company and invite them to make a more informed assessment? First I must convince myself that an ES is something that is really necessary and useful. Next I must be able to convince my superiors. And finally, DOE would need to be convinced!
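Waterman's three tests above are plain boolean combinations -- conjunctions for 'possible' and 'appropriate', a disjunction for 'justified' -- so they can be written down directly as predicates. What follows is only a minimal illustrative sketch in (modern) Python; the field and function names are invented here, not taken from Waterman's book:

    from dataclasses import dataclass

    @dataclass
    class Task:
        # 'possible' factors (all must hold; the negative ones are negated below)
        requires_common_sense: bool
        cognitive_skills_only: bool
        experts_can_articulate: bool
        genuine_experts_exist: bool
        experts_agree: bool
        too_difficult: bool
        poorly_understood: bool
        # 'justified' factors (any one suffices)
        high_payoff: bool
        expertise_being_lost: bool
        expertise_scarce: bool
        needed_in_many_locations: bool
        needed_in_hostile_environment: bool
        # 'appropriate' factors (all must hold; 'too_easy' is negated below)
        symbol_manipulation: bool
        heuristic_solutions: bool
        too_easy: bool
        practical_value: bool
        manageable_size: bool

    def possible(t):
        return (not t.requires_common_sense and t.cognitive_skills_only
                and t.experts_can_articulate and t.genuine_experts_exist
                and t.experts_agree and not t.too_difficult
                and not t.poorly_understood)

    def justified(t):
        return (t.high_payoff or t.expertise_being_lost
                or t.expertise_scarce or t.needed_in_many_locations
                or t.needed_in_hostile_environment)

    def appropriate(t):
        return (t.symbol_manipulation and t.heuristic_solutions
                and not t.too_easy and t.practical_value
                and t.manageable_size)

    def consider_expert_system(t):
        # Waterman: consider an ES only if all three tests pass.
        return possible(t) and justified(t) and appropriate(t)

Running consider_expert_system over a candidate task (say, the troubleshooting or beam-tuning tasks described above) makes the go/no-go reasoning explicit, though the hard part remains judging each boolean honestly.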
thanks for any info Clem ------------------------------ End of AIList Digest ******************** From csnet_gateway Sat Oct 25 02:04:04 1986 Date: Sat, 25 Oct 86 02:03:47 edt From: csnet_gateway (LAWS@SRI-STRIPE.ARPA) To: ailist@sri-stripe.arpa Subject: AIList Digest V4 #232 Status: R AIList Digest Friday, 24 Oct 1986 Volume 4 : Issue 232 Today's Topics: Queries - Neuron Chip & Neural Nets, Learning - Neural Network Simulations & Cellular Automata, Psychology - Self-Awareness, Logic Programming - Bratko Review & Declarative Languages Bibliography ---------------------------------------------------------------------- Date: Tue, 21 Oct 86 20:11:17 pdt From: Robert Bryant - Cross 8/87 Subject: Neuron Chip INSIGHT magazine, Oct 13, 1986, page 62, had a brief article about a neuron chip being tested by AT&T Bell Labs. "...registers, the electronic equivalent of nerve cell synapses..." if anyone has any more detailed information on this, please respond. Rob Bryant rbryant@wsu.csnet [I believe Bell Labs was among the places putting one of Hopfield's relaxation nets on a chip. They have also recently announced the construction of an expert system on a chip (10,000 times as fast ...), which I assume is a different project. -- KIL] ------------------------------ Date: Thu, 23 Oct 86 15:42:05 -0100 From: "Michael K. Jackman" subject: Knowledge representation and Sowa's conceptual graphs A number of us at Rutherford Appleton Laboratory (IKBS section) have become interested in Sowa's approach to knowledge representation, which is based on conceptual graphs. (see Clancey's review in AI 27, 1985; Fox, Nature 310, 1984). We believe it to be a particularly powerful and useful approach to KR and we are currently implementing some of his ideas. We would like to contact any other workers in this field and exchange ideas on Sowa's approach. Anyone interested should contact me at Rutherford. Michael K. Jackman IKBS section - Rutherford Appleton Laboratory (0235-446619) ------------------------------ Date: 20 Oct 86 18:25:50 GMT From: jam@bu-cs.bu.edu (Jonathan Marshall) Subject: Re: simulating a neural network In article <223@eneevax.UUCP> iarocci@eneevax.UUCP (Bill Dorsey) writes: > > Having recently read several interesting articles on the functioning of >neurons within the brain, I thought it might be educational to write a program >to simulate their functioning. Being somewhat of a newcomer to the field of >artificial intelligence, my approach may be all wrong, but if it is, I'd >certainly like to know how and why. > The program simulates a network of 1000 neurons. Any more than 1000 slows >the machine down excessively. Each neuron is connected to about 10 other >neurons. > . > . > . > The initial results have been interesting, but indicate that more work >needs to be done. The neuron network indeed shows continuous activity, with >neurons changing state regularly (but not periodically). The robot (!) moves >around the screen generally winding up in a corner somewhere where it >occasionally wanders a short distance away before returning. > I'm curious if anyone can think of a way for me to produce positive and >negative feedback instead of just feedback. An analogy would be pleasure >versus pain in humans. What I'd like to do is provide negative feedback >when the robot hits a wall, and positive feedback when it doesn't. I'm >hoping that the robot will eventually 'learn' to roam around the maze >without hitting any of the walls (i.e. learn to use its senses).
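One concrete way to produce the positive and negative feedback asked for above is to gate a Hebbian weight change with a signed reward ('pleasure' positive, 'pain' negative). The sketch below is illustrative modern Python, not Bill Dorsey's actual program; the update rule is just one naive form of reinforcement, and 'hit_wall' stands for a hypothetical flag from the maze simulation:

    import random

    N, FANIN, ETA = 1000, 10, 0.01   # network size as in the posting above

    # Random sparse wiring: each neuron listens to FANIN others.
    inputs  = [[random.randrange(N) for _ in range(FANIN)] for _ in range(N)]
    weights = [[random.uniform(-1.0, 1.0) for _ in range(FANIN)]
               for _ in range(N)]
    state   = [random.randint(0, 1) for _ in range(N)]

    def step(state):
        """One synchronous update: a neuron fires iff its weighted input > 0."""
        return [int(sum(weights[i][k] * state[j]
                        for k, j in enumerate(inputs[i])) > 0.0)
                for i in range(N)]

    def reinforce(state, reward):
        """Reward-gated Hebbian rule: connections between co-active pairs
        are strengthened when reward > 0 and weakened when reward < 0."""
        for i in range(N):
            if state[i]:
                for k, j in enumerate(inputs[i]):
                    if state[j]:
                        weights[i][k] += ETA * reward

    # In the robot's main loop, after each move, one might call:
    #     state = step(state)
    #     reinforce(state, -1.0 if hit_wall else 0.1)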
> I'm sure there are more conventional ai programs which can accomplish this >same task, but my purpose here is to try to successfully simulate a network >of neurons and see if it can be applied to solve simple problems involving >learning/intelligence. If anyone has any other ideas for which I may test >it, I'd be happy to hear from you. Here is a reposting of some references from several months ago. * For beginners, I especially recommend the articles marked with an asterisk. Stephen Grossberg has been publishing on neural networks for 20 years. He pays special attention to designing adaptive neural networks that are self-organizing and mathematically stable. Some good recent references are: (Category Learning):---------- * G.A. Carpenter and S. Grossberg, "A Massively Parallel Architecture for a Self-Organizing Neural Pattern Recognition Machine." Computer Vision, Graphics, and Image Processing. In Press. G.A. Carpenter and S. Grossberg, "Neural Dynamics of Category Learning and Recognition: Structural Invariants, Reinforcement, and Evoked Potentials." In M.L. Commons, S.M. Kosslyn, and R.J. Herrnstein (Eds), Pattern Recognition in Animals, People, and Machines. Hillsdale, NJ: Erlbaum, 1986. (Learning):------------------- * S. Grossberg, "How Does a Brain Build a Cognitive Code?" Psychological Review, 1980 (87), p.1-51. * S. Grossberg, "Processing of Expected and Unexpected Events During Conditioning and Attention." Psychological Review, 1982 (89), p.529-572. S. Grossberg, Studies of Mind and Brain: Neural Principles of Learning, Perception, Development, Cognition, and Motor Control. Boston: Reidel Press, 1982. S. Grossberg, "Adaptive Pattern Classification and Universal Recoding: I. Parallel Development and Coding of Neural Feature Detectors." Biological Cybernetics, 1976 (23), p.121-134. S. Grossberg, The Adaptive Brain: I. Learning, Reinforcement, Motivation, and Rhythm. Amsterdam: North Holland, 1986. * M.A. Cohen and S. Grossberg, "Masking Fields: A Massively Parallel Neural Architecture for Learning, Recognizing, and Predicting Multiple Groupings of Patterned Data." Applied Optics, In press, 1986. (Vision):--------------------- S. Grossberg, The Adaptive Brain: II. Vision, Speech, Language, and Motor Control. Amsterdam: North Holland, 1986. S. Grossberg and E. Mingolla, "Neural Dynamics of Perceptual Grouping: Textures, Boundaries, and Emergent Segmentations." Perception & Psychophysics, 1985 (38), p.141-171. S. Grossberg and E. Mingolla, "Neural Dynamics of Form Perception: Boundary Completion, Illusory Figures, and Neon Color Spreading." Psychological Review, 1985 (92), 173-211. (Motor Control):--------------- S. Grossberg and M. Kuperstein, Neural Dynamics of Adaptive Sensory-Motor Control: Ballistic Eye Movements. Amsterdam: North-Holland, 1985. If anyone's interested, I can supply more references. --Jonathan Marshall harvard!bu-cs!jam ------------------------------ Date: 21 Oct 86 17:22:54 GMT From: arizona!megaron!wendt@ucbvax.Berkeley.EDU Subject: Re: simulating a neural network Anyone interested in neural modelling should know about the Parallel Distributed Processing pair of books from MIT Press. They're expensive (around $60 for the pair) but very good and quite recent. A quote: Relaxation is the dominant mode of computation.
Although there is no specific piece of neuroscience which compels the view that brain-style computation involves relaxation, all of the features we have just discussed have led us to believe that the primary mode of computation in the brain is best understood as a kind of relaxation system in which the computation proceeds by iteratively seeking to satisfy a large number of weak constraints. Thus, rather than playing the role of wires in an electric circuit, we see the connections as representing constraints on the co-occurrence of pairs of units. The system should be thought of more as "settling into a solution" than "calculating a solution". Again, this is an important perspective change which comes out of an interaction of our understanding of how the brain must work and what kinds of processes seem to be required to account for desired behavior. (Rumelhart & McClelland, Chapter 4) Alan Wendt U of Arizona ------------------------------ Date: 22 Oct 86 13:58:12 GMT From: uwmcsd1!uwmeecs!litow@unix.macc.wisc.edu (Dr. B. Litow) Subject: cellular automata Ed. Stephen Wolfram. Contains many papers by Wolfram. Available from Taylor & Francis, Intl. Publications Service, 242 Cherry St., Philadelphia 19106-1906 ------------------------------ Date: 19 Oct 86 23:10:13 GMT From: jbn@glacier.stanford.edu (John B. Nagle) Subject: A pure conjecture on the nature of the self Conjecture: the "sense of identity" comes from the same mechanism that makes tickling yourself ineffective. This is not a frivolous comment. The reflexes behind tickling seem to be connected to something that has a good way of deciding what is self and what isn't. There are repeatable phenomena here that can be experimented with. This may be a point of entry for work on some fundamental questions. John Nagle [I apologize for having sent out a reply to this message before putting this one in the digest. -- KIL] ------------------------------ Date: 21 Oct 86 18:19:52 GMT From: cybvax0!frog!tdh@eddie.mit.edu (T. Dave Hudson) Subject: Re: A pure conjecture on the nature of the self > Conjecture: the "sense of identity" comes from the same > mechanism that makes tickling yourself ineffective. Suppose that tickling yourself may be ineffective because of your mental focus. Are you primarily focusing on the sensations in the hand that is doing the tickling, not focusing, focusing on the idea that it will of course be ineffective, or focusing on the sensations created at the tickled site? One of my major impediments to learning athletics was that I had no understanding of what it meant when those rare competent teachers told me to feel the prescribed motion. It requires an act of focusing on the sensations in the different parts of your body as you move. Until you become aware of the sensations, you can't do anything with them. (Once you're aware of them, you have to learn how to deal with a multitude of them, but that's a different issue.) Try two experiments. 1) Walk forward, and concentrate on how your back feels. Stop, then place your hand so that the palm and fingertips cover your lower back at the near side of the spine. Now walk forward again. Notice anything new? 2) Run one hand's index fingertip very lightly over the back of the other hand, so lightly that you can barely feel anything on the back of the other hand, so lightly that maybe you're just touching the hairs on that hand and not the skin. Close your eyes and try to sense where on the back of that hand the fingertip is as it moves.
Now do you feel a tickling sensation? David Hudson ------------------------------ Date: 16 Oct 86 07:48:00 EDT From: "CUGINI, JOHN" Subject: Reviews [Forwarded from the Prolog Digest by Laws@SRI-STRIPE.] I'm in the middle of reading the Bratko book, and I would give it a very high rating. The concepts are explained very clearly, there are lots of good examples, and the applications covered are of high interest. Part I (chapters 1-8) is about Prolog per se. Part II (chapters 9-16) shows how to implement many standard AI techniques: chap. 9 - Operations on Data Structures chap. 10 - Advanced Tree Representations chap. 11 - Basic Problem-solving Strategies chap. 12 - Best-first: a heuristic search principle chap. 13 - Problem reduction and AND/OR graphs chap. 14 - Expert Systems chap. 15 - Game Playing chap. 16 - Pattern-directed Programming Part I has 188 pages, part II has 214. You didn't mention Programming in Prolog by Clocksin & Mellish - this is also very good, and covers some things that Bratko doesn't (it's more concerned with non-AI applications), but all in all, I slightly prefer Bratko's book. -- John Cugini ------------------------------ Date: Mon, 6 Oct 86 15:47:15 MDT From: Lauren Smith Subject: Bibliography on its way [Forwarded from the Prolog Digest by Laws@SRI-STRIPE.] I have just sent out the latest update of the Declarative Languages bibliography. Please notify the appropriate people at your site - especially if there were several requests from your site, and you became the de facto distributor. Again, the bibliography is 24 files. This is the index for the files, so you can verify that you received everything. ABDA76a-AZAR85a BACK74a-BYTE85a CAMP84a-CURR72a DA83a-DYBJ83b EGAN79a-EXET86a FAGE83a-FUTO85a GABB84a-GUZM81a HALI84a-HWAN84a ICOT84a-IYEN84a JACOB86a-JULI82a KAHN77a-KUSA84b LAHT80a-LPG86a MACQ84a-MYCR84a NAGAI84a-NUTE85a OHSU85a-OZKA85a PAPAD86a-PYKA85a QUI60 RADE84a-RYDE85a SAIN84a-SZER82b TAGU84a-TURN85b UCHI82a-UNGA84 VALI85-VUIL74a WADA86a-WORL85a YAGH83a-YU84a There has been a lot of interest regarding the formatting of the bibliography for various types of word processing systems. The biblio is maintained (in the UK) in a raw format, hence that is the way that I am distributing it. Since everyone uses different systems, it seems easiest to collect a group of macros that convert RAW FORMAT ===> FAVORITE BIBLIO FORMAT and distribute them. So, if you have a macro that does the conversion please advertise it on the net or better yet, let me know so I can let everyone else know about it. If you have any additions to make, please send them to: -- Andy Cheese at abc%computer-science.nottingham.ac.uk@cs.ucl.ac.uk or Lauren Smith at ls@lanl.arpa Thank you for your interest. -- Lauren Smith [ I will be including one file per issue of the Digest until all twenty four files are distributed starting with the next issue. -ed ] [AIList will not be carrying this bibliography. 
-- KIL] ------------------------------ End of AIList Digest ******************** From csnet_gateway Sun Oct 26 01:10:29 1986 Date: Sun, 26 Oct 86 01:10:16 est From: csnet_gateway (LAWS@SRI-STRIPE.ARPA) To: ailist@sri-stripe.arpa Subject: AIList Digest V4 #233 Status: R AIList Digest Friday, 24 Oct 1986 Volume 4 : Issue 233 Today's Topics: Bibliography - ai.bib41C ---------------------------------------------------------------------- Date: WED, 20 apr 86 17:02:23 CDT From: leff%smu@csnet-relay Subject: defs for ai.bib41C D MAG88 Robotersysteme\ %V 2\ %N 3\ %D 1986 D MAG89 Journal of Robotic Systems\ %V 3\ %N 3\ %D Autumn 1986 D MAG90 Pattern Recognition\ %V 19\ %N 5\ %D 1986 D MAG91 International Journal of Production Research\ %V 24\ %N 5\ %D SEP-OCT 1986 D MAG92 Fuzzy Sets and Systems\ %V 18\ %N 3\ %D APR 1986 D BOOK56 Advances in Automation and Robotics\ %V 1\ %I JAI Press\ %D 1985\ %C Greenwich, Connecticut D MAG93 COMPINT 85\ %D 1985 D MAG94 The Second Conference on Artificial Intelligence Applications\ %D 1985 ------------------------------ Date: WED, 20 apr 86 17:02:23 CDT From: leff%smu@csnet-relay Subject: ai.bib41C %A D. Partridge %T Artificial Intelligence Applications in the Future of Software Engineering %I John Wiley and Sons %C New York %D 1986 %K AT15 AA08 AI01 %X ISBN 0-20315-3 $34.95 241 pages %A Richard Forsyth %A Roy Rada %T Machine Learning Applications in Expert Systems and Information Retrieval %I John Wiley and Sons %C New York %D 1986 %K AT15 AA15 AI01 AI04 %X ISBN 0-20309-9 Cloth $49.95 , ISBN 0-20318-18 $24.95 paper 277 pages %A W. John Hutchins %T Machine Translation Past, Present and Future %I John Wiley and Sons %C New York %D 1986 %K AT15 AI02 %X 380 pages 0-2031307 1986 $49.95 %A Karamjit S. Gill %T Artificial Intelligence for Society %I John Wiley and Sons %C New York %D 1986 %K O05 AT15 %X 280 pages 1-90930-0 1986 $34.95 %A Donald Michie %T On Machine Intelligence %I John Wiley and Sons %C New York %D 1986 %K AA17 AI07 AI08 AI01 AT15 %X 260 pages 0-20335-8 1986 $29.95 %A Chris Naylor %T Building Your Own Expert System %I John Wiley and Sons %C New York %D 1985 %K AI01 AT15 %X 249 pages 0-20172-X 1985 $15.95 paper %A Peter Bishop %T Fifth Generation Computers %I John Wiley and Sons %C New York %D 1986 %K AT15 GA01 GA02 GA03 %X 166 pages 0-20269-6 1986 $29.95 %A Jerry M. Rosenberg %T Dictionary of Artificial Intelligence and Robotics %I John Wiley and Sons %C New York %D 1986 %K AT15 AI16 AI07 %X 225 pages 1-08982-0 $24.95 cloth; 1-84981-2 $14.95 paper %A Peter S. Sell %T Expert Systems %I John Wiley and Sons %C New York %D 1985 %K AT15 AI01 %X 99 pages 0-20200-9 $14.95 paper %A G. L. Simons %T Expert Systems and Micros %I John Wiley and Sons %C New York %D 1986 %K AT15 H01 AI01 %X 247 pages 0-20277-7 $19.95 paper %A G. L. Simons %T Is Man a Robot? %I John Wiley and Sons %C New York %D 1986 %K AT15 AI16 AI08 %X 200 pages 1-91106-2 $18.95 paper %A G. L. Simons %T Introducing Artificial Intelligence %I John Wiley and Sons %C New York %D 1985 %K AT08 AT15 AI16 %X 281 pages 0-20166-5 $19.95 paper "completely non-technical" %A Yoshiaki Shirai %A Jun-Ichi Tsujii %T Artificial Intelligence Concepts, Techniques and Applications %I John Wiley and Sons %C New York %D 1985 %K AT15 GA01 AI16 %X 177 pages 1-90581-X $19.95 "Drawn from the Fifth Generation Computer Program" %A Luc Steels %A John A. 
Campbell %T Progress in Artificial Intelligence %I John Wiley and Sons %C New York %D 1985 %K AT15 AI16 GA03 %X "Drawn from the European Conference on AI" %A Tohru Moto-Oka %A Masaru Kitsuregawa %T The Fifth Generation Computer The Japanese Challenge %I John Wiley and Sons %C New York %D 1985 %K GA01 AT15 %X 122 pages 1-90739-1 1985 $17.95 paper %A Leonard Uhr %T Parallel Multicomputers and Artificial Intelligence %I John Wiley and Sons %C New York %D 1986 %K AT15 H03 %X 150 pages 1-84979-0 $32.95 %A J. E. Hayes %A Donald Michie %T Intelligent Systems The Unprecedented Opportunity %I John Wiley and Sons %C New York %D 1984 %K AT15 AI07 AA10 AA07 %X 206 pages 0-20139-8 1984 $19.95 paper %A M. Yazdani %A N. Narayanan %T Artificial Intelligence: Human Effects %I John Wiley and Sons %C New York %D 1985 %K AT15 O05 AA07 AA01 %X 318 pages 0-20239-4 1985 $27.95 %A Richard Ennals %T Artificial Intelligence: Approaches to Logical Reasoning and Historical Research %I John Wiley and Sons %C New York %D 1985 %K AT15 AA11 AA25 AA07 T02 %X 172 pages 0-20181-9 1985 $29.95 %A S. Torrance %T The Mind and The Machine %I John Wiley and Sons %C New York %D 1984 %K AT15 AI16 AA08 %X 213 pages 0-20104-5 1984 $31.95 %A Stuart C. Shapiro %T Encyclopedia of Artificial Intelligence %I John Wiley and Sons %C New York %D 1987 %K AT15 AI16 %X 1500 pages 8.5" by 11" in two volumes 1-80748-7 due out May 1, 1987 $149.95 until Sep 1, 1987 and $175.00 thereafter %A Stephen H. Kaisler %T Interlisp The Language and its Usage %I John Wiley and Sons %C New York %D 1986 %K T01 AT15 %X 1,144 pages 1-81644-2 1986 $49.95 %A Christian Queinnec %T Lisp %I John Wiley and Sons %C New York %D 1985 %K T01 AT15 %X 156 pages 0-20226-2 1985 $15.95 paper (translated by Tracy Ann Lewis) %A J. A. Campbell %T Implementations of Prolog %I John Wiley and Sons %C New York %D 1984 %K T02 T01 AT15 %X 391 pages 0-20045-6 1984 $32.95 paper %A W. D. Burnham %A A. R. Hall %T Prolog Programming and Applications %I John Wiley and Sons %C New York %D 1985 %K T02 AT15 %X 114 pages 0-20263-7 1985 $16.95 paper %A Deyi Li %T A Prolog Database System %I John Wiley and Sons %C New York %D 1984 %K T02 AA09 AT15 %X 207 pages 1-90429-5 1984 %A Rosalind Barrett %A Allan Ramsay %A Aaron Sloman %T Pop-11 A Practical Language for Artificial Intelligence %I John Wiley and Sons %C New York %D 1985 %K AT15 AI01 AI05 AI06 %X 232 pages 0-20237-8 1985 $19.95 %A Hugh de Saram %T Programming in Micro-Prolog %I John Wiley and Sons %C New York %D 1985 %K AT15 T02 %X 166 pages 0-20218-1 1985 $21.95 paper %A Brian Sawyer %A Dennis Foster %T Programming Expert Systems in Pascal %I John Wiley and Sons %C New York %D 1986 %K AT15 AI01 H01 %X 200 pages 1-84267-2 1986 $19.95 paper %A Brian Sawyer %A Dennis Foster %T Programming Expert Systems in Modula-2 %I John Wiley and Sons %C New York %D 1986 %K AT15 AI01 %X 224 pages 1-85036-5 1986 $24.95 paper %A K. Sparck-Jones %A Y. Wilks %T Automatic Natural Language Parsing %I John Wiley and Sons %C New York %D 1985 %K AT15 AI02 %X 208 pages 0-20165-7 1985 $24.95 paper %A C. S. Mellish %T Computer Interpretation of Natural Language Descriptions %I John Wiley and Sons %C New York %D 1985 %K AT15 AI02 %X 182 pages 0-20219-x 1985 $24.95 %A M.
Wallace %T Communicating with Databases in Natural Language %I John Wiley and Sons %C New York %D 1985 %K AT15 AA09 AI02 %X 170 pages 0-20105-3 1984 $31.95 %A Mike James %T Classification Algorithms %I John Wiley and Sons %C New York %D 1986 %K AT15 O06 %X 209 pages 1-84799-2 1986 $34.95 %A Satosi Watanabi %T Pattern Recognition: Human and Mechanical %I John Wiley and Sons %C New York %D 1985 %K AT15 AI06 AI08 %X 352 pages 1-80815-6 1985 $44.95 .br "Shows that all the known pattern recognition algorithms can be derived from the principle of minimum entropy." %A Donald A. Norman %A Stephen W. Draper %T User Centered System Design %I Lawrence Erlbaum Associates Inc. %C Hillsdale, NJ %K AI02 %D 1986 %X 1986 544 pages 0-89859-872-9 paper prepaid $19.95 %A Robert J. Baron %T The Cerebral Computer %I Lawrence Erlbaum Associates Inc. %C Hillsdale, NJ %K AT15 AI08 %T Portrait: DFG Special Research Topic "Artificial Intelligence" %J Die Umschau %V 86 %N 9 %D SEP 1986 %K AI16 AT08 %X German, Abstract in English and German %A P. Freyberger %A P. Kampmann %A G. Schmidt %T A Knowledged [sic] Based Navigation Method for Autonomous Mobile Robots (german) %J MAG88 %P 149-162 %K AI07 AA19 %X German %A P. M. Frank %A N. Becker %T Robot Activation with A Directed, fixing and Object-Extracting Camera for Data Reduction %J MAG88 %P 188 %K AI07 AI06 %X German %A William J. Palm %A Ramiro Liscano %T Integrated Design of an End Effector for a Visual Servoing Algorithm %J MAG89 %P 221-236 %K AI07 AI06 %A K. Cheng %A M. Idesawa %T A Simplified Interpolation and Conversion Method of Contour Surface Model to Mesh Model %J MAG89 %P 249-258 %K AI07 AI06 %A Genichiro Kinoshita %A Masanori Idesawa %A Shigeo Naomi %T Robotic Range Sensor with Projection of Bright Ring Pattern %J MAG89 %P 249-258 %K AI07 AI06 %A M. G. Thomason %A E. Granum %A R. E. Blake %T Experiments in Dynamic Programming Inference of Markov Networks with Strings Representing Speech Data %J MAG90 %P 343-352 %K AI05 %A M. Juhola %T A Syntactic Method for Analysis of Saccadic Eye Movements %J MAG90 %P 353-360 %K AA10 %A H. D. Cheng %A K. S. Fu %T Algorithm Partition and Parallel Recognition of General Context-Free Languages using Fixed-size VLSI Architecture %J MAG90 %P 361-372 %K AI06 H03 O06 %A E. S. Baugher %A A. Rosenfeld %T Boundary Localization in an Image Pyramid %J MAG90 %P 373-396 %K AI06 H03 %A E. A. Parrish %A W. E. McDonald, Jr %T An Adaptive Pattern Analysis System for Isolating EMI %J MAG90 %P 397-406 %K AA04 AI06 %A E. Tanaka %A T. Toyama %A S. Kawai %T High Speed Error Correction of Phoneme Sequences %J MAG90 %P 407-412 %K AI05 %A K. Jajuga %T Bayes Classification Rule for the General Discrete Case %J MAG90 %P 413-416 %K O04 %A N. N. Abdelmalek %T Noise Filtering in Digital Images and Approximation Theory %J MAG90 %P 417 %K AI06 %A S. R. T. Kumara %A S. Hoshi %A R. L. Kashyap %A C. L. Moodie %A T. C. Chang %T Expert Systems in [sic] %J MAG91 %P 1107-1126 %K AI01 %A H. Lipkin %A L. E. Torfason %A J. Duffy %T Efficient Motion Planning for a Planar Manipulator Based on Dexterity and Workspace Geometry %J MAG91 %P 1235 %K AI06 AI09 %A R. R. Yager %T A Characterization of the Extension Principle %J MAG92 %P 205-218 %K O04 %A J. F. Baldwin %T Automated Fuzzy and Probabilistic Inference %J MAG92 %P 219-236 %K O04 AI01 %A A. F. Blishun %T Fuzzy Adaptive Learning Model of Decision-Making Process %J MAG92 %P 273-282 %K O04 AI13 AI04 %A A. O. Esobgue %T Optimal Clustering of Fuzzy Data via Fuzzy Dynamic Programming %J MAG92 %P 283-298 %K O04 O06 %A J.
Kacprzyk %T Towards 'Human-Consistent' Multistage Decision Making and Control Models Using Fuzzy Sets and Fuzzy Logic %J MAG92 %P 299-314 %K O04 AI08 AI13 %A B. R. Gaines %A M. L. G. Shaw %T Induction of Inference Rules for Expert Systems %J MAG92 %P 315-328 %K AI04 AI01 O04 %A M. Sugeno %A G. T. Kang %T Fuzzy Modeling and Control of Multilayer Incinerator %J MAG92 %P 329-346 %K O04 AA20 %A Peizhuang Wang %A Xihu Liu %A E. Sanchez %T Set-valued Statistics and its Application to Earthquake Engineering %J MAG92 %P 347 %K O04 AA05 %A J. L. Mundy %T Robotic Vision %B BOOK56 %P 141-208 %K AI06 AI07 %A R. Bajcsy %T Shape from Touch %B BOOK56 %P 209-258 %K AI06 AI07 %A T. M. Husband %T Education and Training in Robotics %I IFS Publications Ltd %C Bedford %D 1986 %K AI06 AT15 AT18 %X multiple articles, 315 pages $54.00 ISBN 0-948507-04-7 %A N. Y. Foo %T Dewey Indexing of Prolog Traces %J The Computer Journal %V 29 %N 1 %D FEB 1986 %P 17-19 %K T02 %A M. E. Dauhe-Witherspoon %A G. Muehllehner %T An Iterative Image Space Reconstruction Algorithm Suitable for Volume ECT %J IEEE Trans on Med. Imaging %V 5 %N 2 %D JUN 1986 %P 61-66 %K AA01 AI06 %A B. Zavidovique %A V. Serfaty-Dutron %T Programming Facilities in Image Processing %J MAG93 %P 804-806 %K AI06 %A J. R. Ward %A B. Blesser %T Methods for Using Interactive Hand-print Character Recognition for Computer Input %J MAG93 %P 798-803 %K AI06 %A Y. Tian-Shun %A T. Yong-Lin %T The Conceptual Model for Chinese Language Understanding and its Man-Machine Paraphrase %J MAG93 %P 795-797 %K AI02 %A G. Sabah %A A. Vilnat %T A Question Answering System which Tries to Respect Conversational Rules %J MAG93 %P 781-785 %K AI02 %A J. Rouat %A J. P. Adoul %T Impact of Vector Quantization for Connected Speech Recognition Systems %J MAG93 %P 778-780 %K AI05 %A G. G. Pieroni %A O. G. Johnson %T A Methodology for Visual Recognition of Waves in a Wave Field %J MAG93 %P 774-77 %K AI06 %A G. J. McMillan %T Vimad: A Voice Interactive Maintenance Aiding Device %J MAG93 %P 768-771 %K AI05 AA21 %A D. Laurendeau %A D. Poussart %T A Segmentation Algorithm for Extracting 3D Edges from Range Data %J MAG93 %P 765-767 %K AI06 %A F. Kimura %A T. Sata %A K. Kikai %T A Fast Visual Recognition System of Mechanical Parts by Use of Three Dimensional Model %J MAG93 %P 755-759 %K AI06 AA05 AA26 %A M. L. G. Shaw %A B. R. Gaines %T The Infrastructure of Fifth Generation Computing %J MAG93 %P 747-751 %K GA01 AT19 %A W. Doster %A R. Oed %T On-line Script Recognition - A Userfriendly Man Machine Interface %J MAG93 %P 741-743 %K AI06 AA15 %A R. Descout %T Applications of Speech Technology A Review of the French Experience %J MAG93 %P 735-740 %K AI05 GA03 %A Y. Ariki %A K. Wakimoto %A H. Shieh %A T. Sakai %T Automatic Transformation of Drawing Images Based on Geometrical Structures %J MAG93 %P 719-723 %K AI06 AA05 %A Z. X. Yang %T On Intelligent Tutoring System for Natural Language %J MAG93 %P 715-718 %K AI02 AA07 %A L. Xu %A J. Chen %T Autobase: A System which Automatically Establishes the Geometry Knowledge Base %J MAG93 %P 708-714 %K AI01 AA13 %A G. Pask %T Applications of Machine Intelligence to Education, Part I Conversation System %J MAG93 %P 682 %K AI02 AA07 %A Y. H. Jea %A W. H. Wang %T A Unified Knowledge Representation Approach in Designing an Intelligent tutor %J MAG93 %P 655-657 %K AA07 AI16 %A I. M. Begg %T An Intelligent Authoring System %J MAG93 %P 611-613 %K AA07 %A J. C. Perex %A R.
Castanet %T Intelligent Robot Simulation System: The Vision Guided Robot Concept %J MAG93 %P 489-492 %K AI06 AI07 %A B. Mack %A M. M. Bayoumi %T An Ultrasonic Obstacle Avoidance System for a Unimate Puma 550 Robot %J MAG93 %P 481-483 %K AI06 AI07 %A R. A. Browse %A S. J. Lederman %T Feature-Based Robotic Tactile Perception %J MAG93 %P 455-458 %K AI06 AI07 %A R. S. Wall %T Constrained Example Generation for VLSI Design %J MAG93 %P 451-454 %K AA04 %A L. P. Demers %A C. Roy %A E. Cerney %A J. Gecsei %T Integration of VLSI Symbolic Design Tools %J MAG93 %P 308-312 %K AA04 %A R. Wilson %T From Signals to Symbols - The Inference Structure of Perception %J MAG93 %P 221-225 %K AI08 AI06 %A C. Hernandex %A A. Alonso %A J. E. Arias %T Computerized Monitoring as an Aid to Obstetrical Decision Making %J MAG93 %P 203-206 %K AA01 %A M. M. Gupta %T Approximate Reasoning in the Evolution of Next Generation of Expert Systems %J MAG93 %P 201-202 %K O04 AI01 %A W. Wei-Tsong %A P. Wei-Min %T An Effective Searching Approach to Processing Broken Lines in an Image %J MAG93 %P 198-200 %K AI06 %A J. F. Sowa %T Doing Logic on Graphs %J MAG93 %P 188 %K AI16 %A P. T. Cox %A T. Pietrzykowski %T Lograph: A Graphical Logic Programming Language %J MAG93 %P 145-151 %K AI10 %A D. A. Thomas %A W. R. Lalonde %T ACTRA The Design of an Industrial Fifth Generation Smalltalk %J MAG93 %P 138-140 %A Y. Wada %A Y. Kobayashi %A T. Mitsuta %A T. Kiguchi %T A Knowledge Based Approach to Automated Pipe-Route Planning in Three-Dimensional Plant Layout Design %J MAG93 %P 96-102 %A N. P. Suh %A S. H. Kim %T On an Expert System for Design and Manufacturing %J MAG93 %P 89-95 %K AA26 AA05 %A C. Y. Suen %A A. Panoutsopoulos %T Towards a Multi-lingual Character Generator %J MAG93 %P 86-88 %K AI02 %A K. Shirai %A Y. Nagai %A T. Takezawa %T An Expert System to Design Digital Signal Processors %J MAG93 %P 83-85 %K AI01 AA04 %A D. Sriram %A R. Banares-Alcantara %A V. Venkatasubramnian %A A. Westerberg %A M. Rychener %T Knowledge-Based Expert Systems for Chemical Engineering %J MAG93 %P 79-82 %K AI01 AA05 %A P. Savard %A G. Bonneau %A G. Tremblay %A R. Cardinal %A A. R. Leblanc %A P. Page %A R. A. Nadeau %T Interactive Electrophysiologic Mapping System for On-Line Analysis of Cardiac Activation Sequences %J MAG93 %P 76-78 %K AA01 %A R. Bisiani %T VLSI Custom Architectures for Artificial Intelligence %J MAG93 %P 27-31 %A L. H. Bouchard %A L. Emirkanian %T A Formal System for the Relative Clauses in French and its Uses in CAL %J MAG93 %P 32-34 %K AI02 AA07 %A G. Bruno %A A. Elia %A P. Laface %T A Rule-Based System for Production Scheduling %J MAG93 %P 35-39 %K AA05 AI01 %A J. F. Cloarec %A J. P. Cudelou %A J. Collet %T Modeling Switching System Specifications as a Knowledge Base %J MAG93 %P 40-44 %K AA04 %A B. R. Gaines %A M. L. G. Shaw %T Knowledge Engineering for Expert Systems %J MAG93 %P 45-49 %K AI01 %A B. Hardy %A P. Bosc %A A. Chauffaut %T A Design Environment for Dialogue Oriented Applications %J MAG93 %P 53-55 %A P. Haren %A M. Montalban %T Prototypical Objects for CAD Expert Systems %J MAG93 %P 53-55 %K AA05 AI01 AI16 %A S. J. Mrchev %T A Unit Imitating the Functions of the Human Operative Memory %J MAG93 %P 56-67 %K AI08 %A B. Phillips %A S. L. Messick %A M. J. Freiling %A J. H. Alexander %T INKA: The INGLISH Knowledge Acquisition Interface for Electronic Instrument Troubleshooting Systems %J MAG94 %P 676-682 %K AA04 AI02 AA21 %A D. V. Zelinski %A R. N.
Cronk %T The ES/AG Environment-Its Development and Use in Expert System Applications %J MAG94 %P 671-675 %K AI01 T03 %A K. H. Wong %A F. Fallside %T Dynamic Programming in the Recognition of Connected Handwritten Script %J MAG94 %P 666-670 %K AI06 %A V. R. Waldron %T Process Tracing as a Method for Initial Knowledge Acquisition %J MAG94 %P 661-665 %K AI01 AI16 %A H. Van Dyke Parunak %A B. W. Irish %A J. Kindrick %A P. W. Lozo %T Fractal Actors for Distributed Manufacturing Control %J MAG94 %P 653-660 %K H03 AA26 %A W. K. Utt %T Directed Search with Feedback %J MAG94 %P 647-652 %K AI03 %A J. T. Tou %A C. L. Huang %T Recognition of 3-D Objects Via Spatial Understanding of 2-D Images %J MAG94 %P 641-646 %K AI06 %A P. Snow %T Tatting Inference Nets with Bayes Theorem %J MAG94 %P 635-640 %K AI16 O04 %A Y. Shoham %T Reasoning About Causation in Knowledge-Based Systems %J MAG94 %P 629-634 %K AI16 %A H. C. Shen %A G. F. P. Signarowski %T A Knowledge Representation for Roving Robots %J MAG94 %P 629-634 %K AI07 AI16 AA19 %A D. Schwartz %T One Cornerstone in the Mathematical Foundations for a System of Fuzzy-Logic Programming %J MAG94 %P 618-620 %K AI10 O04 %A P. R. Schaefer %A I. H. Bozma %A R. D. Beer %T Extended Production Rules for Validity Maintenance %J MAG94 %P 613-617 %K AI01 AI15 %A M. C. Rowe %A R. Keener %A A. Veitch %A R. B. Lantz %T E. T. Expert Technician/Experience Trapper %J MAG94 %P 607-612 %K AA04 AA21 %A C. E. Riese %A S. M. Zubrick %T Using Rule Induction to Combine Declarative and Procedural Knowledge Representations %J MAG94 %P 603-606 %K AI16 %A D. S. Prerau %A A. S. Gunderson %A R. E. Reinke %A S. K. Goyal %T The COMPASS Expert System: Verification, Technology Transfer, and Expansion %J MAG94 %P 597-602 %K AI01 %A B. Pinkowski %T A Lisp-Based System for Generating Diagnostic Keys %J MAG94 %P 592-596 %K T01 AA21 %A S. R. Mukherjee %A M. Sloan %T Positional Representation of English Words %J MAG94 %P 587-591 %K AI02 %A J. H. Martin %T Knowledge Acquisition Through Natural Language Dialogue %J MAG94 %P 582-586 %K AI01 AI02 %A D. M. Mark %T Finding Simple Routes; "Ease of Description" as an Objective Function in Automated Route Selection %J MAG94 %P 577-581 %A S. Mahalingam %A D. D. Sharma %T WELDEX-An Expert System for Nondestructive Testing of Welds %J MAG94 %P 572-576 %K AI01 AA05 AA21 %A J. Liebowitz %T Evaluation of Expert Systems: An Approach and Case Study %J MAG94 %P 564-571 %K AI01 %A S. J. Laskowski %A H. J. Antonisse %A R. P. Bonasso %T Analyst II: A Knowledge-Based Intelligence Support System %J MAG94 %P 558-563 %K AA18 %A Ronald Baecker %A William Buxton %T Readings in Human-Computer Interaction: A Multidisciplinary Approach %I Morgan Kaufmann %C Los Altos, California %D 1986 %X 650 pages ISBN 0-934613-24-9 paperbound $26.95 %T Proceedings: Graphics Interface '86/Vision Interface '86 %I Morgan Kaufmann %C Los Altos, California %D 1986 %K AI06 %X 402 pages paper bound ISSN 0713-5424 $35.00 %A Peter Politakis %T Empirical Analysis for Expert Systems %I Morgan Kaufmann %C Los Altos, California %D 1985 %K AI01 AA01 rheumatology %X 187 pages paperbound ISBN 0-273-08663-4 $22.95 .br Describes SEEK which was used to develop an expert system for rheumatology %A David Brown %A B. Chandrasekaran %T Design Problem Solving: Knowledge Structures and Control Strategies %I Morgan Kaufmann %C Los Altos, California %D 1986 %K AA05 %X 200 pages paperbound ISBN 0-934613-07-9 $22.95 %A W.
Lewis Johnson %T Intention-Based Diagnosis of Errors in Novice Programs %I Morgan Kaufmann %C Los Altos, California %D 1986 %K AA07 AA08 Proust %X 1986, 333 pages, ISBN 0-934613-19-2 %A Etienne Wenger %T Artificial Intelligence and Tutoring Systems: Computational Approaches to the Communication of Knowledge %I Morgan Kaufmann %C Los Altos, California %D Winter 1986-1987 %K AA07 AI02 %X 350 pages, hardbound, ISBN 0-934613-26-5 %A John Kender %T Shape From Texture %I Morgan Kaufmann %C Los Altos, California %D 1986 %K AI06 %X paperbound, ISBN 0-934613-05-2 $22.95 %A David Touretzky %T The Mathematics of Inheritance Systems %I Morgan Kaufmann %C Los Altos, California %D 1986 %K AI16 %X paperbound, 220 pages, ISBN 0-934613-06-0 $22.95 %A Ernest Davis %T Representing and Acquiring Geographic Knowledge %I Morgan Kaufmann %C Los Altos, California %D 1986 %K AI16 %X paperbound, 240 pages, ISBN 0-934613-22-2 $22.95 ------------------------------ End of AIList Digest ******************** From csnet_gateway Sat Oct 25 02:04:24 1986 Date: Sat, 25 Oct 86 02:04:09 edt From: csnet_gateway (LAWS@SRI-STRIPE.ARPA) To: ailist@sri-stripe.arpa Subject: AIList Digest V4 #234 Status: R AIList Digest Friday, 24 Oct 1986 Volume 4 : Issue 234 Today's Topics: Philosophy - Intelligence, Understanding ---------------------------------------------------------------------- Date: Wed, 22 Oct 86 09:49 CDT From: From the desk of Daniel Paul <"NGSTL1::DANNY%ti-eg.csnet"@CSNET-RELAY.ARPA> Subject: AI vs. RI In the last AI digest (V4 #226), Daniel Simon writes: >One question you haven't addressed is the relationship between intelligence and >"human performance". Are the two synonymous? If so, why bother to make >artificial humans when making natural ones is so much easier (not to mention >more fun)? This is a question that has been bothering me for a while. When it is so much cheaper (and possible now, while true machine intelligence may be just a dream) why are we wasting time training machines when we could be training humans instead? The only reasons that I can see are that intelligent systems can be made small enough and light enough to sit on bombs. Are there any other reasons? Daniel Paul danny%ngstl1%ti-eg@csnet-relay ------------------------------ Date: 21 Oct 86 14:43:22 GMT From: ritcv!rocksvax!rocksanne!sunybcs!colonel@rochester.arpa (Col. G. L. Sicherman) Subject: Re: extended Turing test > It is not always clear which of the two components a sceptic is > worrying about. It's usually (ii), because who can quarrel with the > principle that a veridical model should have all of our performance > capacities? Did somebody call me? Anyway, it's misleading to propose that a veridical model of _our_ behavior ought to have our "performance capacities." Function and performance are relative to the user; in a human context they have no meaning, except to the extent that we can be said to "use" one another. This context is political rather than philosophical. I do not (yet) quarrel with the principle that the model ought to have our abilities. But to speak of "performance capacities" is to subtly distort the fundamental problem. We are not performers! POZZO: He used to dance the farandole, the fling, the brawl, the jig, the fandango and even the hornpipe. He capered. For joy. Now that's the best he can do. Do you know what he calls it? ESTRAGON: The Scapegoat's Agony. VLADIMIR: The Hard Stool. POZZO: The Net. He thinks he's entangled in a net. --S. Beckett, _Waiting for Godot_ -- Col. G. L.
Sicherman UU: ...{rocksvax|decvax}!sunybcs!colonel CS: colonel@buffalo-cs BI: colonel@sunybcs, csdsiche@sunyabvc ------------------------------ Date: 21 Oct 86 14:57:12 GMT From: ritcv!rocksvax!rocksanne!sunybcs!colonel@rochester.arpa (Col. G. L. Sicherman) Subject: Re: Searle & ducks > I. What is "understanding", or "ducking" the issue... > > If it looks like a duck, swims like a duck, and > quacks like a duck, then it is *called* a duck. If you cut it open and > find that the organs are something other than a duck's, *then* > maybe it shouldn't be called a duck. What it should be called becomes > open to discussion (maybe dinner). > > The same principle applies to "understanding". No, this principle applies only to "facts"--things that anybody can observe, in more or less the same way. If you say, "Look! A duck!" and everybody else says "I don't see anything," what are you to believe? If it feels like a bellyache, don't conclude that it's a bellyache. There may be an inner meaning to deal with! Appendicitis, gallstones, trichinosis, you've been poisoned, Cthulhu is due any minute ... This kind of argument always arises when technology develops new capabilities. Bell: "Listen! My machine can talk!" Epiktistes: "No, it can only reproduce the speech of somebody else." It's something new--we must argue over what to call it. Any name we give it will be metaphorical, invoking an analogy with human behavior, or something else. The bottom line is that the thing is not a man; no amount of simulation and dissimulation will change that. When people talk of Ghosts I don't mention the Apparition by which I am haunted, the Phantom that shadows me about the streets, the image or spectre, so familiar, so like myself, which lurks in the plate-glass of shop-windows, or leaps out of mirrors to waylay me. --L. P. Smith -- Col. G. L. Sicherman UU: ...{rocksvax|decvax}!sunybcs!colonel CS: colonel@buffalo-cs BI: colonel@sunybcs, csdsiche@sunyabvc ------------------------------ Date: 21 Oct 86 16:47:53 GMT From: ssc-vax!bcsaic!michaelm@BEAVER.CS.WASHINGTON.EDU Subject: Re: Searle, Turing, Symbols, Categories >Stevan Harnad writes: > ...The objective of the turing test is to judge whether the candidate > has a mind, not whether it is human or drinks motor oil. In a related vein, if I recall my history correctly, the Turing test has been applied several times in history. One occasion was the encounter between the New World and the Old. I believe there was considerable speculation on the part of certain European groups (fueled, one imagines, by economic motives) as to whether the American Indians had souls. The (Catholic) church ruled that they did, effectively putting an end to the controversy. The question of whether they had souls was the historical equivalent to the question of whether they had mind and/or intelligence, I suppose. I believe the Turing test was also applied to orangutans, although I don't recall the details (except that the orangutans flunked). As an interesting thought experiment, suppose a Turing test were done with a robot made to look like a human, and a human being who didn't speak English-- both over a CCTV, say, so you couldn't touch them to see which one was soft, etc. What would the robot have to do in order to pass itself off as human?
-- Mike Maxwell Boeing Advanced Technology Center ...uw-beaver!uw-june!bcsaic!michaelm ------------------------------ Date: 21 Oct 86 13:29:09 GMT From: mcvax!ukc!its63b!hwcs!aimmi!gilbert@seismo.css.gov (Gilbert Cockton) Subject: Re: Searle, AI, NLP, understanding, ducks In article <1919@well.UUCP> jjacobs@well.UUCP (Jeffrey Jacobs) writes: > >Most so-called "understanding" is the result of training and >education. We are taught "procedures" to follow to >arrive at a desired result/conclusion. Education is primarily a >matter of teaching "procedures", whether it be mathematics, chemistry >or creative writing. The *better* understood the field, the more "formal" >the procedures. Mathematics is very well understood, and >consists almost entirely of "formal procedures". This is contentious and smacks of modelling all learning procedures in terms of a single subject, i.e. mathematics. I can't think of a more horrible subject to model human understanding on, given the inhumanity of most mathematics! Someone with as little as a week of curriculum studies could flatten this assertion instantly. NO respectable curriculum theory holds that there is a single form of knowledge to which all bodies of human experience conform with decreasing measures of formal success. In the UK, it is official curriculum policy to initiate children into several `forms' of knowledge (mathematics, physical science, technology, humanities, aesthetics, religion and the other one). The degree to which "understanding" is accepted as procedural rote learning varies from discipline to discipline. Your unsupported equivalence between understanding and formality ("The *better* understood the field, the more "formal" the procedures") would not last long in the hands of social and religious studies, history, literature, craft/design and technology or art teachers. Despite advances in LISP and connection machines, no-one has yet formally modelled any of these areas to the satisfaction of their skilled practitioners. I find it strange that AI workers who would struggle to write a history/literature/design essay to the satisfaction of a recognised authority are naive enough to believe that they could program a machine to write one. Many educational psychologists and experienced teachers would completely reject your assertions on the ground that unpersonalised cookbook-style passively-internalised formalisms, far from being a sign of understanding, actually constitute the exact opposite of understanding. For me, the term `understanding' cannot be applied to anything that someone has learnt until they can act on this knowledge within the REAL world (no text book problems or ineffective design rituals), justify their action in terms of this knowledge and finally demonstrate integration of the new knowledge with their existing views of the world (put it in their own words). Finally, your passive view of understanding cannot explain creative thought. Granted, you say `Most so-called "understanding"', but I would challenge any view that creative thought is exceptional - the mark of great and noble scientists who cannot yet be modelled by LISP programs. On the contrary, much of our daily lives has to be highly creative because our poor understanding of the world forces us to creatively fill in the gaps left by our inadequate formal education. Show me one engineer who has ever designed something from start to finish 100% according to the book. Even where design codes exist, as in bridge-building, much is left to the imagination. 
No formal prescription of behaviour will ever fully constrain the way a human will act. In situations where it is meant to, such as the military, folk spend a lot of time pretending either to have done exactly what they were told or to have said exactly what they wanted to be done. Nearer to home, find me one computer programmer whose understanding is based 100% on formal procedures. Even the most formal programmers will be lucky to be in program-proving mode more than 60% of the time. So I take it that they don't `understand' what they're doing the other 40% of the time? Maybe, but if this is the case, then all we've revealed are differences in our dictionaries. Who gave you the formal procedure for ascribing meaning to the word "understanding"? >This leads to the obvious conclusion that humans do not >*understand* natural language very well. >The lack of understanding of natural languages is also empirically >demonstrable. Confusion about the meaning >of a person's words, intentions etc can be seen in every interaction ... over the net! Words MEAN something, and what they do mean is relative to the speakers and the situation. The lack of formal procedures has NOTHING to do with breakdowns in inter-subjective understanding. It is wholly due to inabilities to view and describe the world in terms other than one's own. -- Gilbert Cockton, Scottish HCI Centre, Ben Line Building, Edinburgh, EH1 1TN JANET: gilbert@uk.ac.hw.aimmi ARPA: gilbert%aimmi.hw.ac.uk@cs.ucl.ac.uk UUCP: ..!{backbone}!aimmi.hw.ac.uk!gilbert ------------------------------ End of AIList Digest ******************** From vtcs1::in% Mon Oct 27 01:51:43 1986 Date: Mon, 27 Oct 86 01:51:38 est From: vtcs1::in% (LAWS@SRI-STRIPE.ARPA) To: ailist@sri-stripe.arpa Subject: AIList Digest V4 #235 Status: R AIList Digest Monday, 27 Oct 1986 Volume 4 : Issue 235 Today's Topics: Queries - GURU & Knowledge-Based Management Tools & IF/PROLOG Memory Expansion, Binding - Integrated Inference Machines, Philosophy - The Analog/Digital Distinction, Bibliography - AI Lab Technical Reports ---------------------------------------------------------------------- Date: 23 Oct 86 02:21:50 GMT From: v6m%psuvm.bitnet@ucbvax.Berkeley.EDU Subject: OPINIONS REQUESTED ON GURU I'D APPRECIATE ANY COMMENTS THE GROUP HAS ON THE AI BASED PACKAGE. VINCENT MARCHIONNI V6M AT PSUVM VIA BITNET OR ACIG 1 VALLEY FORGE PLAZA VALLEY FORGE PA 19487 THANKS VINCE ------------------------------ Date: 24 Oct 1986 21:35-EDT From: cross@afit-ab Subject: Knowledge-based management tools query Are there any pc-based shells that integrate simple rule bases, data base systems (a la DBASE III), and spreadsheets? Does anyone have any experience with GURU? Any information would be appreciated. Will be starting some work here towards the design of an intelligent assistant for a program manager. Any pointers to papers or other references would also be appreciated. Thanks in advance. Steve Cross ------------------------------ Date: 24 Oct 86 16:09:05 GMT From: dual!islenet!humu!uhmanoa!aloha1!shee@ucbvax.Berkeley.EDU (shee) Subject: ifprolog. We have IF/PROLOG version 3.0 under the Unix operating system on an HP-9000 machine. We are looking for ways to increase the memory capacity of IF/PROLOG so that there is no stack overflow for our knowledge-based AI programs. ------------------------------ Date: Sun, 26 Oct 86 00:56:18 edt From: gatech!ldi@rayssd.ray.com (Louis P. DiPalma) Subject: Re: Address??? Address for Integrated Inference Machines is as follows: Integrated Inference Machines 1468 E.
Katella Avenue Anaheim, California 92805 Phone: (714) 978-6776 ------------------------------ Date: 23 Oct 86 17:20:00 GMT From: hp-pcd!orstcs!tgd@hplabs.hp.com (tgd) Subject: Re: The Analog/Digital Distinction: Soli Here is a rough try at defining the analog vs. digital distinction. In any representation, certain properties of the representational medium are exploited to carry information. Digital representations tend to exploit fewer properties of the medium. For example, in digital electronics, a 0 could be defined as anything below 0.2 volts and a 1 as anything above 4 volts. This is a simple distinction. An analog representation of a signal (e.g., in an audio amplifier) requires a much finer grain of distinctions--it exploits the continuity of voltage to represent, for example, the loudness of a sound. A related notion of digital and analog can be obtained by considering what kinds of transformations can be applied without losing information. Digital signals can generally be transformed in more ways--precisely because they do not exploit as many properties of the representational medium. Hence, if we add 0.1 volts to a digital 0 as defined above, the result will either still be 0 or else be undefined (and hence detectable). A digital 1 remains unchanged under addition of 0.1 volts. However, the analog signal would be changed under ANY addition of voltage. --Tom Dietterich ------------------------------ Date: Wed 22 Oct 86 09:38:53-CDT From: AI.CHRISSIE@R20.UTEXAS.EDU Subject: AI Lab Technical Reports [Forwarded from the UTexas-20 bboard by Laws@SRI-STRIPE.] Following is a listing of the reports available from the AI Lab. Reports are available from Chrissie in Taylor Hall 4.130D. An annotated list is also available upon request either on-line or hardcopy. TECHNICAL REPORT LISTING Artificial Intelligence Laboratory University of Texas at Austin Taylor Hall 2.124 Austin, Texas 78712 (512) 471-9562 September 1986 All reports furnished free of charge AI84-01 Artificial Intelligence Project at The University of Texas at Austin, Gordon S. Novak and Robert L. Causey, et al., 1984. AI84-02 Computing Discourse Conceptual Coherence: A Means to Contextual Reference Resolution, Ezat Karimi, August 1984. AI84-03 Translating Horn Clauses From English, Yeong-Ho Yu, August 1984. AI84-04 From Menus to Intentions in Man-Machine Dialogue, Robert F. Simmons, November 1984. AI84-05 A Text Knowledge Base for the AI Handbook, Robert F. Simmons, December 1983. AI85-02 Knowledge Based Contextual Reference Resolution for Text Understanding, Michael Kavanaugh Smith, January 1985. AI85-03 Learning Problem Solving: A Proposal for Continued Research, Bruce W. Porter, March 1985. AI85-04 Using and Revising Learned Concept Models: A Research Proposal, Bruce W. Porter, May 1985. AI85-05 A Self Organizing Retrieval System for Graphs, Robert A. Levinson, May 1985. AI85-06 Lisp Programming Lecture Notes, Gordon S. Novak, Jr., July 1985. AI85-07 Heuristic and Formal Methods in Automatic Program Debugging, William R. Murray, June 1985. (To appear in IJCAI85 Proceedings.) AI85-08 A General Heuristic Bottom-up Procedure for Searching AND/OR Graphs, Vipin Kumar, August 1985. AI85-09 A General Paradigm for AND/OR Graph and Game Tree Search. Vipin Kumar, August 1985. AI85-10 Parallel Processing for Artificial Intelligence, Vipin Kumar, 1985. AI85-11 Branch-AND-Bound Search, Vipin Kumar, 1985. AI85-12 Computational Treatment of Metaphor in Text Understanding: A First Approach, Olivier Winghart, August 1985.
AI85-13 Computer Science and Medical Information Retrieval, Robert Simmons, 1985. AI85-14 Technologies for Machine Translation, Robert Simmons, August 1985. AI85-15 The Knower's Paradox and the Logics of Attitudes, Nicholas Asher and Hans Kamp, August 1985. AI85-16 Negotiated Interfaces for Software Reusability, Rick Hill, December 1985. AI85-17 The Map-Learning Critter, Benjamin J. Kuipers, December 1985. AI85-18 Menu-Based Creation of Procedures for Display of Data, Man-Lee Wan, December 1985. AI85-19 Explanation of Mechanical Systems Through Qualitative Simulation, Stuart Laughton, December 1985. AI86-20 Experimental Goal Regression: A Method for Learning Problem Solving Heuristics, Bruce W. Porter and Dennis Kibler, January 1986. AI86-21 GT: A Conjecture Generator for Graph Theory, Wing-Kwong Wong, January 1986. AI86-22 An Intelligent Backtracking Algorithm for Parallel Execution of Logic Programs, Yow-Jian Lin, Vipin Kumar and Clement Leung, March 1986. AI86-23 A Parallel Execution Scheme for Exploiting AND-parallelism of Logic Programs, Yow-Jian Lin and Vipin Kumar, March 1986. AI86-24 Qualitative Simulation as Causal Explanation, Benjamin J. Kuipers, April 1986. AI86-25 Fault Diagnosis Using Qualitative Simulation, Ray Bareiss and Adam Farquhar, April 1986. AI86-26 Symmetric Rules for Translation of English and Chinese, Wanying Jin and Robert F. Simmons, May 1986. AI86-27 Automatic Program Debugging for Intelligent Tutoring Systems, William R. Murray, June, 1986. (PhD dissertation) AI86-28 The Role of Inversion, Clefting and PP-Fronting in Relating Discourse Elements, Mark V. Lapolla, July 1986. AI86-29 A Theory of Argument Coherence, Wing-Kwong C. Wong, July 1986. AI86-30 Metaphorical Shift and The Induction of Similarities, Phillipe M. Alcouffe, July 1986. (Master's thesis) AI86-31 A Rule Language for the GLISP Programming System, Christopher A. Rath, August 1986. (Master's thesis) AI86-32 Talus: Automatic Program Debugging for Intelligent Tutoring Systems, William R. Murray, August 1986. AI86-33 New Algorithms for Dependency-Directed Backtracking, Charles J. Petrie, September, 1986. (Master's thesis) AI86-34 An Execution Model for Exploiting AND-Parallelism in Logic Programs, Yow-Jian Lin and Vipin Kumar, September 1986. AI86-35 PROTOS: An Experiment in Knowledge Acquisition for Heuristic Classification Tasks, Bruce W. Porter and E. Ray Bareiss, August 1986. ------------------------------ End of AIList Digest ******************** From vtcs1::in% Wed Oct 29 19:50:44 1986 Date: Wed, 29 Oct 86 19:50:37 est From: vtcs1::in% (LAWS@SRI-STRIPE.ARPA) To: ailist@sri-stripe.arpa Subject: AIList Digest V4 #236 Status: RO AIList Digest Monday, 27 Oct 1986 Volume 4 : Issue 236 Today's Topics: Administrivia - Mod.ai Followup Problem, Philosophy - Replies from Stevan Harnad to Mozes, Cugini, and Kalish ---------------------------------------------------------------------- Date: Sun 26 Oct 86 17:11:45-PST From: Ken Laws Reply-to: AIList-Request@SRI-AI.ARPA Subject: Mod.ai Followup Problem The following five messages are replies by Stevan Harnad to some of the items that have appeared in AIList. These five had not made it to the AIList@SRI-STRIPE mailbox and so were never forwarded to the digest or to mod.ai. Our current hypothesis is that the Usenet readnews command does not correctly deliver followup ("f") messages when used to reply to mod.ai items. Readers with this problem can send replies to net.ai or to sri-stripe!ailist.
-- Kenneth Laws ------------------------------ Date: Mon, 27 Oct 86 00:15:36 est From: princeton!mind!harnad@seismo.CSS.GOV (Stevan Harnad) Subject: for posting on mod.ai (reply to E. Mozes, reconstructed) On mod.ai, in Message-ID: <8610160605.AA09268@ucbvax.Berkeley.EDU> on 16 Oct 86 06:05:38 GMT, eyal@wisdom.BITNET (Eyal mozes) writes: > I don't see your point at all about "categorical > perception". You say that "differences between reds and differences > between yellows look much smaller than equal-sized differences that > cross the red/yellow boundary". But if they look much smaller, this > means they're NOT "equal-sized"; the differences in wave-length may be > the same, but the differences in COLOR are much smaller. There seems to be a problem here, and I'm afraid it might be the mind/body problem. I'm not completely sure what you mean. If all you mean is that sometimes equal-sized differences in inputs can be made unequal by internal differences in how they are encoded, embodied or represented -- i.e., that internal physical differences of some sort may mediate the perceived inequalities -- then I of course agree. There are indeed innate color-detecting structures. Moreover, it is the hypothesis of the paper under discussion that such internal categorical representations can also arise as a consequence of learning. If what you mean, however, is that there exist qualitative differences among equal-sized input differences with no internal physical counterpart, and that these are in fact mediated by the intrinsic nature of phenomenological COLOR -- that discontinuous qualitative inequalities can occur when everything physical involved, external and internal, is continuous and equal -- then I am afraid I cannot follow you. My own position on color quality -- i.e., "what it's like" to experience red, etc. -- is that it is best ignored, methodologically. Psychophysical modeling is better off restricting itself to what we CAN hope to handle, namely, relative and absolute judgments: What differences can we tell apart in pairwise comparison (relative discrimination) and what stimuli or objects can we label or identify (absolute discrimination)? We have our hands full modeling this. Further concerns about trying to capture the qualitative nature of perception, over and above its performance consequences [the Total Turing Test] are, I believe, futile. This position can be dubbed "methodological epiphenomenalism." It amounts to saying that the best empirical theory of mind that we can hope to come up with will always be JUST AS TRUE of devices that actually have qualitative experiences (i.e., are conscious) as of devices that behave EXACTLY AS IF they had qualitative experiences (i.e., turing-indistinguishably), but do not (if such insentient look-alikes are possible). The position is argued in detail in the papers under discussion. > Your whole theory is based on the assumption that perceptual qualities > are something physical in the outside world (e.g., that colors ARE > wave-lengths). But this is wrong. Perceptual qualities represent the > form in which we perceive external objects, and they're determined both > by external physical conditions and by the physical structure of our > sensory apparatus; thus, colors are determined both by wave-lengths and > by the physical structure of our visual system.
So there's no a priori > reason to expect that equal-sized differences in wave-length will lead > to equal-sized differences in color, or to assume that deviations from > this rule must be caused by internal representations of categories. And > this seems to completely cut the grounds from under your theory. Again, there is nothing for me to disagree with if you're saying that perceived discontinuities are mediated by either external or internal physical discontinuities. In modeling the induction and representation of categories, I am modeling the physical sources of such discontinuities. But there's still an ambiguity in what you seem to be saying, and I don't think I'm mistaken if I think I detect a note of dualism in it. It all hinges on what you mean by "outside world." If you only mean what's physically outside the device in question, then of course perceptual qualities cannot be equated with that. It's internal physical differences that matter. But that doesn't seem to be all you mean by "outside world." You seem to mean that the whole of the physical world is somehow "outside" conscious perception. What else can you mean by the statement that "perceptual qualities represent the form [?] in which we perceive external objects" or that "there's no...reason to expect that...[perceptual] deviations from [physical equality]...must be caused by internal representations of categories." Perhaps I have misunderstood, but either this is just a reminder that there are internal physical differences one must take into account too in modeling the induction and representation of categories (but then they are indeed taken into account in the papers under discussion, and I can't imagine why you would think they would "completely cut the ground from under" my theory) or else you are saying something metaphysical with which I cannot agree. One last possibility may have to do with what you mean by "representation." I use the word eclectically, especially because the papers are arguing for a hybrid representation, with the symbolic component grounded in the nonsymbolic. So I can even agree with you that I doubt that mere symbolic differences are likely to be the sole cause of psychophysical discontinuities, although, being physically embodied, they are in principle sufficient. I hypothesize, though, that nonsymbolic differences are also involved in psychophysical discontinuities. > My second criticism is that, even if "categorical perception" really > provided a base for a theory of categorization, it would be very > limited; it would apply only to categories of perceptual qualities. I > can't see how you'd apply your approach to a category such as "table", > let alone "justice". How abstract categories can be grounded "bottom-up" in concrete psychophysical categories is the central theme of the papers under discussion. Your remarks were based only on the summaries and abstracts of those papers. By now I hope the preprints have reached you, as you requested, and that your question has been satisfactorily answered. To summarize "grounding" briefly: According to the model, (learned) concrete psychophysical categories are formed from sampling positive and negative instances of a category and then encoding the invariant information that will reliably identify further instances. This might be how one learned the concrete categories "horse" and "striped" for example.
The (concrete) category "zebra" could then be learned without need for direct perceptual ACQUAINTANCE with the positive and negative instances by simply being told that a zebra is a striped horse. That is, the category can be learned by symbolic DESCRIPTION by merely recombining the labels of the already-grounded perceptual categories. All categorization involves some abstraction and generalization (even "horse," and certainly "striped" did), so abstract categories such as "goodness," "truth" and "justice" could be learned and represented by recursion on already grounded categories, their labels and their underlying representations. (I have no idea why you think I'd have a problem with "table.") > Actually, there already exists a theory of categorization that is along > similar lines to your approach, but integrated with a detailed theory > of perception and not subject to the two criticisms above; that is the > Objectivist theory of concepts. It was presented by Ayn Rand... and by > David Kelley... Thanks for the reference, but I'd be amazed to see an implementable, testable model of categorization performance issue from that source... Stevan Harnad {allegra, bellcore, seismo, packard} !princeton!mind!harnad harnad%mind@princeton.csnet (609)-921-7771 ------------------------------ Date: Sun, 26 Oct 86 11:05:47 est From: princeton!mind!harnad@seismo.CSS.GOV (Stevan Harnad) Subject: Please post on mod.ai -- first of 4 (cugini) In Message-ID: <8610190504.AA08059@ucbvax.Berkeley.EDU> on mod.ai CUGINI, JOHN replies to my claim that >> there is no rational reason for being more sceptical about robots' >> minds (if we can't tell their performance apart from that of people) >> than about (other) people's minds. with the following: > One (rationally) believes other people are conscious BOTH because > of their performance and because their internal stuff is a lot like > one's own. This is a very important point and a subtle one, so I want to make sure that my position is explicit and clear: I am not denying that there exist some objective data that correlate with having a mind (consciousness) over and above performance data. In particular, there's (1) the way we look and (2) the fact that we have brains. What I am denying is that this is relevant to our intuitions about who has a mind and why. I claim that our intuitive sense of who has a mind is COMPLETELY based on performance, and our reason can do no better. These other correlates are only inessential afterthoughts, and it's irrational to take them as criteria. My supporting argument is very simple: We have absolutely no intuitive FUNCTIONAL ideas about how our brains work. (If we did, we'd have long since spun an implementable brain theory from our introspective armchairs.) Consequently, our belief that brains are evidence of minds and that the absence of a brain is evidence of the absence of a mind is based on a superficial black-box correlation. It is no more rational than being biased by any other aspect of appearance, such as the color of the skin, the shape of the eyes or even the presence or absence of a tail. To put it in the starkest terms possible: We wouldn't know what device was and was not relevantly brain-like if it was staring us in the face -- EXCEPT IF IT HAD OUR PERFORMANCE CAPACITIES (i.e., it could pass the Total Turing Test). That's the only thing our intuitions have to go on, and our reason has nothing more to offer either.
To take one last pass at setting the relevant intuitions: We know what it's like to DO (and be able to do) certain things. Similar performance capacity is our basis for inferring that what it's like for me is what it's like for you (or it). We do not know anything about HOW we do any of those things, or about what would count as the right way and the wrong way (functionally speaking). Inferring that another entity has a mind is an intuitive judgment based on performance. It's called the (total) turing test. Inferring HOW other entities accomplish their performance is ordinary scientific inference. We're in no rational position to prejudge this profound and substantive issue on the basis of the appearance of a lump of grey jelly to our untutored but superstitious minds. > [W]e DO have some idea about the functional basis for mind, namely > that it depends on the brain (at least more than on the pancreas, say). > This is not to contend that there might not be other bases, but for > now ALL the minds we know of are brain-based, and it's just not > dazzlingly clear whether this is an incidental fact or somewhat > more deeply entrenched. The question isn't whether the fact is incidental, but what its relevant functional basis is. In other words, what is it about the brain that's relevant and what incidental? We need the causal basis for the correlation, and that calls for a hefty piece of creative scientific inference (probably in theoretical bio-engineering). The pancreas is no problem, because it can't generate the brain's performance capacities. But it is simply begging the question to say that brain-likeness is an EXTRA relevant source of information in turing-testing robots, when we have no idea what's relevantly brain-like. People were sure (as sure as they'll ever be) that other people had minds long before they ever discovered they had brains. I myself believed the brain was just a figure of speech for the first dozen or so years of my life. Perhaps there are people who don't learn or believe the news throughout their entire lifetimes. Do you think these people KNOW any less than we do about what does or doesn't have a mind? Besides, how many people do you think could really pick out a brain from a pancreas anyway? And even those who can have absolutely no idea what it is about the brain that makes it conscious; and whether a cow's brain or a horse-shoe crab's has it; or whether any other device, artificial or natural, has it or lacks it, or why. In the end everyone must revert to the fact that a brain is as a brain does. > Why is consciousness a red herring just because it adds a level > of uncertainty? Perhaps I should have said indeterminacy. If my arguments for performance-indiscernibility (the turing test) as our only objective basis for inferring mind are correct, then there is a level of underdetermination here that is in no way comparable to that of, say, the unobservable theoretical entities of physics (say, quarks, or, to be more trendy, perhaps strings). Ordinary underdetermination goes like this: How do I know that your theory's right about the existence and presence of strings? Because WITH them the theory succeeds in accounting for all the objective data (let's pretend), and without them it does not. Strings are not "forced" by the data, and other rival theories may be possible that work without them. But until these rivals are put forward, normal science says strings are "real" (modulo ordinary underdetermination).
Now try to run that through for consciousness: How do I know that your theory's right about the existence and presence of consciousness (i.e., that your model has a mind)? "Because its performance is turing-indistinguishable from that of creatures that have minds." Is your theory dualistic? Does it give consciousness an independent, nonphysical, causal role? "Goodness, no!" Well then, wouldn't it fit the objective data just as well (indeed, turing-indistinguishably) without consciousness? "Well..." That's indeterminacy, or radical underdetermination, or what have you. And that's why consciousness is a methodological red herring. > Even though any correlations will ultimately be grounded on one side > by introspection reports, it does not follow that we will never know, > with reasonable assurance, which aspects of the brain are necessary for > consciousness and which are incidental...Now at some level of difficulty > and abstraction, you can always engineer anything with anything... But > the "multi-realizability" argument has force only if it's obvious > (which it ain't) that the structure of the brain at a fairly high > level (eg neuron networks, rather than molecules), high enough to be > duplicated by electronics, is what's important for consciousness. We'll certainly learn more about the correlation between brain function and consciousness, and even about the causal (functional) basis of the correlation. But the correlation will really be between function and performance capacity, and the rest will remain the intuitive inference or leap of faith it always was. And since ascertaining what is relevant about brain function and what is incidental cannot depend simply on its BEING brain function, but must instead depend, as usual, on the performance criterion, we're back where we started. (What do you think is the basis for our confidence in introspective reports? And what are you going to say about robots' introspective reports...?) I don't know what you mean, by the way, about always being able to "engineer anything with anything at some level of abstraction." Can anyone engineer something to pass the robotic version of the Total Turing Test right now? And what's that "level of abstraction" stuff? Robots have to do their thing in the real world. And if my groundedness arguments are valid, that ain't all done with symbols (plus add-on peripheral modules). Stevan Harnad princeton!mind!harnad harnad%mind@princeton.csnet (609)-921-7771 ------------------------------ Date: Sun, 26 Oct 86 11:11:08 est From: princeton!mind!harnad@seismo.CSS.GOV (Stevan Harnad) Subject: For posting on mod.ai - 2nd of 4 (reply to Kalish) In mod.ai, Message-ID: <861016-071607-4573@Xerox>, "charles_kalish.EdServices"@XEROX.COM writes: > About Stevan Harnad's two kinds of Turing tests [linguistic > vs. robotic]: I can't really see what difference the I/O methods > of your system makes. It seems that the relevant issue is what > kind of representation of the world it has. I agree that what's at issue is what kind of representation of the world the system has. But you are prejudging "representation" to mean only symbolic representation, whereas the burden of the papers in question is to show that symbolic representations are "ungrounded" and must be grounded in nonsymbolic processes (nonmodularly -- i.e., NOT by merely tacking on autonomous peripherals).
> While I agree that, to really understand, the system would need some > non-purely conventional representation (not semantic if "semantic" > means "not operable on in a formal way" as I believe [given the brain > is a physical system] all mental processes are formal then "semantic" > just means governed by a process we don't understand yet), giving and > getting through certain kinds of I/O doesn't make much difference. "Non-purely conventional representation"? Sounds mysterious. I've tried to make a concrete proposal as to just what that hybrid representation should be like. "All mental processes are formal"? Sounds like prejudging the issue again. It may help to be explicit about what one means by formal/symbolic: Symbolic processing is the manipulation of (arbitrary) physical tokens in virtue of their shape on the basis of formal rules. This is also called syntactic processing. The formal goings-on are also "semantically interpretable" -- they have meanings; they are connected to objects in the outside world that they are about. The Searle problem is that so far the only devices that do semantic interpretations intrinsically are ourselves. My proposal is that grounding the representations nonmodularly in the I/O connection may provide the requisite intrinsic semantics. This may be the "process we don't understand yet." But it means giving up the idea that "all mental processes are formal" (which in any case does not follow, at least on the present definition of "formal," from the fact that "the brain is a physical system"). > Two for instances: SHRDLU operated on a simulated blocks world. The > modifications to make it operate on real blocks would have been > peripheral and not have affected the understanding of the system. This is a variant of the "Triviality of Transduction (& A/D, & D/A, and Effectors)" Argument (TT) that I've responded to in another iteration. In brief, it's toy problems like SHRDLU that are trivial. The complete translatability of internal symbolic descriptions into the objects they stand for (and the consequent partitioning of the substantive symbolic module and the trivial nonsymbolic peripherals) may simply break down, as I predict, for life-size problems approaching the power to pass the Total Turing Test. To put it another way: There is a conjecture implicit in the solutions to current toy/microworld problems, namely, that something along essentially the same lines will suitably generalize to the grown-up/macroworld problem. What I'm saying amounts to a denial of that conjecture, with reasons. It is not a reply to me to simply restate the conjecture. > Also, all systems take analog input and give analog output. Most receive > finger pressure on keys and return directed streams of ink or electrons. > It may be that a robot would need more "immediate" (as opposed to > conventional) representations, but it's neither necessary nor sufficient > to be a robot to have those representations. The problem isn't marrying symbolic systems to any old I/O. I claim that minds are "dedicated" systems of a particular kind: The kind capable of passing the Total Turing Test. That's the only necessity and sufficiency in question. And again, the mysterious word "immediate" doesn't help. 
I've tried to make a specific proposal, and I've accepted the consequences, namely, that it's just not going to be a "conventional" marriage at all, between a (substantive) symbolic module and a (trivial) nonsymbolic module, but rather a case of miscegenation (or a sex-change operation, or some other suitably mixed metaphor). The resulting representational system will be grounded "bottom-up" in nonsymbolic function (and will, I hope, display the characteristic "hybrid vigor" that our current pure-bred symbolic and nonsymbolic processes lack), as I've proposed (nonmetaphorically) in the papers under discussion. Stevan Harnad princeton!mind!harnad harnad%mind@princeton.csnet (609)-921-7771 ------------------------------ End of AIList Digest ******************** From vtcs1::in% Wed Oct 29 19:50:11 1986 Date: Wed, 29 Oct 86 19:50:05 est From: vtcs1::in% (LAWS@SRI-STRIPE.ARPA) To: ailist@sri-stripe.arpa Subject: AIList Digest V4 #237 Status: RO AIList Digest Monday, 27 Oct 1986 Volume 4 : Issue 237 Today's Topics: Philosophy - Harnad's Replies to Krulwich and Paul & Turing Test & Symbolic Reasoning ---------------------------------------------------------------------- Date: Sun, 26 Oct 86 11:45:17 est From: princeton!mind!harnad@seismo.CSS.GOV (Stevan Harnad) Subject: For posting on mod.ai 3rd of 4 (reply to Krulwich) In mod.ai, Message-ID: <8610190504.AA08083@ucbvax.Berkeley.EDU>, 17 Oct 6 17:29:00 GMT, KRULWICH@C.CS.CMU.EDU (Bruce Krulwich) writes: > i disagree...that symbols, and in general any entity that a computer > will process, can only be dealt with in terms of syntax. for example, > when i add two integers, the bits that the integers are encoded in are > interpreted semantically to combine to form an integer. the same > could be said about a symbol that i pass to a routine in an > object-oriented system such as CLU, where what is done with > the symbol depends on it's type (which i claim is it's semantics) Syntax is ordinarily defined as formal rules for manipulating physical symbol tokens in virtue of their (arbitrary) SHAPES. The syntactic goings-on are semantically interpretable, that is, the symbols are also manipulable in virtue of their MEANINGS, not just their shapes. Meaning is a complex and ill-understood phenomenon, but it includes (1) the relation of the symbols to the real objects they "stand for" and (2) a subjective sense of understanding that relation (i.e., what Searle has for English and lacks for Chinese, despite correctly manipulating its symbols). So far the only ones who seem to do (1) and (2) are ourselves. Redefining semantics as manipulating symbols in virtue of their "type" doesn't seem to solve the problem... > i think that the reason that computers are so far behind the > human brain in semantic interpretation and in general "thinking" > is that the brain contains a hell of a lot more information > than most computer systems, and also the brain makes associations > much faster, so an object (ie, a thought) is associated with > its semantics almost instantly. I'd say you're pinning a lot of hopes on "more" and "faster." The problem just might be somewhat deeper than that... 
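A toy illustration may make the shape/meaning distinction concrete. The sketch below is hypothetical -- it is not drawn from the papers under discussion, and the rule and names are invented for illustration. The rewrite rule operates on strings of stroke tokens purely in virtue of their shape; any meaning attaches only through an outside interpretation that the rule itself never consults:

    import re

    def rewrite(s):
        # Shape-based rule: a run of strokes, a '+', and a second run
        # of strokes rewrite to the two runs concatenated. Only token
        # shapes are inspected, never meanings.
        return re.sub(r"(\|*)\+(\|*)", r"\1\2", s)

    print(rewrite("|||+||"))   # prints '|||||'

Read "|" as the numeral one and the system has just computed 3 + 2 = 5; read it as a sheep and it has merged two flocks. The manipulation is identical either way. That is the sense in which such processing is purely syntactic, and why assigning each symbol a "type" does not by itself supply semantics.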
Stevan Harnad princeton!mind!harnad harnad%mind@princeton.csnet (609)-921-7771 ------------------------------ Date: Sun, 26 Oct 86 11:59:28 est From: princeton!mind!harnad@seismo.CSS.GOV (Stevan Harnad) Subject: For posting on mod.ai, 4th of 4 (reply to Danny Paul) Topic: Machines: Natural and Man-Made On mod.ai, in Message-ID: <8610240550.AA15402@ucbvax.Berkeley.EDU>, 22 Oct 86 14:49:00 GMT, NGSTL1::DANNY%ti-eg.CSNET@RELAY.CS.NET (Daniel Paul) cites Daniel Simon's earlier reply in AI digest (V4 #226): >One question you haven't addressed is the relationship between intelligence and >"human performance". Are the two synonymous? If so, why bother to make >artificial humans when making natural ones is so much easier (not to mention >more fun)? Daniel Paul then adds: > This is a question that has been bothering me for a while. When it > is so much cheaper (and possible now, while true machine intelligence > may be just a dream) why are we wasting time training machines when we > could be training humans instead? The only reasons that I can see are > that intelligent systems can be made small enough and light enough to > sit on bombs. Are there any other reasons? Apart from the two obvious ones -- (1) so machines can free people to do things machines cannot yet do, if people prefer, and (2) so machines can do things that people can only do less quickly and efficiently, if people prefer -- there is the less obvious reply already made to Daniel Simon: (3) because trying to get machines to display all our performance capacity (the Total Turing Test) is our only way of arriving at a functional understanding of what kinds of machines we are, and how we work. [Before the cards and letters pour in to inform me that I've used "machine" incoherently: A "machine," (writ large, Deus Ex Machina) is just a physical, causal system. Present-generation artificial machines are simply very primitive examples.] Stevan Harnad princeton!mind!harnad harnad%mind@princeton.csnet (609)-921-7771 ------------------------------ Date: 23 Oct 86 15:39:08 GMT From: husc6!rutgers!princeton!mind!harnad@eddie.mit.edu (Stevan Harnad) Subject: Re: Searle, Turing, Symbols, Categories michaelm@bcsaic.UUCP (michael maxwell) writes: > I believe the Turing test was also applied to orangutans, although > I don't recall the details (except that the orangutans flunked)... > As an interesting thought experiment, suppose a Turing test were done > with a robot made to look like a human, and a human being who didn't > speak English-- both over a CCTV, say, so you couldn't touch them to > see which one was soft, etc. What would the robot have to do in order > to pass itself off as human? They should all three in principle have a chance of passing. For the orang, we would need to administer the ecologically valid version of the test. (I think we have reasonably reliable cross-species intuitions about mental states, although they're obviously not as sensitive as our intraspecific ones, and they tend to be anthropocentric and anthropomorphic -- perhaps necessarily so; experienced naturalists are better at this, just as cross-cultural ethnographic judgments depend on exposure and experience.) 
We certainly have no problem in principle with foreign speakers (the remarkable linguist, polyglot and bible-translator Kenneth Pike has a "magic show" in which, after less than an hour of "turing" interactions with a speaker of any of the [shrinking] number of languages he doesn't yet know, they are babbling mutually intelligibly before your very eyes), although most of us may have some problems in practice with such a feat, at least, without practice. Severe aphasics and mental retardates may be tougher cases, but there perhaps the orang version would stand us in good stead (and I don't mean that disrespectfully; I have an extremely high regard for the mental states of our fellow creatures, whether human or nonhuman). As to the robot: Well that's the issue here, isn't it? Can it or can it not pass the appropriate total test that its appropriate non-robot counterpart (be it human or ape) can pass? If so, it has a mind, by this criterion (the Total Turing Test). I certainly wouldn't dream of flunking either a human or a robot just because he/it didn't feel soft, if his/its total performance was otherwise turing indistinguishable. Stevan Harnad princeton!mind!harnad harnad%mind@princeton.csnet ------------------------------ Date: 23 Oct 86 14:52:56 GMT From: rutgers!princeton!mind!harnad@lll-crg.arpa Subject: Re: extended Turing test colonel@sunybcs.UUCP (Col. G. L. Sicherman) writes: > [I]t's misleading to propose that a veridical model of _our_ behavior > ought to have our "performance capacities"...I do not (yet) quarrel > with the principle that the model ought to have our abilities. But to > speak of "performance capacities" is to subtly distort the fundamental > problem. We are not performers! "Behavioral ability"/"performance capacity" -- such fuss over black-box synonyms, instead of facing the substantive problem of modeling the functional substrate that will generate them. ------------------------------ Date: 24 Oct 86 19:02:42 GMT From: spar!freeman@decwrl.dec.com Subject: Re: Searle, Turing, Symbols, Categories Possibly a more interesting test would be to give the computer direct control of the video bit map and let it synthesize an image of a human being. ------------------------------ Date: Fri, 24 Oct 86 22:54:58 EDT From: "Col. G. L. Sicherman" Subject: Re: turing test PHayes@SRI-KL.ARPA (Pat Hayes) writes: > Daniel R. Simon has worries about the Turing test. A good place to find > intelligent discussion of these issues is Turing's original article in MIND, > October 1950, v.59, pages 433 to 460. That article was in part a response to G. Jefferson's Lister Oration, which appeared as "The mind of mechanical man" in the British Medical Journal for 1949 (pp. 1105-1121). It's well worth reading in its own right. Jefferson presents the humane issues at least as well as Turing presents the scientific issues, and I think that Turing failed to rebut, or perhaps to comprehend, all Jefferson's objections. ------------------------------ Date: Fri, 24 Oct 86 18:09 CDT From: PADIN%FNALB.BITNET@WISCVM.WISC.EDU Subject: THE PSEUDOMATH OF THE TURING TEST LET'S PUT THE TURING TEST INTO PSEUDO-MATHEMATICAL TERMS. DEFINE THE SET Q={question1,question2,...}. LET'S NOTE THAT FOR EACH q IN Q, THERE IS AN INFINITE NUMBER OF RESPONSES (THE RESPONSES NEED NOT BE RELEVANT TO THE QUESTION, THEY JUST NEED TO BE RESPONSES). IN FACT, WE CAN DEFINE A SET R={EVERY POSSIBLE RESPONSE TO ANY QUESTION}, i.e., R={r1,r2,r3,...}.
WE CAN DEFINE THE TURING TEST AS A FUNCTION T THAT MAPS QUESTIONS q IN Q TO A SET RR IN R OF ALL RESPONSES (i.e., RR IS A SUBSET OF R). WE CAN THEN WRITE T(q) --> RR WHICH STATES THAT THERE EXISTS A FUNCTION T THAT MAPS A QUESTION q TO A SET OF RESPONSES RR. THE EXISTENCE OF T FOR ALL QUESTIONS q IS EVIDENCE FOR THE PRESENCE OF MIND SINCE T CHOOSES, OUT OF AN INFINITE NUMBER OF RESPONSES, THOSE RESPONSES THAT ARE APPROPRIATE TO AN ENTITY WITH A MIND. NOTE: T IS THE SET {(question1,{resp1-1,resp2-1,...,respn-1}), (question2,{resp1-2,resp2-2,...,respk-2}), ... (questionj,{resp1-j,resp2-j,...,resph-j}) } WE USE A SET (RR) OF RESPONSES BECAUSE THERE IS, FOR MOST QUESTIONS, MORE THAN ONE RESPONSE. THERE ARE TIMES OF COURSE WHEN THERE IS JUST ONE ELEMENT IN RR, SUCH AS THE RESPONSE TO THE QUESTION, 'IS IT RAINING OUTSIDE?'. NOW A PROBLEM ARISES: WHO IS TO DECIDE WHICH SUBSET OF RESPONSES INDICATES THE EXISTENCE OF MIND? WHO WILL DECIDE WHICH SET IS APPROPRIATE TO INDICATE AN ENTITY OTHER THAN OURSELVES IS OUT THERE RESPONDING? FOR EXAMPLE, IF WE DEFINE THE SET RR AS RR={r(i) | r(i) is randomly chosen from R} THEN TO EACH QUESTION q IN THE SET OF QUESTIONS USED TO DETERMINE THE EXISTENCE OF MIND, WE GET A RESPONSE WHICH APPEARS TO BE RANDOM, THAT IS, WE CAN MAKE NO SENSE OF THE RESPONSE WITH RESPECT TO THE QUESTION ASKED. IT WOULD SEEM THAT THIS WOULD BE SUFFICIENT TO LABEL THE RESPONDENT A MINDLESS ENTITY. HOWEVER, IT IS THE EXACT RESPONSE ONE WOULD EXPECT OF A SCHIZOPHRENIC. NOW WHAT DO WE DO? DO WE CHOOSE TO DEFINE SCHIZOPHRENICS AS MINDLESS PEOPLE? THIS IS NOT MORALLY PALATABLE. DO WE CHOOSE TO ALLOW THE 'RANDOM SET' TO BE USED AS CRITERIA FOR ASSESSING THE QUALITY OF MINDEDNESS? THIS CHOICE IS NOT ACCEPTABLE EITHER BECAUSE IT SIMPLY RESULTS IN WHAT MAY BE CALLED TURING NOISE, YIELDING NO USEFUL INFORMATION. IF WE ARE UNWILLING TO ACCEPT ANOTHER'S DECISION AS TO THE SET OF ACCEPTABLE RESPONSES, THEN WE ARE COMPELLED TO DO THE DETERMINATION OURSELVES. NOW IF WE ARE TO USE OUR JUDGEMENT IN DETERMINING THE PRESENCE OF ANOTHER MIND, THEN WE MUST ACCEPT THE POSSIBILITY OF ERROR INHERENT IN THE HUMAN DECISION MAKING PROCESS. AT BEST, THEN, THE TURING TEST WILL BE ABLE TO GIVE US ONLY A HINT AT THE PRESENCE OF ANOTHER MIND; A LEVEL OF PROBABILITY. ------------------------------ Date: 26 Oct 86 20:56:29 GMT From: rutgers!princeton!mind!harnad@titan.arc.nasa.gov (Stevan Harnad) Subject: Re: Searle, Turing, Symbols, Categories freeman@spar.UUCP (Jay Freeman) replies: > Possibly a more interesting test [than the robotic version of > the Total Turing Test] would be to give the computer > direct control of the video bit map and let it synthesize an > image of a human being. Manipulating digital "images" is still only symbol-manipulation. It is (1) the causal connection of the transducers with the objects of the outside world, including (2) any physical "resemblance" the energy pattern on the transducers may have to the objects from which they originate, that distinguishes robotic functionalism from symbolic functionalism and that suggests a solution to the problem of grounding the otherwise ungrounded symbols (i.e., the problem of "intrinsic vs. derived intentionality"), as argued in the papers under discussion.
A third reason why internally manipulated bit-maps are not a new way out of the problems with the symbolic version of the turing test is that (3) a model that tries to explain the functional basis of our total performance capacity already has its hands full with anticipating and generating all of our response capacities in the face of any potential input contingency (i.e., passing the Total Turing Test) without having to anticipate and generate all the input contingencies themselves. In other words, it's enough of a problem to model the mind and how it interacts successfully with the world without having to model the world too. Stevan Harnad {seismo, packard, allegra} !princeton!mind!harnad harnad%mind@princeton.csnet (609)-921-7771 ------------------------------ End of AIList Digest ******************** From vtcs1::in% Wed Oct 29 19:50:24 1986 Date: Wed, 29 Oct 86 19:50:17 est From: vtcs1::in% (LAWS@SRI-STRIPE.ARPA) To: ailist@sri-stripe.arpa Subject: AIList Digest V4 #238 Status: RO AIList Digest Monday, 27 Oct 1986 Volume 4 : Issue 238 Today's Topics: Seminars - Toward Meta-Level Problem Solving (CMU) & Diagnosing Multiple Faults (SU) & Using Scheme for Discrete Simulation (SMU) & Ramification and Qualification in the Blocks World (SU) & Knowledge Programming using Functional Representations (SRI), Conference - AAAI Workshop on Uncertainty in AI, 1987 ---------------------------------------------------------------------- Date: 22 October 1986 1027-EDT From: Elaine Atkinson@A.CS.CMU.EDU Subject: Seminar - Toward Meta-Level Problem Solving (CMU) SPEAKER: Prof. Kurt VanLehn, Psychology Dept., CMU TITLE: "Towards meta-level problem solving" DATE: Thursday, October 23 TIME: 4:00 p.m. PLACE: Adamson Wing, Baker Hall ABSTRACT: This talk presents preliminary evidence for a new model of procedure following. Following a mentally held procedure is a common activity. It takes about 12 procedures to fill an order at McDonald's. Perhaps 50,000 procedures are followed daily in running an aircraft carrier. Despite its ubiquity and economic importance, little is known about procedure following. The folk model is that people have an interpreter, similar to the interpreters of Lisp, OPS5 or ACT*. The most common interpreters in cognitive science are hierarchical, in that they employ a goal stack or a goal tree as part of their temporary state. A new model of procedure following will be sketched based on the idea that procedure following is meta-level problem solving. The problem is to get a procedure to execute. The operators do things like set goals, pop them, etc. The state descriptions are things like "goal1 is more recent than goal2." Different problem spaces correspond to different interpreters: the goal stack, goal tree and goal agenda are three different meta-level problem spaces. We present data based on protocols from 25 subjects executing procedures that show that (1) different subjects have different interpreters (stack and agenda are the most common) and (2) some subjects change interpretation strategy in the midst of execution. Although these data do not unequivocally refute the folk model of procedure following, they receive a simpler, more elegant interpretation under the meta-level problem solving model. ------------------------------ Date: Thu, 23 Oct 86 15:32:19 pdt From: Premla Nangia Subject: Seminar - Diagnosing Multiple Faults (SU) Speaker: Johan de Kleer Intelligent Systems Laboratory Xerox Palo Alto Title: Diagnosing Multiple Faults Time: 4.15 p.m.
Place: Cedar Hall Conference Room Diagnostic tasks require determining the differences between a model of an artifact and the artifact itself. The differences between the manifested behavior of the artifact and the predicted behavior of the model guide the search for the differences between the artifact and its model. The diagnostic procedure presented in this paper is model-based, inferring the behavior of the composite device from knowledge of the structure and function of the individual components comprising the device. The system (GDE --- General Diagnostic Engine) has been implemented and tested on examples in the domain of troubleshooting digital circuits. This research makes several novel contributions: First, the system diagnoses failures due to multiple faults. Second, failure candidates are represented and manipulated in terms of minimal sets of violated assumptions, resulting in an efficient diagnostic procedure. Third, the diagnostic procedure is incremental, exploiting the iterative nature of diagnosis. Fourth, a clear separation is drawn between diagnosis and behavior prediction, resulting in a domain (and inference procedure) independent diagnostic procedure. Fifth, GDE combines model-based prediction with sequential diagnosis to propose measurements to localize the faults. The usually required conditional probabilities are computed from the structure of the device and models of its components. This capability results from a novel way of incorporating probabilities and information theory with the context mechanism provided by Assumption-Based Truth Maintenance. ------------------------------ Date: WED, 10 oct 86 17:02:23 CDT From: leff%smu@csnet-relay Subject: Seminar - Using Scheme for Discrete Simulation (SMU) Using Scheme for Discrete Simulation Edward E. Ferguson, Texas Instruments, Location 315 Sic, Time 2PM Scheme is a lexically-scoped dialect of LISP that gives the programmer access to continuations, a fundamental capability upon which general control structures can be built. This presentation will show how continuations can be used to extend Scheme to have the basic features of a discrete simulation language. Topics that will be covered include discrete simulation techniques, addition of simulation capability to a general-purpose language, why Scheme is a good base language for simulation, and the complete Scheme text for a simulation control package. ------------------------------ Date: 24 Oct 86 1704 PDT From: Vladimir Lifschitz Subject: Seminar - Ramification and Qualification in the Blocks World (SU) RAMIFICATION AND QUALIFICATION IN THE BLOCKS WORLD Matt Ginsberg David Smith Thursday, October 30, 4pm MJH 252 In this talk, we discuss the need to infer properties of actions from general domain information. Specifically, we discuss the need to deduce the indirect consequences of actions (the ramification problem), and the need to determine inferentially under what circumstances a particular action will be blocked because its successful execution would involve the violation of a domain constraint (the qualification problem). We present a formal description of action that addresses these problems by considering a single model of the domain, and updating it to reflect the successful execution of actions. The bulk of the talk will involve the investigation of simple blocks world problems that existing formalisms have difficulty dealing with, including the Hanks-McDermott problem, and two new problems that we describe as "the dumbbell and the pulley". 
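[The two problems named in the preceding abstract can be made concrete with a small sketch. What follows is a hypothetical toy, not the authors' formalism; all names are invented. A single set-of-facts model is updated by each action; a closure step derives the indirect consequences of an action (the ramification problem), and the action itself refuses to execute when a domain constraint rules it out (the qualification problem):

    def closure(state):
        # Ramification: derive indirect effects -- any block with
        # something on it stops being clear.
        state = set(state)
        for fact in list(state):
            if fact[0] == "on" and fact[2] != "table":
                state.discard(("clear", fact[2]))
        return state

    def move(state, block, dest):
        # Qualification: the action is blocked (returns None) when a
        # constraint would be violated -- both the moved block and its
        # destination must be clear.
        if ("clear", block) not in state or ("clear", dest) not in state:
            return None
        new = set()
        for f in state:
            if f[0] == "on" and f[1] == block:
                new.add(("clear", f[2]))   # the old support is cleared
            else:
                new.add(f)
        new.add(("on", block, dest))
        return closure(new)

    world = closure({("on", "a", "table"), ("on", "b", "table"),
                     ("clear", "a"), ("clear", "b"), ("clear", "table")})
    world = move(world, "a", "b")  # succeeds; closure retracts ("clear", "b")
    print(move(world, "b", "a"))   # None: blocked, since "b" is not clear

The single-model discipline is the point: the indirect consequence (that "b" is no longer clear) is never stated as an explicit effect of the move; it falls out of the domain constraint, and the same constraint then disqualifies the attempted move of "b".]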
------------------------------

Date: Fri 24 Oct 86 08:31:01-PDT
From: Margaret Olender
Subject: Seminar - Knowledge Programming using Functional
         Representations (SRI)

       KNOWLEDGE PROGRAMMING USING FUNCTIONAL REPRESENTATIONS

                          Tore Risch
                         Syntelligence

                 10:00 AM, WEDNESDAY, October 29
           SRI International, Building E, Room EJ228

SYNTEL is a novel knowledge representation language that provides
traditional features of expert system shells within a pure functional
programming paradigm. However, it differs sharply from existing
functional languages in many ways, ranging from its ability to deal
with uncertainty to its evaluation procedures. A very flexible
user-interface facility, tightly integrated with the SYNTEL
interpreter, gives the knowledge engineer full control over both form
and content of the end-user system. SYNTEL executes in both LISP
machine and IBM mainframe/workstation environments, and has been used
to develop large knowledge bases dealing with the assessment of
financial risks. This talk will present an overview of its
architecture, as well as describe the real-world problems that
motivated its development.

VISITORS: Please arrive 5 minutes early so that you can be escorted
up from the E-building receptionist's desk. Thanks!

P.S. Note change in day and time....
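The abstract above does not say how SYNTEL actually represents or
propagates uncertainty, so the following is only one speculative
reading of "expert-shell features within a pure functional paradigm":
values carry certainties, and ordinary function application
propagates them. The min combination rule and all names are invented:

from typing import NamedTuple

class Uncertain(NamedTuple):
    value: float
    certainty: float            # 0.0 (unknown) .. 1.0 (certain)

def lift(f):
    """Turn a function on plain values into one on uncertain values."""
    def lifted(*args):
        return Uncertain(f(*(a.value for a in args)),
                         min(a.certainty for a in args))
    return lifted

# A knowledge-base "function" in the financial-risk flavor of the talk:
debt_ratio = lift(lambda debt, income: debt / income)
print(debt_ratio(Uncertain(50_000, 0.9), Uncertain(80_000, 0.6)))
# -> Uncertain(value=0.625, certainty=0.6)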
------------------------------

Date: Thu, 23 Oct 86 23:23:40 pdt
From: levitt@ads.ARPA (Tod Levitt)
Subject: AAAI Workshop on Uncertainty in AI, 1987

                    CALL FOR PARTICIPATION

                      Third Workshop on:
            "Uncertainty in Artificial Intelligence"

  Seattle, Washington, July 10-12, 1987 (preceding AAAI conf.)

                      Sponsored by: AAAI

This is the third annual AAAI workshop on Uncertainty in AI. The
first two workshops have been successful and productive, involving
many of the top researchers in the field. The 1985 workshop
proceedings have just appeared as a book, "Uncertainty in Artificial
Intelligence", in the North-Holland Machine Intelligence and Pattern
Recognition series.

The general subject is automated or interactive reasoning under
uncertainty. This year's emphasis is on the representation and
control of uncertain knowledge. One effective way to make points,
display tradeoffs and clarify issues in representation and control is
through demonstration in applications, so these are especially
encouraged, although papers on theory are also welcome. The workshop
provides an opportunity for those interested in uncertainty in AI to
present their ideas and participate in discussions with leading
researchers in the field. Panel discussions will provide a lively
cross-section of views.

Papers are invited on the following topics:

  * Applications--including both results and implementation
    difficulties; experimental comparison of alternatives
  * Knowledge-based and procedural representations of uncertain
    information
  * Uncertainty in model-based reasoning and automated planning
  * Learning under uncertainty; theories of uncertain induction
  * Heuristics and control in evidentially based systems
  * Non-deterministic human-machine interaction
  * Uncertain inference procedures
  * Other uncertainty in AI issues.

Papers will be carefully reviewed. Space is limited, so prospective
attendees are urged to submit a paper with the intention of active
participation in the workshop. Preference will be given to papers
that have demonstrated their approach in real applications; however,
underlying certainty calculi and reasoning methodologies should be
supported by strong theoretical underpinnings in order to best
encourage discussion on a scientific basis.

To allow more time for discussion, most accepted papers will be
included for publication and poster sessions, but not for
presentation. Four copies of a paper or extended abstract should be
sent to the program chairman by February 10, 1987. Acceptances will
be sent by April 20, 1987. Final (camera ready) papers must be
received by May 22, 1987. Proceedings will be available at the
workshop.

General Chair:              Program Chair:              Arrangements Chair:

Peter Cheeseman             Tod Levitt                  Joe Mead
NASA-Ames Research Center   Advanced Decision Systems   KSC Inc.
Mail Stop 244-7             201 San Antonio Circle      228 Liberty Plaza
Moffett Field, CA 94035     Suite 286                   Rome, NY 13440
(415)-694-6526              Mountain View, CA 94040     (315)-336-0500
cheeseman@ames-pluto.arpa   (415)-941-3912
                            levitt@ads.arpa

Program Committee: P. Bonissone, P. Cheeseman, J. Lemmer, T. Levitt,
J. Pearl, R. Yager, L. Zadeh

------------------------------

End of AIList Digest
********************

From vtcs1::in% Fri Oct 31 02:05:22 1986
Date: Fri, 31 Oct 86 02:05:17 est
From: vtcs1::in% (LAWS@SRI-STRIPE.ARPA)
To: ailist@sri-stripe.arpa
Subject: AIList Digest V4 #239
Status: R

AIList Digest           Thursday, 30 Oct 1986     Volume 4 : Issue 239

Today's Topics:
  Natural Language - Nonsense Quiz,
  Humor - Understanding Dogs and Dognition

----------------------------------------------------------------------

Date: 21 Oct 86 17:19:40 PDT (Tuesday)
From: Wedekind.ES@Xerox.COM
Subject: Nonsense quiz

A couple of years ago, on either this list or Human-Nets, there
appeared a short multiple-choice test which was written so that one
could deduce "best" answers based on just the form, not the content,
of the questions (in fact there wasn't much content, since almost
every word over 3 letters long was a nonsense word).

If anyone has this test, I would very much like to see it (along with
any "official" answers you may have saved). If you want to see what I
receive (or, better yet, if you have any original questions to add to
the test), just let me know.

thanks,
Jerry

------------------------------

Date: Wed 29 Oct 86 22:48:43-PST
From: Ken Laws
Subject: Nonsense Quiz

Here's a copy of the quiz taken from Human-Nets. Those interested in
such things should get a copy of R.M. Balzer, Human Use of World
Knowledge, ISI/RR-73-7, Information Sciences Institute, March 1974,
Arpa Order No. 2223/1. It contains fairly detailed analysis of text
such as "Sooner or later everyone runs across the problem of pottling
something to a sprock inside the lorch."

Date: 9 Sep 1981
From: research!alice!xchar [ Bell Labs, Murray Hill ]
Reply-to: "research!alice!xchar care of"
Subject: test-taking skills

In HUMAN-NETS V4 #37, Greg Woods pointed out that high scores on
multiple-choice tests may (as in his case) reflect highly developed
test-taking skills rather than great intelligence. The test below
illustrates Greg's thesis that one can often make correct choices
that are "not based at all on...knowledge of the subject matter."

I got this test from Joseph Kruskal (Bell Labs), who got it from
Clyde Kruskal (NYU Courant Institute), who got it from Jerome
Berkowitz (Courant Institute). Unfortunately, Prof. Berkowitz is
currently out of town, so I cannot trace its origin any farther back.
I will supply the generally accepted answers, and perhaps some
explanations, later.

                                        --Charlie Harris

* * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * *

The following is a hypothetical examination on which you could get
every item correct by knowing some of the pitfalls of test
construction. See how well you can do!
(Circle the letter preceding the correct response.)

1. The purpose of the cluss in furmpaling is to remove
   a. cluss-prags            c. cloughs
   b. tremalis               d. plumots

2. Trassig is true when
   a. lusps trasses the vom
   b. the viskal flans, if the viskal is donwil or zortil
   c. the begul
   d. dissles lisk easily

3. The sigia frequently overfesks the trelsum because
   a. all sigia are mellious
   b. sigias are always vortil
   c. the reelsum is usually tarious
   d. no trelsa are feskable

4. The fribbled breg will minter best with an
   a. derst                  c. sortar
   b. morst                  d. ignu

5. Among the reasons for tristal doss are
   a. The sabs foped and the foths tinzed
   b. the dredges roted with the orots
   c. few racobs were accapted in sluth
   d. most of the polats were thonced

6. Which of the following is/are always present when trossels are
   being gruven?
   a. rint and vost          c. shum and vost
   b. vost                   d. vost and plone

7. The mintering function of the ignu is most effectively carried
   out in connection with
   a. razma tol              c. the fribbled breg
   b. the grosing stantol    d. a frally slush

8. a.                        c.
   b.                        d.

Date: 15 Sep 1981 15:14:39-PDT
From: ihuxo!hobs at Berkeley (John Hobson)
Reply-to: "ihuxo!hobs in care of"
Subject: test-taking skills

Charlie--

The hypothetical exam on test-taking skills that you submitted to
HUMAN-NETS Digest V4 #46 has been an object of much interest here at
Indian Hill. A number of us have taken the test and we would like to
see just how well we did. The answers and reasons for those answers
are as follows:

1. The purpose of the cluss in furmpaling is to remove
   a. cluss-prags            c. cloughs
   b. tremalis               d. plumots

1--a. The cluss is mentioned in the question and in the answer.

2. Trassig is true when
   a. lusps trasses the vom
   b. the viskal flans, if the viskal is donwil or zortil
   c. the begul
   d. dissles lisk easily

2--a. The word trassig in the question and the verb trasses in the
answer.

3. The sigia frequently overfesks the trelsum because
   a. all sigia are mellious
   b. sigias are always vortil
   c. the reelsum is usually tarious
   d. no trelsa are feskable

3--c. The key word here is "usually", along with "frequently" in the
question. Anyway, it is often best to give a non-absolute answer in
case there is an exception.

4. The fribbled breg will minter best with an
   a. derst                  c. sortar
   b. morst                  d. ignu

4--d. The giveaway here is the article "an" since "ignu" is the only
answer starting with a vowel.

5. Among the reasons for tristal doss are
   a. The sabs foped and the foths tinzed
   b. the dredges roted with the orots
   c. few racobs were accapted in sluth
   d. most of the polats were thonced

5--a. This is a bit more subtle, but we think that since the question
calls for "reasons" in the plural and (a) is the only answer with
more than one reason, the answer is (a).

6. Which of the following is/are always present when trossels are
   being gruven?
   a. rint and vost          c. shum and vost
   b. vost                   d. vost and plone

6--b. Vost is mentioned in all possible answers, so vost must always
be present.

7. The mintering function of the ignu is most effectively carried
   out in connection with
   a. razma tol              c. the fribbled breg
   b. the grosing stantol    d. a frally slush

7--c. Since in question 4 (above), the fribbled breg was mintering
with an ignu, the thing mintering with the ignu is, of course, the
fribbled breg.

8. a.                        c.
   b.                        d.

8--We haven't the foggiest. Perhaps "all of the above".

I once took a multiple-guess test in English History where the last
question was:

   The only British Prime Minister ever assassinated was:
   a. Clement Atlee             e. None of the above
   b. Spencer Perceval          f. One or more of the above
   c. The Duke of Wellington    g. Don't know
   d. All of the above          h. Don't care
b, f, g and h were accepted as correct answers.

                                John Hobson
                                ihuxo!hobs
                                Bell Labs -- Indian Hill

Date: 18 Sep 1981 12:13 PDT
From: Kolling at PARC-MAXC
Subject: test-taking skills

About that test..... I think the answer to 2 is b, not a. Either a or
b is possible (not c because it isn't grammatically correct, and not
d because it's fuzzy due to "easily"). Looking at the answers as
follows:

   1. a
   2. a or b
   3. c
   4. d
   5. a
   6. b
   7. c
   8. ?

Note the pattern a,b,c,d, so I think 2 is b and 8 is d.

                                Karen
                                (Now you know how I got through school.)

Date: 29 September 1981 0858-EDT (Tuesday)
From: Mary.Shaw at CMU-10A
Subject: Test-taking skills

I agree with Karen on the answers: a, b, c, d, a, b, c, d. John's
reasons are correct except for #s 2 and 8. Karen is right about 8
(it's the pattern). The reason #2 is b rather than a is that option b
is markedly dissimilar from all the others. (One of the rules of
test-writing is to avoid making the right answer stand out because
it's much longer or shorter than the others, especially if it's
longer because of a qualifying clause, as in b here.)

                                Mary
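The heuristics Hobson, Kolling, and Shaw spell out above are
mechanical enough to program. Here are two of them -- keyword echo
and article agreement -- as a toy scorer; the weights and matching
rules are invented, and the remaining heuristics (plural agreement,
the common element in question 6, the a,b,c,d pattern) would be
further scoring rules in the same mold:

# Two of the test-taking heuristics, made mechanical: (1) prefer an
# option that repeats a content word from the stem, and (2) a stem
# ending in "an" wants a vowel-initial option.

def guess(stem, options):
    stem_words = {w.strip(".,?").lower() for w in stem.split()}
    def score(opt):
        # keyword echo: a stem word of 4+ letters reappears in the option
        s = sum(2 for w in stem_words if len(w) >= 4 and w in opt.lower())
        # article agreement: "...with an" suggests a vowel-initial answer
        if stem.rstrip().endswith(" an") and opt and opt[0].lower() in "aeiou":
            s += 3
        return s
    return max(options, key=score)

print(guess("The purpose of the cluss in furmpaling is to remove",
            ["cluss-prags", "tremalis", "cloughs", "plumots"]))
# -> cluss-prags ("cluss" echoes the stem)
print(guess("The fribbled breg will minter best with an",
            ["derst", "morst", "sortar", "ignu"]))
# -> ignu (the only vowel-initial option)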
------------------------------

Date: Tue, 21 Oct 86 21:07:33 PDT
From: cottrell@nprdc.arpa (Gary Cottrell)
Subject: Reply to Winograd and Flores

                            SEMINAR

    Understanding Dogs and Dognition: A New Foundation for Design

                      Garrison W. Cottrell
                    Department of Dog Science
         Condominium Community College of Southern California

There is a crisis in Dog-Human relations, as has been evidenced by
recent attempts to make dogs more "user-friendly" (see Programming
the User-Friendly Dog, Cottrell 1985a). A new approach has appeared
(Whineandpoop and Flossy, 1986) that claims that previous attempts at
Dog-Human Interfaces have foundered on a basic misunderstanding of
the Dog. The problem has been that we have approached the Dog as if
he were one of us - and he certainly is not. Their perusal of the
philosophies of Holedigger and Mateyourauntie has led them to a new
understanding: A West Coast Understanding. There is no Objective
Reality[1] that we form internal representations of; rather,
organisms are structurally coupled[2] to their environment, the
so-called "seamless web" theory of cognition. Thus the inside/outside
dichotomy that has plagued AI researchers and dogs for years is a
false one[3].

This has led them to a whole new way of understanding how dogs should
be programmed. In the past we have assumed some internal
representation in the dog's head (see Modelling the Intentional
Behavior of the Dog, Cottrell 1984b). In this new view, the reason
dogs are so dense is not that they have impoverished internal
representations, but that they don't have internal representations.
Instead, the dog is structurally coupled to the world - he moves
about embedded in the ooze of the environment, and naturally, it
slows him down. Not only that, but it is the wrong environment - the
human one, leading to continual breakdown[4]. Thus our problem is in
forming a consensual domain with another species. We have to place
ourselves in their domain to hear them - this is termed "listening in
the backyard".

We feel that there is much to be gained from combining their view
with the connectionist approach[5]. The problem is combining the
intensional programming of evolution with extensional programming by
the owner. Connectionist theories of learning combined with
considerations of "listening in the backyard" suggest that if we
simply present the dog with many examples of the desired input-output
behavior within the backyard, we will get the desired result.

____________________

[1] Actually, Californians have known this for years.

[2] Note that this is to be distinguished from the structural
    coupling that produces new dogs from old ones.

[3] Dogs have often followed Mateyourauntie in this, ignoring the
    inside/outside dichotomy. These considerations may eliminate the
    basis for the continence-performance distinction (Hutchins, 1986).

[4] The field of Dog-Machine Interfaces attempts to deal with such
    problems as the poor design of the doorknob - a lever would help
    reduce the inside/outside barrier. Others feel that this research
    is misdirected; the doorknob is designed that way precisely
    because it acts as a species filter, keeping dogs out of
    restaurants and movie theatres.

[5] Their work also suggests applying the theory of speech acts to
    the command interface. Thus, we can classify much more than
    simple Directives. For example, "You've had it now, Jellybean!"
    is a commissive - the speaker is committed to a future course of
    action. The dog will usually respond with an attempt to withdraw
    from the dialogue, but the speaker rejects his withdrawal.
    "You're in the doghouse, Bean" is a declarative - the speaker
    brings about a correspondence between the propositional content
    of this and reality simply by uttering it.

P.S. As usual, troff source (1 page laser printer output) on request to:

gary cottrell
Institute for Cognitive Science, UCSD
cottrell@nprdc (ARPA)
{ucbvax,decvax,akgua,dcdwest}!sdcsvax!sdics!cottrell (USENET)

------------------------------

End of AIList Digest
********************