orth America today). The book is a transcript, not much edited except for explanatory footnotes, of a series of lectures trying to explain how proper names might work. The arguments against the "quality cluster" theories seem pretty conclusive. They include the way we use counterfactuals, that is, talking about an object or a person as if they were different than they actually were (like, what would Babbage have been like if he had lived in an age of VLSI chips? or what would Mayor Curley of Boston have been like if he hadn't been a crook?). These discussions can get pretty far away from reality, and this indicates that the names we use allow us to keep track of who or what we mean without getting confused by the changes in qualities and properties. The properties and qualities are not what provide the "sense" or "essence" of the name. Kripke goes on to suggest that we understand names through a "naming" and a "chain of acquaintances". For example, Napoleon was named at his christening, and various people met him, and they talked to people about him, and this chain of acquaintances kept going even after he was dead. Thus there is a (probably multi-path) chain of conversations and pointings and descriptions that leads back from your understanding of the name "Napoleon" to the christening where he received his name. I am not sure that this is a correct appraisal of the mechanism for understanding names, but it certainly is the best I have heard. Leonard (?) Linsky has recently written a book attacking this and similar views, and indicating that a synthesis of the Russell and Frege theories still has problems but avoids most of the pitfalls of acquaintances. Unfortunately I have not yet read that book. For other works in the area, certainly read Quine's Word and Object and the volume of collected Putnam papers on language. Also, works by Searle and Austin on speech acts are useful for thinking about the clues, both verbal and non-verbal, that allow us to make sense of conversations where not everything is stated explicitly. Enjoy! R Mark Chilenskas chilenskas@cca-vms decvax!cca!rmc ------------------------------ Date: Mon 21 May 84 12:12:05-EDT From: Jan Subject: Seminar - Information Management Systems [Harvard] [Forwarded from the MIT bboard by SAWS@MIT-MC.] Wednesday, May 23 Professor Erik Sandewall from Linkoping University, Sweden will talk at Harvard in the colloquium series. Theory of Information Management Systems 4:00PM Aiken Lecture Hall, Tea in Pierce 213 at 3:30 It is often convenient and natural to view a data base as a network consisting of nodes, arcs from nodes to nodes, and attribute values attached to nodes. This view occurs in artificial intelligence (e.g., semantic networks), data base theory (e.g., entity-relationship models), and office systems (e.g., for representation of the virtual office). Unfortunately, the network view of data bases is usually treated informally, in contrast to the formal treatment that is available for relational data bases. The theory of information management systems attempts to remedy this situation. Formally, a network is viewed as a set of triples (f, x, y), where f is a function symbol, x is a node, and y is a node or an attribute value. Two perspectives on such networks are of interest: 1) algebraic operations on networks allow the definition of cursor-related editing operations, and of line-drawing graphics. 2) by viewing a network as an interpretation of a variety of first-order logic, one can express constraints on the data structures that are allowed there. 
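[Editorial note: to make the triple formulation concrete, here is a minimal Python sketch of the network-as-triples view described above. The node names, function symbols, and the sample integrity constraint are invented for illustration; they are not taken from Sandewall's formalism.]

# A network as a set of triples (f, x, y): f is a function symbol,
# x is a node, and y is a node or an attribute value.
# All names below are hypothetical examples.

network = {
    ("employer", "jan", "harvard"),
    ("office", "jan", "pierce-213"),
    ("city", "harvard", "cambridge"),
}

def value(net, f, x):
    """Return a y such that (f, x, y) is in the network, or None."""
    for (g, u, y) in net:
        if g == f and u == x:
            return y
    return None

def is_functional(net):
    """Sample constraint: each (f, x) pair has at most one value,
    i.e. the arcs really do denote functions."""
    seen = {}
    for (f, x, y) in net:
        if (f, x) in seen and seen[(f, x)] != y:
            return False
        seen[(f, x)] = y
    return True

print(value(network, "office", "jan"))  # pierce-213
print(is_functional(network))           # True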
In particular, both "pure Lisp" data structures and "impure" structures (involving shared sublists and circular structures) can be characterized. Propositions can also be used for specifying derived information as an extension of the interpretation. This leads to a novel way of treating non-monotonic reasoning. The seminar emphasizes mostly the second of these two approaches. Host: Jan Komorowski ------------------------------ Date: 21 May 1984 11:10-EDT From: DISRAEL at BBNG.ARPA Subject: Seminar - Open Systems [Forwarded from the MIT bboard by SASW@MIT-MC.] This Wednesday, at 3:00 Carl Hewitt of the MIT AI LAB will be speaking on "Open Systems". The seminar will be held in the 3rd floor large conference room. Open Systems: the Challenge for Intelligent Systems Continuous growth and evolution, absence of bottlenecks, arm's-length relationships, inconsistency among knowledge bases, decentralized decision making, and the need for negotiation among system parts are interdependent and necessary properties of open systems. As our computer systems evolve and grow, they are more and more taking on the characteristics of open systems. Traditional foundational assumptions in Artificial Intelligence such as the "closed world hypothesis", the "search space hypothesis", and the possibility of consistently axiomatizing the knowledge involved become less and less applicable as the evolution toward open systems continues. Thus open systems pose a considerable challenge in the development of suitable conceptual foundations for intelligent systems. ------------------------------ End of AIList Digest ******************** 24-May-84 23:20:19-PDT,16820;000000000001 Mail-From: LAWS created at 24-May-84 23:18:04 Date: Thu 24 May 1984 21:35-PDT From: AIList Moderator Kenneth Laws Reply-to: AIList@SRI-AI US-Mail: SRI Int., 333 Ravenswood Ave., Menlo Park, CA 94025 Phone: (415) 859-6467 Subject: AIList Digest V2 #63 To: AIList@SRI-AI AIList Digest Friday, 25 May 1984 Volume 2 : Issue 63 Today's Topics: Cognitive Psychology - Dreams, Philosophy - Essence & Identity & Continuity & Recognition ---------------------------------------------------------------------- Date: Mon 21 May 84 10:48:00-PDT From: NETSW.MARK@USC-ECLB.ARPA Subject: cognitive psychology / are dreams written by a committee? Apparently (?) dreams are programmed, scheduled event-sequences, not mere random association. Does anyone have a pointer to a study of dream-programming and scheduling undertaken from the standpoint of computer science? ------------------------------ Date: Mon 21 May 84 11:39:51-PDT From: Ken Laws Subject: Dreams: A Far-Out Suggestion The May issue of Dr. Dobb's Journal contained an article on "Sixth Generation Computers" by Richard Grigonis (of the Children's Television Workshop). I can't tell how serious Mr. Grigonis is about faster-than-light communication and computation in negative time; he documents the physics of these possibilities as though he were both dead serious and well informed. He also discusses the possibility of communicating with computers via brain waves, and it is this material that has spurred the following bit of speculation. There seems to be growing evidence that telepathy works, at least for some people some of the time. The mechanism is not understood, but then neither are the mechanisms for memory, unconscious thought, dreams, and other cognitive phenomena. Mr. 
Grigonis suggests that low-frequency electromagnetic waves may be at work, and provides the following support: Low frequencies are attenuated very slowly, although their energy does spread out in space (or space/time); the attenuation of a 5 Hz signal at 10,000 kilometers is only 5%. A 5 Hz signal of 10^-6 watt per square centimeter at your cranium would generate a field of 10^-24 watt per square centimeter at the far side of the earth; this is well within the detection capabilities of current radio telescopes. Further, alpha waves of 7.8 and 14.1 cycles per second and beta waves of 20.3 cycles per second are capable of constructive interference to establish standing waves throughout the earth. Now suppose that the human brain, or a network of such brains distributed in space (and time), contained sufficient antenna circuitry to pick up "influences" from the global "thought field" in a manner similar to the decoding of synthetic aperture radar signals. Might this not explain ESP, dreams, "racial memory", unconscious insight, and other phenomena? We broadcast to the world the nature of our current concerns, others try to translate this into terms meaningful to their lives, resonances are established, and occasionally we are able to pick up answers to our original concerns. The human species as a single conscious organism! Alas, I don't believe a word of it. -- Ken Laws ------------------------------ Date: Thu, 24 May 1984 02:52 EDT From: MINSKY%MIT-OZ@MIT-MC.ARPA Subject: Essences About essences. Here is a section from a book I am finishing about The Society of Mind. THE SOUL "And we thank Thee that darkness reminds us of light." (T. S. Eliot) My friends keep asking me if a machine could have a soul? And I keep asking them if a soul can learn. I think it is important to understand this retort, in order to recognize that there may be unconscious malice in such questions. The common concept of a soul says that the essence of a human mind lies in some entirely featureless point-like spark of invisible light. I see this as a symptom of the most dire anti-self respect. That image of a nothing, cowering behind a light too bright to see, denies that there is any value or significance in struggle for accomplishment. This sentiment of human worthlessness conceals itself behind that concept of an essence of the self. Here's how it works. We all know how a superficial crust of trash can unexpectedly conceal some precious gift, like treasure buried in the dirt, or ordinary oyster hiding pearl. But minds are just the opposite. We start as ordinary embryonic animals, which then each build those complicated things called minds -- whose merit lies entirely within their own coherency. The brain-cells, raw, of which they're made are, by themselves, as valueless as separate daubs of paint. That's why that soul idea is just as upside-down as seeking beauty in the canvas after scraping off Da Vinci's smears. To seek our essence only misdirects our search for worth -- since that is found, for mind, not in some priceless, compact core, but in its subsequently vast, constructed crust. The very allegation of an essence is degrading to humanity. It cedes no merit to our aspirations to improve, but only to that absence of no substance, which was there all along, but eternally detached from all change of sense and content, divorced both from society of mind and from society of man; in short, from everything we learn. What good can come from such a thought, or lesson we can teach ourselves? 
Why, none at all -- except, perhaps, that it is futile to think that changes don't exist, or that we are already worse or better than we are. --- Marvin Minsky ------------------------------ Date: Wed, 23 May 84 09:49:21 EDT From: Stephen Miklos Subject: Essence of Things? It is not too difficult to come up with a practical problem in which the identity of the greek ship is important. To wit: In year One, the owner of the ship writes a last will and testament, leaving "my ship and all its fittings and appliances" to his nephew. The balance of his estate he leaves to his wife. In Year Two, he commences to refit his ship one board at a time. After a few years he has a pile of old boards which he builds into a second ship. Then he dies. A few hypotheticals: 1. Suppose both ships are in existence at the time of probate. 2. Suppose the old-board ship had been destroyed in a storm. 3. Suppose the new-board ship had been destroyed in a storm. 4. Suppose the original ship had been refitted by replacing the old boards with fiberglass 5. Suppose the original boat had not been refitted, but just taken apart and later reassembled. 6. Suppose the original ship had been taken apart and replaced board by board, but as part of a single project in which the intention was to come up with two boats. 6a. Suppose that this took a while, and that from time to time our Greek testator took the partially-reboarded boat out for a spin on the Mediterranean. In each of these cases, who gets the old-board ship? Who gets the new-board ship? It seems to me that the case for the fallaciousness of the argument for boat y (the new-board boat) seriously suffers in hypo #6 and thereby is compromised for the pure hypothetical. It should not be the case that somebody's intention makes the difference in determining the logical identity of an object, although that is the way the law would handle the problem, if it could descry an intention. Just trying to get more confused, SJM ------------------------------ Date: Wed, 23 May 84 10:47 EDT From: MJackson.Wbst@XEROX.ARPA Subject: Re: Continuity of Identity An interesting "practical" problem of the Greek Ship/Lincoln's Axe type arises in the restoration of old automobiles. Since many former manufacturers are out of business, spare parts stocks may not exist, body pieces may have been one-offs, and for other reasons, restoration often involves the manufacture of "new" parts. Obviously at some point one has a "replica" of a Bugatti Type 35 rather than a "restored" Bugatti Type 35 (and the latter is desirable enough to some people so that they would happily start from a basket full of fragments. . .). What is that point (and how many baskets of fragments can one original Bugatti yield)? In fact, old racing cars are worse. The market value of, say, a 1959 Formula 1 Cooper is significantly enhanced if it was driven by, say, Moss or Brabham, particularly if it was used to win a significant race. But what if it subsequently was crashed and rebuilt? Rebuilt from the frame up? Rebuilt *entirely* but assigned the previous chassis number by the factory (a common practice)? Under what circumstances is one justified as advertising such an object as "ex-Moss?" 
Mark ------------------------------ Date: 18 May 84 18:58:24-PDT (Fri) From: ihnp4!mgnetp!burl!clyde!akgua!mcnc!ncsu!uvacs!edison!jso @ Ucb-Vax Subject: Re: the Greek Ship problem Article-I.D.: edison.219 The resolution of the Greek Ship/Lincoln's Axe problem seems to be that an object retains its identity over a period of time if it has an unbroken time-line as a whole. Most of the cells in your body weren't there when you were born, and most that you had then aren't there now, but aren't you still the same person/entity, though you have far from the same characteristics? John Owens ...!uvacs!edison!jso ------------------------------ Date: Thu 24 May 84 13:00:04-PDT From: Laurence R Brothers Subject: identity over time "to cross again is not to cross". Obviously, people don't generally function with that concept in mind, or nothing would be practically identical to anything else. I forget the statistic that says how long it takes for all the atoms in your body to be replaced by new ones, but, presumably, you are still identifiable as the same person you were x years ago. How about saying that some object is "essentially identical" in context y (where context y consists of a set of properties) to another object if it is both causally linked to the first object, and is the object that fulfills the greatest number of properties in y to the greatest precision? Clearly, this definition does not work all that well in some cases, but it at least has the virtue of conciseness. If two objects are "essentially identical" in the "universal context", then they may as well be named the same in common usage, at least, if not with total accuracy, since they would seem to denote what people would consider "naively" to be the same object. -Laurence ------------------------------ Date: 22 May 84 22:48:39-PDT (Tue) From: decvax!ittvax!wxlvax!rlw @ Ucb-Vax Subject: A restatement of the problem (phil/ai) Article-I.D.: wxlvax.281 It has been my experience that whenever many people misinterpret me, it is due to my unclarity (if that's a word) in making my statement. This appears to be what happened with my original posting on human perception vs computer or robotic perception. Therefore, rather than trying to reply to all the messages that appeared on the net and in my mailbox, let me try a new, longer posting that will hopefully clarify the question that I have. "Let us consider some cases of misperception... Take for example a "mild" commonplace case of misperception. Suppose that I see a certain object as having a smooth surface, and I proceed to walk toward it. As I approach it, I come to realize visually (and it is, in fact, true) that its surface is actually pitted and rough rather than smooth. A more "severe" case of misperception is the following. Suppose that, while touring through the grounds of a Hollywood movie studio, I approach what, at first, I take to be a tree. As I come near to it, I suddenly realize that what I have been approaching is, in fact, not a tree at all but a cleverly constructed stage prop. In each case I have a perceptual experience of an object at the end of which I "go back" on an earlier attribution. Of present significance is the fact that in each case, although I do "go back" on an earlier attribution, I continually *experience* it "as" one and the same. For, I would not have experienced myself now as having made a perceptual *mistake about an object* unless I experience the object now as being THE VERY SAME object I experienced earlier." [This passage is from Dr. 
Miller's recent book: Miller, Izchak. "Husserl: Perception and Temporal Awareness" MIT Press, c. 1984. It is quoted from page 64, by permission of the author.] So, let me re-pose my original question: As I understand it, issues of perception in AI today are taken to be issues of feature-recognition. But since no set of features (including spatial and temporal ones) can ever possibly uniquely identify an object across time, it seems to me (us) that this approach is a priori doomed to failure. Feature recognition cannot be the way to accurately simulate/reproduce human perception. Now, since I (we) are novices in this field, I want to open the question up to those more knowledgeable. Why are AI/perception people barking up the wrong tree? Or, are they? (One more note: PLEASE remember to put "For Alan" in the headers of mail messages you send me. ITT Corp is kind enough to allow me the use of my father's account, but he doesn't need to sift through all my mail.) --Alan Wexelblat (for himself and Izchak Miller) (Currently appearing at: ..decvax!ittvax!wxlvax!rlw) ------------------------------ Date: 24 May 84 18:58-PDT From: Laws@SRI-AI Subject: Continuity Other examples related to the Greek Ship difficulty: the continuity of the Olympic flame (or rights to the Olympic name), possession of the world heavyweight title if the champ retires and then "unretires", title to property as affected by changes in either the property or the owner's status, Papal succession and the right of ordained priests to ordain others, personal identity after organ transplants, ... In all these cases, the philosophical principles seem less important than having some convention for resolving disputes. Often market forces are at work: the seller may make any claim that isn't outrageously fraudulent, and the buyer pays a price commensurate with his belief that the claims are valid, will hold up in court, or will be believed by his own friends and customers. On the subject of perception and recognition: we have computational methods of recognizing objects in images despite changes in background, brightness or color, texture, perspective, motion, scale changes, occlusion or damage, imaging technique (e.g., visual vs. infrared or radar signatures), and other types of variation. We don't yet have a single computer program that can do all of the above, but most of the matching problems have been solved by one program or another. Some problems can't be solved, of course: is that red Volkswagen the same one that I saw yesterday, or has another one been parked in the same place? The key to image analysis is often not in recognition of feature clusters but in understanding how features change across space or time. The patterns of change are themselves features that must be recognized, and that can't be done unless you can determine the image areas over which to compute the gradients. You can't recognize the whole from the parts because you can't find the parts unless you know the configuration of the whole. One of the most powerful techniques for such problems is hypothesize-and-test. Find anything in the scene that can suggest part of the analysis, leap to a conclusion, and see if you can make the answer fit the scene. I suspect that this explains the object constancy that Alan is worried about. We are so loath to give up a previously accepted parse that we will tolerate extreme deviations from our expectations before abandoning the interpretation and searching for another. 
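[Editorial aside: a toy Python sketch of the hypothesize-and-test loop just described. The "scene", the feature models, and the tolerance threshold are all invented for illustration; no particular vision system is implied.]

# Hypothesize-and-test: pick a suggestive cue, leap to a model
# hypothesis, then score how well the rest of the scene fits it,
# keeping the hypothesis as long as the fit stays tolerable.

scene = {"wheels": 4, "windows": 6, "wings": 0}

models = {
    "car":      {"wheels": 4, "windows": 6, "wings": 0},
    "airplane": {"wheels": 3, "windows": 20, "wings": 2},
}

def fit(hypothesis, observed):
    """Fraction of the hypothesized model's features that match."""
    feats = models[hypothesis]
    hits = sum(1 for f, v in feats.items() if observed.get(f) == v)
    return hits / len(feats)

def interpret(observed, tolerance=0.5):
    # Leap to a conclusion from a single cue...
    hypothesis = "airplane" if observed.get("wings") else "car"
    # ...then test it against the whole scene, abandoning it only
    # if the fit drops below tolerance (object constancy).
    if fit(hypothesis, observed) >= tolerance:
        return hypothesis
    # Otherwise reparse: search the remaining models for a better fit.
    return max(models, key=lambda m: fit(m, observed))

print(interpret(scene))  # car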
Even when forced to reparse, we have great difficulty in combining the scene entities in groupings other than those we first locked onto (as in Cole's Law and "how to wreck a nice beach"); this suggests that the prominent groupings form symbolic proto-objects that remain constant even though we reevaluate the details, or "features", within the context of the groupings. -- Ken Laws ------------------------------ End of AIList Digest ******************** 25-May-84 09:46:33-PDT,14032;000000000000 Mail-From: LAWS created at 25-May-84 09:43:23 Date: Fri 25 May 1984 09:38-PDT From: AIList Moderator Kenneth Laws Reply-to: AIList@SRI-AI US-Mail: SRI Int., 333 Ravenswood Ave., Menlo Park, CA 94025 Phone: (415) 859-6467 Subject: AIList Digest V2 #64 To: AIList@SRI-AI AIList Digest Friday, 25 May 1984 Volume 2 : Issue 64 Today's Topics: Courses - Expert Systems Syllabus Request, Games - Core War Sources, Logic Programming - Boyer-Moore Prover, AI Books - AI and Business, Linguistics - Use of "and", Scientific Method - Hardware Prototyping ---------------------------------------------------------------------- Date: 23 May 1984 1235-EDT From: CASHMAN at DEC-MARLBORO Subject: Expert systems course Has anyone developed an expert systems course using the book "Building Expert Systems" (Hayes-Roth & Lenat) as the basic text? If so, do you have a syllabus? -- Paul Cashman (Cashman@DEC-MARLBORO) ------------------------------ Date: Thursday, 24 May 1984 17:17:49 EDT From: Michael.Mauldin@cmu-cs-cad.arpa Subject: Core War... Some people are having problems FTPing the core war source... If you prefer, just send me a note and I'll mail you the source over the net. It is written in C, runs on Unix (4.1 immediately, or 4.2 with 5 minutes of hacking), and is mailed in one file of 42K characters. Michael Mauldin (Fuzzy) Department of Computer Science Carnegie-Mellon University Pittsburgh, PA 15213 (412) 578-3065, mauldin@cmu-cs-a. ------------------------------ Date: 24-May-84 12:48:20-PDT From: jbn@FORD-WDL1.ARPA Subject: Re: Boyer-Moore prover on UNIX systems [Forwarded from the Stanford bboard by Laws@SRI-AI.] The Boyer-Moore prover is now available for UNIX systems. While I did the port, Boyer and Moore now have my code and have integrated it into their generic version of the prover. They are handling distribution. The prover is now available for the Symbolics 3600, TOPS-20 systems, Multics, and UNIX for both VAXen and SUNs. There is a single version with conditional compilation, it resides on UTEXAS-20, and can be obtained via FTP. Send requests to BOYER@UTEXAS-20 or MOORE@UTEXAS-20, not me, please. The minimum machine for the prover is a 2MB UNIX system with Franz Lisp 38.39 or later, about 20-80MB of disk, and plenty of available CPU time. If you want to know more about the prover, read Boyer and Moore's ``A Computational Logic'' (1979, Academic Press, ISBN 0-12-122950-5). Using the prover requires a thorough understanding of this work. Please pass this on to all who got the last notice, especially bulletin boards and news systems. Thanks. Nagle (@SCORE) ------------------------------ Date: 23 May 1984 13:50:30-PDT (Wednesday) From: Adrian Walker Subject: AI & Business The summary on AI for Business is most interesting. You might like to list also the book: Artificial Intelligence Applications for Business Walter Reitman, Editor Ablex Publishing Corporation, Norwood, New Jersey, 1984 It's in the bookstores now. 
Adrian Walker IBM SJ Research k51/282, tieline 276-6999, outside 408-256-6999 vnet: sjrlvm1(adrian) csnet: Adrian@ibm-sj arpanet: Adrian%ibm-sj@csnet-relay ------------------------------ Date: 18 May 84 9:34:56-PDT (Fri) From: pur-ee!CS-Mordred!Pucc-H.Pucc-I.ags @ Ucb-Vax Subject: Re: Use of "and" - (nf) Article-I.D.: pucc-i.281 We are blinded by everyday usage into putting an interpretation on "people in Indiana and Ohio" that really isn't there. That phrase should logically refer to 1. The PEOPLE of Indiana, and 2. The STATE of Ohio (but not the people). If someone queries a program about "people in Indiana and Ohio", a reasonable response by the program might be to ask, "Do you mean people in Indiana and IN Ohio?" which may lead eventually to the result "There are no people in Indiana and in Ohio." Dave Seaman ..!pur-ee!pucc-i:ags ------------------------------ Date: 20 May 84 8:23:00-PDT (Sun) From: ihnp4!inuxc!iuvax!brennan @ Ucb-Vax Subject: Re: Use of "and" Article-I.D.: iuvax.3600002 Come on, Dave, I think you missed the point. No person would have any trouble at all understanding "people in Indiana and Ohio", so why should a natural language parser have trouble with it??? JD Brennan ...!ihnp4!inuxc!iuvax!brennan (USENET) Brennan@Indiana (CSNET) Brennan.Indiana@CSnet-Relay (ARPA) ------------------------------ Date: 21 May 84 12:54:15-PDT (Mon) From: harpo!ulysses!allegra!dep @ Ucb-Vax Subject: Re: Use of "and" Article-I.D.: allegra.2484 Why does everyone assume that there is no one who is both in Indiana and Ohio? The border is rather long and it seems perfectly possible that from time to time there are people with one foot in Indiana and the other in Ohio - or for that matter, undoubtedly someone sleeps with his head in I and feet in O (or vice versa). Let's hear it for the stately ambiguous! ------------------------------ Date: Sun 20 May 84 18:56:36-PDT From: John B. Nagle Subject: Quote [Forwarded from the Stanford bboard by Laws@SRI-AI.] ``... the normal mode of operation in computer science has been abandoned in the realm of artificial intelligence. The tendency has been to propose solutions without perfecting them.'' Harold Stone, writing about the NON-VON machines being proposed at Columbia, from Mosaic, the magazine of the National Science Foundation, vol. 15, #1, p. 24. ------------------------------ Date: Tue 22 May 84 18:43:35-PDT From: John B. Nagle Subject: Re: Quote, background of There have been some requests for more context on the quote I posted. The issue is that the Columbia people working on non-von Neumann architectures are now proposing to build NON-VON 4, their fourth machine. However, NON-VONs 1 to 3 are either unfinished or were never started, according to the writer quoted, and the writer doesn't think much of this. My point in posting this is that it is significant that it appeared in the National Science Foundation's publication. The people with the money may be losing patience. ------------------------------ Date: Mon 21 May 84 22:06:44-PDT From: Tom Dietterich Subject: Re: Quote [Forwarded from the Stanford bboard by Laws@SRI-AI.] From Nagle (quoting Harold Stone) ``... the normal mode of operation in computer science has been abandoned in the realm of artificial intelligence. The tendency has been to propose solutions without perfecting them.'' Which parse of this is correct? Has the tendency to "propose solutions without perfecting them" held in the remainder of computer science, or in artificial intelligence? Either way I think it is ridiculous. 
Computer Science is so young that there are very few things that we have "perfected". We do understand alpha-beta search, LALR(1) parser generators, and a few other things. But we haven't come near to perfecting a theory of computation, or a theory of the design of programming languages, or a theory of heuristics. --Tom ------------------------------ Date: Wed 23 May 84 00:16:43-EDT From: David Shaw Subject: Re: FYI, again [Forwarded from the Stanford bboard by Laws@SRI-AI.] Tom, I have just received a copy of your reaction to Harold Stone's criticism of AI, and in particular, of the NON-VON project. In answer to your question, I'm certain, based on previous interactions with Harold, that the correct parsing of his statement is captured by the contention that AI "proposes solutions without perfecting them", while "the normal mode of operation in computer science" perfects first, then proposes (and implements). I share your feelings (and those expressed by several other AI researchers who have written to me in this regard) about his comments, and would in fact make an even stronger claim: that the "least-understood" areas in AI, and indeed in many other areas of experimental computer science research, often turn out in the long run to be the most important in terms of ultimate practical import. I do not mean to imply that concrete results in such areas as the theories of heuristic search or resolution theorem-proving are not important, or should not be studied by those interested in obtaining results of practical value. Still, it is my guess that, for example, empirical findings based on real attempts to implement "expert systems", while lacking in elegance and mathematical parsimony, may well prove to have an equally important long-term influence on the field. This is certainly not true in many fields of computer science research. There are a number of areas in which "there's nothing so practical as a good theory". In AI, however, and especially in the construction of non-von Neumann machines for AI and other symbolic applications, the single-minded pursuit of generality and rigor, to the exclusion of (often imperfectly directed) experimentation, would in many cases seem to be a prescription for failure. Those of us who experiment in silicon as well as instructions have recently been the targets of special criticism. Why, our critics ask, do we test our ideas IN HARDWARE before we know that we have found the optimal solutions for all the problems we claim to address? Doesn't such behavior demonstrate a lack of knowledge of the published literature of computer architecture? Aren't we admitting defeat when we first build one machine, then construct a different one based on what we have learned in building the first? My answer to these criticisms is based on the observation that, in the age of VLSI circuits, computer-aided logic design, programmable gate arrays, and quick-turnaround system implementation, the experimental implementation of hardware has taken on many of the salient qualities of the experimental implementation of software. Like their counterparts in software-oriented research, contemporary computer architects often implement hardware in the course of their research, and not only at the point of its culmination. Such experimentation helps to explicate "fuzzy" ideas, to prune the tree of possible architectural solutions to given problems, and to generate actual (as opposed to asymptotic or approximate) data on silicon area and execution time expenditures. 
Such experimentation would not be nearly so critical if it were now possible to reliably predict the detailed operation of a complex system constructed using a large number of custom-designed VLSI circuits. Unfortunately, it isn't. In the real world, efforts to advance the state of the art in new computer architectures without engaging in the implementation of experimental prototypes presently seem to be as futile as efforts to advance our understanding of systems software without ever implementing a compiler or operating system. In short, it is my feeling that "dry-dock" studies of "new generation" computer architectures may now be of limited utility at best, and at worst, seriously misleading, in the absence of actual experimentation. Here, the danger of inadequate study in the abstract seems to be overshadowed by the danger of inadequate "reality-testing", which often leads to the rigorous and definitive solution of practically irrelevant problems. It's my feeling that Stone's comments reflect a phenomenon that Kuhn has described in "The Structure of Scientific Revolutions" as characteristic of a "shift of paradigm" in scientific research. I still remember my reaction as a graduate student at Stanford when my advisor, Terry Winograd, told our research group that, in many cases, an AI researcher writes a program not to study the results of its execution, but rather for the insight gained in the course of its implementation. A mathematician by training, I was distressed by this departure from my model of mathematical (proof of theorem) and scientific (conjecture and refutation) research. In time, however, I came to believe that, if I really wanted to make new science in my chosen field, I might be forced to consider alternative models for the process of scientific exploration. I am now reconciled to this shift of paradigm. Like most paradigm shifts, this one will probably encounter considerable resistance among those whose scientific careers have been grounded in a different set of rules. Like most paradigm shifts, its critics are likely to include those who, like Harold Stone, have made the most significant contributions within the constraints of earlier paradigms. Like most paradigm shifts, however, its value will ultimately be assessed not in terms of its popularity among such scientists, but rather in terms of its contribution to the advancement of our understanding of the area to which it is applied. Personally, I find considerable merit in this new research paradigm, and plan to continue to devote a large share of my efforts to the experimental development and evaluation of architectures for AI and other symbolic applications, in spite of the negative reaction such efforts are now encountering in certain quarters. I hope that my colleagues will not be dissuaded from engaging in similar research activities by what I regard as the transient effects of a fundamental paradigm shift. 
David ------------------------------ End of AIList Digest ******************** 27-May-84 21:30:13-PDT,15114;000000000000 Mail-From: LAWS created at 27-May-84 21:28:54 Date: Sun 27 May 1984 21:22-PDT From: AIList Moderator Kenneth Laws Reply-to: AIList@SRI-AI US-Mail: SRI Int., 333 Ravenswood Ave., Menlo Park, CA 94025 Phone: (415) 859-6467 Subject: AIList Digest V2 #65 To: AIList@SRI-AI AIList Digest Monday, 28 May 1984 Volume 2 : Issue 65 Today's Topics: AI Tools - KS300 & MicroPROLOG and LISP, Expert Systems - Checking of NMOS Cells, AI Courses - Expert Systems, Cognition - Dreams & ESP, Seminars - Explanation-Based Learning & Analogy in Legal Reasoning & Nonmonotonicity in Information Systems ---------------------------------------------------------------------- Date: 23 May 84 12:42:27-PDT (Wed) From: hplabs!hao!seismo!cmcl2!philabs!linus!vaxine!chb @ Ucb-Vax Subject: KS300 Question Article-I.D.: vaxine.266 Does anybody know who owns the rights to the KS300 expert systems tool? KS300 is an EMYCIN lookalike, and I think it runs under INTERLISP. Any help would be appreciated. ----------------------------------------------------------- "It's not what you look like when you're doin' what you're doin', it's what you're doin' when you're doin' what you look like what you're doin'" ---125th St. Watts Band Charlie Berg ...allegra!vaxine!chb ------------------------------ Date: 25 May 84 12:28:22-PDT (Fri) From: hplabs!hao!seismo!cmcl2!floyd!whuxle!spuxll!abnjh!cbspt002 @ Ucb-Vax Subject: MicroPROLOG and LISP for the Rainbow? Article-I.D.: abnjh.647 Can anybody point me toward microPROLOG and LISPs for the DEC Rainbow 100. Either CP/M86 or MS-DOS 2.0, 256K, floppies. Thanks in advance. M. Kenig ATT-IS, S. Plainfield NJ uucp: ...!abnjh!cbspt002 ------------------------------ Date: 25 May 1984 1438-PDT (Friday) From: cliff%ucbic@Berkeley (Cliff Lob) Subject: request for info This is a request to hear about any work that is going on related to my master's research in expert systems: RULE BASE ERROR CHECKING OF NMOS CELLS The idea is to build an expert system that embodies the knowledge of expert VLSI circuit designers to criticize NMOS circuit design at the cell (<15 transistors) level. It is not to be a simulator, but rather it is to be used by designers to have their cell critiqued by an experienced expert. The program will be used to try to catch the subtle bugs (ie non-logic error, not shown by standard simulation) that occur in the cell design process. I will be writing the code in PSL and a KRL Frame type language. Is there any work of a similar nature going on? Cliff Lob cliff@ucbic.BERKELEY ------------------------------ Date: Fri 25 May 84 13:33:49-MDT From: Robert R. Kessler Subject: re: Expert systems course (Vol 2, #64) I taught a course this spring quarter on "Knowledge Engineering" using the Hayes-Roth text. Since we only had a quarter, I decided to focus on writing expert systems as opposed to developing expert systems tools. We had available Hewlett Packard's Heuristic Programming and Representation Language (HPRL) to use to build some expert systems. A general outline follows: First third: Covered the first 2 to 3 chapters of the text. This gave the students enough exposure to general expert systems concepts. Second third: In depth exposure of HPRL. Studied knowledge representation using their Frame structure and both forward and backward chaining rules. Final third: Discussed the Oak Ridge Natl Lab problem covered in Chapter 10 of the text. 
We then went through each of the systems described (Chapters 6 and 9) to understand their features and misfeatures. Finally, we contrasted how we would have solved the problem using HPRL. Students had various assignments during the first half of the quarter to learn about frames, and both types of rules. They then began (and are right now) working on a final expert system of their own choosing (topics have varied from a mechanics helper, plant doctor, and first aid expert to a simulator of the SAIL game, among others). All in all, the text was very good, and is so far the best I've seen. Bob. ------------------------------ Date: Sat, 26 May 84 17:06:57 PDT From: Philip Kahn RE: Subject: cognitive psychology / are dreams written by a committee? FLAME ON Where can you find any evidence that "dreams are programmed, scheduled event-sequences, not mere random association?" I have never found any author that espoused this viewpoint. Perchance, I think that viewpoint imposes far too much conscious behavior onto unconscious phenomena. If they are indeed run by a "committee", what happens during a proxy fight? FLAME OFF ------------------------------ Date: Fri 25 May 84 10:13:51-PDT From: NETSW.MARK@USC-ECLB.ARPA Subject: epiphenomenon conjecture conjecture: 'consciousness', 'essence' etc. are epiphenomena at the level of the 'integrative function' which facilitates the interaction between members of the 'community' of brain-subsystems. Many a-i systems have been developed which model particular putative or likely brain-subsystems; what is the status of efforts allowing the integration of such systems in an attempt to model the consciousness as a 'community of a-i systems'? ------------------------------ Date: Fri, 25 May 84 10:09:44 PDT From: Scott Turner Subject: Dreams...Far Out Did the astronauts on the moon suffer any problems with dreams, etc.? Without figuring the attenuation, it seems like that might be far enough away to cause problems with reception... Since I don't recall any such effects, perhaps we can assume that mankind doesn't have any such carrier wave. Makes a good base for speculative fiction, though. Interstellar travel would have to be done in ships large enough to carry a critical mass of humans. Perhaps insane people are merely unable to pick up the carrier wave, and so on. -- Scott ------------------------------ Date: Sun 27 May 84 11:44:43-PDT From: Joe Karnicky Reply-to: ZZZ.V5@SU-SCORE.ARPA Subject: Re: existence of telepathy I disagree strongly with Ken's assertion that "There seems to be growing evidence that telepathy works, at least for some people some of the time." (May 21 AIList). It seems to me that the evidence which exists now is the same as has existed for possibly 100,000 years, namely anecdotes and poorly controlled experiments. I recommend reading the book "Science: Good, Bad, and Bogus" by Martin Gardner, or any issue of "The Skeptical Observer". What do you think? Joe Karnicky ------------------------------ Date: 23 Apr 84 10:51:01 EST From: DSMITH@RUTGERS.ARPA Subject: Seminar - Explanation-Based Learning [This and the following Rutgers seminar notices were delayed because I have not had access to the Rutgers bboard for several weeks. This seems a good time to remind readers that AIList carries such abstracts not to drum up attendance, but to inform those who cannot attend. I have been asked several times for help in contacting speakers, evidence that the seminar notices do prompt professional interchanges. -- KIL] Department of Computer Science COLLOQUIUM SPEAKER: Prof. 
Gerald DeJong University of Illinois TITLE: EXPLANATION BASED LEARNING Machine Learning is one of the most important current areas of Artificial Intelligence. With the trend away from "weak methods" and toward a more knowledge-intensive approach to intelligence, the lack of knowledge in an Artificial Intelligence system becomes one of the most serious limitations. This talk advances a technique called explanation based learning. It is a method of learning from observations. Basically, it involves endowing a system with sufficient knowledge so that intelligent planning behavior of others can be recognized. Once recognized, these observed plans are generalized as far as possible while preserving the underlying explanation of their success. The approach supports one-trial learning. We are applying the approach to three diverse areas: Natural Language processing, robot task planning, and proof of propositional calculus theorems. The approach holds promise for solving the knowledge collection bottleneck in the construction of Expert Systems. DATE: April 24 TIME: 2:50 pm PLACE: Hill 705 Coffee at 2:30 Department of Computer Science COLLOQUIUM SPEAKER: Rishiyur Nikhil University of Pennsylvania TITLE: FUNCTIONAL PROGRAMMING LANGUAGES AND DATABASES ABSTRACT Databases and Programming Languages have traditionally been "separate" entities, and their interface (via subroutine libraries, preprocessors, etc.) is generally cumbersome and error-prone. We argue that a functional programming language, together with a data model called the "Functional Data Model", can provide an elegant and simple integrated database programming environment. Not only does the Functional Data Model provide a richer model for new database systems, but it is also easy to implement atop existing relational and network databases. A "combinator"-style implementation technique is particularly suited to implementing a functional language in a database environment. Functional database languages also admit a rich type structure, based on that of the programming language ML. While having the advantages of strong static type-checking, and allowing the definition of user-views of the database, it is unobtrusive enough to permit an interactive, incremental, Lisp-like programming style. We shall illustrate these ideas with examples from the language FQL, where they have been prototyped. DATE: Thursday, April 26, 1984 TIME: 2:50 p.m. PLACE: Room 705 - Hill Center Coffee at 2:30 ------------------------------ Date: 3 May 84 16:21:34 EDT From: Michael Sims Subject: Seminar - Analogy in Legal Reasoning [Forwarded from the Rutgers bboard by Laws@SRI-AI.] machine learning brown bag seminar Title: Analogy with Purpose in Legal Reasoning from Precedents Speaker: Smadar Kedar-Cabelli Date: Wednesday, May 9, 1984, 12:00-1:30 Location: Hill Center, Room 423 (note new location) One open problem in current artificial intelligence (AI) models of learning and reasoning by analogy is: which aspects of the analogous situations are relevant to the analogy, and which are irrelevant? It is currently recognized that analogy involves mapping some underlying causal structure between situations [Winston, Gentner, Burstein, Carbonell]. However, most current models of analogy provide the system with exactly the relevant structure, tailor-made to each analogy to be performed. As AI systems become more complex, we will have to provide them with the capability of automatically focusing on the relevant aspects of situations when reasoning analogically. 
These will have to be sifted from the large amount of information used to represent complex, real-world situations. In order to study these general issues, I am examining a particular case study of learning and reasoning by analogy: legal reasoning from precedents. This is studied within the TAXMAN II project, which is investigating legal reasoning using AI techniques [McCarty, Sridharan, Nagel]. In this talk, I will discuss the problem and a proposed solution. I am examining legal reasoning from precedents within the context of current AI models of analogy. I plan to add a focusing capability. Current work on goal-directed learning [Mitchell, Keller] and explanation-based learning [DeJong] applies here: the explanation of how the analogous precedent case satisfies the goal of the legal argument helps to automatically focus the reasoning on what is relevant. Intuitively, if your purpose is to argue that a certain stock distribution is taxable by analogy to a precedent case, you will know that aspects of the cases having to do with the change in the economic position of the defendants are relevant for the purpose of this analogy, while aspects of the case such as the size of paper on which the stocks were printed, or the defendants' hair color, are irrelevant for that purpose. This knowledge of purpose, and the ability to use it to focus on relevant features, are missing from most current AI models of analogy. ------------------------------ Date: 15 May 84 11:13:50 EDT From: BORGIDA@RUTGERS.ARPA Subject: Seminar - Nonmonotonicity in Information Systems [Forwarded from the Rutgers bboard by Laws@SRI-AI.] III Seminar by Alex Borgida, Wed. 2:30 pm/Hill 423 The problem of Exceptional Situations in Information Systems -- An overview We begin by illustrating the wide range of exceptional situations which can arise in the context of Information Systems (ISs). Based on this evidence, we argue for 1) a methodology of software design which abstracts exceptional/special cases by considering normal cases first and introducing special cases as annotations in successive phases of refinement, and 2) the need for ACCOMMODATING AT RUN TIME exceptional situations not anticipated during design. We then present some Programming Language features which we believe support the above goals, and hence facilitate the design of more flexible ISs. We conclude by briefly describing two research issues in Artificial Intelligence which arise out of this work: a) the problem of logical reasoning in a knowledge base of formulas where exceptions "contradict" general rules, and b) the issue of suggesting improvements to the design of an IS based on the exceptions to it which have been encountered. 
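[Editorial aside: a minimal Python sketch of the kind of rule base point (a) alludes to, in which a more specific exception overrides ("contradicts") a general rule, so adding an exception retracts a conclusion drawn earlier -- the non-monotonic behavior the abstract mentions. The example categories and attribute are invented for illustration.]

# General rules and exceptions; the most specific applicable rule wins.
general_rules = {"employee": {"works_onsite": True}}
exceptions = {"field_rep": {"works_onsite": False}}

def conclude(categories, attribute):
    """categories are listed from most general to most specific;
    the last applicable rule (the most specific one) wins."""
    result = None
    for category in categories:
        for rule_set in (general_rules, exceptions):
            if attribute in rule_set.get(category, {}):
                result = rule_set[category][attribute]
    return result

print(conclude(["employee"], "works_onsite"))               # True
print(conclude(["employee", "field_rep"], "works_onsite"))  # False (retracted)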
------------------------------ End of AIList Digest ******************** 29-May-84 10:24:26-PDT,16580;000000000000 Mail-From: LAWS created at 29-May-84 10:22:41 Date: Tue 29 May 1984 10:13-PDT From: AIList Moderator Kenneth Laws Reply-to: AIList@SRI-AI US-Mail: SRI Int., 333 Ravenswood Ave., Menlo Park, CA 94025 Phone: (415) 859-6467 Subject: AIList Digest V2 #66 To: AIList@SRI-AI AIList Digest Tuesday, 29 May 1984 Volume 2 : Issue 66 Today's Topics: AI Courses - Expert Systems, Expert Systems - KS300 Response, Linguistics - Use of "and", Perception - Identification & Misperception, Philosophy - Identity over Time & Essence, Seminar - Using PROLOG to Access Databases ---------------------------------------------------------------------- Date: Tue 29 May 84 08:59:00-CDT From: Charles Petrie Subject: Expert Systems Course Gordon Novak at UT (UTEXAS-20) teaches Expert Systems based on "Building Expert Systems". The class project is building a system with Emycin. For details on the syllabus, please contact Dr. Novak. I took the course and found the "hands-on" experience very helpful as well as Dr. Novak's comments and anecdotes about the other system building tools. Charles Petrie ------------------------------ Date: Mon 28 May 84 22:42:41-PDT From: Tom Dietterich Subject: Re: KS300 Inquiry KS300 is a product of Teknowledge, Inc., Palo Alto, CA ------------------------------ Date: 23 May 84 17:31:36-PDT (Wed) From: hplabs!hao!seismo!cmcl2!philabs!sbcs!debray @ Ucb-Vax Subject: Re: Use of "and" Article-I.D.: sbcs.640 > No person would have any trouble at all understanding "people > in Indiana and Ohio", so why should a natural language parser > have trouble with it??? The problem is that the English word "and" is used in many different ways, e.g.: 1) "The people in Indiana and Ohio" -- refers to the union of the set of people in Indiana, and the set of people in Ohio. Could conceivably be rewritten as "the people in Indiana and the people in Ohio". The arguments to "and" can be reordered, i.e. it refers to the same set as "the people in Ohio and Indiana". 2) "The house on 55th Street and 7th Avenue" -- refers to the *intersection* of the set of houses on 55th street and the set of houses on 7th Avenue (hopefully, a singleton set!). NOT the same as "the house on 55th street and the house on 7th Avenue". The arguments to "and" *CAN* be reordered, however, i.e. one could as well say, "the house on 7th Ave. and 55th Street". 3) "You can log on to the computer and post an article to the net" -- refers to a temporal order of events: login, THEN post to the net. Again, not the same as "you can log on to the computer and you can post an article to the net". Unlike (2) above, the meaning changes if the arguments to "and" are reordered. 4) "John aced Physics and Math" -- refers to logical conjunction. Differs from (2) in that it can also be rewritten as "John aced Physics and John aced Math". &c. People know how to parse these different uses of "and" correctly due to a wealth of semantic knowledge. For example, knowledge about computers (that articles cannot be posted to the net without logging onto a computer) enables us to determine that the "and" in (3) above refers to a temporal ordering of events. Without such semantic information, your English parser'll probably get into trouble. 
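[Editorial aside: a toy Python sketch of what "using semantic knowledge" might look like for the first two readings listed above. The tiny lexicon and the single rule are invented for illustration and fall far short of a real parser.]

# Disambiguating "X in/on A and B": if A and B are both regions,
# read the conjunction as a union of sets (sense 1); if both are
# streets, read it as an intersection, i.e. a corner (sense 2).

lexicon = {
    "Indiana": "region", "Ohio": "region",
    "55th Street": "street", "7th Avenue": "street",
}

def read_and(a, b):
    ta, tb = lexicon.get(a), lexicon.get(b)
    if ta == tb == "region":
        return "union"          # people in Indiana plus people in Ohio
    if ta == tb == "street":
        return "intersection"   # the house at the corner of both streets
    return "ambiguous"          # no semantic knowledge to decide

print(read_and("Indiana", "Ohio"))            # union
print(read_and("55th Street", "7th Avenue"))  # intersection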
Saumya Debray, SUNY at Stony Brook uucp: {cbosgd, decvax, ihnp4, mcvax, cmcl2}!philabs \ {amd70, akgua, decwrl, utzoo}!allegra > !sbcs!debray {teklabs, hp-pcd, metheus}!ogcvax / CSNet: debray@suny-sbcs@CSNet-Relay ------------------------------ Date: Fri 25 May 84 12:10:32-CDT From: Charles Petrie Subject: Object identification The AI approach certainly does not seem to be hopeless. As someone else mentioned, the boat and ax problems are philosophical ones. They fall a bit out of our normal (non-philosophical) area of object recognition: these are recognition problems for ordinary people. The point we should get from them is that there may not be an objective single algorithm that completely matches our intuition about pattern recognition in all cases. In fact, these problems may show such to be impossible since there is no intuitive consensus in these cases. The AI approach aspires to something more humble - finding techniques that work on particular objects enough of the time so as to be useful. Representing objects as feature, or attribute, sets does not seem hopeless just because objects' features change over time. Presumably, we can get a program to handle that problem the same way that people do. We seem to conclude that an object is the same if it has not changed too much in some sense. Given that the values of the attributes of an object change, we recognize it as the same object if, since the last observation, either the values have not changed very much, or most values have not changed, or if certain high priority values haven't changed, or some combination of the first three. To some extent, object recognition is subjective in that it depends on the changes since the last observation. When we come home after 20 years, we are likely to remark that the town is completely different. But what makes it the same town, so that we can talk about its differences, are certain high-importance attributes that have not changed, such as its location and the major street layout. If we can discover sufficient heuristics for how to handle this kind of change, then we succeed. Since people already do it, even if it involves additional large amounts of contextual information, feature recognition is obviously possible. Charles Petrie ------------------------------ Date: 23 May 84 11:18:54-PDT (Wed) From: ihnp4!ihuxr!lew @ Ucb-Vax Subject: Re: misperception Article-I.D.: ihuxr.1096 Alan Wexelblat gave the following example of misperception: ------------------- A more "severe" case of misperception is the following. Suppose that, while touring through the grounds of a Hollywood movie studio, I approach what, at first, I take to be a tree. As I come near to it, I suddenly realize that what I have been approaching is, in fact, not a tree at all but a cleverly constructed stage prop. ------------------- This reminds me strongly of the Chapter, "Knock on Wood (Part two)", of TROUT FISHING IN AMERICA. Here is an excerpt: I left the place and walked down to the different street corner. How beautiful the field looked and the creek that came pouring down in a waterfall off the hill. But as I got closer to the creek I could see that something was wrong. The creek did not act right. There was a strangeness to it. There was a thing about its motion that was wrong. Finally I got close enough to see what the trouble was. The waterfall was just a flight of white wooden stairs leading up to a house in the trees. 
I stood there for a long time, looking up and looking down, following the stairs with my eyes, having trouble believing. Then I knocked on my creek and heard the sound of wood. TROUT FISHING IN AMERICA abounds with striking metaphors, similes, and other forms of imagery. I had never considered these from the point of view of the science of perception, but now that I do so, I think they provide some interesting examples for contemplation. The first chapter, "The Cover for Trout Fishing in America", provides a very simple but interesting perceptual shift. "The Hunchback Trout" provides an extended metaphor based on a simple perceptual similarity. Anyway, it's a great book. Lew Mammel, Jr. ihnp4!ihuxr!lew ------------------------------ Date: 24 May 84 11:35:55-PDT (Thu) From: hplabs!hao!seismo!rochester!ritcv!ccivax!band @ Ucb-Vax Subject: Re: the Greek Ship problem Article-I.D.: ccivax.144 In reference to John Owens' resolution of the Greek Ship problem: > Most of the cells in your body weren't there when > you were born, and most that you had then aren't there now, but aren't > you still the same person/entity, though you have far from the same > characteristics? Is it such an easy question? It's far from clear that the answer is yes. The question might be: what is it that we recognize as persisting over time? And if all the cells in our bodies are different, then where does this "what" reside? Could it be that nothing persists? Or is it that what persists is not material (in the physical sense)? Bill Anderson ...!{ {ucbvax | decvax}!allegra!rlgvax }!ccivax!band ------------------------------ Date: 25 May 84 17:46:26-PDT (Fri) From: hplabs!hao!seismo!rlgvax!cvl!umcp-cs!flink @ Ucb-Vax Subject: pointer -- identity over time Article-I.D.: umcp-cs.7266 I have responded to Norm Andrews, Brad Blumenthal and others on the subject of identity across time, in net.philosophy, which I think is where it belongs. Anyone interested should see my recent posting there. --P. Torek ------------------------------ Date: 25 May 84 15:08:52-PDT (Fri) From: decvax!decwrl!dec-rhea!dec-smurf!arndt @ Ucb-Vax Subject: "I see", said the carpenter as he picked up his hammer and saw. Article-I.D.: decwrl.621 But perception, don't you see, is in the I of the beholder! Remember the problem of Alice, "Which dreamed it?" "Now, Kitty, let's consider who it was that dreamed it all. This is a serious question, my dear, and you should not go on licking your paw like that - as if Dinah hadn't washed you this morning! You see, Kitty, it MUST have been either me or the Red King. He was part of my dream, of course - but then I was part of his dream, too! Was it the Red King, Kitty? You were his wife, my dear, so you ought to know - oh, Kitty, DO help to settle it! I'm sure your paw can wait." The point being, if WE can't decide logically what constitutes a "REAL" perception for ourselves (and I contend that there is no LOGICAL way out of the subjectivist trap), how in the WORLD can we decide on a LOGICAL basis if another human, not to mention a computer, has perception? We can't!! Therefore we operate on a faith basis a la Turing and move forward on a practical level and don't ask silly questions like, "Can Computers Think?". Comments? Regards, Ken Arndt ------------------------------ Date: 26 May 84 13:07:47-PDT (Sat) From: decvax!mcnc!unc!ulysses!gamma!pyuxww!pyuxt!marcus @ Ucb-Vax Subject: Re: "I see", said the carpenter as he picked up his hammer and saw. Article-I.D.: pyuxt.119 Eye agree! 
While it is valuable to challenge the working premises that underlie research, for most of the time we have to accept these on faith (working hypotheses) if we are to be at all productive. Most arguments connected with Descartes or to perceptions of perceptions ultimately have lead to blind alleys and dead ends. marcus hand (pyuxt!marcus) ------------------------------ Date: 28 May 1984 2124-PDT From: WENGER%UCI-20B@UCI-750a Subject: Response to Marvin Minsky Although I concede that Marvin Minsky's statements about the essence of consciousness are a somewhat understandable reaction to a common form of spiritual immaturity, they are also an expression of an equal form of immaturity that I find to be very common in the scientific community. We should beware of reactions because they are rarely significantly different from the very things they are reacting to. Therefore, I would like to respond to his statements with a less restrictive -- maybe even refreshing -- point of view. I think it deserves some pondering. The question 'Does a machine have a soul ?' may well be a question that only the machine itself can validly ask when it gets to that point. My experience suggests that the question whether one has a soul can only be asked in the first person singular meaningfully. Asking questions presupposes some knowledge of the subject; total ignorance requires a quest. What do we know about the subject except for our own ideas ? Now, regardless of how the issue should or can be approached, the fact is that answering the question of the soul on the grounds that the existence of an essential reality would interfere with our achievements is really an irrelevant statement. Investigation cannot be a matter of personal preference. Discarding an issue on the basis of its ramifications on our image of ourselves is contrary to the scientific approach. Should we stop studying AI because it might trivialize our notion of intelligence ? The statement is not only irrelevant, but I do not see that it is even correct. I do not find any contradiction between perceiving one's source of consciousness as having some essential quality and