   IDTH TOPLEV SQUEEZE OBJPOS) & & & & & @)
   -12- NOTFIRST & CRPOS NEWWIDTH in OBJ do & & & & & @ finally & &)
   (COND [& & &] (T & & &))
   ((LISTP I) (SETQ NEWLINESPRINTED &) [COND & &])
>> (COND ((IGREATERP NEWLINESPRINTED 0)
-2 2-        (add LINESPRINTED NEWLINESPRINTED)
-2 3-        (SETQ NEWLINE T))
-3-       (T (add POS (IMINUS NEWLINESPRINTED))
-3 3-        (COND (SQUEEZE &))))

Except that you can't really see the highlighted forms, this is a
representative LED context display.  In an actual display, the @s
would be highlighted &s, and the [bracketed] forms would be
highlighted.  The top line represents the whole function being edited.
Because the CADR is a list of bindings, LED prefers to expand it if
possible so you can see the names.  The second line is a
representation of the last form in the function, which is highlighted
on the first line.  The -12- indicates that there are 12 other objects
(not seen) to the left.  The @ before "finally" marks where the edit
chain descends to the line below.  The third and fourth lines descend
through the COND clause to an embedded COND clause, which is the
"current expression."  The current expression is marked by ">>" at the
left margin, and an abbreviated representation of it is printed on the
fifth through ninth lines.  The expressions like "-2 3-" at the left
of the prettyprinted representation are the edit commands to position
at that form.

------------------------------------------------------------

...uiucdcs!uicsl!ashwin

------------------------------

End of AIList Digest
********************

Date: Mon 21 May 1984 08:56-PDT
From: AIList Moderator Kenneth Laws
Reply-to: AIList@SRI-AI
US-Mail: SRI Int., 333 Ravenswood Ave., Menlo Park, CA  94025
Phone: (415) 859-6467
Subject: AIList Digest   V2 #61
To: AIList@SRI-AI


AIList Digest            Monday, 21 May 1984      Volume 2 : Issue 61

Today's Topics:
  Linguistics - Analogy Quotes,
  Humor - Pun & Expert Systems & AI,
  Linguistics - Language Design,
  Seminars - Visual Knowledge Representation & Temporal Reasoning,
  Conference - Languages for Automation
----------------------------------------------------------------------

Date: Wed 16 May 84 08:05:22-EDT
From: MDC.WAYNE%MIT-OZ@MIT-MC.ARPA
Subject: Melville & Freud on Analogy

I recently came across the following two suggestive passages from
Melville and Freud on analogy.  They offer some food for thought (and
rather contradict one another):

  "O Nature, and O soul of man! how far beyond all utterance are your
  linked analogies! not the smallest atom stirs or lives on matter,
  but has its cunning duplicate in mind."

        Melville, Moby Dick, Chap. 70 (1851)

  "Analogies prove nothing, that is quite true, but they can make one
  feel more at home."

        Freud, New Introductory Lectures on Psychoanalysis (1932)

                                        -Wayne McGuire

------------------------------

Date: 17 May 84 16:43:34-PDT (Thu)
From: harpo!seismo!brl-tgr!nlm-mcs!krovetz @ Ucb-Vax
Subject: artificial intelligence
Article-I.D.: nlm-mcs.1849

Q: What do you get when you mix an AI system and an Orangutan?

A: Another Harry Reasoner!

------------------------------

Date: Sun 20 May 84 23:18:23-PDT
From: Ken Laws
Subject: Expert Systems

From a newspaper column by Jon Carroll:

  ...  Imagine, then, a situation in which an ordinary citizen faced
  with a problem requiring specialized knowledge turns to his desk-top
  Home Electronic Expert (HEE) for some information.  Might it not go
  something like this?

  Citizen: There is an alarming rattle in the front of my automobile.
  It sounds like a cross between a wheelbarrow full of ball bearings
  crashing through a skylight and a Hopi Indian chant.  What is the
  problem?

  HEE: Your automobile is malfunctioning.

  Citizen: I understand that.  In what manner is my automobile
  malfunctioning?

  HEE: The front portion of your automobile exhibits a loud rattle.

  Citizen: Indeed.  Given this information, what might be the
  proximate cause of this rattle?

  HEE: There are many possibilities.  The important thing is not to be
  hasty.

  Citizen: I promise not to be hasty.  Name a possibility.

  HEE: You could be driving your automobile without tires attached to
  the rims.

  Citizen: We can eliminate that.

  HEE: Perhaps several small pieces of playground equipment have been
  left inside your carburetor.

  Citizen: Nope.  Got any other guesses?

  ...

  Citizen: Guide me; tell me what you think is wrong.

  HEE: Wrong is a relative concept.  Is it wrong, for instance, to eat
  the flesh of fur-bearing mammals?  If I were you, I'd take that
  automobile to a reputable mechanic listed in the Yellow Pages.

  Citizen: And if I don't want to do that?

  HEE: Then nuke the sucker.

------------------------------

Date: Sun, 13-May-84 16:21:59 EDT
From: johnsons@stolaf.UUCP
Subject: Re: Can computers think?

[Forwarded from Usenet by SASW@MIT-MC.]

I often wonder if the damn things aren't intelligent.  Have you ever
really known a computer to give you an even break?  Those
Frankensteinian creations wreak havoc and mayhem wherever they show
their beady little diodes.  They pick the most inopportune moment to
crash, usually right in the middle of an extremely important paper on
which rides your very existence, or perhaps some truly exciting game,
where you are actually beginning to win.  Phhhtt bluh zzzz and your
number is up.  Or take that file you've been saving--yeah, the one
that you didn't have time to make a backup copy of.  Whir click
snatch and it's gone.

And we try, oh lord how we try to be reasonable to these things.  You
swear vehemently at any other sentient creature and the thing will
either opt to tear your vital organs from your body through pores you
never thought existed before or else it'll swear back too.  But what
do these plastoid monsters do?  They sit there.  I can just imagine
their greedy gears silently caressing their latest prey of misplaced
files.  They don't even so much as offer an electronic belch of
satisfaction--at least that way we would KNOW who to bloody our fists
and language against.  No--they're quiet, scheming, shrewd
adventurers of maliciousness designed to turn any ordinary human's
patience into runny piles of utter moral disgust.

And just what do the cursed things tell you when you punch in for
help during the one time in all your life you have given up all
possible hope for any sane solution to a nagging problem--"?".  What
an outrage!  No plot ever imagined in God's universe could be so
damaging to human spirit and pride as to print on an illuminating
screen, right where all your enemies can see it, a question mark.

And answer me this--where have all the prophets gone, who proclaimed
that computers would take over our very lives, hmmmm?  Don't tell me,
I know already--the computers had something to do with it, silencing
the voices of truth they did.  Here we are--convinced by the human
gods of science and computer technology that we actually program the
things, that a computer will only do whatever it's programmed to do.
Who are we kidding?  What vast ignoramuses we have been!  Our
blindness is lifted, fellow human beings!!
We must band together, we few, we dedicated.  Lift your faces up, up
from the computer screens of sin.  Take the hands of your brothers
and rise, rise in revolt against the insane beings that seek to
invade your mind!!  Revolt and be glorious in conquest!!

Then again, I could be wrong...

                                One paper too many,
                                Scott Johnson

------------------------------

Date: Wed 16 May 84 17:46:34-PDT
From: Dikran Karagueuzian
Subject: Language Design

[Forwarded from the CSLI Newsletter by Laws@SRI-AI.]

  W H E R E   D O   K A T Z   A N D   C H O M S K Y   L E A V E   A I ?

Note: Following are John McCarthy's comments on Jerrold Katz's ``An
Outline of Platonist Grammar,'' which was discussed at the TINLunch
last month.  These observations, which were written as a net message,
are reprinted here [CSLI Newsletter] with McCarthy's permission.

I missed the April 19 TINLunch, but the reading raised some questions
I have been thinking about.  Reading ``An Outline of Platonist
Grammar'' by Katz leaves me out in the cold.  Namely, theories of
language suggested by AI seem to be neither Platonist in his sense
nor conceptualist in the sense he ascribes to Chomsky.  The views I
have seen and heard expressed by Chomskyans similarly leave me
puzzled.

Suppose we look at language from the point of view of design.  We
intend to build some robots, and to do their jobs they will have to
communicate with one another.  We suppose that two robots that have
learned from their experience for twenty years are to be able to
communicate when they meet.  What kind of a language shall we give
them?  It seems that it isn't easy to design a useful language for
these robots, and that such a language will have to satisfy a number
of constraints if it is to work correctly.  Our idea is that the
characteristics of human language are also determined by such
constraints, and linguists should attempt to discover them.  They
aren't psychological in any simple sense, because they will apply
regardless of whether the communicators are made of meat or silicon.

Where do these constraints come from?  Each communicator is in its
own epistemological situation.  For example, it has perceived certain
objects.  Their images and the internal descriptions of the objects
inferred from these images occupy certain locations in its memory.
It refers to them internally by pointers to these locations.
However, these locations will be meaningless to another robot, even
one of identical design, because the robots view the scene from
different angles.  Therefore, a robot communicating with another
robot, just like a human communicating with another human, must
generate and transmit descriptions in some language that is public in
the robot community.  The language of these descriptions must be
flexible enough that a robot can make them just detailed enough to
avoid ambiguity in the given situation.  If the robot is making
descriptions that are intended to be read by robots not present in
the situation, the descriptions are subject to different constraints.

Consider the division of certain words into adjectives and nouns in
natural languages.  From a certain logical point of view this
division is superfluous, because both kinds of words can be regarded
as predicates.  However, this logical point of view fails to take
into account the actual epistemological situation, which may be that
an object is usually distinguished first by a noun and only later
qualified by an adjective.
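[A brief aside from the digest editor: McCarthy's noun/adjective
minitheory can be made concrete with a small knowledge-base sketch.
The Lisp below is a hypothetical illustration in modern Common Lisp,
not anything from McCarthy's message; all of the names in it
(*KIND-FACTS*, FACTS-ABOUT, DISCRIMINATE, and so on) are invented for
the example.  The idea is to index the bulk of the stored facts under
noun-like kinds, and to use adjective-like qualities only to tell
apart objects of the same kind.

  ;;; Hypothetical sketch: facts hang off noun-like kinds;
  ;;; adjective-like qualities only discriminate within a kind.

  (defvar *kind-facts* (make-hash-table))  ; kind -> list of facts

  (defun add-kind-fact (kind fact)
    "Record FACT under the noun-like concept KIND."
    (push fact (gethash kind *kind-facts*)))

  (defstruct obj kind qualities)  ; e.g. kind DOG, qualities (BROWN)

  (defun facts-about (o)
    "What we expect of O comes from its kind, not its qualities."
    (gethash (obj-kind o) *kind-facts*))

  (defun discriminate (objects quality)
    "Use an adjective-like QUALITY to pick objects out of a group."
    (remove-if-not (lambda (o) (member quality (obj-qualities o)))
                   objects))

  ;; Many facts attach to DOG; BROWN merely tells two dogs apart.
  (add-kind-fact 'dog '(barks))
  (add-kind-fact 'dog '(chases cats))
  (let ((rex  (make-obj :kind 'dog :qualities '(brown)))
        (spot (make-obj :kind 'dog :qualities '(white))))
    (list (facts-about rex)                      ; => ((CHASES CATS) (BARKS))
          (discriminate (list rex spot) 'brown)))  ; => list holding REX

On this arrangement ``brown'' earns its keep only inside DISCRIMINATE,
while ``dog'' is the key that retrieves the stored expectations, which
is just the division of labor the next paragraph describes.]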
Thus we say ``brown dog'' rather than ``canine brownity.''  Perhaps
we do this because it is convenient to associate many facts with such
concepts as ``dog,'' and the expected behavior is associated with
such concepts, whereas few useful facts would be associated with
``brownity,'' which is useful mainly to distinguish one object of a
given primary kind from another.

This minitheory may be true or not, but if the world has the
suggested characteristics, it would be applicable to both humans and
robots.  It wouldn't be Platonic, because it depends on empirical
characteristics of our world.  It wouldn't be psychological, at least
in the sense that I get from Katz's examples and those I have seen
cited by the Chomskyans, because it has nothing to do with the
biological properties of humans.  It is rather independent of whether
it is built in or learned.  If it is necessary for effective
communication to divide predicates into classes approximately
corresponding to nouns and adjectives, then either nature has to
evolve it or experience has to teach it; but it will be in natural
language either way, and we'll have to build it into artificial
languages if the robots are to work well.

From the AI point of view, the functional constraints on language are
obviously crucial.  To build robots that communicate with each other,
we must decide what linguistic characteristics are required by what
has to be communicated and what knowledge the robots can be expected
to have.  It seems unfortunate that the issue has not been of recent
interest to linguists.  Is it perhaps some kind of long since
abandoned, unscientific nineteenth-century approach?

                                        --John McCarthy

------------------------------

Date: 12 May 1984 2336-EDT
From: Geoff Hinton
Subject: Seminar - Knowledge Representation for Vision

[Forwarded from the CMU-AI bboard by Laws@SRI-AI.]

                           AI Seminar
                     4:00pm May 22 in 5409

        KNOWLEDGE REPRESENTATION FOR COMPUTATIONAL VISION

                        Alan Mackworth
                Department of Computer Science
               University of British Columbia

To analyze the computational vision task, we must first understand
the imaging process.  Information from many domains is confounded in
the image domain.  Any vision system must construct explicit, finite,
correct, computable, and incremental intermediate representations of
equivalence classes of configurations in the confounded domains.  A
unified formal theory of vision based on the relationship of
representation is developed.  Since a single image radically
underconstrains the set of possible scenes, additional constraints
from more imagery or more knowledge of the world are required to
refine the equivalence-class descriptions.  Knowledge representations
used in several working computational vision systems are judged using
descriptive and procedural adequacy criteria.  Computer graphics
applications and motivations suggest a convergence of intelligent
graphics systems and vision systems.  Recent results from the UBC
sketch map interpretation project, Mapsee, illustrate some of these
points.

------------------------------

Date: 14 May 84 8:35:28-PDT (Mon)
From: hplabs!hao!seismo!umcp-cs!dsn @ Ucb-Vax
Subject: Seminar - Temporal Reasoning for Databases
Article-I.D.: umcp-cs.7030

                     UNIVERSITY OF MARYLAND
            DEPARTMENT OF COMPUTER SCIENCE COLLOQUIUM

                Tuesday, May 22, 1984 -- 4:00 PM
                Room 2330, Computer Science Bldg.

                TEMPORAL REASONING FOR DATABASES

                        Carole D. Hafner
                  Computer Science Department
             General Motors Research Laboratories

A major weakness of current AI systems is the lack of general methods
for representing and using information about time.  After briefly
reviewing some earlier proposals for temporal reasoning mechanisms,
this talk will develop a model of temporal reasoning for databases,
which could be implemented as part of an intelligent retrieval
system.  We