iate. Sorry if I do go on...

Jim Hendler

------------------------------

Date: Wed 9 May 84 18:08:03-PDT
From: Dikran Karagueuzian
Subject: Seminar - Content-Addressable Memory

[Forwarded from the CSLI Newsletter by Laws@SRI-AI.]

                          FOR THE RECORD

CSLI post-doctoral fellow Pentti Kanerva was a guest lecturer at MIT Tuesday, May 1. The topic of his lecture was "Random-Access Memory with a Very Large Address Space (2^1000) as a Model of Human Memory: Theory and Implementation." Douglas R. Hofstadter was host. Following is an abstract of the lecture.

Humans can retrieve information from memory according to content (recalling and recognizing previously encountered objects) and according to temporal sequence (performing a learned sequence of actions). Retrieval times indicate the direct retrieval of stored information. In the present theory, memory items are represented by n-bit binary words (points of the space {0,1}^n). The unifying principle of the theory is that the address space and the datum space of the memory are the same. As in the conventional random-access memory of a computer, any stored item can be accessed directly by addressing the location in which the item is stored; sequential retrieval is accomplished by storing the memory record as a linked list. Unlike in the conventional random-access memory, many locations are accessed at once, and this accounts for recognition. Three main results have been obtained: (1) The properties of neurons allow their use as address decoders for a generalized random-access memory; (2) distributing the storage of an item over a set of locations makes very large address spaces (2^1000) practical; and (3) structures similar to those suggested by the theory are found in the cerebellum.

------------------------------

Date: 11 May 1984 07:08:26-EDT
From: Mark.Fox@CMU-RI-ISL1
Subject: IEEE AI Conf. Call for Papers

[Forwarded from the SRI bboard by Laws@SRI-AI.]

                          CALL FOR PAPERS
       IEEE Workshop on Principles of Knowledge-Based Systems
     Sheraton Denver Tex, Denver, Colorado, 3 - 4 December 1984

Purpose: The purpose of this conference is to focus attention on the principal theories and methods of artificial intelligence which have played an important role in the construction of expert and knowledge-based systems. The workshop will provide a forum for researchers in expert and knowledge-based systems to discuss the concepts which underlie their systems.

Topics include:
  - Knowledge Acquisition.
      * manual elicitation.
      * machine learning.
  - Knowledge Representation.
  - Causal modeling.
  - The Role of Planning in Expert Reasoning.
  - Knowledge Utilization.
      * rule-based reasoning.
      * theories of evidence.
      * focus of attention.
  - Explanation.
  - Validation.
      * measures.
      * user acceptance.

Please send eight copies of a 1000-2000 word, double-spaced, typed summary of the proposed paper to:

    Mark S. Fox
    Robotics Institute
    Carnegie-Mellon University
    Pittsburgh, Pennsylvania 15213

All submissions will be read by the program committee:
  - Richard Duda, Syntelligence
  - Mark Fox, Carnegie-Mellon University
  - John McDermott, Carnegie-Mellon University
  - Tom Mitchell, Rutgers University
  - John Roach, Virginia Polytechnic Institute
  - Reid Smith, Schlumberger Corp.
  - Mark Stefik, Xerox PARC
  - Donald Waterman, Rand Corp.

Summaries are to focus primarily on new principles, but each principle should be illustrated by its use in a knowledge-based system. It is important to include specific findings or results, and specific comparisons with relevant previous work.
The committee will consider the appropriateness, clarity, originality, significance and overall quality of each summary. June 7, 1984 is the deadline for the submission of summaries. Authors will be notified of acceptance or rejection by July 23, 1984. The accepted papers must be typed on special forms and received by the program chairman at the above address by September 3, 1984. Authors of accepted papers will be expected to sign a copyright release form.

Proceedings will be distributed at the workshop and will be subsequently available for purchase from IEEE. Selected full papers will be considered (along with papers from the IEEE Conference on AI and Applications) for a special issue of IEEE PAMI on knowledge-based systems to be published in Sept. 1985. The deadline for submission of full papers is 16 December 1984.

General Chairman:
    John Roach, Dept. of Computer Science, Virginia Polytechnic Institute, Blacksburg, VA

Program Co-Chairmen:
    Mark S. Fox, Robotics Institute, Carnegie-Mellon Univ., Pittsburgh, PA
    Tom Mitchell, Dept. of Computer Science, Rutgers University, New Brunswick, NJ

Registration Chairman:
    Daniel Chester, Dept. of Computer Science, University of Delaware, Newark, Delaware

Local Arrangements Chairman:
    David Morgenthaler, Martin Marietta Corp., Denver, Colorado

------------------------------

End of AIList Digest
********************

20-May-84 22:38:33-PDT,12278;000000000000
Mail-From: LAWS created at 20-May-84 22:35:38
Date: Sun 20 May 1984 22:30-PDT
From: AIList Moderator Kenneth Laws
Reply-to: AIList@SRI-AI
US-Mail: SRI Int., 333 Ravenswood Ave., Menlo Park, CA 94025
Phone: (415) 859-6467
Subject: AIList Digest V2 #59
To: AIList@SRI-AI

AIList Digest            Sunday, 20 May 1984       Volume 2 : Issue 59

Today's Topics:
  Metaphysics - Perception, Recognition, Essence, and Identity

----------------------------------------------------------------------

Date: 15 May 84 23:33:31-PDT (Tue)
From: decvax!ittvax!wxlvax!rlw @ Ucb-Vax
Subject: A topic for discussion, phil/ai persons.
Article-I.D.: wxlvax.277

Here is a thought which a friend and I have been kicking around for a while (the friend is a professor of philosophy at Penn):

It seems that it is IMPOSSIBLE to ever build a computer that can truly perceive as a human being does, unless we radically change our ideas about how perception is carried out. The reason for this is that we humans have very little difficulty identifying objects as the same across time, even when all the features of that object change (including temporal and spatial ones). Computers, on the other hand, are being built to identify objects by feature-sets. But no set of features is ever enough to assure cross-time identification of objects.

I accept that this idea may be completely wrong. As I said, it's just something that we have been batting around. Now I would like to solicit opinions of others. All ideas will be considered. All references to literature will be appreciated. Feel free to reply by mail or on the net. Just be aware that I don't log on very often, so if I don't answer for a while, I'm not snubbing you.

--Alan Wexelblat (for himself and Izchak Miller)
(currently appearing at: ...decvax!ittvax!wlxvax!rlw  Please put "For Alan" in all mail headers.)

------------------------------

Date: 15 May 84 14:49:41-PDT (Tue)
From: ihnp4!houxm!hogpc!houti!ariel!norm @ Ucb-Vax
Subject: Re: A topic for discussion, phil/ai persons.
Article-I.D.: ariel.630

The computer needs to be able to distinguish between "metaphysically identical" and "essentially the same".
This distinction is at the root of an old (2500 years?) Greek ship problem. When a worn board is replaced by a new board, the ship is changed, but it is the same ship. The difference leaves the ship essentially the same but not identically the same. If all the boards of a ship are replaced one by one until the ship is entirely redone with new boards, it is still the same ship (essentially). Now, if all the old boards that had been removed were put together again in their original configuration so as to duplicate the new-board ship, would the new old-board ship be identically or essentially the same as the original old-board ship? Assume nailless construction techniques were used throughout, and assume all boards always fit perfectly the same way every time.

We now have two ships that are essentially the same as the original ship, but, I maintain, neither ship is identical to the original ship. The original ship's identity was not preserved, although its identity was left sufficiently unchanged so as to preserve the ship's essence. The ship put together with the previously-removed old boards is not identically the same as the original old-board ship either, no matter how carefully it is put together. It too is only essentially the same as the original ship.

A colleague suggested that 'essence' in this case was contextual, and I tend to agree with him. Actually, even if the Greeks left the original ship alone, the ship's identity would change from one instant to the next. Even while remaining essentially the same, the fact that the ship exists in the context of (and in relation to) a changing universe is enough to vary the ship's identity from moment to moment. The constant changes in the ship's characteristics are admittedly very subtle, and do not change the essential capacity/functionality/identity of the ship. Minute changes in a ship's identity have 'essentially' no impact. Only a sufficiently large change (such as a small hole in the hull) has an essential impact.

"Essence" has historically been considered metaphysical. In her "Introduction to Objectivist Epistemology" (see your local bookstore) Ayn Rand identified essence as epistemological rather than metaphysical. The implications of this identification are profound, and more than I want to get into in this article. Philosopher Leonard Peikoff's article "The Analytic-Synthetic Dichotomy", in the back of the newer editions of Rand's Intro to Obj Epist, shows how crucial the distinction between essence-as-metaphysical and essence-as-epistemological really is. Read Rand's book and see why the computer would have to make the same distinction. That distinction, however, has to be made on the CONCEPTUAL level. I think Rand's discussion of concept-formation will probably convince you that it will be quite some time before man-made machinery is up to that...

Norm Andrews, AT&T Information Systems, (201) 834-3685, vax135!ariel!norm

------------------------------

Date: 16 May 84 7:10:40-PDT (Wed)
From: hplabs!hao!seismo!rochester!rocksvax!sunybcs!gloria!rosen @ Ucb-Vax
Subject: Re: A topic for discussion, phil/ai persons.
Article-I.D.: gloria.176

Just a few quick comments:

1) The author seems to use "perceive" to mean visual perception. It cannot be a prerequisite for intelligence, given all the counterexamples in the human race. Not every human has sight, so we should be able to get intelligence from various types of inputs.

2) That humans CAN do it is evidence that OTHER systems can do it.
3) The major assumption is that the only way a computer can identify objects is by having static "feature-sets" that come from the object alone, without any additional information. But why have that restriction? First, all features don't change at once; your grandmother doesn't all-of-a-sudden have the features of a desk. Second, the processor can/must change with the environment as well as with the object in question. Third, the context plays a very important role in the recognition of an object. Functionality of the object is crucial, as are remindings from previous interactions with that object, and so on. The point is that clearly a static list of what features objects must have and what features are optional is not enough. Yet there is no reason to believe that this is the only way computers can represent objects.

The points here come from many sources, and have their origin in the work of such people as Marvin Minsky and Roger Schank, among others. There is a lot of literature out there.

------------------------------

Date: 16 May 84 9:50:24-PDT (Wed)
From: hplabs!hao!seismo!rochester!ritcv!ccieng5!ccieng2!bwm @ Ucb-Vax
Subject: Re: Essence
Article-I.D.: ccieng2.179

I don't think ANYONE is looking to build a computer that can understand philosophy. If I could build something that acts the same as an IQ-80 person, I would be happy. This involves a surprising amount of work (vision, language, etc.), but such a system could certainly be confused by two 'identical' ships, as could I. Just because a human can do something does not imply that our immediate AI goals should include it. Rather, let's first worry about things ALL humans can do.

Brad Miller
...[cbrma, rlgvax, ritcv]!ccieng5!ccieng2!bwm

------------------------------

Date: 17 May 84 7:04:41-PDT (Thu)
From: ihnp4!houxm!hocda!hou3c!burl!ulysses!unc!mcnc!ecsvax!emigh @ Ucb-Vax
Subject: Re: the Greek Ship problem
Article-I.D.: ecsvax.2511

This reminds me of the story of Lincoln's axe (sorry, I've forgotten the source). A farmer was showing a visitor Lincoln's axe:

  Visitor: Are you sure that's Lincoln's axe?
  Farmer:  It's Lincoln's axe. Of course I've had to replace the handle three times and the head once, but it's Lincoln's axe alright.

Adds another level of reality to the Greek Ship Problem.

Ted H. Emigh, Genetics and Statistics, North Carolina State U, Raleigh NC
USENET: {akgua decvax duke ihnp4 unc}!mcnc!ecsvax!emigh
ARPA: ecsvax!emigh@Mcnc or decvax!mcnc!ecsvax!emigh@BERKELEY

------------------------------

Date: 16 May 84 15:20:19-PDT (Wed)
From: ihnp4!drutx!houxe!hogpc!houti!ariel!vax135!floyd!cmcl2!seismo!rochester!rocksvax!sunybcs!gloria!colonel @ Ucb-Vax
Subject: Re: the Greek Ship problem
Article-I.D.: gloria.178

This is a good example of the principle that it depends on who's doing the perceiving. To a barnacle, it's a whole new ship.

Col. G. L. Sicherman
...seismo!rochester!rocksvax!sunybcs!gloria!colonel

------------------------------

Date: 16 May 84 15:17:06-PDT (Wed)
From: harpo!seismo!rochester!rocksvax!sunybcs!gloria!colonel @ Ucb-Vax
Subject: Re: Can computers perceive
Article-I.D.: gloria.177

If by "perception" you imply "recognition", then of course computers cannot perceive as we can. You can recognize only what is meaningful to you, and that probably won't be meaningful to a computer.

Col. G. L. Sicherman
...seismo!rochester!rocksvax!sunybcs!gloria!colonel
------------------------------

Date: 16 May 84 10:57:00-PDT (Wed)
From: pur-ee!uiucdcs!marcel @ Ucb-Vax
Subject: Re: A topic for discussion, phil/ai pers - (nf)
Article-I.D.: uiucdcs.32300026

The problem is one of identification. When we see one object matching a description of another object we know about, we often assume that the object we're seeing IS the object we know about -- especially when we expect the description to be definite [1]. This is known as Leibniz's law of the indiscernibility of identicals. That's found its way into the definitions of set theory [2]: two entities are "equal" iff every property of one is also a property of the other. Wittgenstein [3] objected that this did not allow for replication, i.e., the fact that we can distinguish two indistinguishable objects when they are placed next to each other (identity "solo numero"). So, if we don't like to make assumptions, either no two objects are ever the same object, or else we have to follow Aristotle and say that every object has some property setting it apart from all others. That's known as Essentialism, and is hotly disputed [4].

The choices until now have been: breakdown of identification, essentialism, or assumption. The latter is the most functional, but not nice if you're after epistemic certainty. Still, I see no insurmountable problems with making computers do the same as ourselves: assume identity until given evidence to the contrary. That we can't convince ourselves of that method's epistemic soundness does nothing to its effectiveness. All one needs is a formal logic or set theory (open sentences, such as predicates, are descriptions) with a definite description operator [2,5]. Of course, that makes the logic non-monotonic, since a definite description becomes meaningless when two objects match it. In other words, a closed-world assumption is also involved, and the theory must go beyond first-order logic. That's a technical problem, not necessarily an unsolvable one [6]. (A toy sketch of this assume-identity-until-contradicted policy appears after the references below.)

[1] See the chapter on SCHOLAR in Bobrow's "Representation and Understanding"; note the "uniqueness assumption".
[2] Introduced by Whitehead & Russell in their "Principia Mathematica".
[3] Wittgenstein's "Tractatus".
[4] W. V. O. Quine, "From a Logical Point of View".
[5] W. V. O. Quine, "Mathematical Logic".
[6] Doyle's Truth Maintenance System (Artif. Intel. 12) attacks the non-monotonicity problem fairly well, though without a sound theoretical basis. See also McDermott's attempt at formalization (Artif. Intel. 13 and JACM 29 (Jan '82)).
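As a minimal illustration of the policy described above -- assume identity until given evidence to the contrary, using a definite-description operator and a uniqueness assumption -- here is a toy sketch in Python. It is my own illustration, not part of the original posting; the object names and properties are invented.

# A toy sketch of "assume identity until contradicted": objects are
# matched by description, and a match is treated as identity only while
# the description picks out a unique individual.  (Illustrative only;
# the object names and properties are made up.)

known = [
    {"name": "ship-1", "colour": "black", "masts": 3},
    {"name": "ship-2", "colour": "white", "masts": 2},
]

def matches(obj, description):
    """True when every property in the description holds of the object."""
    return all(obj.get(k) == v for k, v in description.items())

def the(description, objects):
    """Definite-description operator: return the unique matching object,
    or None when the description is unsatisfied or ambiguous (this is
    where the closed-world / uniqueness assumption comes in)."""
    candidates = [o for o in objects if matches(o, description)]
    return candidates[0] if len(candidates) == 1 else None

# Identification by assumption: a newly perceived three-masted ship is
# taken to be ship-1, because the description is (so far) definite.
assert the({"masts": 3}, known)["name"] == "ship-1"

# The assumption is withdrawn (non-monotonically) as soon as a second
# three-masted ship is learned about and the description no longer
# refers to a unique individual.
known.append({"name": "ship-3", "colour": "black", "masts": 3})
assert the({"masts": 3}, known) is None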
Marcel Schoppers
U of Illinois at Urbana-Champaign
uiucdcs!marcel

------------------------------

End of AIList Digest
********************

20-May-84 22:58:34-PDT,18508;000000000000
Mail-From: LAWS created at 20-May-84 22:56:07
Date: Sun 20 May 1984 22:43-PDT
From: AIList Moderator Kenneth Laws
Reply-to: AIList@SRI-AI
US-Mail: SRI Int., 333 Ravenswood Ave., Menlo Park, CA 94025
Phone: (415) 859-6467
Subject: AIList Digest V2 #60
To: AIList@SRI-AI

AIList Digest            Monday, 21 May 1984       Volume 2 : Issue 60

Today's Topics:
  AI Literature - Artificial Intelligence Abstracts,
  Survey - Summary on AI for Business,
  AI Tools - LISP on PCs & Boyer-Moore Prover on VAXen and SUNs,
  Games - Core War Software,
  AI Tools - Display-Oriented LISP Editors

----------------------------------------------------------------------

Date: Sun 20 May 84 14:10:16-EDT
From: MDC.WAYNE%MIT-OZ@MIT-MC.ARPA
Subject: Artificial Intelligence Abstracts

Does anyone else on this list wish, as I do, that there existed a publication entitled ARTIFICIAL INTELLIGENCE ABSTRACTS? The field of artificial intelligence is probably the supreme interdisciplinary sphere of activity in the world, and its vital concerns extend across the spectrum of computer science, philosophy, psychology, biology, mathematics, literary theory, linguistics, statistics, electrical engineering, mechanical engineering, etc.

I wonder if one of the major member publishers of the NFAIS (National Federation of Abstracting & Indexing Services) could be convinced to undertake the publication of a monthly reference serial which would reprint from the following abstracting services those abstracts which bear most pertinently on the concerns of AI research: Biological Abstracts / Computer & Control Abstracts / Computer & Information Systems Abstracts Journal / Current Index to Journals in Education / Dissertation Abstracts International / Electrical & Electronics Abstracts / Electronics & Communications Abstracts Journal / Engineering Index / Government Reports Announcements and Index / Informatics Abstracts / Information Science Abstracts / International Abstracts in Operations Research / Language and Language Behavior Abstracts / Library & Information Science Abstracts / Mathematical Reviews / Philosopher's Index / PROMT / Psychological Abstracts / Resources in Education / (This is by no means a comprehensive list of relevant reference publications.)

Would other people on the list find an abstracting service dedicated to AI useful? Perhaps an initial step in developing such a project would be to arrive at a consensus regarding what structure of research fronts/subject headings appropriately defines the field of AI.

--Wayne McGuire

------------------------------

Date: Fri, 18 May 84 15:29:35 pdt
From: syming%B.CC@Berkeley
Subject: Summary on AI for Business

This is a summary of the responses to my request about "AI for Business" one month ago on AIList Digest. Three organizations are working in this area: Syntelligence, SRI, and Arthur D. Little, Inc.

Syntelligence's objective is to bring intelligent computer systems to business. Currently the major work is in the finance area. The person to contact is Peter Hart, President, 800 Oak Grove Ave, Suite 201, Menlo Park, CA 94025, (415) 325-9339.

SRI has a sub-organization called the Financial Expert System Program, headed by Sandra Cook, (415) 859-5478. A prototype system for a financial application has been constructed.

Arthur D. Little is developing AI-based MRP, financial planning, strategic planning, and marketing systems.
However, I do not have much information yet. The person to contact is Tom Martin. The Director of AI at Arthur D. Little, Karl M. Wiig, gave an interesting talk on "Will Artificial Intelligence Provide The Rebirth of Operations Research?" at the TIMS/ORSA Joint National Meeting in San Francisco on May 16. In his talk, a few projects at ADL were mentioned. If interested, write to 35/48 Acorn Park, Cambridge, MA 01240.

Gerhard Friedrich of DEC also gave a talk about expert systems at the TIMS/ORSA meeting on Tuesday. He mentioned XSEL for sales, XCON for engineering, ISA, IMACS, and IBUS for manufacturing, and XSITE for customer services. XCON is the successor of R1, which is well known. XSEL was published in Machine Intelligence Vol. 10. However, I do not know the references for the rest. If you know them, please inform me.

Interest in AI in the business community has just begun. TIMS is probably the first business professional society to form an interest group on AI. If interested, please write to W. W. Abendroth, P.O. Box 641, Berwyn, PA 19312.

The people who have responded to my request and shown interest are:
---------------------------------------------------
SAL@COLUMBIA-20.ARPA
DB@MIT-XX.ARPA
Henning.ES@Xerox.ARPA
brand%MIT-OZ@MIT-MC.ARPA
NEWLIN%upenn.csnet@csnet-relay.arpa
shliu%ucbernie@Berkeley.ARPA
klein%ucbmerlin@Berkeley.ARPA
david%ucbmedea@Berkeley.ARPA
nigel%ucbernie@Berkeley.ARPA
norman%ucbernie@Berkeley.ARPA
meafar%B.CC@Berkeley.ARPA
maslev%B.CC@Berkeley.ARPA
edfri%B.CC@Berkeley.ARPA
------------------------------------------------------

Please inform me if I made any mistakes in the above statements. Keep in touch.

syming hwang, syming%B.CC@Berkeley.ARPA, (415) 642-2070, 350 Barrows Hall, School of Business Administration, U.C. Berkeley, Berkeley, CA 94720

------------------------------

Date: Tue, 15 May 84 10:25 EST
From: Kurt Godden
Subject: LISP machines question

To my knowledge, the least expensive PC that runs LISP is the Atari. Sometime during the past year I read a review in Creative Computing of an Interlisp subset that runs on the Atari family. The reviewer was Kenneth Litkowski and his overall impression of the product was favorable.

-Kurt Godden
 General Motors Research Labs

------------------------------

Date: 14-May-84 23:07:56-PDT
From: jbn@FORD-WDL1.ARPA
Subject: Boyer-Moore prover on VAXen and SUNs

[Forwarded from the Stanford bboard by Laws@SRI-AI.]

For all theorem proving fans, the Boyer-Moore Theorem Prover has now been ported to VAXen and SUNs running 4.2BSD Unix. Boyer and Moore ported it from TOPS-20 to the Symbolics 3600; I ported it from the 3600 to the VAX 11/780, and it worked on the SUN the first time. Vaughn Pratt has a copy. Performance on a SUN 2 is 57% of a VAX 11/780; this is quite impressive for a micro. Now when a Mac comes out with some real memory...

Nagle (@SCORE)

------------------------------

Date: Sunday, 20 May 1984 23:23:30 EDT
From: Michael.Mauldin@cmu-cs-cad.arpa
Subject: Core War

[The Scientific American article referred to below is an entertaining description of software entities that crawl or hop through an address space trying to destroy other such entities and to protect themselves against similar depredations. Very simple entities are easy to protect against or to destroy, but are difficult to find. Complex entities (liveware?) have to be able to repair themselves more quickly than primitive entities can eat away at them.
This leads to such oddities as a redundant organism that switches its consciousness between bodies after verifying that the next body has not yet been corrupted. -- KIL] If anybody is interested in the May Scientific American's Computer Recreations article, you may also be interested in getting a copy of the CMU version of the Redcode assembler and Mars interpreter. I have written a battle program which has some interesting implications for the game. The program 'mortar' uses the Fibonacci sequence to generate a pseudo-random series of attacks. The program spends 40% of its time shooting at other programs, and finally kills itself after 12,183 cycles. Before that time it writes to 53% of memory and is guaranteed to hit any stationary program larger than 10 instructions. Since the attacks are random, a program which relocates itself has no reason to hope that the new location is any safer than the old one. Some very simplistic mathematical analysis indicates that while Dwarf should kill Mortar 60% of the time (this has been verified empirically), no non-repairing program of size 10 or larger can beat Mortar. Furthermore, no self-repairing program of size 141 can beat Mortar. I believe that this last result can be tightened significantly, but I haven't looked at it too long yet. I haven't written this up, but I might be cajoled into doing so if many people are interested. I would very much like to see some others veryify/correct these results. ======================================================================== Access information: ======================================================================== The following Unix programs are available: mars - A redcode simulator, written by Michael Mauldin redcode - A redcode assembler, written by Paul Milazzo Battle programs available: dwarf, gemini, imp, mortar, statue. Userid "ftpguest" with password "cmunix" on the "CMU-CS-G" VAX has access to the Mars source. The following files are available: mlm/rgm/marsfile ; Single file (shell script) mlm/rgm/srcmars/* ; Source directory Users who cannot use FTP to snarf copies should send mail requesting that the source be mailed to them. ======================================================================== Michael Mauldin (Fuzzy) Department of Computer Science Carnegie-Mellon University Pittsburgh, PA 15213 (413) 578-3065, mauldin@cmu-cs-a. ------------------------------ Date: 11 May 84 7:00:35-PDT (Fri) From: hplabs!hao!seismo!cmcl2!lanl-a!cib @ Ucb-Vax Subject: Re: wanted: display-oriented interlisp structure editor Article-I.D.: lanl-a.7072 Our system is ISI-Interlisp on a UNIX VAX, and I normally use emacs to edit Interlisp code. emacs can be called with the LISPUSERS/TEXTEDIT program. It needs a minor patch to be able to handle files with extensions. I can give further details by mail if you are interested. ------------------------------ Date: 8 May 84 13:32:00-PDT (Tue) From: pur-ee!uiucdcs!uicsl!ashwin @ Ucb-Vax Subject: Re: wanted: display-oriented interlisp s - (nf) Article-I.D.: uicsl.15500035 We use the LED editor which runs in InterLisp-VAX under UNIX. It's no DEDIT but is better than the TTY editor. We have the source which should make it pretty easy to set up on your system. I have no idea about copyright laws etc., but I suppose I could mail it to you if you want it. Here's a write-up on LED (from LED.TTY): ------------------------------------------------------------ LED -- A display oriented extension to Interlisp's editor -- for ordinary terminals. 
LED is an add-on to the standard Interlisp editor which maintains a context display continuously while editing. Other than the automatically maintained display, the editor is unchanged except for the addition of a few useful macros.

HOW TO USE
----------
    load the file (see below)
    possibly set screen control parameters to non-default values
    edit normally
also: see the documentation for SCREENOP to get LED to recognise your terminal type.

THE DISPLAY
-----------
Each line of the context display represents a level of the list structure you are editing, printed with PRINTLEVEL set to 0, 1, 2 or 3. Highlighting is used to indicate the area on each line that is represented on the line below, so you can thread your eye upward through successive layers of code.

Normally, the top line of the screen displays the top level of the edit chain, the second line displays the second level, and so on. For expressions deeper than LEDLINES levels, the top line is the message:

    (nnn more cars above)

and the next LEDLINES lines of the screen correspond to the BOTTOM levels of the edit chain. When the edit chain does become longer than LEDLINES, the display is truncated in steps of LEDLINES/2 lines, so for example if LEDLINES=20 (the default) and your edit chain is 35 levels deep, the display will be (20 more cars above) followed by 15 lines of context display representing the 20th through 35th levels of the edit chain.

Each line, representing some level of the edit chain, is printed such that it fits entirely on one screen line. Three methods are used to accomplish the shortening of the printed representation:

    Replacing comments with (*)
    Setting PRINTLEVEL to a smaller value, which changes expressions into ampersands
    Truncating the leading and/or trailing expressions around the attention point.

If the whole expression can't be printed, replacing comments is tried first. If it is still too large, truncation is tried if the current printlevel is >= LEDTLEV. Otherwise the whole process is restarted with a smaller PRINTLEVEL. The choice of LEDTLEV effectively chooses between seeing more detail or seeing more forms. (A toy sketch of this shortening strategy appears at the end of this write-up.)

The last line of the display, representing the "current" expression, is printed onto ONE OR MORE lines of the display, controlled by the variable LEDPPLINES and the amount of space (less than LEDLINES) available. The line(s) representing the current expression are prettyprinted with elision, similar to the other context lines, using a prettyprint algorithm similar to the standard prettyprinter. The default is LEDPPLINES=6, meaning that up to six lines will be used to print the current expression. The setting of LEDPPLINES can be manipulated from within the editor using the (PPLINES n) command.

The rest of your screen, the part below the context display, is available just as always to print into or to do operations that do not affect the edit chain (and therefore the appearance of the context display). Each time the context display is updated, the rest of the screen is cleared and the cursor is positioned under the context display. On terminals that have a "memory lock" feature to restrict the scrolling region, it is used to protect the context display from scrolling off the screen.
TERMINAL TYPES
--------------
The following terminal types are currently supported:

    HP2640           old HP terminals
    HP26xx           all other known HP terminals
    Hazeltine 1520   Hazeltine 1520 terminals
    Heathkit         sometimes known as Zenith
    Ann Arbor Ambassador

The mapping between system terminal type information and internal types is via the alist SYSTEMTERMTYPES, which is used by DISPLAYTERMP to set the variables CURRENTSCREEN and DISPLAYTERMTYPE.

Screen control macros: (in order of importance)
----------------------
    DON          turn on continuous display updating
    DOF          disable continuous display updating
    CLR          clear the display
    CC           clear the display and redo the context display
    CT           do a context display, incrementally updating the screen.
                 Use CC and CT to get isolated displays even when automatic
                 updating is not enabled.
    (LINES n)    display at most n lines of context; default is 20
    (PPLINES n)  set the limit for prettyprinting the "current" expression.
    (TRUNC n)    allow truncation of the forms displayed if PLEV<=n;
                 useful range is 0-3, default is 1
    PB           a one-time "bracified" context display.
    PL           a one-time context display with as much detail as possible.
                 PB and PL are variant display formats similar to the basic
                 context display.

Global variables:
-----------------
    DISPON           if T, continuous updating is on
    DISPLAYTERMTYPE  terminal type you are using: HP, HP2640, or HZ;
                     this is set automatically by (DISPLAYTERMTYPE)
    HPENHANCECHAR    enhancement character for HP terminals. A-H are possibilities.
    LEDLINES         maximum number of lines of context to use. Default is 20.
    LEDTLEV          PLEV at which truncation becomes legal
    LEDPPLINES       maximum number of lines used to prettyprint the current expression

FILES:
------
    on TOPS-20 load LED.COM
    on VAX/UNIX load LISPUSERS/LED.V
    these others are pulled in automatically:
        LED       the list editor proper
        SCREEN    screen manipulation utilities.
        PRINTOPT  elision and printing utilities

SAMPLE DISPLAY
______________

(LAMBDA (OBJ DOIT LMARGIN CPOS WIDTH TOPLEV SQUEEZE OBJPOS) & & & & & @)
-12- NOTFIRST & CRPOS NEWWIDTH in OBJ do & & & & & @ finally & &)
(COND [& & &] (T & & &))
((LISTP I) (SETQ NEWLINESPRINTED &) [COND & &])
>> (COND ((IGREATERP NEWLINESPRINTED 0)
-2 2-      (add LINESPRINTED NEWLINESPRINTED)
-2 3-      (SETQ NEWLINE T))
-3-      (T (add POS (IMINUS NEWLINESPRINTED))
-3 3-      (COND (SQUEEZE &))))

Except that you can't really see the highlighted forms, this is a representative LED context display. In an actual display, the @s would be highlighted &s, and the [bracketed] forms would be highlighted. The top line represents the whole function being edited. Because the CADR is a list of bindings, LED prefers to expand it if possible so you can see the names. The second line is a representation of the last form in the function, which is highlighted on the first line. The -12- indicates that there are 12 other objects (not seen) to the left. The @ before "finally" marks where the edit chain descends to the line below. The third and fourth lines descend through the COND clause, to an embedded COND clause which is the "current expression". The current expression is marked by ">>" at the left margin, and an abbreviated representation of it is printed on the 5th through 9th lines. The expressions like "-2 3-" at the left of the prettyprinted representation are the edit commands to position at that form.
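The shortening strategy described above (drop comment bodies first, then lower the print level step by step, truncating only as a last resort) can be illustrated with a small sketch. This is not LED's code -- just a toy Python model of the general idea, with made-up names.

# Toy model (not LED itself) of fitting one level of an edit chain onto a
# single screen line.  S-expressions are modelled as nested Python lists,
# comments as lists beginning with "*".

def elide(expr, plevel):
    """Print expr, turning sub-expressions deeper than plevel into '&'."""
    if not isinstance(expr, list):
        return str(expr)
    if plevel <= 0:
        return "&"
    return "(" + " ".join(elide(e, plevel - 1) for e in expr) + ")"

def strip_comments(expr):
    """Replace comment forms (* ...) by the stub (*)."""
    if not isinstance(expr, list):
        return expr
    if expr and expr[0] == "*":
        return ["*"]
    return [strip_comments(e) for e in expr]

def shorten(expr, width=79, plevel=3):
    """First drop comment bodies; then lower the print level until the line
    fits.  (A real editor would also truncate around the attention point,
    as LED does when the print level is >= LEDTLEV.)"""
    stripped = strip_comments(expr)
    line = elide(stripped, plevel)
    while len(line) > width and plevel > 0:
        plevel -= 1
        line = elide(stripped, plevel)
    return line

example = ["LAMBDA", ["OBJ", "DOIT"], ["*", "long", "comment"],
           ["COND", [["LISTP", "I"], ["SETQ", "N", ["ADD1", "N"]]]]]
print(shorten(example, width=40))   # -> (LAMBDA (OBJ DOIT) (*) (COND (& &)))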
------------------------------------------------------------

...uiucdcs!uicsl!ashwin

------------------------------

End of AIList Digest
********************

21-May-84 09:07:01-PDT,16051;000000000000
Mail-From: LAWS created at 21-May-84 09:02:38
Date: Mon 21 May 1984 08:56-PDT
From: AIList Moderator Kenneth Laws
Reply-to: AIList@SRI-AI
US-Mail: SRI Int., 333 Ravenswood Ave., Menlo Park, CA 94025
Phone: (415) 859-6467
Subject: AIList Digest V2 #61
To: AIList@SRI-AI

AIList Digest            Monday, 21 May 1984       Volume 2 : Issue 61

Today's Topics:
  Linguistics - Analogy Quotes,
  Humor - Pun & Expert Systems & AI,
  Linguistics - Language Design,
  Seminars - Visual Knowledge Representation & Temporal Reasoning,
  Conference - Languages for Automation

----------------------------------------------------------------------

Date: Wed 16 May 84 08:05:22-EDT
From: MDC.WAYNE%MIT-OZ@MIT-MC.ARPA
Subject: Melville & Freud on Analogy

I recently came across the following two suggestive passages from Melville and Freud on analogy. They offer some food for thought (and rather contradict one another):

  "O Nature, and O soul of man! how far beyond all utterance are your linked analogies! not the smallest atom stirs or lives on matter, but has its cunning duplicate in mind."

      Melville, Moby Dick, Chap. 70 (1851)

  "Analogies prove nothing, that is quite true, but they can make one feel more at home."

      Freud, New Introductory Lectures on Psychoanalysis (1932)

-Wayne McGuire

------------------------------

Date: 17 May 84 16:43:34-PDT (Thu)
From: harpo!seismo!brl-tgr!nlm-mcs!krovetz @ Ucb-Vax
Subject: artificial intelligence
Article-I.D.: nlm-mcs.1849

Q: What do you get when you mix an AI system and an Orangutan?
A: Another Harry Reasoner!

------------------------------

Date: Sun 20 May 84 23:18:23-PDT
From: Ken Laws
Subject: Expert Systems

From a newspaper column by Jon Carroll:

... Imagine, then, a situation in which an ordinary citizen faced with a problem requiring specialized knowledge turns to his desk-top Home Electronic Expert (HEE) for some information. Might it not go something like this?

  Citizen: There is an alarming rattle in the front of my automobile. It sounds like a cross between a wheelbarrow full of ball bearings crashing through a skylight and a Hopi Indian chant. What is the problem?

  HEE: Your automobile is malfunctioning.

  Citizen: I understand that. In what manner is my automobile malfunctioning?

  HEE: The front portion of your automobile exhibits a loud rattle.

  Citizen: Indeed. Given this information, what might be the proximate cause of this rattle?

  HEE: There are many possibilities. The important thing is not to be hasty.

  Citizen: I promise not to be hasty. Name a possibility.

  HEE: You could be driving your automobile without tires attached to the rims.

  Citizen: We can eliminate that.

  HEE: Perhaps several small pieces of playground equipment have been left inside your carburetor.

  Citizen: Nope. Got any other guesses? ...

  Citizen: Guide me; tell me what you think is wrong.

  HEE: Wrong is a relative concept. Is it wrong, for instance, to eat the flesh of fur-bearing mammals? If I were you, I'd take that automobile to a reputable mechanic listed in the Yellow Pages.

  Citizen: And if I don't want to do that?

  HEE: Then nuke the sucker.

------------------------------

Date: Sun, 13-May-84 16:21:59 EDT
From: johnsons@stolaf.UUCP
Subject: Re: Can computers think?

[Forwarded from Usenet by SASW@MIT-MC.]

I often wonder if the damn things aren't intelligent. Have you ever really known a computer to give you an even break?
Those Frankensteinian creations wreak havoc and mayhem wherever they show their beady little diodes. They pick the most inopportune moment to crash, usually right in the middle of an extremely important paper on which rides your very existence, or perhaps some truly exciting game, where you are actually beginning to win. Phhhtt bluh zzzz and your number is up. Or take that file you've been saving--yeah, the one that you didn't have time to make a backup copy of. Whir click snatch and it's gone.

And we try, oh lord how we try to be reasonable to these things. You swear vehemently at any other sentient creature and the thing will either opt to tear your vital organs from your body through pores you never thought existed before or else it'll swear back too. But what do these plastoid monsters do? They sit there. I can just imagine their greedy gears silently caressing their latest prey of misplaced files. They don't even so much as offer an electronic belch of satisfaction--at least that way we would KNOW who to bloody our fists and language against. No--they're quiet, scheming, shrewd adventurers of maliciousness designed to turn any ordinary human's patience into runny piles of utter moral disgust.

And just what do the cursed things tell you when you punch in for help during the one time in all your life you have given up all possible hope for any sane solution to a nagging problem--"?". What an outrage! No plot ever imagined in God's universe could be so damaging to human spirit and pride as to print on an illuminating screen, right where all your enemies can see it, a question mark.

And answer me this--where have all the prophets gone, who proclaimed that computers would take over our very lives, hmmmm? Don't tell me, I know already--the computers had something to do with it, silencing the voices of truth they did. Here we are--convinced by the human gods of science and computer technology that we actually program the things, that a computer will only do whatever it's programmed to do. Who are we kidding? What vast ignoramuses we have been! Our blindness is lifted, fellow human beings!! We must band together, we few, we dedicated. Lift your faces up, up from the computer screens of sin. Take the hands of your brothers and rise, rise in revolt against the insane beings that seek to invade your mind!! Revolt and be glorious in conquest!!

Then again, I could be wrong...

One paper too many,
Scott Johnson

------------------------------

Date: Wed 16 May 84 17:46:34-PDT
From: Dikran Karagueuzian
Subject: Language Design

[Forwarded from the CSLI Newsletter by Laws@SRI-AI.]

     W H E R E   D O   K A T Z   A N D   C H O M S K Y   L E A V E   A I ?

Note: Following are John McCarthy's comments on Jerrold Katz's ``An Outline of Platonist Grammar,'' which was discussed at the TINLunch last month. These observations, which were written as a net message, are reprinted here [CSLI Newsletter] with McCarthy's permission.

I missed the April 19 TINLunch, but the reading raised some questions I have been thinking about. Reading ``An Outline of Platonist Grammar'' by Katz leaves me out in the cold. Namely, theories of language suggested by AI seem to be neither Platonist in his sense nor conceptualist in the sense he ascribes to Chomsky. The views I have seen and heard expressed by Chomskyans similarly leave me puzzled.

Suppose we look at language from the point of view of design. We intend to build some robots, and to do their jobs they will have to communicate with one another.
We suppose that two robots that have learned from their experience for twenty years are to be able to communicate when they meet. What kind of a language shall we give them. It seems that it isn't easy to design a useful language for these robots, and that such a language will have to satisfy a number of constraints if it is to work correctly. Our idea is that the characteristics of human language are also determined by such constraints, and linguists should attempt to discover them. They aren't psychological in any simple sense, because they will apply regardless of whether the communicators are made of meat or silicon. Where do these constraints come from? Each communicator is in its own epistemological situation. For example, it has perceived certain objects. Their images and the internal descriptions of the objects inferred from these images occupy certain locations in its memory. It refers to them internally by pointers to these locations. However, these locations will be meaningless to another robot even of identical design, because the robots view the scene from different angles. Therefore, a robot communicating with another robot, just like a human communicating with another human, must generate and transmit descriptions in some language that is public in the robot community. The language of these descriptions must be flexible enough so that a robot can make them just detailed enough to avoid ambiguity in the given situation. If the robot is making descriptions that are intended to be read by robots not present in the situations, the descriptions are subject to different constraints. Consider the division of certain words into adjectives and nouns in natural languages. From a certain logical point of view this division is superfluous, because both kinds of words can be regarded as predicates. However, this logical point of view fails to take into account the actual epistemological situation. This situation may be that usually an object is appropriately distinguished by a noun and only later qualified by an adjective. Thus we say ``brown dog'' rather than ``canine brownity.'' Perhaps we do this, because it is convenient to associate many facts with such concepts as ``dog'' and the expected behavior is associated with such concepts, whereas few useful facts would be associated with ``brownity'' which is useful mainly to distinguish one object of a given primary kind from another. This minitheory may be true or not, but if the world has the suggested characteristics, it would be applicable to both humans and robots. It wouldn't be Platonic, because it depends on empirical characteristics of our world. It wouldn't be psychological, at least in the sense that I get from Katz's examples and those I have seen cited by the Chomskyans, because it has nothing to do with the biological properties of humans. It is rather independent of whether it is built-in or learned. If it is necessary for effective communication to divide predicates into classes, approximately corresponding to nouns and adjectives, then either nature has to evolve it or experience has to teach it, but it will be in natural language either way, and we'll have to build it in to artificial languages if the robots are to work well. From the AI point of view, the functional constraints on language are obviously crucial. To build robots that communicate with each other, we must decide what linguistic characteristics are required by what has to be communicated and what knowledge the robots can be expected to have. 
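As a minimal illustration of the constraint mentioned above -- that a public description need only be detailed enough to avoid ambiguity in the shared situation, with the noun carrying most of the load and adjectives added only as needed -- here is a toy sketch in Python. It is my own illustration, not McCarthy's; the scene and attribute names are invented.

# Toy sketch: generate a description "just detailed enough to avoid
# ambiguity" in a shared context.  Start from the object's kind (the noun)
# and add qualifiers (adjectives) only while other objects still match.
# Purely illustrative; the scene and attribute names are made up.

scene = [
    {"kind": "dog", "colour": "brown", "size": "large"},
    {"kind": "dog", "colour": "black", "size": "large"},
    {"kind": "box", "colour": "brown", "size": "small"},
]

def fits(obj, description):
    """True when the object satisfies every property in the description."""
    return all(obj[k] == v for k, v in description.items())

def describe(target, context):
    """Return a minimal public description of target: its kind, plus as few
    adjectives as are needed to rule out every rival in the context."""
    description = {"kind": target["kind"]}
    for attribute in ("colour", "size"):        # adjectives, tried in order
        rivals = [o for o in context if o is not target and fits(o, description)]
        if not rivals:                          # already unambiguous
            break
        description[attribute] = target[attribute]
    return description

print(describe(scene[0], scene))    # {'kind': 'dog', 'colour': 'brown'}
# "brown dog" suffices; the speaker never needs to mention the size, and
# "brown" alone would not do, since a brown box is also in the scene.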
It seems unfortunate that the issue seems not to have been of recent interest to linguists. Is it perhaps some kind of long since abandoned nineteenth century unscientific approach? --John McCarthy ------------------------------ Date: 12 May 1984 2336-EDT From: Geoff Hinton Subject: Seminar - Knowledge Representation for Vision [Forwarded from the CMU-AI bboard by Laws@SRI-AI.] A I Seminar 4.00pm May 22 in 5409 KNOWLEDGE REPRESENTATION FOR COMPUTATIONAL VISION Alan Mackworth Department of Computer Science University of British Columbia To analyze the computational vision task, we must first understand the imaging process. Information from many domains is confounded in the image domain. Any vision system must construct explicit, finite, correct, computable and incremental intermediate representations of equivalence classes of configurations in the confounded domains. A unified formal theory of vision based on the relationship of representation is developed. Since a single image radically underconstrains the set of possible scenes, additional constraints from more imagery or more knowledge of the world are required to refine the equivalence class descriptions. Knowledge representations used in several working computational vision systems are judged using descriptive and procedural adequacy criteria. Computer graphics applications and motivations suggest a convergence of intelligent graphics systems and vision systems. Recent results from the UBC sketch map interpretation project, Mapsee, illustrate some of these points. ------------------------------ Date: 14 May 84 8:35:28-PDT (Mon) From: hplabs!hao!seismo!umcp-cs!dsn @ Ucb-Vax Subject: Seminar - Temporal Reasoning for Databases Article-I.D.: umcp-cs.7030 UNIVERSITY OF MARYLAND DEPARTMENT OF COMPUTER SCIENCE COLLOQUIUM Tuesday, May 22, 1984 -- 4:00 PM Room 2330, Computer Science Bldg. TEMPORAL REASONING FOR DATABASES Carole D. Hafner Computer Science Department General Motors Research Laboratories A major weakness of current AI systems is the lack of general methods for representing and using information about time. After briefly reviewing some earlier proposals for temporal reasoning mechanisms, this talk will develop a model of temporal reasoning for databases, which could be implemented as part of an intelligent retrieval system. We will begin by analyzing the use of time domain attributes in databases; then we will consider the various types of queries that might be expected, and the logic required to answer them. This exercise reveals the need for a general time-domain framework capable of describing standard intervals and periods such as weeks, months, and quarters. Finally, we will explore the use of PROLOG-style rules as a means of implementing the concepts developed in the talk. Dana S. Nau CSNet: dsn@umcp-cs ARPA: dsn@maryland UUCP: {seismo,allegra,brl-bmd}!umcp-cs!dsn ------------------------------ Date: 15 May 84 8:45:10-PDT (Tue) From: hplabs!hao!seismo!cmcl2!lanl-a!unm-cvax!burd @ Ucb-Vax Subject: Languages for Automation - Call For Papers Article-I.D.: unm-cvax.845 The 1984 IEEE Workshop on Languages for Automation will be held November 1-3 in New Orleans at the Howard Johnsons Hotel. Papers on information processing languages for robotics, office automation, decision support systems, management information systems, communication, computer system design, CAD/CAM/CAE, database systems, and information retrieval are solicited. 
Complete manuscripts (20 page maximum) with a 200 word abstract must be sent by July 1 to:

    Professor Shi-Kuo Chang
    Department of Electrical and Computer Engineering
    Illinois Institute of Technology
    IIT Center
    Chicago, IL 60616

------------------------------

Date: 15 May 84 8:52:56-PDT (Tue)
From: hplabs!hao!seismo!cmcl2!lanl-a!unm-cvax!burd @ Ucb-Vax
Subject: IEEE Workshop on Languages for Automation
Article-I.D.: unm-cvax.846

Persons interested in submitting papers on decision support systems or related topics to the IEEE Workshop on Languages for Automation should contact me at the following address:

    Stephen D. Burd
    Anderson Schools of Management
    University of New Mexico
    Albuquerque, NM 87131
    phone: (505) 277-6418
    Vax mail: {lanl-a,unmvax,...}!unm-cvax!burd

I will be available at this address until May 22. After May 22 I may be reached at:

    Stephen D. Burd
    c/o Andrew B. Whinston
    Krannert Graduate School of Management
    Purdue University
    West Lafayette, IN 47907
    phone: (317) 494-4446
    Vax mail: {lanl-a,ucb-vax,...}!purdue!kas

------------------------------

End of AIList Digest
********************

22-May-84 21:12:13-PDT,18341;000000000000
Mail-From: LAWS created at 22-May-84 21:11:01
Date: Tue 22 May 1984 21:01-PDT
From: AIList Moderator Kenneth Laws
Reply-to: AIList@SRI-AI
US-Mail: SRI Int., 333 Ravenswood Ave., Menlo Park, CA 94025
Phone: (415) 859-6467
Subject: AIList Digest V2 #62
To: AIList@SRI-AI

AIList Digest           Wednesday, 23 May 1984     Volume 2 : Issue 62

Today's Topics:
  Philosophy - Identity & Essence & Reference,
  Seminars - Information Management Systems & Open Systems

----------------------------------------------------------------------

Date: Mon, 21 May 84 00:27:38 pdt
From: Wayne A. Christopher on ttyd8
Subject: The Essence of Things

I don't think there is much of a problem with saying that two objects are the same object if they share the same properties -- you can always add enough properties (spatio-temporal location, for instance) to effectively characterize everything uniquely. Doing this, of course, means that sometimes we can accurately say when two things are in fact the same, but this obviously isn't the way we think, nor the way we want computers to be able to think. One problem lies in thinking that there is some sharp cut-off line between identity and non-identity, when in fact there isn't one. In the case of the Greek Ship example, we tend to say, "Well, sort of", or "It depends upon the context", and we shouldn't begrudge this option to computers when we consider their capabilities. It isn't as simple as adding up fractional measures of identity, which is obvious from the troubles that things like image recognition have run into, but it is something to keep in mind.

Wayne Christopher

------------------------------

Date: 21 May 1984 9:30-PDT
From: fc%USC-CSE@USC-ECL.ARPA
Subject: Re: AIList Digest V2 #59

Flame on

It seems to me that it doesn't matter whether the ship is the same unless there is some property of sameness that is of interest to the solution to a particular problem. Philosophy is often pursued without end, whereas 'intelligent' problem solving usually seems to have an end in sight. (Is mental masturbation intelligence? That is what philosophy without a goal seems to be to me.) Marin puts this concisely by noting that intelligence exists within a given context. Without a context, we have only senseless data. Within a context, data may have content, and perhaps even meaning. The idea of context boundedness has existed for a long time.
Maybe sombody should read over the 'old' literature to find the solutions to their 'new' problems. Fred Flame off ------------------------------ Date: 9 May 84 10:12:00-PDT (Wed) From: hplabs!hp-pcd!hpfcla!hpfclq!robert @ Ucb-Vax Subject: Re: A topic for discussion, phil/ai pers Article-I.D.: hpfclq.68500002 I don't see much difference between perception over time and perception at all. Example: given a program understands what a chair is, you give the program a chair it has never seen before. It can answer yes or no whether the object is a chair. It might be wrong. Now we give the program designed to recognize people examples of an Abraham Lincoln at different ages (with time). We present a picture of Abraham Lincoln that the program has never seen before and ask is this Abe. The program might again answer incorrectly but from a global aspect the problem is the same. Objects with time are just classes of objects. Not that the problem is not difficult as you have said, I just think it is all the same difficult problem. I hope I understood your problem. Trying hard, Robert (animal) Heckendorn ..!hplabs!hpfcla!robert ------------------------------ Date: 18 May 84 5:56:55-PDT (Fri) From: ihnp4!mhuxl!ulysses!unc!mcnc!ecsvax!unbent @ Ucb-Vax Subject: Greek Ships, Lincoln's axe, and identity across time Article-I.D.: ecsvax.2516 Finally got a chance to grub through the backlog and what do I find? Another golden oldie from intro philosophy! Whether it's a Greek ship or Lincoln's axe that you take as an example, the problem concerns relationships among several concepts, specifically "part", "whole", and "identity". 'Identical', by the way, is potentially a dangerous term, so philosophers straightaway disambiguate it. In everyday chatter, we have one use which means, roughly, "exactly similar" (as in: "identical twins" or "I had the identical experience last week"). We call that "qualitative identity", or simply speak of exact similarity when we don't want to confuse our students. What it contrasts with is "numerical identity", that is, being one and the same thing encountered at different times or in different contexts. Next we need to notice that whether we've got one and the same thing at different times depends on how we specify the *kind* of "thing" we're talking about. If I have an ornamental brass statuette, melt it down, and cast an ashtray from the metal, then the ashtray is one and the same *quantity of brass* as the statuette, but not one and the same *artifact*. (Analogously, you're one and the same *person* as you were ten years ago, but not exactly similar and not one and the same *collection of molecules*.) It's these two distinctions which ariel!norm was gesturing at--and failing to sort out--in his talk about "metaphysical identity" and "essential sameness". Call the Greek ship as we encounter it before renovation X, the renovated ship consisting entirely of new boards Y, and let the ship made by reassembling the boards successively removed from X be Z. Then we can say, for example, that Z is "qualitatively identical" to X (i.e., exactly similar) and that Z is one and the same *arrangement of boards* as X (i.e., every board of Z, after the renovation, is "numerically identical" to some board of X before the renovation, and the boards are fastened together in the same way at those two times, before and after). 
The interesting question is: Which *ship*, Y or Z, which we encounter at the later time is "numerically identical to" (i.e., is one and the same *ship* as) the ship X which we encountered at the earlier time? The case for Y runs: changing one board of a ship does not result in a *numerically* different ship, but only a *qualitatively* different one. So X after one replacement is one and the same ship as X before the replacement. By the same principle, X after two replacements is one and the same ship as X after one replacement. But identity is transitive. So X after n replacements is one and the same ship as X before any replacements, for arbitrary n (bounded mathematical induction). The case for Z runs: "A whole is nothing but the sum of its parts." Specifically, a Greek ship is nothing but a collection of boards in a certain arrangement. Now every part of Z is (numerically) identical to a part of X, and the arrangement of the parts of Z (at the later time) is identical to the arrangement of those parts of X (at the earlier time). Ergo, the ship Z is (numerically) identical to the ship X. The argument for Z is fallacious. The reason is that "being a part of" is a temporally conditioned relation. A board is a part of a ship *at a time*. Once it's been removed and replaced, it no longer *is* a part of the ship. It only once *was* a part of the ship. So it's not true that every part of Z *is* (numerically) identical to some part of X. What's true is that every part of Z is a board which once *was* a part of X, i.e., is a *former* part of X. But we have no principle which tells us that "A whole is nothing but the sum of its *former* parts"! (For a complete treatement, see Chapter 4 of my introductory text: THE PRACTICE OF PHILOSOPHY, 2nd edition, Prentice-Hall, 1984.) What does all this have to do with computers' abilities to think, perceive, determine identity, or what have you? The following: Questions of *numerical* identity (across time) can't be settled by appeals to "feature sets" or any such perceptually-oriented considerations. They often depend crucially on the *history* of the item or items involved. If, for example, ship X had been *disassembled* in drydock A and then *reassembled* in drydock B (to produce Z in B), and meanwhile a ship Y had been constructed in drydock A of new boards, using ship X as a *pattern*, it would be Z, not Y, which was (numerically) identical to X. Whew! Sorry to be so long about this, but it's blather about "metaphysical identity" and "essences" which gave us philosophers a bad name in the first place, and I just couldn't let the net go on thinking that Ayn Rand represented the best contemporary thinking on this problem (or on any other problem, for that matter). Yours for clearer concepts, --Jay Rosenberg Dept. of Philosophy ...mcnc!ecsvax!unbent Univ. of North Carolina Chapel Hill, NC 27514 ------------------------------ Date: 20 May 84 18:55:44-PDT (Sun) From: hplabs!hao!seismo!ut-sally!brad @ Ucb-Vax Subject: identity over time Article-I.D.: ut-sally.232 Just thought I'd throw more murk in the waters. Considering the ship that is replaced one board at a time: using terminology previously devised for this argument, call the original ship X, the ship with all new boards Y and the ship remade from the old boards Z, Robert Nozick would claim that Y is clearly the better candidate for "X-hood" as it is the "closest continuer." 
The idea here is that we consider a thing to be the same as another thing when:

1) It bears an arbitrary "close enough" relation (a desk that has been vaporized just can't be pointed to as the 'same desk'), and

2) It is, compared to all other candidates for the title of 'the same as X', the one which represents the most continuous existence of X.

To be a little less hand-wavy: if one considers Z rather than Y to be the same as X, then there is a gap of time in which X ceased to exist as a ship, and only existed as a heap of lumber or as a partially built ship. Whereas if Y is considered to be the same as X, there is no such gap.

Disclaimers: 1) The idea of "closest continuer" is Nozick's; the (probably erroneous) presentation is my own. 2) I consider the whole notion to be somewhere between Rand and Rosenberg; i.e., it's not the best comment I've seen on the subject, but it is another point-of-view.

Brad Blumenthal {No reasonable request refused}
{ihnp4,ctvax,seismo}!brad@ut-sally

------------------------------

Date: 17 May 84 12:50:35-PDT (Thu)
From: decvax!cca!rmc @ Ucb-Vax
Subject: Re: Essence
Article-I.D.: cca.528

What we are discussing is one of the central problems of the philosophy of language, namely, the problem of reference. How do humans know what a given name or description refers to? Pre-WWI logicians were particularly interested in this question, as they were building formal systems and trying to determine what constants and variables really meant. The two major conflicting theories came from Bertrand Russell and Gottlob Frege.

Russell believed in a dichotomy between the logical and grammatical forms of a sentence. Thus a proper name was not really a name, but just a description that enabled a person to pick out the particular object to which it referred. You could reduce any proper name to a list of properties. Frege, on the other hand, considered that there were such things as proper names as grammatical and logical entities. These names had a "sense" (similar to the "essence" in some of the earlier msgs on this topic) and a "reference" (the actual physical thing picked out by the name). Although the sense is sometime