what we commonly call intelligence will eventually turn out to be fully amenable to mechanistic reduction. However, we cannot extrapolate from our assumptions to statements about the essence of one's being, first because assumptions are not facts yet, secondly because intelligence and consciousness may not be the same thing. Therefore claiming that essential aspects do not exist in the phenomenon of consciousness is in the present state of scientific knowledge an unreasonable reaction that unnecessarily narrows the field of our investigation. I even consider it a regrettable impoverishment because of the meaningful personal experiences one may be able to find in the course of an essential quest. Intellectual honesty should deter us from making such unfounded statements even if they seem to fit well in a common form of scientific paradigm. Rather it should inspire us to objectively assess the frontiers of our knowledge and understanding, and to strive to expand them without preconceptions to the best of our abilities and the extent of our individual concerns. Etienne Wenger ------------------------------ Date: 3 May 84 10:13:04 EDT From: BORGIDA@RUTGERS.ARPA Subject: Seminar - Using PROLOG to Access Databases [Forwarded from the Rutgers bboard by Laws@SRI-AI.] May 3 AT 2:50 in HILL 705: USING PROLOG TO PLAN ACCESS TO CODASYL DATABASES P.M.D. Gray Department of Computing Science, Aberdeen University A program generator which plans a program structure to access records stored in a Codasyl database, in answer to queries formulated against a relational view, has been written in Prolog. The program uses two stages: 1. Rewriting the query; 2. Generation and selection of alternative programs. The generated programs are in Fortran or Cobol, using Codasyl DML. The talk will discuss the pros and cons of this approach and compare it with Warren's approach of generating and re-ordering a Prolog form of the query.
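Gray's second stage, generating alternative access programs and picking one, can be caricatured in a few lines. The sketch below is in Python rather than Prolog, and every record name, query flag, and cost figure in it is invented for illustration; the real system emits Fortran or Cobol with Codasyl DML, not Python objects.

```python
# Toy sketch of "generation and selection of alternative programs":
# enumerate candidate access paths for a query, cost each one,
# and keep the cheapest. All names and costs here are invented.

from dataclasses import dataclass

@dataclass
class Plan:
    description: str
    estimated_cost: float  # e.g. predicted record accesses

def candidate_plans(query: dict) -> list[Plan]:
    """Enumerate alternative access programs for a (toy) query."""
    plans = [Plan("sequential scan of owner records", 10_000.0)]
    if query.get("key_given"):
        # A CALC (hashed) key lets us fetch the record directly.
        plans.append(Plan("CALC (hashed) lookup on record key", 1.0))
    if query.get("set_linked"):
        # A Codasyl set occurrence can be walked owner-to-member.
        plans.append(Plan("walk Codasyl set from owner to members", 50.0))
    return plans

def select_plan(query: dict) -> Plan:
    """Stage 2 proper: pick the cheapest of the generated alternatives."""
    return min(candidate_plans(query), key=lambda p: p.estimated_cost)

best = select_plan({"key_given": True, "set_linked": True})
print(best.description)  # CALC (hashed) lookup on record key
```

The point of the caricature is only the shape of the search: the planner never runs the candidates, it ranks them by a static cost estimate and emits code for the winner.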
(Note added by Malcolm Atkinson) The Astrid system previously developed by Peter had a relational algebra query language, and an interactive (by example) method of debugging queries and of specifying report formats, which provided an effective interface to Codasyl databases. Peter's current work is on the construction of a system to explain to people what the schema implies and what a database contains - he is using PS-algol and Prolog for this. ------------------------------ End of AIList Digest ******************** 31-May-84 22:38:52-PDT,16689;000000000000 Mail-From: LAWS created at 31-May-84 22:34:30 Date: Thu 31 May 1984 22:23-PDT From: AIList Moderator Kenneth Laws Reply-to: AIList@SRI-AI US-Mail: SRI Int., 333 Ravenswood Ave., Menlo Park, CA 94025 Phone: (415) 859-6467 Subject: AIList Digest V2 #67 To: AIList@SRI-AI AIList Digest Friday, 1 Jun 1984 Volume 2 : Issue 67 Today's Topics: Natural Language - Request, Expert Systems - KS300 Reference, AI Literature - CSLI Report on Bolzano, Scientific Method - Hardware Prototyping, Perception - Identity, Seminar - Perceptual Organization for Visual Recognition ---------------------------------------------------------------------- Date: 4 Jun 84 8:08:13-EDT (Mon) From: ihnp4!houxm!houxz!vax135!ukc!west44!ellis @ Ucb-Vax.arpa Subject: Pointers to natural language interfacing Article-I.D.: west44.214 I am investigating the feasibility of writing a natural language interface for the UNIX operating system, and need some pointers to good articles/papers/books dealing with natural language interpreting. Any help would be gratefully appreciated as I am fairly 'green' in this area. mcvax | ukc!root44!west44!ellis / \ vax135 hou3b \ / akgua Mark Ellis, Westfield College, Univ. of London, England. [In addition to any natural language references, you should certainly see "Talking to UNIX in English: An Overview of an On-line UNIX Consultant" by Robert Wilensky, The AI Magazine, Spring 1984, pp. 29-39.
Elaine Rich also mentioned this work briefly in her introduction to the May 1984 issue of IEEE Computer. -- KIL] ------------------------------ Date: 28 May 84 12:55:37-PDT (Mon) From: hplabs!hao!seismo!cmcl2!floyd!vax135!cornell!jqj @ Ucb-Vax.arpa Subject: Re: KS300 Question Article-I.D.: cornell.195 KS300 is owned by (and a trademark of) Teknowledge, Inc. Although it is largely based on Emycin, it was extensively reworked for greater maintainability and reliability, particularly for Interlisp-D environments (the Emycin it was based on ran only on DEC-20 Interlisp). Teknowledge can be reached by phone (no net address, I think) at (415) 327-6600. ------------------------------ Date: Wed 30 May 84 19:41:17-PDT From: Dikran Karagueuzian Subject: CSLI Report [Forwarded from the CSLI newsletter by Laws@SRI-AI.] New CSLI-Report Available ``Lessons from Bolzano'' by Johan van Benthem, the latest CSLI-Report, is now available. To obtain a copy of Report No. CSLI-84-6, contact Dikran Karagueuzian at 497-1712 (Casita Hall, Room 40) or Dikran at SU-CSLI. ------------------------------ Date: Thu 31 May 84 11:15:35-PDT From: Al Davis Subject: Hardware Prototyping On the issue of the Stone - Shaw wars. I doubt that there really is a viable "research paradigm shift" in the holistic sense. The main problem that we face in the design of new AI architectures is that there is a distinct possibility that we can't let existing ideas simply evolve. If this is true then the new systems will have to try to incorporate a lot of new strategies which create a number of complex problems, i.e. 1. Each new area means that our experience may not be valid. 2. Interactions between these areas may be the problem, rather than the individual design choices - namely efficient consistency is a difficult thing to achieve. In this light it will be hard to do true experiments where one factor gets isolated and tested.
Computer systems are complex beasts and the problem is even harder to solve when there are few fundamental metrics that can be applied microscopically to indicate success or failure. Macroscopically there is always cost/performance for job X, or set of tasks Y. The experience will come at some point, but not soon in my opinion. It will be important for people like Shaw to go out on a limb and communicate the results to the extent that they are known. At some point from all this chaos will emerge some real experience that will help create the future systems which we need now. I for one refuse to believe that an evolved Von Neumann architecture is all there is. We need projects like DADO, Non-Von, the Connection Machine, ILLIAC, STAR, Symbol, the Cosmic Cube, MU5, S1, .... this goes on for a long time ..., --------------- if given the opportunity a lot can be learned about alternative ways to do things. In my view the product of research is knowledge about what to do next. Even at the commercial level very interesting machines have failed miserably (cf. B1700, and CDC star) and rather Ho-Hum Dingers (M68000, IBM 360 and the Prime clones) have been tremendous successes. I applaud Shaw and company for giving it a go along with countless others. They will almost certainly fail to beat IBM in the market place. Hopefully they aren't even trying. Every 7 seconds somebody buys an IBM PC - if that isn't an inspiration for any budding architect to do better then what is? Additionally, the big debate over whether CS or AI is THE way is absurd. CS has a lot to do with computers and little to do with science, and AI has a lot to do with artificial and little to do with intelligence. Both will and have given us something worthwhile, and a lot of drivel too. The "drivel factor" could be radically reduced if egotism and ambition were replaced with honesty and responsibility. Enough said.
Al Davis FLAIR ------------------------------ Date: Mon, 28 May 84 14:28:32 PDT From: Charlie Crummer Subject: Identity The thing about sameness and difference is that humans create them; back to the metaphor and simile question again. We say, "Oh, he's the same old Bill.", and in some sense we know that Bill differs from "old Bill" in many ways we cannot know. (He got a heart transplant, ...) We define by declaration the context within which we organize the set of sensory perceptions we call Bill and within that we recognize "the same old Bill" and think that the sameness is an attribute of Bill! No wonder the eastern sages say that we are asleep! [Read Hubert Dreyfus' book "What Computers Can't Do".] --Charlie ------------------------------ Date: Wed, 30 May 1984 16:15 EDT From: MONTALVO%MIT-OZ@MIT-MC.ARPA Subject: A restatement of the problem (phil/ai) From: (Alan Wexelblat) decvax!ittvax!wxlvax!rlw @ Ucb-Vax Suppose that, while touring through the grounds of a Hollywood movie studio, I approach what, at first, I take to be a tree. As I come near to it, I suddenly realize that what I have been approaching is, in fact, not a tree at all but a cleverly constructed stage prop. So, let me re-pose my original question: As I understand it, issues of perception in AI today are taken to be issues of feature-recognition. But since no set of features (including spatial and temporal ones) can ever possibly uniquely identify an object across time, it seems to me (us) that this approach is a priori doomed to failure. Spatial and temporal features, and other properties of objects that have to do with continuity and coherence in space and time DO identify objects in time. That's what motion, location, and speed detectors in our brains do. Maybe they don't identify objects uniquely, but they do a good enough job most of the time for us to make the INFERENCE of object identity.
In the example above, the visual features remained largely the same or changed continuously --- color, texture normalized by distance, certainly continuity of boundary and position. It was the conceptual category that changed: from tree to stage prop. These latter properties are conceptual, not particularly visual (although presumably it was minute visual cues that revealed the identity in the first place). The bug in the above example is that no distinction is made between visual features and higher-level conceptual properties, such as what a thing is for. Also, identity is seen to be this unitary thing, which, I think, it is not. Similarities between objects are relative to contexts. The above stage prop had spatio-temporal continuity (i.e., identity) but not conceptual continuity. Fanya Montalvo ------------------------------ Date: Wed, 30 May 84 09:18 EDT From: Izchak Miller Subject: The experience of cross-time identity. A follow-up to Rosenberg's reply [greetings, Jay]. Most commentators on Alan's original statement of the problem have failed to distinguish between two different (even if related) questions: (a) what are the conditions for the cross-time (numerical) identity of OBJECTS, and (b) what are the features constitutive of our cross-time EXPERIENCE of the (numerical) identity of objects. The first is an ontological (metaphysical) question, the second is an epistemological question--a question about the structure of cognition. Most commentators addressed the first question, and Rosenberg suggests a good answer to it. But it is the second question which is of importance to AI. For, if AI is to simulate perception, it must first find out how perception works. The reigning view is that the cross-time experience of the (numerical) identity of objects is facilitated by PATTERN RECOGNITION.
However, while it does indeed play a role in the cognition of identity, there are good grounds for doubting that pattern recognition can, by itself, account for our cross-time PERCEPTUAL experience of the (numerical) sameness of objects. The reasons for this doubt originate from considerations of cases of EXPERIENCE of misperception. Put briefly, two features are characteristic of the EXPERIENCE of misperception: first, we undergo a "change of mind" regarding the properties we attribute to the object; we end up attributing to it properties *incompatible* with properties we attributed to it earlier. But-- and this is the second feature--despite this change we take the object to have remained *numerically one and the same*. Now, there do not seem to be constraints on our perceptual "change of mind": we can take ourselves to have misperceived ANY (and any number) of the object's properties -- including its spatio-temporal ones -- and still experience the object to be numerically the same one we experienced all along. The question is how do we maintain a conscious "fix" on the object across such radical "changes of mind"? Clearly, "pattern recognition" does not seem a good answer anymore since it is precisely the patterns of our expectations regarding the attributes of the object which change radically, and incompatibly, across the experience of misperception. It seems reasonable to conclude that we maintain such a fix "demonstratively" (indexically), that is independently of whether or not the object satisfies the attributive content (or "pattern") of our perception. All this does not by itself spell doom (as Alan enthusiastically seems to suggest) for AI, but it does suggest that insofar as "pattern recognition" is the guiding principle of AI's research toward modeling perception, this research is probably a dead end. Izchak (Isaac) Miller Dept.
of Philosophy University of Pennsylvania ------------------------------ Date: 24 May 84 9:04:56-PDT (Thu) From: hplabs!sdcrdcf!sdcsvax!akgua!clyde!burl!ulysses!unc!mcnc!ncsu!uvacs!gmf @ Ucb-Vax.arpa Subject: Comment on Greek ship problem Article-I.D.: uvacs.1317 Reading about the Greek ship problem reminded me of an old joke -- recorded in fact by one Hierocles, 5th century A.D. (Lord knows how old it was then): A foolish fellow who had a house to sell took a brick from one wall to show as a sample. Cf. Jay Rosenberg: "A board is a part of a ship *at a time*. Once it's been removed and replaced, it no longer *is* a part of the ship. It only once *was* a part of the ship." Hierocles is referred to as a "new Platonist", so maybe he was a philosopher. On the other hand, maybe he was a gag-writer. Another by him: During a storm, the passengers on board a vessel that appeared in danger, seized different implements to aid them in swimming, and one of them picked for this purpose the anchor. Rosenberg's remark quoted above becomes even clearer if "board" is replaced by "anchor" (due, no doubt, to the relative anonymity of boards, as compared with anchors). Gordon Fisher ------------------------------ Date: 4 Jun 84 7:47:08-EDT (Mon) From: ihnp4!houxm!houxz!vax135!ukc!west44!gurr @ Ucb-Vax.arpa Subject: Re: "I see", said the carpenter as he picked up his hammer and saw. Article-I.D.: west44.211 The point being, if WE can't decide logically what constitutes a "REAL" perception for ourselves (and I contend that there is no LOGICAL way out of the subjectivist trap) how in the WORLD can we decide on a LOGICAL basis if another human, not to mention a computer, has perception? We can't!! Therefore we operate on a faith basis a la Turing and move forward on a practical level and don't ask silly questions like, "Can Computers Think?". For an in depth discussion on this, read "The Mind's I" by Douglas R. Hofstadter and Daniel C.
Dennett - this also brings in the idea that you can't even prove that YOU, not to mention another human being, can have perception! mcvax / ukc!root44!west44!gurr / \ vax135 hou3b \ / akgua Dave Gurr, Westfield College, Univ. of London, England. ------------------------------ Date: Tue 29 May 84 08:44:42-PDT From: Sharon Bergman Subject: Ph.D. Oral - Perceptual Organization for Visual Recognition [Forwarded from the Stanford bboard by Laws@SRI-AI.] Ph.D. Oral Friday, June 1, 1984 at 2:15 Margaret Jacks Hall, Room 146 The Use of Perceptual Organization for Visual Recognition By David Lowe (Stanford Univ., CS Dept.) Perceptual organization refers to the capability of the human visual system to spontaneously derive groupings and structures from an image without higher-level knowledge of its contents. This capability is currently missing from most computer vision systems. It will be shown that perceptual groupings can play at least three important roles in visual recognition: 1) image segmentation, 2) direct inference of three-space relations, and 3) indexing world knowledge for subsequent matching. These functions are based upon the expectation that image groupings reflect actual structure of the scene rather than accidental alignment of image elements. A number of principles of perceptual organization will be derived from this criterion of non-accidentalness and from the need to limit computational complexity. The use of perceptual groupings will be demonstrated for segmenting image curves and for the direct inference of three-space properties from the image. These methods will be compared and contrasted with the work on perceptual organization done in Gestalt psychology. Much computer vision research has been based on the assumption that recognition will proceed bottom-up from the image to an intermediate depth representation, and subsequently to model-based recognition.
While perceptual groupings can contribute to this depth representation, they can also provide an alternate pathway to recognition for those cases in which there is insufficient information for bottom-up derivation of the depth representation. Methods will be presented for using perceptual groupings to index world knowledge and for subsequently matching three-dimensional models directly to the image for verification. Examples will be given in which this alternate pathway seems to be the only possible route to recognition. ------------------------------ End of AIList Digest ******************** 1-Jun-84 16:07:40-PDT,15938;000000000000 Mail-From: LAWS created at 1-Jun-84 16:06:09 Date: Fri 1 Jun 1984 15:58-PDT From: AIList Moderator Kenneth Laws Reply-to: AIList@SRI-AI US-Mail: SRI Int., 333 Ravenswood Ave., Menlo Park, CA 94025 Phone: (415) 859-6467 Subject: AIList Digest V2 #68 To: AIList@SRI-AI AIList Digest Saturday, 2 Jun 1984 Volume 2 : Issue 68 Today's Topics: Scientific Method - Perception, Philosophy - Essence & Soul, Parapsychology - Scientific Method & Electromagnetics, Seminars - Knowledge-Based Plant Diagnosis & Learning Procedures ---------------------------------------------------------------------- Date: 31 May 84 9:00:56-PDT (Thu) From: ihnp4!houxm!hogpc!houti!ariel!norm @ Ucb-Vax.arpa Subject: Re: "I see", said the carpenter... (PERCEPTION) Article-I.D.: ariel.652 The idea of proof or disproof rests, in part, on the recognition that the senses are valid and that perceptions do exist... Any attempt to disprove the existence of perceptions is an attempt to undercut all proof and all knowledge. --ariel!norm ------------------------------ Date: Wed 30 May 84 12:18:42-PDT From: WYLAND@SRI-KL.ARPA Subject: Essences and soul In response to Minsky's comments about soul (AIList vol 2, #63): this is a "straw man" argument, based on a particular concept of soul - "... The common concept of soul says that ...". 
Like a straw man, this particular concept is easily attacked; however, the general question of soul as a concept is not addressed. This bothers me because I think that raising the question in this manner can result in generating a lot of heat (flames) at the expense of light. I hope the following thoughts contribute more light than heat. Soul has been used to name (at least) two similar concepts: o Soul as the essence of consciousness, and o Soul as a form of consciousness separate from the body. The concept of soul as the essence of consciousness we can handle as simply another name for consciousness. The concept of soul as a form of consciousness separate from the body is more difficult: it is the mind/body problem revisited. You can take a categorical position on the existence of the soul/mind as separate from the body (DOES!/DOESN'T!) but proving or disproving it is more difficult. To prove the concept requires public evidence of phenomena that require this concept for their reasonable explanation; to disprove the concept requires proving that it clearly contradicts other known facts. Since neither situation seems to hold, we are left to shave with Occam's Razor, and we should note our comments on the hypothesis as opinions, not facts. The concept of soul/consciousness as the result of growth, of learning, seems right: I am what I have learned - what I have experienced plus my decisions and actions concerning these experiences. I wouldn't be "me" without them. However, it is also possible to create various theories of "disembodied" soul which are compatible with learning. For example, you could have a reincarnation theory that has past experiences shut off during the current life so that they do not interfere with fresh learning, etc. Please note: I am not proposing any theories of disembodied soul. I am arguing against unproven, categorical positions for or against such theories.
I believe that a scientist, speaking as a scientist, should be an agnostic - neither a theist nor an atheist. It may be that souls do not exist; on the other hand, it may be that they do. Science is open, not closed. There are many things that - regardless of our fear of the unknown and disorder - occur publicly and regularly for which we have no convincing explanation based on current science. Meteors as stones falling from heaven did not exist according to earlier scientists - until there was such a fall of them in France in the 1800's that their existence had to be accepted. There will be a 21st and a 22nd century science, and they will probably look back on our times with the same bemused nostalgia and incredulity that we view 18th and 19th century science. ------------------------------ Date: Thu, 31 May 1984 18:27 EDT From: MINSKY%MIT-OZ@MIT-MC.ARPA Subject: Essences and Soul I can't make much sense of Wenger's reply: Therefore claiming that essential aspects do not exist in the phenomenon of consciousness is in the present state of scientific knowledge an unreasonable reaction that unnecessarily narrows the field of our investigation. I wasn't talking about consciousness. Actually, I think consciousness will turn out to be relatively simple, namely the phenomenon connected with the procedures we use for managing very short term memory, duration about 1 second, and which we use to analyse what some of our mental processes have been doing lately. The reason consciousness seems so hard to describe is just that it uses these processes and screws up when applied to itself. But Wenger seems intent on mixing everything up: However, we cannot extrapolate from our assumptions to statements about the essence of one's being, first because assumptions are not facts yet, secondly because intelligence and consciousness may not be the same thing. Who said anything about intelligence and consciousness?
If soul is the whole mind, then fine, but if he is going to talk about essences that change along with this, well, I don't think anything is being discussed except convictions of self-importance, regardless of any measure of importance. --- Minsky ------------------------------ Date: 31 May 84 15:31:58-PDT (Thu) From: ...decvax!decwrl!dec-rhea!dec-pbsvax!cooper Subject: Re: Dreams: A Far-Out Suggestion Article-I.D.: decwrl.894 Ken Laws summarizes an article in the May Dr. Dobb's Journal called "Sixth Generation Computers" by Richard Grigonis. Among other things it proposes that standing waves of very low frequency electromagnetic radiation (5 to 20 Hz apparently) be used to explain telepathy. As the only person I know of with significant involvement in both the fields of AI and parapsychology I felt I should respond. 1) Though there is "growing evidence" that ESP works, there is none that telepathy does. We can order the major classes of ESP phenomena by their a priori believability; from most believable to least: telepathy (mind-to-mind communication), clairvoyance (remote perception) and precognition (perception of events which have not yet taken place). "Some-kind-of mental radio" doesn't seem too strange. "Some-kind-of mental radar" is stretching it. While precognition seems to be something akin (literally) to black magic. There is thus a tendency, even among parapsychologists, to think of ESP in terms of telepathy. Unfortunately it is fairly easy to design an experiment in which telepathy cannot be an element but precognition or clairvoyance is. Experiments which exclude telepathy as an explanation have roughly the same success rate (approximately 1 experiment out of 3 shows statistical significance above the p=.01 level) as experiments whose results could be explained by telepathy. Furthermore, in any well controlled telepathy experiment a record must be made of the targets (i.e. what was thought).
Since an external record is kept, clairvoyance and/or precognition cannot be excluded as an explanation for the results in a telepathy experiment. For this reason experiments designed to allow telepathy as a mechanism are known in parapsychology as "general ESP" (GESP) experiments. Telepathy still might be proven as a separate phenomenon if a positive differential effect could be shown (i.e. if having someone else looking at the target improves the score). Several researchers have claimed just such an effect. None have, however, to the best of my knowledge, eliminated from their experiments two alternate explanations for the differential: 1) The subjects are more comfortable with telepathy than with other ESP and thus score higher (subject expectation is strongly correlated with success in ESP). 2) Two subjects working together for a result would get higher scores whether or not one of them knows the targets. It's rather difficult to eliminate both of these alternatives from an experiment simultaneously. The proposed mechanism MIGHT be used to explain rather gross clairvoyance (e.g. dowsing) but would be hard pressed to distinguish, for example, ink in the shape of a circle from that of a square on a playing card. It is obviously no help at all in explaining precognition results. 2) Experiments have frequently been conducted from within a Faraday cage (this is a necessity if a sensitive EKG is used of course) and even completely sealed metal containers. It was just this discovery which led the Soviets to decide in the late 20s (early 30s?) that ESP violated dialectic materialism, and was thus an obvious capitalist plot. Officially sanctioned research in parapsychology did not get started again in the Soviet Union until the early 70s when some major US news source (the NY Times? Time magazine?) apparently reported a rumor (apparently inaccurate) that the US DoD was conducting experiments in the use of ESP to communicate with submarines.
3) Low frequency means low bandwidth. ESP seems to operate over a high bandwidth channel with lots of noise (since very high information messages seem to come through it sometimes). 4) Natural interference (low frequency electromagnetic waves are for example generated by geological processes) would tend to make the position of the nodes in the standing waves virtually unpredictable. 5) Low frequency (long wavelength) requires a big antenna both for effective broadcast and reception. The unmoving human brain is rather small for this since the wavelength of an electromagnetic wave with a frequency of 5 Hz is about 37200 miles. Synthetic aperture radar compensates for a small antenna by comparing the signal before and after movement (actually the movement is continuous). I'm not sure of the typical size of the antennas used in SAR, but the SAR aboard LandSAT operated at a frequency of 1.275 GHz which corresponds to a wavelength of about 9.25 inches. The antenna is probably about one wavelength long. To use that technique the antenna (in this case brain) would have to move a distance comparable to a wavelength (37200 miles) at the least, and the signal would have to be static over the time needed to move the distance. This doesn't seem to fit the bill. I'm out of my depth in signal detection theory, but it might be practical to measure the potential of the wave at a single location relative to some static reference and integrate over time. The static reference would require something like a Faraday cage in one's head. Does anyone know if this is practical? We'd still have a serious bandwidth problem. The last possibility would be the techniques used in Long Baseline Radio Interferometry (large array radio telescopes). This consists of using several antennas distributed in space to "synthesize" a large antenna.
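The wavelength figures in point 5 follow from nothing more than lambda = c/f, and are easy to sanity-check:

```python
# Check the wavelength figures quoted in point 5: lambda = c / f.
# The digest's numbers are rounded; these agree to within that rounding.

C = 299_792_458.0          # speed of light in vacuum, m/s
METERS_PER_MILE = 1609.344
METERS_PER_INCH = 0.0254

def wavelength_m(freq_hz: float) -> float:
    """Wavelength in meters for a given frequency in Hz."""
    return C / freq_hz

# A 5 Hz wave: tens of thousands of miles, as claimed.
elf_miles = wavelength_m(5.0) / METERS_PER_MILE
print(f"5 Hz -> {elf_miles:,.0f} miles")        # 5 Hz -> 37,256 miles

# The LandSAT SAR at 1.275 GHz: under a foot.
sar_inches = wavelength_m(1.275e9) / METERS_PER_INCH
print(f"1.275 GHz -> {sar_inches:.2f} inches")  # 1.275 GHz -> 9.26 inches
```

So the quoted "about 37200 miles" and "about 9.25 inches" are both consistent with c/f, which is what makes the antenna-size argument in point 5 go through.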
Unfortunately the antennas have to communicate over another channel, and that channel would (if the antennas are brains) be equivalent to a second telepathy channel and we have explained nothing except the completely undemonstrated ability of human beings to decode very low frequency electromagnetic radiation. In summary: Even if you accept the evidence for ESP (as I do) the proposed mechanism does not seem to explain it. I'll be glad to receive replies to the above via mail, but unless it's relevant to AI (e.g. a discussion of the implications of ESP for mechanistic models of brain function) we should move this discussion elsewhere. Topher Cooper (The above opinions are my own and do not necessarily represent those of my employer, my friends or the parapsychological research community). USENET: ...decvax!decwrl!dec-rhea!dec-pbsvax!cooper ARPA: COOPER.DIGITAL@CSNET-RELAY ------------------------------ Date: 23 May 84 16:04:38 EDT From: WATANABE@RUTGERS.ARPA Subject: Seminar - Knowledge-Based Plant Diagnosis [Forwarded from the Rutgers bboard by Laws@SRI-AI.] Date: June 14 (Thursday), 1984 Time: 1:30-2:30PM Place: Hill 705 Title: Preliminary Study of Plant Diagnosis by Knowledge about System Description Speaker: Dr. Hiroshi Motoda Energy Research Laboratory, Hitachi Ltd., 1168 Moriyamacho, Hitachi, Ibaraki 316, Japan INTRODUCTION: Some model, whatever form it takes, is required to perform plant diagnosis. Generally, this model describes anomaly propagation and can be regarded as knowledge about cause and consequence relationships of anomaly situations. Knowledge engineering is a software technique that uses knowledge in problem solving. One of its characteristics is the separation of knowledge from the inference mechanism, in which the latter builds logic of events on the basis of the former. The knowledge can be supplied piecewise and is easily modified for improvement.
This suggests the possibility of making a diagnosis by collecting many pieces of knowledge about causality relationships. The power lies in the knowledge, not in the inference mechanism. What is not in the knowledge base is out of the scope of the diagnosis. Use of resolution in predicate calculus has shown the possibility of using knowledge about system description (structure and behavior of the plant) to generate knowledge directly useful for diagnosis. The problem of this approach was its inefficiency. It was felt necessary to devise a mechanism that performs the same logical operation much faster. Efficiency has been improved by 1) expressing the knowledge in frames and 2) enhancing the memory management capability of LISP to control the data in global memory in which the data used commonly in both LISP (for symbolic manipulation) and FORTRAN (for numeric computation) are stored. REFERENCES: Yamada,N. and Motoda,H.; "A Diagnosis Method of Dynamic System using the Knowledge on System Description," Proc. of IJCAI-83, 225, 1983. ------------------------------ Date: 31 May 1984 1146-EDT From: Wendy Gissendanner Subject: Seminar - Learning Procedures [Forwarded from the CMU-AI bboard by Laws@SRI-AI.] AI SEMINAR Tuesday, June 5, 5409 Wean Hall Speaker: Kurt Van Lehn (Xerox PARC) Title: Learning Procedures One Disjunct Per Lesson How can procedures be learned from examples? A new technique is to use the manner in which the examples are presented, their sequence and how they are partitioned into lessons. Two manner constraints will be discussed: (a) that the learner acquires at most one disjunct per lesson (e.g., one conditional branch per lesson), and (b) that nests of functions be taught using examples that display the intermediate results (show-work examples) before the regular examples, which do not display intermediate results.
Using these constraints, plus several standard AI techniques, a computer system, Sierra, has learned procedures for arithmetic, algebra and other symbol manipulation skills. Sierra is the model (i.e., prediction calculator) for Step Theory, a fairly well tested theory of how people learn (and mislearn) certain procedural skills. ------------------------------ End of AIList Digest ******************** 5-Jun-84 10:18:04-PDT,16858;000000000000 Mail-From: LAWS created at 5-Jun-84 10:13:32 Date: Tue 5 Jun 1984 10:06-PDT From: AIList Moderator Kenneth Laws Reply-to: AIList@SRI-AI US-Mail: SRI Int., 333 Ravenswood Ave., Menlo Park, CA 94025 Phone: (415) 859-6467 Subject: AIList Digest V2 #69 To: AIList@SRI-AI AIList Digest Tuesday, 5 Jun 1984 Volume 2 : Issue 69 Today's Topics: Parapsychology - ESP, Philosophy - Correction & Essences, Cognitive Psychology - Mental Partitioning, Seminars - Knowledge Representation & Expert Systems ---------------------------------------------------------------------- Date: Mon, 4 Jun 84 18:50:50 PDT From: Michael Dyer Subject: ESP to: Topher Cooper & others who claim to believe in ESP 1. this discussion SHOULD be moved off AIList. 2. the technical discussion of wavelengths, etc is fine but 3. anyone who claims to believe in current ESP should FIRST read the book: FLIM-FLAM by James Randi (the "Skeptical Inquirer" journal has already been mentioned once but deserves a second mention) ------------------------------ Date: 31 May 84 19:31:04-PDT (Thu) From: decvax!ittvax!wxlvax!rlw @ Ucb-Vax.arpa Subject: Message for all phil/ai persons Article-I.D.: wxlvax.287 Dear net readers, I must now apologize for a serious error that I have committed. Recently, I posted two messages on the topic of philosophy of AI. These messages concerned a topic that I had discussed with one of my professors, Dr. Izchak Miller. I signed those messages with both his name and mine. Unfortunately, he did not see those messages before they were posted. 
He has now indicated to me that he wishes to disassociate himself from the contents of those messages. Since I have no way of knowing which of you saw my error, I am posting this apology publicly, for all to see. All responses to those messages should be directed exclusively to me, at the address below. I am sorry for taking up net resources with this message, but I feel that this matter is important enough. Again, I apologize, and accept all responsibility for the messages. --Alan Wexelblat (currently appearing at: ...decvax!ittvax!wxlvax!rlw. Please put "For Alan" in all mail headers.) ------------------------------ Date: Mon 4 Jun 84 13:49:58-PDT From: WYLAND@SRI-KL.ARPA Subject: Essences, objects, and modelling All the net conversation about essences is fascinating, but can be a little fuzzy making. It made me go back and review some of the basics. At the risk of displaying naivete and/or a firm grasp of the obvious, I thought I would pass some of my thoughts along. The problem of essences has been treated in philosophy under the heading of metaphysics, specifically ontology. I have found a good book covering these problems in short, clear text. It is: "Problems & Theories of Philosophy" by Ajdukiewicz, Cambridge University Press, 1975, 170 pp. in paperback. About substance (from the book, p. 78): ".... the fundamental one is that which it was given by Aristotle. He describes substance as that of which something can be predicated but which cannot itself be predicated of anything else. In other words, substance is everything to which some properties can be attributed, which can stand in a certain relationship to something else, which can be in this state, etc., but which is not itself a property, relation or a state, etc. Examples of substances are: this, this table, this person, in a word concrete individual things and persons. 
To substance are opposed properties which in contradistinction to substances can be predicated of something, relations which also in contradistinction can obtain between certain objects, states, etc. The scholastics emphasized the self-subsistence of substance in contrast to the non-self-subsistence of properties, relations, states, etc. The property of redness, for example, cannot exist except in a substance that possesses it. This particular rose, however, of which redness is an attribute, does not need any foundations for its existence but exists on its own. This self-subsistence of substance they considered to be its essential property and they defined substance as 'res, cui convenit esse in se vel per se' [a thing to which it belongs to exist in itself or by itself]." To me, this implies that an object/substance is an axiomatic "thing" that exists independently - it is the rock that kicks back each time I kick it - with the characteristic that it is "there", meaning that each time I kick at the rock, it is there to kick back. You can hang attributes on it in order to identify it from some other thing, both now and over time. The Greek Ship problem in this approach becomes one of identifying that Object, the Greek Ship, which has maintained continuous existence as The Greek Ship - i.e., can "be kicked" at any time. This brings us to one of the problems being addressed by this discussion of essences, which is distinguishing between objects and abstractions of objects, i.e. between proper nouns and abstract/general nouns. A proper noun refers to a real object, which can never - logically - be fully known in the sense that we cannot be sure that we know *all* of its attributes or that we *know* that the attributes we do know are unchanging or completely predictable. We can always be surprised, and any inferences we make from "known" attributes are subject to change. Real objects are messy and ornery.
An abstract object, like pure mathematics, is much neater: it has *only* those attributes we give it in its definition, and there WILL BE no surprises. The amazing thing is that mathematics works: a study of abstractions can predict things in the real world of objects! This seems to work on the "Principle of Minimum Astonishment" (phrase stolen from Lou Schaefer @ SRI), which I interpret to mean that "To the extent that this real object possesses the same characteristics as that abstract object, this real object will act the same as that abstract object, *assuming that it doesn't do anything else particularly astonishing*." And how many carefully planned experiments have foundered on THAT one. There is *nothing* that says that the sun *will* come up tomorrow except the Principle of Minimum Astonishment. So what? So, studies of abstractions are useful; however, an abstract object is not the same as a real object: the model is not the same as the thing being modelled. There is not an infinite recursion of attributes; somewhere there is a rock that kicks back, a source of data/experience from *outside* the system. The problem is - usually - to create/select/update an abstract model of this external object and to predict our interactions with it on the basis of the model. The problem of "identifying" an object is typically not identifying *which* real object it is but *what kind* of object it is - what is the model to use to predict the results of our interaction with it. It seems to me that model forming and testing is one of the big, interesting problems in AI. I think that is why we are all interested in abstraction, metaphor, analogy, philosophy, etc. I think that keeping the distinction between the model and the object/reality is useful. To me, it tends to imply two sets of data about an object: the historical interaction data and the abstracted data contained in the current model of the object.
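As a rough illustration of the two-data-set idea, here is a minimal sketch (in Python, with invented names; nothing in it comes from the note above): an object record that never discards its raw interaction history, while the abstract model is only a revisable summary rebuilt from that history.

```python
# Illustrative sketch only: class and attribute names are hypothetical.

class TrackedObject:
    def __init__(self, name):
        self.name = name
        self.history = []   # raw interaction data, never discarded
        self.model = {}     # current abstraction, always revisable

    def observe(self, attribute, value):
        """Record a new interaction, then revise the abstraction."""
        self.history.append((attribute, value))
        self._rebuild_model()

    def _rebuild_model(self):
        # A deliberately simple abstraction: the latest value wins.
        # A surprising observation revises the model, but the history
        # still records that the object once "kicked back" differently.
        self.model = dict(self.history)

rock = TrackedObject("rock")
rock.observe("hardness", "hard")
rock.observe("color", "grey")
rock.observe("color", "mossy-green")   # the real object surprises us
```

After the surprise, the model says the rock is mossy-green, but the history still shows it was once observed grey; keeping the two apart is what lets the model be revised without losing the evidence it was built from.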
Perhaps these two data sets should be kept more formally separate than is often done. This has gotten quite long-winded - it's fun stuff. I hope that this is useful/interesting/fun! Dave Wyland WYLAND@SRI ------------------------------ Date: Sat, 2 Jun 84 13:11:35 PDT From: Philip Kahn Subject: Relevance of "essences" and "souls" to Artificial Intelligence Quite a bit of the AILIST has been devoted of late to metaphysical discussions of "essences" (e.g., the Greek ship "problem") and "souls." I don't argue the authors' viewpoints, but the discussion has strayed far from the intent of the original Greek ship problem. In short, the problem with "essences" and "souls" is the questions posed, not the answers given. We are concerned with creating intelligent machines (whether we consider them "artificial" or "authentic"). The "problem" of "essence" arises only because a hard-and-fast, black-and-white answer is demanded to the question of whether "The reassembled ship is 'essentially' the same." It should be clear that the question phrased as such cannot be answered adequately because it is not relevant. You can say "it looks the same," "it weighs the same," "it has the same components," but how useful is it for the purposes of an intelligent machine (or person) to know whether it is "essentially" the same ship? The field of AI is so young that we do not even have a decent method of determining that it even IS a Greek ship. Before we attempt to incorporate such philosophical determinations in a machine, wouldn't it be more useful to solve the more pressing problem of object identification before problems of esoteric object distinctions are examined? The problem of "souls" is also not relevant to the study of AI (though it is undoubtedly of great import to our understanding of our role as humans in the universe). A "soul," like the concept of "essence," is undefinable.
The problem of "cognition" is far more relevant to the study of AI because it can be defined within some domain; it is the object-oriented interpretation of some phenomena (e.g., visual, auditory, contextual, etc.). Whether "cognition" constitutes a "soul" is again not relevant. The more pressing problem is the problem of creating a sufficient "cognitive" machine that can make object-oriented interpretations of sensory data and contextual information. While the question of whether a "soul" falls out of this mechanism may be of philosophical interest, it moves us no closer to the description of such a mechanism. Another writer's opinion, P.K. ------------------------------ Date: 3 Jun 84 12:24:57-PDT (Sun) From: decvax!cwruecmp!borgia @ Ucb-Vax.arpa Subject: Re: Essences and soul Article-I.D.: cwruecmp.1173 ** This is somewhat long ... You might learn something new ... ... from Intellectuals Anonymous (IA not AI) ** A few years ago, I became acquainted with an international group called Community International that operates through a technique called Guided Experiences to assist individuals in their progress towards self-actualization. I remember that some of the techniques, like Dis-tension and the Experience of Peace, were so effective that the Gurus in the group were sought by major corporations for their Executive Development programs. The Community itself is a non-profit, self-sustaining organization that originated somewhere in South America. The Community had a very interesting (scientific?) model for the body and soul (existence and essence) problem. The model is based on levels or Centers for the Mind. I will summarize what I remember about the Centers of the Mind. 1. The major Centers of the Mind are the Physiological Center, the Motor Center, the Emotional Center, and the Intellectual Center. 2. The functional parts of the Mind belong to different (matrix) cells in a tabulation of major Center X major Center.
To illustrate the power of this abstraction, consider the following assertions where the loaded words have the usual meaning. The intellectual part of the intellectual center deals with reason or cognition. The rationalist AI persons must already feel very small. Reliance on reason alone indicates a poverty of the mind! The motor part of the intellectual center deals with imagination and creativity. The emotional part of the intellectual center deals with intuition. Similarly the motor center has intellectual, emotional and motor parts that control functions like learning to walk, the Olympics, and reflexes. The emotional center has intellectual, emotional, and motor parts that control faith and beliefs, the usual emotions like fear, anger, joy etc. and stuff like euphoria, erotica. The Physiological center is unfortunately the least understood. The center controls the survival drives for food, sex, safety etc. (And I believe, rational economic behaviour, free markets etc.) The thesis is that the lower centers (Physiological) must be developed before the higher centers can be productive. This must seem obvious since we don't expect a starving man to cry out with joy, or an emotionally disturbed person to reason effectively. ************************************************************************ I would appreciate any comments, anonymous or otherwise. Does this make any sense to you? Does this change your picture of your own mind? ************************************************************************ ------------------------------ Date: Mon, 4 Jun 84 17:07:34 PDT From: Joe Halpern Subject: Seminars - Knowledge Representation [Forwarded from the IBM/Halpern distribution by Laws@SRI-AI.] The knowledge seminar will be meeting again at 10 AM, Friday, June 8, in Auditorium A of Building 28 at IBM. 
This week Joe Karnicky will speak on "Knowledge Representation and Manipulation in an Expert System" and I will speak on work in progress entitled "Towards a Theory of Knowledge and Ignorance". I have appended the abstracts below. I have speakers lined up for three more sessions, which will be held June 22, July 6, and July 20. After that the seminar will stop, unless we can find some more volunteers to speak. As you can see by my talk, discussing work in progress is perfectly reasonable, as is talking about research other than your own. If you have any suggestions for speakers, or directions the seminar might take, please let me know. 10 AM -- Knowledge Representation and Manipulation in an Expert System Joe Karnicky, Varian Systems and Techniques Lab (Palo Alto) We are constructing an expert advisory system for chromatography, i.e. a computer program which is to perform as an advisor to analytical chemists (chromatographers) with functionality on the level of human experts. One of the most important considerations in the design of such a program is the choice of techniques for the representation and manipulation of the knowledge in the system. I will discuss these choices of knowledge representation, the results we have achieved, and the advantages and disadvantages we have discovered. The techniques to be discussed include: PREDICATE LOGIC - inference by a Prolog-type interpreter (backward chaining + unification) modified to include certainty factors and predicates to be evaluated outside of the rule base. PRODUCTION SYSTEMS - collections of situation-action (if..., then...) rules. FRAMES - hierarchically related data structures. PROCEDURES - small programs for specific tasks in specific situations. ANALOG REPRESENTATIONS - in this case, a detector's output signal vs. time. 11 AM -- Towards a Theory of Knowledge and Ignorance Joe Halpern, IBM Research Suppose you only have partial information about a particular domain. What can you be said to know in that case?
This turns out to be a surprisingly tricky question to answer, especially if we assume that you have introspective knowledge about your knowledge. In particular, you know far more than the logical consequences of your information. For example, if my partial information does not tell me anything about the price of tea in China, then I know I don't know anything about the price of tea in China. Moreover, I know that no one else knows that I know the price of tea in China (since in fact I don't). Yet this knowledge is not a logical consequence of my information, which doesn't mention the price of tea in China at all! I will discuss the problem of characterizing an agent's state of knowledge when s/he has partial information, and give such a characterization in both the single-agent and multi-agent case. The multi-agent case turns out to be much harder than the single-agent case, and we're still not quite sure that we have the right characterization there. I will also try to relate this work to results of Konolige, Moore, and Stark on non-monotonic logic and circumscriptive ignorance. ------------------------------ End of AIList Digest ******************** 5-Jun-84 21:43:10-PDT,17697;000000000000 Mail-From: LAWS created at 5-Jun-84 21:41:54 Date: Tue 5 Jun 1984 21:36-PDT From: AIList Moderator Kenneth Laws Reply-to: AIList@SRI-AI US-Mail: SRI Int., 333 Ravenswood Ave., Menlo Park, CA 94025 Phone: (415) 859-6467 Subject: AIList Digest V2 #70 To: AIList@SRI-AI AIList Digest Wednesday, 6 Jun 1984 Volume 2 : Issue 70 Today's Topics: Games - Computer War Games Request, AI Tools - Stanford Computer Plans, Scientific Method - Hardware Prototyping, Seminar - Expert System for Maintenance ---------------------------------------------------------------------- Date: 1 Jun 84 13:22:15-PDT (Fri) From: hplabs!intelca!cem @ Ucb-Vax.arpa Subject: Computer War Games Article-I.D.: intelca.287 This may be a rather simple problem, but at least it has no philosophical ramifications.
I am developing a game that plays very similarly to the standard combat situation type games that Avalon Hill is famous for. Basically, it has various pieces of hardware, such as battleships, aircraft carriers, destroyers, transports, tanks, armies, various aircraft, etc., and the purpose is to build a fighting force using captured cities and defeat the opposing force. It is fairly simple to make the computer a "game board"; however, I would also like it to be at least one of the opponents as well. So I need some pointers on how to make the program smart enough to play a decent game. I suspect there will be some similarities to chess since it too is essentially a war game. The abilities I hope to endow my computer with are those of building a defense, initiating an offense, and a certain amount of learnability. Ok world, what text or tome describes techniques to do this? I have a book on "gaming theory" that is nearly useless, I suspect. I'd like one that is a little more practical and less "and this is the proof ...", with the next sentence beginning 10 pages later. Maybe something like Newman and Sproull's graphics text, but for AI. --Chuck McManis ihnp4! Disclaimer : All opinions expressed herein are my \ own and not those of my employer, my dual! proper! friends, or my avocado plant. / \ / fortune! \ / X--------> intelca!cem ucbvax! / \ \ / \ hplabs! rocks34! ARPAnet : "hplabs!intelca!cem"@Berkeley / hao! ------------------------------ Date: Fri 1 Jun 84 15:17:06-PDT From: Mark Crispin Subject: Stanford University News Service press release [Forwarded from the Stanford bboard by CC.Clive@UTEXAS-20.] [Forwarded from the UTexas-20 bboard by CMP.Werner@UTEXAS-20.] STANFORD UNIVERSITY NEWS SERVICE STANFORD, CALIFORNIA 94305 (415) 497-2558 FOR INFORMATION CONTACT: Joel Shurkin FOR IMMEDIATE RELEASE STANFORD COMMISSIONS COMPUTER TO REPLACE LARGE DEC-20'S.
STANFORD-- Stanford University is negotiating with a small Silicon Valley company to build large computers to replace the ubiquitous DECSYSTEM-20s now ``orphaned'' by their manufacturer, Digital Equipment Corp. (DEC). The proposed contract, which would total around $1.4 million, would commission two machines from Foonly Inc. of Mountain View for delivery early in 1986. Foonly is owned by former Stanford student David Poole. According to Len Bosack, director of the Computer Science Department's Computer Facilities, the Foonly F1B computer system is about four times faster than the DEC model 2060 and 10 times faster when doing floating-point computations (where the decimal point need not be in the same place in each of the numbers calculated) that are characteristic of large-scale engineering and scientific problems. Ralph Gorin, director of Stanford's Low Overhead Time Sharing (LOTS) Facility -- the academic computer center -- said the Foonly F1B system, which is totally compatible with the DEC-20, is an outgrowth of design work done by Poole and others while at the Stanford Artificial Intelligence Laboratory. Since 1977, Foonly has built one large system, the F1, and several dozen smaller systems. The Foonly F1B is a descendant of the original F1, with changes reflecting advances in integrated circuit technology and the architectural refinements (internal design) of the latest DEC-20s. A spokesman for DEC said the company announced last year it had discontinued work on a successor to the DEC-20, code-named ``Jupiter,'' and would continue to sell enhanced versions of the large mainframe. Service on the machines was promised for the next ten years.
However, said Sandra Lerner, director of the Computing Facilities at the Graduate School of Business, the discontinuation of DEC-20 development left approximately 1,000 customers world-wide without a practicable ``growth path.'' Ten DECSYSTEM-20 computers on campus make that machine the most numerous large system at Stanford. The Graduate School of Business uses its two DEC-20s for administration, coursework, and research. The Computer Science Department uses two systems for research and administration. LOTS, the academic computer facility, supports instruction and unsponsored research on three systems and hopes to add one more before the F1B is available. Other DEC-20s are at the Department of Electrical Engineering, the artificial intelligence project at the Medical Center (SUMEX), and the recently formed Center for the Study of Language and Information (CSLI). The Stanford University Network (SUNet), the main university computer communications network, links together the 10 DEC-20s, approximately 30 mid-size computers, about 100 high-performance workstations, and nearly 400 terminals and personal computers. The DEC-20 has been a cornerstone of research in artificial intelligence (AI). Most of the large AI systems evolved on the DEC-20 and its predecessors. For this reason, Stanford and other computer science centers depend on these systems for their on-going research. Lerner said the alternative to the new systems would entail prohibitive expense to change all programs accumulated over nearly twenty years at Stanford and to retrain several thousand student, faculty, and staff users of these systems. The acquisition of the Foonly systems would be a deliberate effort to preserve these university investments. 6-1-84 -30- JNS3A EDITORS: Lerner may be reached at (415) 497-9717, Gorin at 497-3236, and Bosack at 497-0445.
------------------------------ Date: Mon 4 Jun 84 22:22:51-EDT From: David Shaw Subject: Correcting Stone's Mosaic comments Reluctant as I am to engage in a computer-mediated professional spat, it is clear that I can no longer let the inaccuracies suggested by Harold Stone's Mosaic quote go uncorrected. During the past two weeks, I've been inundated with computer mail asking me to clarify the issues he raised. In my last message, I tried to characterize what I saw as the basic philosophical differences underlying Harold's attacks on our research. Upon reading John Nagle's last message, however, it has become clear to me that it is more important to first straighten out the surface facts. First, I should emphasize that I do not in any way hold John Nagle responsible for propagating these inaccuracies. Nagle interpreted Stone's remarks in Mosaic exactly as I would have, and was careful to add an "according to the writer quoted" clause in just the right place. I also agree with Nagle that Stone's observations would have been of interest to the AI community, had they been true, and thus cannot object to his decision to circulate them over the ARPANET. As it happens, though, the obvious interpretation of Stone's published remarks, as both Nagle and I read them, was, quite simply, counterfactual. Nagle interpreted Stone's remarks, as I did, to imply that (in Nagle's words) "NON-VON's 1 to 3 are either unfinished or were never started." (Stone's exact words were "Why is there a third revision when the first machine wasn't finished?") In fact, a minimal (3 processing element) NON-VON 1 has already been completed and thoroughly tested. The custom IC on which it is based has been extensively tested, and has proved to be 100% functional. Construction of a non-trivial (though, at 128 PE's, still quite small) NON-VON 1 machine awaits only the receipt from ARPA's MOSIS system of enough chips to build a working prototype.
If MOSIS is in fact able to deliver these parts according to the estimated timetable they have given us, we should be able to demonstrate operation of the 128-node prototype before our original milestone date of 12/84. In fact, we have proceeded with all implementation efforts for which we have received funding, have developed and tested working chips in an unusually short period of time, and have met each and every one of our project milestones without a single schedule overrun. When the editors of Mosaic sent me a draft copy of the text of their article for my review, I called Stone, and left a message on his answering device suggesting that (even if he was not aware of, did not understand, or had some principled objection to our phased development strategy) he might want to change the words "wasn't finished" to "hasn't yet been finished" in the interest of factual accuracy. He never returned my call, and apparently never contacted Mosaic to correct these inaccuracies. For the record, let me try to explain why NON-VON has so many numbers attached to its name. NON-VON 2 was a (successful) "paper-and-pencil" exercise intended to explore the conceptual boundaries of SIMD vs. MIMD execution in massively parallel machines. As we have emphasized both in publications and in public talks, this architecture was never slated for physical implementation. To be fair to Stone, he never explicitly said that it was. Still, I (along with Nagle and others who have since communicated with me) felt that Stone's remarks SUGGESTED that NON-VON 2 provided further evidence that we were continually changing our mind about what we wanted to build, and abandoning our efforts in midstream. This is not true. NON-VON 3, on the other hand, was in fact proposed for actual implementation. 
Although we have not yet received funding to build a working prototype, and will probably not "freeze" its detailed design for some months, considerable progress has been made in a tentative design and layout for a NON-VON 3 chip containing eight 8-bit PE's. The NON-VON 3 PE is based on the same general architectural principles as the working NON-VON 1 PE, but incorporates a number of improvements derived from detailed area, timing, and electrical measurements we have obtained from the NON-VON 1 chip. In addition, we are incorporating a few features that were considered for implementation in NON-VON 1, but were deemed too complex for inclusion in the first custom chip to be produced at Columbia. While we still expect to learn a great deal from the construction of a 128-node NON-VON 1 prototype, the results we have obtained in constructing the NON-VON 1 chip have already paid impressive dividends in guiding our design for NON-VON 3, and in increasing the probability of obtaining a working, high-performance, 65,000-transistor chip within the foreseeable future. Based on his comments, I can only assume that, in my position, Stone would have attempted to jump directly from an "armchair design" to a working, highly optimized 65,000-transistor nMOS chip without wasting any silicon on interim experimentation. This strategy has two major drawbacks: 1. It tends to result in architectures that micro-optimize (in both the area and time dimensions) things that ultimately don't turn out to make much difference, at the expense of things that do. 2. It often seems to result in chips that never work. Even when they do, the total expenditure for development, measured in e