From in%@vtcs1 Fri Feb 13 00:49:35 1987 Date: Fri, 13 Feb 87 00:49:23 est From: vtcs1::in% To: ailist@sri-stripe.arpa Subject: AIList Digest V5 #38 Status: R AIList Digest Wednesday, 11 Feb 1987 Volume 5 : Issue 38 Today's Topics: Seminars - High-Level Architecture for LISP (SMU) & Combinatorics of Rule-Based Expert Systems (Rutgers) & Optimal Histories for Default Reasoning (SU), Conference - Extended Deadline for AAAI Uncertainty in AI Workshop & IEEE First Annual Conference on Neural Networks & Office Knowledge ---------------------------------------------------------------------- Date: WED, 10 oct 86 17:02:23 CDT From: leff%smu@csnet-relay Subject: Seminar - High-Level Architecture for LISP (SMU) Wednesday, February 11, 1987, Computer Science and Engineering, Southern Methodist University, Dallas, Texas 315SIC, 1:30 PM High-Level Language Architecture for LISP Steve Krueger (kreuger%home@TI-CSL) Symbolic Computing Laboratory Texas Instruments The TI LISP Machine family utilizes a high-level language architecture for LISP in order to gain high performance, preserve the full dynamic behavior of LISP and support software debugging. These processors support a complex high-level language instruction set for Common LISP (a rich dialect of LISP) implemented in hardware and microcode. Support for LISP and the instruction set gives high LISP performance. An overview and motivation of the HLL instruction set will be given, as will an overview of TI's LISP architecture. Steven D. Krueger S.M. Massachusetts Institute of Technology, Computer Science, 1980. S.B. Massachusetts Institute of Technology, Electrical Engineering, 1980. Mr Krueger is a Sr. Member of Technical Staff in TI Computer Science Center where his research interests are in computer architecture and hardware/software interfaces. He is responsible for the architecture of the Explorer Lisp Machine processor and its successors. 
He has been involved in Explorer since early 1983 and has made contributions to the processor and system architecture, and was leader of the hardware and software integration team. He also contributed to the architecture of the single chip Lisp processor (CLM) and is responsible for an improved instruction set architecture for Explorer and CLM. ------------------------------ Date: 4 Feb 87 22:30:28 EST From: KALANTARI@RED.RUTGERS.EDU Subject: Seminar - Combinatorics of Rule-Based Expert Systems (Rutgers) RUTGERS COMPUTER SCIENCE AND RUTCOR COLLOQUIUM SCHEDULE - SPRING 1987 Computer Science Department Colloquium : DATE: Friday, Feb. 6, 1987 SPEAKER: Wiktor Marek AFFILIATION: University of Kentucky TITLE: "On the logic and combinatorics of rule-based expert systems" TIME: 2:50 (Coffee and Cookies will be setup at 2:30) PLACE: Hill Center, Room 705 We discuss some basic issues of rule-based expert systems, their logic and the main complexity issues related to the algorithms of deciding the consistency and completeness of such systems. In addition we study the connections to the theory of non-first normal form relational databases and find the applications of our theory to non-first normal form relations. ------------------------------ Date: 06 Feb 87 1028 PST From: Vladimir Lifschitz Subject: Seminar - Optimal Histories for Default Reasoning (SU) Commonsense and Nonmonotonic Reasoning Seminar OPTIMAL HISTORIES: A TEMPORAL APPROACH TO DEFAULT REASONING Van Nguyen IBM T.J.Watson Research Center Yorktown Heights, NY 10598 Thursday, February 12, 4pm Bldg. 160, Room 161K A new technique in default reasoning (non-monotonic reasoning) is presented. It is based on the notion of optimal histories. Intuitively, an optimal history contains a sequence of sets S(n), n = 0, 1, ..., of first-order formulae. Each S(n) is a description of the state of the world, as seen by some computing agent, at time (situation) n. 
State S(n+1) is computed from S(n) and the event (action) E(n+1) that occurs at time n+1 by a default-inference rule, so that facts that are true in S(n) tend to stay true in S(n+1), unless something falsifies them. Other parameters of an optimal history are the deductive ability of the computing agent and a set of basic axioms and constraints. Thus an optimal history is a description of how the world changes with new events, as time passes. The technique is applicable to such problems in default reasoning as belief revision, dealing with exceptions to general rules, the frame problem of McCarthy and Hayes, the qualification problem of McCarthy, and the temporal projection problem of Hanks and McDermott. Optimal histories can also be formulated in the framework of temporal logic of Manna and Pnueli. ------------------------------ Date: Sat, 7 Feb 87 21:39:25 pst From: levitt@ads.ARPA (Tod Levitt) Subject: Conference - Extended Deadline for AAAI Uncertainty in AI Workshop EXTENSION OF SUBMISSION DEADLINE for AAAI UNCERTAINTY IN ARTIFICIAL INTELLIGENCE WORKSHOP Seattle, Washington July 10-12, 1987 Due to conflicts with a number of other submission deadlines for related conferences and workshops, the deadline for the 1987 Uncertainty in AI workshop is being extended until March 10, 1987. Please send four copies of papers or extended abstracts to Tod S. Levitt c/o Advanced Decision Systems 201 San Antonio Circle, Suite 286 Mountain View, California 94040 ------------------------------ Date: Fri, 6 Feb 87 17:58 EDT From: MIKE@BUCASA.BITNET Subject: Conference - IEEE First Annual Conference on Neural Networks From: (Stephen Grossberg) IEEE First Annual Conference on Neural Networks, San Diego, California, 21-24 June 1987. Requests from many scientists who heard about the meeting only recently have led to a revised deadline for abstracts and papers. 
Extended abstracts should be submitted for conference presentation by April 1, 1987 Abstracts received after April 1, 1987 will be returned. Please submit abstract plus 4 clean copies. Abstracts must be neatly typed, single spaced, and no more than four pages. Abstracts will be carefully refereed as they are received. Authors of accepted abstracts will be notified as soon after receipt as possible, and no later than the first week of May. Authors of accepted abstracts will promptly be sent materials for paper preparation. Papers can be up to 8 pages in length. Final papers for publication in the book of proceedings are due no later than June 21, 1987 at the meeting. The proceedings will be published in the Fall of 1987. Address all correspondence referring to abstracts and papers to: Maureen Caudill IEEE - ICNN 10615G Tierrasanta Blvd. Suite 346 San Diego, California 92124 Telephone: (619) 457-5550, ext. 221 ------------------------------ Date: Mon, 9 Feb 87 07:49:30 est From: rba@flash.bellcore.com (Robert B. Allen) Subject: Conference - Office Knowledge CALL FOR PARTICIPATION IFIP WG8.4 Workshop on Office Knowledge: Representation, Management and Utilization 17-19 August 1987 University of Toronto Toronto, Canada WORKSHOP CHAIRMAN PROGRAM CHAIRMAN Prof. Dr. Alex A. Verrijn-Stuart Dr. Winfried Lamersdorf University of Leiden IBM European Networking Center ORGANIZING CHAIRMAN Prof. Fred H. Lochovsky University of Toronto This workshop is intended as a forum and focus for research in the representation, management and utilization of knowledge in the office. This research area draws from and extends techniques in the areas of artificial intelligence, data base management systems, programming languages, and communication systems. The workshop program will consist of one day of invited presentations from key researchers in the area plus one and one half days of contributed presentations. 
Extended abstracts, in English, of 4-8 double-spaced pages (1,000-2,000 words) are invited. Each submission will be screened for relevance and potential to stimulate discussion. There will be no formal workshop proceedings. However, accepted submissions will appear as submitted in a special issue of the WG8.4 newsletter and will be made available to workshop participants.

How to submit

Four copies of double-spaced extended abstracts in English of 1,000-2,000 words (4-8 pages) should be submitted by 15 April 1987 to the Program Chairman:

Dr. Winfried Lamersdorf
IBM European Networking Center
Tiergartenstrasse 15
Postfach 10 30 68
D-6900 Heidelberg
West Germany

Important Dates

Extended abstracts due: 15 April 1987
Notification of acceptance for presentation: 1 June 1987
Workshop: 17-19 August 1987

------------------------------

End of AIList Digest
********************

From in%@vtcs1 Fri Feb 13 00:49:55 1987
Date: Fri, 13 Feb 87 00:49:39 est
From: vtcs1::in%
To: ailist@sri-stripe.arpa
Subject: AIList Digest V5 #39
Status: R

AIList Digest           Wednesday, 11 Feb 1987      Volume 5 : Issue 39

Today's Topics:
  Queries - LISP Conversion & Symbolics Termcap,
  AI Tools - Coral Object Logo & MICE Expert System Shell,
  Education - Introductory AI Books,
  Representations - Richness and Flexibility

----------------------------------------------------------------------

Date: Mon, 09 Feb 87 11:00:18 -0800
From: simmons@aerospace.aero.org
Subject: LISP Conversion

I am gathering information concerning the conversion or translation of programs written in LISP to procedural languages (especially interested in LISP to Fortran). I would appreciate comments from anyone who knows of work being done in this area. I will summarize replies for the AILIST.
Thanks,
Charles Simmons (simmons@aerospace.arpa)

------------------------------

Date: Tue, 10 Feb 87 09:18:16 -0800
From: Amnon Meyers
Subject: symbolics question

I'm having trouble getting the Symbolics 3600 to behave properly as a terminal when logged into UNIX systems. Editors like VI and EMACS don't work right, even though vt100 emulation mode is set up. If someone has a TERMCAP line that works well, or can otherwise help, I'd appreciate it.

Thanks,
Amnon Meyers

------------------------------

Date: 9 Feb 87 15:15:09 GMT
From: mdc@EDDIE.MIT.EDU (Martin Connor)
Subject: Coral Object Logo

In article <1725@PUCC.BITNET> 6065833@PUCC.BITNET writes:
> Can anyone recommend another LOGO for the macintosh? Has anyone found
> a way to print graphics windows on a laserwriter? Any information
> would be greatly appreciated.

Object Logo from Coral Software in Cambridge, Mass is a good value. It is about $80 and has loads of features. It comes with Finder 5.3 and System 3.2, lots of examples, and a good reference manual, and is supported by a solid bunch of hackers (I know some of them). They have an ad in this month's MACWORLD, with ordering info. I've used it, and I recommend it highly. I hope some schools pick up on it and use it.

------------------------------

Date: Mon 9 Feb 87 18:08:06-PST
From: Matt Heffron
Subject: MICE Expert System Shell

We ordered the $20 MICE system. I haven't used it yet, as I haven't had time to really sit down and read and understand the manuals. From the various features referred to in the Table of Contents of the 3 manuals (yes, 3: User Reference, Technical Reference, and Graphic System User Manual; total approx. 140 pgs) it looks very impressive. The knowledge representation appears to be primarily Semantic Net based, and there is support for graphically perusing the network and building custom graphic objects for use with the system.
HOWEVER, after reading some of the User Reference manual, it quickly begins to look like "you get what you pay for". E.g. the definitions of FORWARD CHAINING and BACKWARD CHAINING in the User Reference are *REVERSED* (from what I understand them to be):

"In general, facts make up evidence; in the process of determining the validity of a fact, further evidence may be required. This propagation of the thought process continues until a fundamental fact is encountered which requires no further evidence. This fundamental fact is called an ATOMIC FACT. And, this thought process is called FORWARD CHAINING. Forward chaining is often used by human experts to validate assumptions. On the other hand, if a given fact is being used to support the validity of more than one fact, validating the other fact will often cause the human expert to consider the other alternatives which it supports. This thought process is called BACKWARD CHAINING."

Also, it becomes clear from the included price list that this is really only designed to be a DEMO system. The KB is limited to 12K. For versions that support larger KB's the price goes WAY up (e.g. 20K is $200, 100K is $1750, 1M is $9000). I also recently received a letter asking if I wanted to subscribe to the monthly users newsletter and/or program updates (at $60/year for the newsletter, $80/year for the updates, or $120/year for both). For $20 (without the support) it will probably be OK for simple prototypes.

Matt Heffron BEC.HEFFRON@USC-ECL.ARPA
Standard disclaimer about these being my opinions, not those of my employer.

------------------------------

Date: 9 Feb 87 14:32:36 GMT
From: atux01!jlc@rutgers.rutgers.edu (J. Collymore)
Subject: A List of AI Books (for beginners)

I have received a number of requests to post the replies to my query regarding good books on Artificial Intelligence for the beginner. Well, here are those replies. Thank you to all who responded to my query.
Jim Collymore

*******************************************************************************

Re: Need References to VERY BASIC Concepts of AI & Preferred Comp. Langs.

Artificial Intelligence, by Patrick Winston

===============================================================================

Re: Need References to VERY BASIC Concepts of AI & Preferred Comp. Langs.
Newsgroups: comp.ai,comp.misc
Organization: MIT Media Lab, Cambridge MA

The following two books are the most recommended ones I have seen and are coordinated to introduce (1) concepts and (2) techniques of AI.

(1) Artificial Intelligence, Patrick Henry Winston, Addison Wesley
(2) Lisp, Patrick Henry Winston and Berthold Klaus Horn, Addison Wesley

As for good languages for AI, Lisp is good because with it you think more about the solution than about the implementation, and because it allows you to develop the language you would have liked to have to solve the problem with in the first place. This latter requirement seems to be important for the kind of approach used for AI these days.

There are two compilations of papers available which are of interest. Titles are:

Readings in Artificial Intelligence
Readings in Knowledge Representation

I will try to get the publisher's name for you.

--Mario

===============================================================================

Subject: AI

Learn LISP and PROLOG. Winston's and Steele's books on COMMON LISP are good. Steele is more of a reference book. Clocksin and Mellish is the default standard for PROLOG. However, Bratko is easier to learn from. Bratko also provides a good intro to AI. I highly recommend reading Bratko. Winston's book on AI is TERRIBLE for a beginning book. For some history, MIT puts out some collected papers.

--------

LISP, Winston.
COMMON LISP, Steele.
Programming in Prolog, Clocksin and Mellish.
Programming in Prolog for Artificial Intelligence, Bratko.
=============================================================================== Subject: AI programming languages Have you thought about trying Logo? This Department used it for years, though we have now moved to Edinburgh Prolog. Try reading Alan Bundy's book "Artificial Intelligence, an introductory course", paperback published by Edinburgh University Press ------------------------------ Date: 10 Feb 87 17:40:17 GMT From: jennifer!lyang@sun.com (Larry Yang) Subject: Learing about AI (was Re: A List of AI Books (for beginners)) >Learn LISP and PROLOG. When I took a class on Artificial Intelligence at Stanford (CS223, for those who care), I figured I was ready. I knew PROLOG and LISP. And I was all set to learn about this great thing called 'AI', at the place where big names made it happen. I was in for a surprise. Based on my experience, if you want to learn about hard-core, theoretical artificial intelligence, then you must have a strong (I mean STRONG) background in formal logic. My understanding of PROLOG (which resembles predicate logic) was very helpful, but it wasn't enough. If you want to go out and build expert systems, or perform some other intelligence engineering task, then PROLOG and LISP and a basic grasp of logic are probably enough. But if you want to follow the latest research (and maybe eventually do some of it), then a formal training in logic is a must. ================================================================================ Whydoesn'titsnowintherightplaces? --Larry Yang | *A REAL signature* _|> /\ | lyang@sun.com,{backbone}!sun!lyang | "Limit? We don't | | | /-\ |-\ /-\ Sun Microsystems, Inc. | need no stinkin' <|_/ \_| \_/\_| |_\_| Mountain View, California | 4-line limit! " _/ _/ ------------------------------ Date: 9 Feb 87 11:06:42 est From: Walter Hamscher Subject: representation languages: richness and flexibility Date: 5 Feb 87 03:37:30 GMT From: berleant@sally.utexas.edu (Dan Berleant) Hmm. 
I just attended a lecture in which frame based representation schemes were touted on the basis of the fact that representation languages should be rich and flexible. Well, it sounds good, it even sounds simple, but I'm not sure what it means! In the context of representation languages, what is 'rich', and what is 'flexible'?

Good question. Flame on...

The term ``representation language'' is redundant. What other kind of language could there be? Just think about languages, period, and the terms make more sense. Languages are symbol structures that have an interpreter. And since the terms are relative, it makes more sense to ask ``what makes language A richer than language B'' and ``what makes language X more flexible than language Y.''

Here's one way to characterize richness: A is richer than B if symbol structures in A can finitely denote facts (i.e., things the interpreter can interpret them as) that B can't. E.g., 1st order predicate calculus is richer than propositional calculus because it has quantification, which allows you to express infinitely large propositional conjunctions and disjunctions. Frame languages, semantic nets, etc., differ as to whether they correspond to first-, second-, or omega-order logics, and that's probably the best way to characterize their richness in a technical sense. If you replace finiteness with compactness, it becomes more a matter of taste: frame languages print nicely because they suppress some redundancies, but does the computer really care about that?

Here's one way to characterize flexibility: X is more flexible than Y if a local incremental change to the denotation of a symbol structure in X can be done by changing fewer symbols and relations. This actually turns out to go along with richness sometimes.
For example, a frame based language with inheritance and cancellation is more flexible than 1st order predicate calculus because (to beat on a tired example) you can say that birds fly and then later say that penguins, which are birds, don't fly, without having to go back and change the original statement about how birds fly. You make a local addition and you don't have to go around the whole symbol structure fixing a lot of things up. What this goes to show is that a frame language with these features has second order properties; if you go to 2nd order predicate calculus via circumscription, you get this locality property back. Now you get to the real question: what are the properties of the interpreter that come packaged with the language? Does it give you some kind of guarantee about completeness, about variant queries, about constant time complexity for query answering, or what? Does the language come with a basic set of facts about the world that you can build on (like a subroutine library in a programming language)? Or does it just stuff things into a database and let you figure out what to do with them later? The richness and flexibility of the language itself are not very interesting properties, it's the interpreter that matters. What people usually mean when they say ``representation language'' is ``belief language'', since they're talking about a language whose purpose is to denote the beliefs of an agent. But if you expect the interpreter of your belief language to do a lot of automatic inferences that solve a significant part of the software engineering problem for you, then you're probably expecting too much from it: that's the job of a programming language and environment. Flame off... 
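The birds-and-penguins default in the passage above can be made concrete with a small sketch. This is an illustrative toy in Python, not any frame language of the period; the `Frame` class and its `get` lookup are assumptions of the sketch:

```python
# A minimal sketch of inheritance with cancellation: properties flow
# down the hierarchy, and a subframe can cancel (override) an inherited
# default without touching the statement that introduced it.

class Frame:
    def __init__(self, name, parent=None, **defaults):
        self.name = name
        self.parent = parent
        self.defaults = defaults  # locally asserted properties

    def get(self, prop):
        # Look locally first; otherwise inherit from the parent.
        if prop in self.defaults:
            return self.defaults[prop]
        if self.parent is not None:
            return self.parent.get(prop)
        return None

bird = Frame("bird", flies=True)               # birds fly (the general rule)
penguin = Frame("penguin", bird, flies=False)  # a local cancellation
tweety = Frame("tweety", bird)
opus = Frame("opus", penguin)

print(tweety.get("flies"))  # True  -- inherited from bird
print(opus.get("flies"))    # False -- penguin's cancellation wins
```

The point matches the flame: adding the penguin exception took one local assertion, and the original statement that birds fly was never edited.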
Walter Hamscher ------------------------------ End of AIList Digest ******************** From in%@vtcs1 Fri Feb 13 00:51:39 1987 Date: Fri, 13 Feb 87 00:51:25 est From: vtcs1::in% To: ailist@sri-stripe.arpa Subject: AIList Digest V5 #40 Status: R AIList Digest Thursday, 12 Feb 1987 Volume 5 : Issue 40 Today's Topics: Queries - Pattern Recognition/Graphs & Mac PD Prolog & Print Driver Extension & DEC AI Workstation & J.M. Spivey, Representation - Language Comparisons, AI Tools - Against the Tide of Common LISP ---------------------------------------------------------------------- Date: Wed, 11 Feb 87 10:42:08 n From: DAVIS@EMBL.BITNET Subject: pattern recognition/graphs Does anyone out there in the electronic village have any familiarity or knowledge of pattern recognition algorithms which are or may be of particular use in the identification of fuzzy predefined graph features ? Whilst I have a couple of approaches of my own, I'd be very interested to hear about any other methods. I guess that 'template matching' with arbitrary match coefficients is the most obvious, but any other offers ? netmail: davis@embl.bitnet (from uucp thats psuvax1!embl.bitnet!davis) wetmail: embl, postfach 10.2209, 6900 Heidleberg, west germany paul davis ------------------------------ Date: 10 Feb 87 22:51:12 GMT From: mendozag@ee.ecn.purdue.edu (Grado) Subject: Mac PD Prolog wanted Does anyone have the sources for a PD Prolog for the Mac+? How about any other PD Prolog, so I can port it to the Mac? Please, let me know by e-mail. Thanks in advance, Victor M Grado School of EE Purdue University West Lafayette, IN 47907 (317) 494-3494 mendozag@ecn.purdue.edu pur-ee!mendozag ------------------------------ Date: Wed, 11 Feb 87 15:53 EST From: DON%atc.bendix.com@RELAY.CS.NET Subject: Print driver extension for HP Laser printers on LMI Does anyone have a print driver adapted for the HP Laser printer written for LMI, Symbolics, or TI explorer? 
I'm looking for the ability to set tab stops and use multiple fonts.

[I like to be as helpful as possible, but several readers have pointed out that termcap entries and other hardware queries have nothing to do with AI. There are lists (SLUG@UTEXAS-20, INFO-TI-EXPLORER@SUMEX-AIM, INFO-1100@SUMEX-AIM, WORKS@RUTGERS, etc.) devoted to specific hardware and operating systems. -- KIL]

------------------------------

Date: Wed, 11 Feb 87 08:42 EST
From: DON%atc.bendix.com@RELAY.CS.NET
Subject: DEC AI Workstation

One of my colleagues is thinking of buying an AI workstation from DEC. I have heard nothing good about them. However, the negative remarks have not come from people who have actually used them. In order to better advise my colleague, I would like to hear from people who have used the workstations. Of particular interest to me are remarks from people who have used the DEC workstation and one of the standard Lisp workstations (XEROX, Symbolics, LMI, TI, Sun, Apollo). What about the Lisp Sensitive Editor? Is it worth anything? How does it compare to ZMACS?

Thank you,
Don Mitchell

------------------------------

Date: Wed, 11 Feb 87 09:35 EDT
From: Peter Heitman
Subject: Looking for J.M. Spivey, the author of Portable Prolog

Can anyone help me locate J.M. Spivey, the author of Portable Prolog? He was at the University of York years ago and then went to Edinburgh for a while after that. Any help tracking him down will be appreciated.

Peter Heitman
heitman@cs.umass.edu

------------------------------

Date: Wed, 11 Feb 87 12:35:50 pst
From: ladkin@kestrel.ARPA (Peter Ladkin)
Subject: language comparisons

Walter Hamscher writes:
> Here's one way to characterize richness: A is richer than B if symbol
> structures in A can finitely denote facts (i.e., the interpreter can
> interpret as) that B can't.

I suppose the intention of `richer than' is to be an asymmetric comparative.
Thus, he needs to add some condition such as: A can also finitely denote all facts that B can, to rule out cases where both A is richer than B and B is richer than A. A case of this would be first-order logic and modal logic. Each may express conditions that are inexpressible in the other (e.g. irreflexivity for modal logic, well-cappedness for first-order logic).

peter ladkin
ladkin@kestrel.arpa

------------------------------

Date: Tue, 10 Feb 87 19:24:09 pst
From: well!jjacobs@lll-lcc.ARPA (Jeffrey Jacobs)
Reply-to: well!jjacobs@lll-lcc.ARPA (Jeffrey Jacobs)
Subject: Against the Tide of Common LISP

"Against the Tide of Common LISP"
Copyright (c) 1986, Jeffrey M. Jacobs, CONSART Systems Inc., P.O. Box 3016, Manhattan Beach, CA 90266 (213)376-3802
Bix ID: jeffjacobs, CIS Userid 75076,2603
Reproduction by electronic means is permitted, provided that it is not for commercial gain, and that this copyright notice remains intact.

The following are from various correspondences and notes on Common LISP:

Since you were brave enough to ask about Common Lisp, sit down for my answer: I think CL is the WORST thing that could possibly happen to LISP. In fact, I consider it a language different from "true" LISP. CL has everything in the world in it, usually in 3 different forms and 4 different flavors, with 6 different options. I think the only thing they left out was FEXPRs...

It is obviously intended to be a "compiled" language, not an interpreted language. By nature it will be very slow; somebody would have to spend quite a bit of time and $ to make a "fast" interpreted version (say for a VAX). The grotesque complexity and plethora of data types present incredible problems to the developer; it was several years before Gold Hill had lexical scoping, and NIL from MIT DOES NOT HAVE A GARBAGE COLLECTOR!!!! It just eventually eats up its entire VAX/VMS virtual memory and dies... Further, there are inconsistencies and flat out errors in the book.
So many things are left vague, poorly defined and "to the developer". The entire INTERLISP arena is left out of the range of compatibility. As a last shot, most of the fancy Expert Systems (KEE, ART) are implemented in Common LISP. Once again we hear that LISP is "too slow" for such things, when a large part of it is the use of Common LISP as opposed to a "faster" form (i.e. such as with shallow dynamic binding and simpler LAMBDA variables; they should have left the &aux, etc. as macros). Every operation in CL is very expensive in terms of CPU...

______________________________________________________________

I forgot to mention the fact that I do NOT like lexical scoping in LISP; to allow both dynamic and lexical makes the performance even worse. To me, lexical scoping was and should be a compiler OPTIMIZATION, not an inherent part of the language semantics. I can accept SCHEME, where you always know that it's lexical, but CL could drive you crazy (especially if you were testing/debugging other people's code).

This whole phenomenon is called "Techno-dazzle"; i.e. look at what a super duper complex system I can build that will do everything. Who cares if it's incredibly difficult and costly to build and understand, and if most of the features will only get used because "they are there", driving up the CPU usage and making the whole development process more costly...

BTW, I think the book is poorly written and assumes a great deal of knowledge about LISP, and MACLISP in particular. I wouldn't give it to ANYBODY to learn LISP... Not only does he assume you know a lot about LISP, he assumes you know a LOT about half the other existing implementations to boot. I am inclined to doubt that it is possible to write a good introductory text on Common LISP; you d**n near need to understand ALL of it before you can start to use it. There is nowhere near the basic underlying set of primitives (or philosophy) to start with, as there is in Real LISP (RL vs CL).
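For readers who have not met the distinction being argued about, the dynamic-versus-lexical scoping issue can be sketched outside of Lisp. The following Python toy emulates shallow dynamic binding with an explicit stack of bindings; the helper names (`dyn_bind`, `dyn_get`, `dyn_unbind`) are inventions of this sketch, not part of any Lisp implementation:

```python
# A dynamic variable takes its value from the most recent active binding
# at call time; a lexical variable is fixed by where the reading function
# was written. "Shallow" binding keeps the current value on top of a stack.

dynamic_env = {}  # name -> stack of shallow dynamic bindings

def dyn_bind(name, value):
    dynamic_env.setdefault(name, []).append(value)

def dyn_unbind(name):
    dynamic_env[name].pop()

def dyn_get(name):
    return dynamic_env[name][-1]  # shallow lookup: just the top of stack

x_lex = 1
def lexical_reader():
    return x_lex  # resolved in the enclosing (module) scope, always 1 here

def dynamic_reader():
    return dyn_get("x")  # resolved against whoever bound "x" most recently

dyn_bind("x", 1)
def caller():
    # Rebind "x" dynamically for the duration of this call only.
    dyn_bind("x", 99)
    try:
        return lexical_reader(), dynamic_reader()
    finally:
        dyn_unbind("x")

print(caller())  # (1, 99): only the dynamic reader sees the caller's binding
```

The sketch shows why the complaint above has teeth: with both regimes available, a reader of `dynamic_reader`-style code cannot tell what it returns without knowing the whole call history, whereas `lexical_reader` can be understood from its text alone.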
You'll notice that there is almost NO defining of functions using LISP in the Steele book. Yet one of the best things about Real LISP is the precise definition of a function! Even when using Common LISP (NIL), I deliberately use a subset. I'm always amazed when I pick up the book; I always find something that makes me curse.

Friday I was in a bookstore and saw a new LISP book ("Looking at LISP", I think, the author's name escapes me). The author uses SETF instead of SETQ, stating that SETF will eventually replace SETQ and SET (!!). Thinking that this was an error, I checked in Steele; lo and behold, tis true (sort of). In 2 2/3 pages devoted to SETF, there is >> 1 << line at the very bottom of page 94! And it isn't even clear; if the variable is lexically bound AND dynamically bound, which gets changed (or is it BOTH)? Who knows? Where is the definitive reference?

"For consistency, it is legal to write (SETF)"; (a) in my book, that should be an error, (b) if it's not an error, why isn't there a definition using the appropriate & keywords? Consistency? Generating an "insufficient args" error seems more consistent to me... Care to explain this to a "beginner"? Not to mention that SETF is a MACRO, by definition, which will always take longer to evaluate.

Then try explaining why SET only affects dynamic bindings (a most glaring error, in my opinion). Again, how many years of training, understanding and textbooks are suddenly rendered obsolete? How many books say (SETQ X Y) is a convenient form of (SET (QUOTE X) Y)? Probably all but two... Then try to introduce them to DEFVAR, which may or may not get evaluated who knows when! (And which isn't implemented correctly very often, e.g. Franz Common and Gold Hill.) I don't think you can get 40% of the points in 4 readings!

I'm constantly amazed at what I find in there, and it's always the opposite of Real LISP! MEMBER is a perfect example. I complained to David Betz (XLISP) that MEMBER used EQ instead of EQUAL.
I only checked about 4 books and manuals (UCILSP, INTERLISP, IQLISP and a couple of others). David correctly pointed out that CL defaults to EQ unless you use the keyword syntax. So years of training, learning and ingrained habit go out the window. How many bugs will this introduce. MEMQ wasn't good enough? MEMBER isn't the only case... While I'm at it, let me pick on the book itself a little. Even though CL translates lower case to upper case, every instance of LISP names, code, examples, etc are in **>> lower <<** case and lighter type. In fact, everything that is not descriptive text is in lighter or smaller type. It's VERY difficult to read just from the point of eye strain; instead of the names and definitions leaping out to embed themselves in your brain, you have to squint and strain, producing a nice avoidance response. Not to mention that you can't skim it worth beans. Although it's probably hopeless, I wish more implementors would take a stand against COMMON LISP; I'm afraid that the challenge of "doing a COMMON LISP" is more than most would-be implementors can resist. Even I occasionally find myself thinking "how would I implement that"; fortunately I then ask myself WHY? Jeffrey M. Jacobs CONSART Systems Inc. Technical and Managerial Consultants P.O. Box 3016, Manhattan Beach, CA 90266 (213)376-3802 CIS:75076,2603 BIX:jeffjacobs USENET: jjacobs@well.UUCP (originally written in late 1985 and early 1986; more to come RSN) ------------------------------ Date: Wed, 11 Feb 87 23:04:46 pst From: well!jjacobs@lll-lcc.ARPA (Jeffrey Jacobs) Reply-to: well!jjacobs@lll-lcc.ARPA (Jeffrey Jacobs) Subject: Re: Against the Tide of Common LISP Some comments on "Against the Tide of Common LISP". First, let me point out that this is a repeat of material that appeared here last June. There are several reasons that I have repeated it: 1) To gauge the ongoing change in reaction over the past two years. 
The first time parts of it appeared in 1985, the reaction was uniformly pro-CL. When it appeared last year, the results were 3:1 *against* CL, mostly via mail. Now, being "Against the Tide..." is almost fashionable...

2) To lay the groundwork for some new material that is in progress and will be ready RSN.

I did not edit it since it last appeared, so let me briefly repeat some of the comments made last summer:

1. My complaint that "both dynamic and lexical makes the performance" even worse refers *mainly* to interpreted code. I have already pointed out that in compiled code the difference in performance is insignificant.

2. The same thing applies to macros. In interpreted code, a macro takes significantly more time to evaluate. I do not believe that it is acceptable for a macro in interpreted code to be destructively expanded, except under user control.

3. SET has always been a nasty problem; CL didn't fix the problem, it only changed it. Getting rid of it and using a new name would have been better. After all, maybe somebody *wants* SET to set a lexical variable if that's what it gets... I will, however, concede that CL's SET is indeed generally the desired result.

4. CL did not fix the problems associated with dynamic vs lexical scoping and compilation, it only compounded them. My comment that >"lexical scoping was and should be a compiler OPTIMIZATION" is a *historical* viewpoint. In the 'early' days, it was recognized that most well-written code was written in such a manner that it was an easy and effective optimization to treat variables as being lexical/local in scope. The interpreter/compiler dichotomy is effectively a *historical accident* rather than design or intent of the early builders of LISP. UCI LISP should have been released with the compiler default as SPECIAL. If it had been, would everybody now have a different perspective? BTW, it is trivial for a compiler to default to dynamic scoping...

5. >I checked in Steele; lo and behold, 'tis true (sort of).
>In 2 2/3 pages devoted to SETF, there is >> 1 << line at the very bottom
>of page 94!

I was picking on the book, not the language. But thanks for all the explanations anyway...

6. >"For consistency, it is legal to write (SETF)"

I have so much heartburn with SETF as a "primitive" that I'll save it for another day.

7. >MEMBER used EQ instead of EQUAL.

Mea culpa, it uses EQL!

8. I only refer to Common LISP as defined in the Steele book, and to the Common LISP community's subsequent inability to make any meaningful changes or create a subset. (Excluding current ANSI efforts.)

Some additional points:

1. Interpreter Performance

I believe that development under an interpreter provides a substantially better development environment, and that compiling should be a final step in development. It is also one of LISP's major features that anonymous functions get generated as non-compiled functions and must be interpreted. As such, interpreter performance is important.

3. "Against the Tide of Common LISP"

The title expresses my 'agenda'. Common LISP is not a practical, real-world language. It will result in the ongoing rejection of LISP by the real world; it is too big and too expensive. To be accepted, LISP must be able to run on general-purpose, multi-user computers. It is choking off acceptance of other avenues and paths of development in the United States. There must be a greater understanding of the problems, and benefits, of Common LISP, particularly by the 'naive' would-be user. Selling it as the 'ultimate' LISP standard is dangerous and self-defeating!

Jeffrey M. Jacobs
CONSART Systems Inc.
Technical and Managerial Consultants
P.O.
Box 3016, Manhattan Beach, CA 90266 (213)376-3802 CIS:75076,2603 BIX:jeffjacobs USENET: jjacobs@well.UUCP ------------------------------ End of AIList Digest ******************** From in%@vtcs1 Sat Feb 14 00:34:45 1987 Date: Sat, 14 Feb 87 00:34:39 est From: vtcs1::in% To: ailist@sri-stripe.arpa Subject: AIList Digest V5 #41 Status: R AIList Digest Friday, 13 Feb 1987 Volume 5 : Issue 41 Today's Topics: Seminars - Large Optical Expert Systems (CMU) & Knowledge Acquisition in Magnetic Resonance Imaging (CMU) & Methods for treating Uncertainty in AI (CMU) & The PRL Mathematics Environment (CMU), Conference - IEEE Conference on Neural Nets: Student Special & 2nd Conference on Artificial Intelligence and Sea & European Conference on AI in Medicine ---------------------------------------------------------------------- Date: 10 Feb 87 16:49:48 EST From: Patty.Hodgson@isl1.ri.cmu.edu Subject: Seminar - Large Optical Expert Systems (CMU) SPECIAL SEMINAR TOPIC: Large Optical Expert Systems SPEAKER: Dr. Alastair D. McAulay Wright State University Department of Computer Science DATE: Thursday, February 12 TIME: 10:30 am PLACE: Doherty Hall 3313, CMU ABSTRACT: Fast expert systems are required in such areas as plant diagnosis, and robotics. The advantages of optics over electronics for such systems are discussed and include: massively parallel logic and pattern matching, and global communications and search. New computer architectures are required to efficiently utilize fast 2-D optical spatial light modulators in development. A proposed real-time optical recursive probabilistic expert system is described. Conventionally, matching in optics is performed in an analog manner. An alternative digital symbolic substitution approach is described for matching and variable instantiation in logic programming languages. ************************************************************************* Alastair holds a PhD from Carnegie Mellon University, and M.A. and B.A. 
degrees with honors from Cambridge University. He is NCR Distinguished Professor in the Department of Computer Science at Wright State University. He has numerous publications involving optical computing, scientific computation, signal processing, and parallel computation. For the previous eight years he was Program Manager and Principal Investigator in the Corporate Computer Science Laboratory at Texas Instruments for a DARPA/ONR optical computing contract. He is a Senior Member of the IEEE and a member of SPIE and SEG. He founded the Dallas IEEE Computer Society and was Chairman of the Dallas Section.

*****************************************************************************

If you are interested in an appointment with Dr. McAulay, contact Patty at 8818 or send mail to pah@d.

------------------------------

Date: 10 Feb 87 18:58:05 EST
From: Steven.Minton@k.cs.cmu.edu
Subject: Seminar - Knowledge Acquisition in Magnetic Resonance Imaging (CMU)

This week's speaker in the Grad AI Seminar is Mark Perlin. (The seminar is held weekly in 7220 Wean, at 3:15 on Fridays.) Mark is going to be describing work that he recently wrote up for an AAAI paper. Here's the title and abstract from the paper:

Title: Knowledge Acquisition in Magnetic Resonance Imaging

We have been observing and analyzing expert problem solving behavior in the magnetic resonance imaging (MRI) domain for over a year. Our methodology has included the collection and analysis of verbal transcripts and computer-assisted protocols. Our version of protocol analysis, which incorporates detailed followup interviewing, proved useful in the formulation of an effective computer procedure for the domain task. We will outline the approach, with numerous domain examples, and discuss what we learned. The key points are:
-- the effectiveness of protocol analysis
-- the usefulness of our expert's mental pictures
-- the elucidation of domain-independent heuristics.
------------------------------ Date: 10 Feb 1987 2020-EST From: David A. Evans Subject: Seminar - Methods for treating Uncertainty in AI (CMU) Artificial Intelligence in Medicine (AIM) Seminar Friday, February 13, 1987 1:30-4:00 PM Wean 8220 "Comparing Methods for Treating Uncertainty in AI" Max Henrion Engineering and Public Policy Carnegie Mellon University As schemes for representing uncertainty in expert systems proliferate, the debate about their relative merits and drawbacks is heating up. Current contenders include Mycin's Certainty Factors, the Prospector scheme, Fuzzy Logic, Dempster-Shafer Theory, qualitative/verbal approaches, and a variety of coherent probabilistic schemes, including Bayesian belief nets, influence diagrams, and Maximum Entropy approaches. I will discuss various criteria for comparing them, including epistemological (do they represent what we mean by "uncertainty"?), heuristic (Are they computationally practical? Are they "good enough"?), and transductional (Can you easily encode human judgment and can you explain the results?). I will examine treatment of dependent evidence, causal and diagnostic reasoning, with simple medical examples. I will also describe a recent experiment comparing knowledge engineering for a rule-based expert system with a decision analysis/Bayes' net approach to the same task. Papers available from Max Henrion (maxh@Andrew) ------------------------------ Date: 11 Feb 87 10:30:03 EST From: Theona.Stefanis@g.cs.cmu.edu Subject: Seminar - The PRL Mathematics Environment (CMU) PS SEMINAR MONDAY, 16 February WeH 5409 3:30 The PRL Mathematics Environment: A Knowledge Based Medium Joseph Bates Cornell University A computer system, NuPRL, has been developed at Cornell over the last six years to serve as a dynamic electronic medium for mathematicians. Users of the system interactively create libraries of terminology, proofs, and ways of reasoning that constitute particular areas of mathematics. 
The system assists in creating these libraries, validates them, and extracts executable programs from proofs that implicitly describe computation methods. This behavior is not lost as the mathematics becomes increasingly abstract. NuPRL libraries have been developed for parts of number theory, real analysis, a theory of concurrency, automata theory, and several other areas. The system has been distributed to a dozen research groups and is being used at the University of Edinburgh as the foundation for their next generation mathematics environment. Much of the NuPRL architecture does not depend on the domain being mathematics. This observation together with experience using NuPRL has led us to begin designing a framework for providing active media in a variety of domains. After presenting the NuPRL architecture we will discuss what we have learned and then describe MetaPrl, our new framework for "knowledge based media". ------- To schedule an appointment with Joseph Bates, contact Becky Alden at X3772 or send mail to alden@gnome. ------------------------------ Date: Wed, 11 Feb 87 16:27 EDT From: MIKE%BUCASA.BITNET@wiscvm.wisc.edu Subject: Conference - IEEE Conference on Neural Nets: Student Special Student Special! IEEE First Annual Conference on Neural Networks, San Diego, June 21-24, 1987 San Diego, California Undergraduate and graduate student registration fee is $50.00. This includes attendance at all scientific sessions and social occasions. A valid university ID and picture ID must be presented at the meeting. Send registration fee to: Maureen Caudill IEEE - ICNN 10615G Tierrasanta Blvd. Suite 346 San Diego, California 92124 For further information call her at the telephone number listed below. Telephone: (619) 457-5550, ext. 
221

------------------------------

Date: 12 Feb 1987 18:17:44 EST
From: Herve.Lambert@PS3.CS.CMU.EDU
Subject: Conference - 2nd Conference on Artificial Intelligence and Sea

Please POST

2nd INTERNATIONAL CONFERENCE ON ARTIFICIAL INTELLIGENCE AND SEA

Marseilles (France), June 18-19, 1987

Sponsored by: International Institute of Robotics and Artificial Intelligence

Organization: Viviane Bernadac
IIRIAM/CMCI
2 rue Henri Barbusse
13241 Marseille Cedex 1, FRANCE
Phone: 33 91 91 36 72
Telefax: 33 91 91 70 24
Telex: MISTEL 440 860 F

Objectives: The objectives of this second ORIA conference are to show, through real applications, that current developments in Artificial Intelligence, and especially in the area of expert systems, have moved out of the laboratories. They are now in the industrial world, particularly in sea-linked business such as: offshore, shipbuilding, fishing, harbour installations, ... Communications on the state of the art and the different tools available will be followed by conferences on present applications. An exhibition of industrial products and prototypes, involving both hardware and software, will be at hand.

CALL FOR PAPERS

Authors are invited to contribute papers on applications in:
- Offshore Process Control (platforms, ships, drilling semi-subs, harbour installations, ...)
- Underwater Robotics (mobile robots, UMC, subsea stations, ...)
- CAD and naval building (ships, platforms, piping, ...)
Deadline: February 28th

Instructions to authors: Send 4 copies of the paper (up to 15 pages) to Viviane Bernadac, IIRIAM/CMCI (address mentioned above):
1st page: Title of the paper
Name of authors
Addresses
Telephone, telex and telefax numbers
Abstract (15 lines)

------------------------------

Date: 12 Feb 1987 20:17:29 EST
From: Herve.Lambert@PS3.CS.CMU.EDU
Subject: Conference - European Conference on AI in Medicine

Please POST

EUROPEAN CONFERENCE ON ARTIFICIAL INTELLIGENCE IN MEDICINE

Marseilles (France), August 31st - September 3rd 1987

Following proposals at the International Conference on Artificial Intelligence in Medicine, Pavia, November 1985, the European Society for Artificial Intelligence in Medicine (AIME) has been established to foster fundamental and applied research in artificial intelligence and symbolic information processing techniques for medical care and medical research. AIME also wishes to assist industry in identifying high-quality medical products which exploit these techniques. A major AIME activity will be a biannual series of international conferences, the next of which will be in Marseilles, France, following the International Conference on Artificial Intelligence in Milan, August 1987.

CALL FOR PAPERS

Papers are invited on any aspect of the theory, design or application of medical AI systems. Submissions will be refereed by an international panel on the basis of complete but succinct papers. These should be in English, length 2000 - 4000 words. Criteria for acceptance will include originality, practical significance, contribution to theory or methodology, and clarity of presentation. Submissions for a poster session are also invited; these should be a maximum of 500 words or one A4 page. The conference proceedings of papers and poster summaries will be available at the conference.

DEADLINES
- April 1st, 1987: Final date for receipt of camera-ready full or short papers.
- May 15th, 1987: Notification of acceptance of papers; distribution of the Preliminary Program.
- July 1st, 1987: Last date to register at the reduced registration fee.

ADDRESS

Viviane Bernadac - AIME 87
IIRIAM
2 rue Henri Barbusse
13241 Marseille Cedex 1
FRANCE

------------------------------

End of AIList Digest
********************

From in%@vtcs1 Sun Feb 15 00:35:29 1987
Date: Sun, 15 Feb 87 00:35:22 est
From: vtcs1::in%
To: ailist@sri-stripe.arpa
Subject: AIList Digest V5 #42
Status: R

AIList Digest Saturday, 14 Feb 1987 Volume 5 : Issue 42

Today's Topics:
Philosophy - Emotions & Consciousness & Methodology

----------------------------------------------------------------------

Date: Mon, 9 Feb 1987 18:52 EST
From: MINSKY%OZ.AI.MIT.EDU@XX.LCS.MIT.EDU
Subject: Glands and Psychic Function

In asking about my qualifications for endorsing Eric Drexler's book about nanotechnology, Tim Maroney says:

> The psychology {in Eric Drexler's "Engines of Creation"} is so
> amazingly shallow; e.g., reducing identity to a matter of memory,
> ignoring effects of the glands and digestion on personality.
> ...in my opinion his approach is very anti-humanistic.

It is not a matter of reducing identity to memory alone but, if he will read what Drexler said, a matter of replacing each minute section of the brain by some machinery that is functionally the same. Naturally, many of those functions will be affected by chemicals that, in turn, are partially controlled by other brain activities. A functional duplicate of the brain will have to be embedded in a system that duplicates enough of those non-neurological functions.
However, in the view of many thinkers concerned with what is sometimes called the "downloading" enterprise, the functions of glands, digestion, and the rest are much simpler than those embodied in the brain; furthermore, they are common to all of us - and to all mammals as well, with presumably minor variations; in this sense they are not particularly involved in what we think of as individual identity.

I should add that it is in order to avoid falling prey to conventional superstitions such as this one - that emotions are much harder to comprehend and duplicate than are intellectual functions - that it is the requisite, if sometimes unpleasant, obligation of the good psychologist to try to be as anti-humanistic as possible; that is, in the sense of not assuming that our oldest beliefs must be preserved, no matter what the scientific cost.

------------------------------

Date: 10 Feb 87 06:19:58 GMT
From: well!wcalvin@lll-lcc.arpa (William Calvin)
Subject: Re: More on Minsky on Mind(s)
Reply-To: wcalvin@well.UUCP (William Calvin)
Organization: Whole Earth 'Lectronic Link, Sausalito, CA
Keywords: Consciousness, throwing, command buffer, evolution, foresight

Reply to Peter O. Mikes' email remarks:

> The ability to form 'the model of reality' and to exercise that model is
> (I believe) a necessary attribute of a 'sentient' being, and the richness
> of such a model may one day point a way to 'something better' than
> word-logic. Certainly, the machines which exist so far do not indeed
> have any model of the universe 'to speak of' and are not conscious.

A model of reality is not uniquely human; I'd ascribe it to a spider as well as my pet cat. Similarly, rehearsing with peripherals switched off is probably not very different from the "get set" behavior of said cat when about to pounce. Choosing between behaviors isn't unique either, as when the cat chooses between taking an interest in my shoe-laces vs. washing a little more.
What is, I suspect, different about humans is the wide range of simulations and scenario-spinning. To use the railroad analogy again, it isn't having two short candidate trains to choose between, but having many strings of a half-dozen each, being shaped up into more realistic scenarios all the time by testing against memory -- and being able to select the best of that lot as one's next act.

I'd agree that present machines aren't conscious, but that's because they aren't Darwin machines with this random element, followed by successive selection steps. Granted, they don't have even a spider's model of the (spider's limited) universe; improve that all you like, and you still won't have human-like forecasting-the-future worry-fretting-joy. It takes that touch of the random, as W. Ross Ashby noted back in 1956 in his cybernetics book, to create anything really new -- and I'd bet on a Darwin-machine-like process such as multitrack stochastic sequencing as the source of both our continuing production of novelty and our uniquely-human aspects of consciousness.

William H. Calvin
University of Washington
Biology Program NJ-15
Seattle WA 98195 USA
206/328-1192 or 206/543-1648
BITNET: wcalvin@uwalocke
USENET: wcalvin@well.uucp

------------------------------

Date: Tue, 10 Feb 87 13:32:05 n
From: DAVIS@EMBL.BITNET
Subject: oh no, not more philosophy!

From: "CUGINI, JOHN"

> I (and Reed and Taylor?) have been pushing the "brain-as-criterion" based
> on a very simple line of reasoning:
> 1. my brain causes my consciousness.
> .......
> Now, when I say simple things like this, Harnad says complicated things like:
> re 1: how do you KNOW your brain causes your consciousness? How can you have
> causal knowledge without a good theory of mind-brain interaction?
> re 2: How do you KNOW your brain is similar to others'? Similar wrt
> what features? How do you know these are the relevant features?
> .....
> We are dealing with the mind-body problem.
> That's enough of a philosophical problem to keep us busy. I have noticed
> (although I can't explain why) that when you start discussing the mind-body
> problem, people (even me, once in a while) start to use it as a hook on
> which to hang every other known philosophical problem:
> 1. well, how do we know anything at all, much less our neighbors' mental
> states? (skepticism and epistemology).
> ........
> All of these are perfectly legitimate philosophical questions, but
> they are general problems, NOT peculiar to the mind-body problem.
> When addressing the mind-body problem, we should deal with its
> peculiar features (of which there are enough), and not get mired in
> more general problems *unless they are truly in doubt and thus their
> solution truly necessary for M-B purposes.*
> I do not believe that this is so of the issues Harnad raises.

Sorry John, but you can't get away with this sort of 'simple' stuff. Dressing up complex issues in straightforward clothing is not an answer. Firstly, as Ken Laws recently indicated with considerable flair (though to my mind, insufficient force), we have to deal with your assertion that 'my brain causes my consciousness'. Harnad's question may or may not be relevant, but *IF* we are going to get bogged down in subjective consciousness (which is of little relevance to AI for the next 30 years AT LEAST), then we must begin by questioning even this most basic assumption.

I don't think it's necessary to take you through the argument, only to note that we end up with Nagel in asserting that "it is like something to be me/us". It's not difficult to assert and to cogently argue that consciousness is an illusion, but what is not so easily got around is that *something* could be having an illusion. The mere fact that we are aware (yes, I know, that's what consciousness *used* to mean!) immediately propels us to question how "anything can know anything at all".
This question is absolutely central to the M-B problem, and there is no getting around it by arguing for ways in which we might organise conscious experience. The simple fact that we either *are* or even just *seem to be* conscious immediately forces us to deal with this issue. Of course, you can avoid it if you want to return to pre-computational philosophy, and put the M-B problem simply as the issue of the localisation of conscious activity, but that seems to me to be as enormous a bypass of the *real* issue as you can get.

Speaking personally, I must say that it seems initially easier to suppose that we only suffer an illusion of consciousness - by which I mean we only suffer the illusion of being aware of possessing motivation, desire, intention, (maybe even intension!!!!) and emotion. In a superficial sense this clears everything up quite nicely, since it tends to be the sort of thing that has been referred to (implicitly or not) during the Minsky Meanderings. However, it DOES NOT get around the fact that there still seems to be a 'we' being the subject of these (magnificent) illusions. And that, my friends, must surely be the central issue.

It makes not an iota of difference what our 'conscious experiences' actually consist of; it makes no difference how our neural networks are linked to allow us to access previous events, to formulate reasons, to plan, to rehearse (re: Calvin). The problem at the heart of all this is simply that as individuals we are aware of *something*, and that is the biggest problem of all. But it's irrelevant for AI. We will never be the computers we have designed, and hence they will always be 'other minds'. Hence, the issue for practical AI is simply one of nomenclature, and can never (?) be one of design. C'est ca.

I don't think I explained this too well - maybe a prod will help me rearrange my thoughts.....
so, robot cow-bolts or electronic battering rams to:

paul ("the answers come easy - you have any questions?") davis

netmail: davis@embl.bitnet
wetmail: EMBL, Postfach 10.2209, 6900 Heidelberg, FRG.

"consciousness is as a butterfly, which, chased after with great fervour, will never be yours. But if you will only sit down quietly, to admire the view, it may alight gently upon your arm." - with apologies to Nathaniel Hawthorne (I think)

------------------------------

Date: 9 Feb 87 14:48:28 GMT
From: princeton!mind!harnad@rutgers.rutgers.edu (Stevan Harnad)
Subject: Re: More on Minsky on Mind(s)

wcalvin@well.UUCP (William Calvin), Whole Earth 'Lectronic Link, Sausalito, CA writes:

> Rehearsing movements may be the key to appreciating the brain
> mechanisms [of consciousness and free will]

But WHY do the functional mechanisms of planning have to be conscious? What does experience, awareness, etc., have to do with the causal processes involved in the fanciest plan you may care to describe? This is not a teleological why-question I'm asking (as other contributors have mistakenly suggested); it is a purely causal and functional one: Every one of the internal functions described for a planning, past/future-oriented device of the kind Minsky describes (and we too could conceivably be) would be physically, causally and functionally EXACTLY THE SAME -- i.e., would accomplish the EXACT same things, by EXACTLY the same means -- WITHOUT being interpreted as being conscious. So what functional work is the consciousness doing? And if none, what is the justification for the conscious interpretation of any such processes (except in my own private case -- and of course that can't be claimed to the credit of Minsky's hypothetical processes)?

[As to "free will" -- apart from the aspect that is redundant with the consciousness problem (namely, the experience, surely illusory, of free will), I sure wouldn't want to have to defend a functional blueprint for that...]
-- Stevan Harnad
(609) 921-7771
{allegra, bellcore, seismo, rutgers, packard}!princeton!mind!harnad
harnad%mind@princeton.csnet

------------------------------

Date: 9 Feb 87 19:28:40 GMT
From: princeton!mind!harnad@rutgers.rutgers.edu (Stevan Harnad)
Subject: Re: More on Minsky on Mind(s) (Reply to Davis)
Keywords: Causality
Summary: On the "how" vs. the "why" of consciousness
References: <460@mind.UUCP> <1032@cuuxb.UUCP> <465@mind.UUCP> <2556@well.UUCP> <491@mind.UUCP>

Paul Davis (davis@embl.bitnet), EMBL, Postfach 10.22.09, 6900 Heidelberg, FRG, wrote on mod.ai:

> we see Harnad struggling with why's and not how's...
> consciousness is a *biological* phenomenon... because
> this is so, the question of *why* consciousness is used
> is quite irrelevant in this context... [Davis cites Armstrong,
> etc., on "consciousness as a means for social interaction"]...
> consciousness would certainly seem to be here -- leave it to
> the evolutionary biologists to sort out why, while we get on
> with the how...

I'm concerned ONLY with "how," not "why." That's what the TTT and methodological epiphenomenalism are about. When I ask pointedly about "why," I am not asking a teleological question or even an evolutionary one. [In prior iterations I explained why evolutionary accounts of the origins and "survival value" of consciousness are doomed: because they're turing-indistinguishable from the IDENTICAL selective-advantage scenario, minus consciousness.] My "why" is a logical and methodological challenge to inadequate, overinterpreted "how" stories (including evolutionary "just-so" stories, e.g., "social" ones): Why couldn't the objectively identical "how" features stand alone, without being conscious? What functional work is the consciousness itself doing, as opposed to piggy-backing on the real functional work? If there's no answer to that, then there is no justification for the conscious interpretation of the "how."
[If we're not causal dualists, it's not even clear whether we would WANT consciousness to be doing any independent work. But if we wouldn't, then why does it figure in our functional accounts? -- Just give me the objective "how," without the frills.] > the mystery of the C-1: How can ANYTHING *know* ANYTHING at all? The problem of consciousness is not really the same as the problem of knowledge (although they're linked, since, until shown otherwise, only conscious devices have knowledge). To know X is not the same as to experience X. In fact, I don't think knowledge is a C-1-level phenomenon. [I know (C-2) THAT I experience pain, but does the cow know THAT she experiences pain? Yet she presumably does experience pain (C-1).] Moreover, "knowledge" is mired in epistemological and even ontological issues that cog-sci would do well to steer clear of (such as the difference between knowing X and merely believing X, with justification, when X is true). -- Stevan Harnad (609) - 921 7771 {allegra, bellcore, seismo, rutgers, packard} !princeton!mind!harnad harnad%mind@princeton.csnet ------------------------------ Date: 9 Feb 87 18:33:49 GMT From: princeton!mind!harnad@rutgers.rutgers.edu (Stevan Harnad) Subject: Re: More on Minsky on Mind(s) (Reply to Laws) Ken Laws wrote on mod.ai: > I'm not so sure that I'm conscious... I'm not sure I do experience > the pain because I'm not sure what "I" is doing the experiencing This is a tough condition to remedy. How about this for a start: The inferential story, involving "I" and objects, etc. (i.e., C-2) may have the details wrong. Never mind who or what seems to be doing the experiencing of what. The question of C-1 is whether there is any experience going on at all. That's not a linguistic matter. And it's something we presumably share with speechless, unreflective cows. > on the other hand, I'm not sure that silicon systems > can't experience pain in essentially the same way. Neither am I. 
But there's been a critical inversion of the null hypothesis here. From the certainty that there's experience going on in one privileged case (the first one), one cannot be too triumphant about the ordinary inductive uncertainty attending all other cases. That's called the other-minds problem, and the validity of that inference is what's at issue here. The substantive problem is characterizing the functional capacities of artificial and natural systems that warrant inferring they're conscious.

> Instead of claiming that robots can be conscious, I am just as
> willing to claim that consciousness is an illusion and that I am
> just as unconscious as any robot.

If what you're saying is that you feel nothing (or, if you prefer, "no feeling is going on") when I pinch you, then I must of course defer to your higher authority on whether or not you are really an unconscious robot. If you're simply saying that some features of the experience of pain and how we describe it are inferential (or "linguistic," if you prefer) and may be wrong, I agree, but that's beside the point (and a C-2 matter, not a C-1 matter). If you're saying that the contents of experience, even its form of presentation, may be illusory -- i.e., the way things seem may not be the way things are -- I again agree, and again remind you that that's not the issue. But if you're saying that the fact THAT there's an experience going on is an illusion, then it would seem that you're either saying something (1) incoherent or (in MY case, in any event) (2) false.

It's incoherent to say that it's illusory that there is experience because the experience is illusory. If it's an experience, it's an experience (rather than something else, say, an inert event), irrespective of its relation to reality or to any interpretations and inferences we may wrap it in. And it's false (of me, at any rate) that there's no experience going on at all when I say (and feel) I have a toothache.
As for the case of the robot, well, that's what's at issue here. [Cartesian exercise: Try to apply Descartes' method of doubt -- which so easily undermines "I have a toothache" -- to "It feels as if I have a toothache." This, by the way, is to extend the "cogito" (validly) even further than its author saw it as leading. You can doubt that things ARE as they seem, but you can't doubt that things SEEM as they seem. And that's the problem of experience (of appearances, if you will). Calling them "illusions" just doesn't help.] > One way out is to assume that neurons themselves are aware of pain Out of what? The other-minds problem? This sounds more like an instance of it than a way out. (And assumption hardly seems to amount to solution.) > How do we know that we experience pain? I'm not sure about the "I," and the specifics of the pain and its characterization are negotiable, but THAT there is SOME experience going on when "I" feel "pain" is something that anyone but an unconscious robot can experience for himself. And that's how one "knows" it. > I propose that... our "experience" or "awareness" of pain is > an illusion, replicable in all relevant respects by inorganic systems. Replicate that "illusion" -- design devices that can experience the illusion of pain -- and you've won the battle. [One little question: How are you going to know whether the device really experiences that illusion, rather than your merely being under the illusion that it does?] As to inorganic systems: As ever, I think I have no more (or less) reason to deny that an inorganic system that can pass the TTT has a mind than I do to deny that anyone else other than myself has a mind. That really is a "way out" of the other-minds problem. But inorganic systems that can't pass the TTT... 
-- Stevan Harnad
(609) - 921 7771
{allegra, bellcore, seismo, rutgers, packard} !princeton!mind!harnad
harnad%mind@princeton.csnet

------------------------------

End of AIList Digest
********************

From in%@vtcs1 Sun Feb 15 00:35:44 1987
Date: Sun, 15 Feb 87 00:35:34 est
From: vtcs1::in%
To: ailist@sri-stripe.arpa
Subject: AIList Digest V5 #43
Status: R

AIList Digest            Saturday, 14 Feb 1987      Volume 5 : Issue 43

Today's Topics:
  Philosophy - Consciousness & Methodology & Zen

----------------------------------------------------------------------

Date: 10 Feb 87 19:41:21 GMT
From: Diamond!aweinste@husc6.harvard.edu (Anders Weinstein)
Subject: Harnad's epiphenomenalism

In defending his thesis of "methodological epiphenomenalism", one of Harnad's favorite strategies is apparently a variant of G.E. Moore's "naturalistic fallacy" argument: For any proposed definition of consciousness, he will ask: "You say consciousness is X, but why couldn't you just as well have X WITHOUT consciousness?" If we concede the meaningfulness of this question in all cases, obviously this objection will be decisive. But I think this argument is as question-begging now as it was when Moore used it in ethical philosophy. The definer is proposing that X is just what consciousness IS. Accordingly, he does *not* grant that you could have X without consciousness since, on his view, X and consciousness are one and the same.

Put another way, the materialist is not trying to ADD anything to the objective, causal story of X by calling it consciousness. Rather, he is attempting to illuminate the problematic common-sense notion of consciousness by showing how it is interpretable in naturalistic terms. Obviously the adequacy of any proposed definition of consciousness will need to be established; the issues to be considered will pertain to whether or not the definition does reasonable justice to the pre-analytic application of the term, etc.
But these issues are just the usual ones for inter-theoretical identification, and don't present any special problem in the case of mind and brain.

Another point that Harnad has often stated is that behavior is in practice our only criterion for the ascription of consciousness. While this is currently true, it does not at all preclude the revision of our theory in the direction of a more refined criterion. Compare, say, the definition of "gold." At one time, this substance was identifiable solely on the basis of its superficial properties such as color, hardness, and specific gravity. With the growth of scientific knowledge, a new definition of gold in terms of atomic structure has come to be accepted, and this criterion now supersedes the earlier ones. If you like, you might say that atomic theory came to reveal the "essence" of gold. I see no reason to suppose an analogous shift couldn't arise out of the study of the mind and brain.

Harnad's "methodological epiphenomenalism" is apparently an unavoidable consequence of his philosophy of mind, which seems to be epiphenomenalism simpliciter. I am surprised to find many of Harnad's interlocutors essentially granting him this controversial premise. Whatever happened to materialism? As I understood it, the whole field of cognitive science -- the rehabilitation of mentalistic theorizing in psychology -- was inspired by the philosophical insight that the functional states of computers seemed to have just the right sorts of features we would want for psycho-physical identification. Harnad must believe that this philosophy has failed, dooming us to return to an uneasy and unappealing view: ontological dualism coupled with methodological behaviorism -- the worst of both worlds.

Well, I don't think we ought to give this up so easily. I would urge that cognitivists *not* buy into the premise of so many of Harnad's replies: the existence of some weird parallel universe of subjective experience.
(Actually, *multiple* such universes, one per conscious subject, though of course the existence of more than my own is always open to doubt.) We should recognize no such private worlds. The most promising prospect we have is that conscious experiences are either to be identified with functional states of the brain or eliminated from our ultimate picture of the world. How this reduction is to be carried out in detail is naturally a matter for empirical study to reveal, but this should remain one (distant) goal of mind/brain inquiry.

Anders Weinstein
aweinste@DIAMOND.BBN.COM
BBN Labs, Cambridge MA

------------------------------

Date: 10 Feb 87 20:09:44 GMT
From: ihnp4!houxm!houem!marty1@ucbvax.Berkeley.EDU (M.BRILLIANT)
Subject: Re: More on Minsky on Mind(s)

In article <490@mind.UUCP>, harnad@mind.UUCP (Stevan Harnad) writes:

> wcalvin@well.UUCP (William Calvin), Whole Earth 'Lectronic Link, Sausalito, CA
> writes:
> > Rehearsing movements may be the key to appreciating the brain
> > mechanisms [of consciousness and free will]
>
> But WHY do the functional mechanisms of planning have to be conscious?
> What does experience, awareness, etc., have to do with the causal
> processes involved in the fanciest plan you may care to describe?...

I have the gall to answer an answer to an answer without having read Minsky. But then, my interest in AI is untutored and practical. Here goes:

My notion is that a being that thinks is not necessarily conscious, but a being that thinks about thinking, and knows when it is just thinking and when it is actually doing, must be called conscious.

In UNIX(tm) there is a program called "make" that reads a script of instructions, compares the ages of various files named in the instructions, and follows the instructions by updating only the files that need to be updated. It can be said to be acting with some sort of rudimentary intelligence.
If you invoke the "make" command with the "-n" flag, it doesn't do any updating, it just tells you what it would do. It is rehearsing a potential future action. In a sense, it's thinking about what it would do. But it doesn't have to know that it's only thinking and not doing. It could simply have its actuators cut off from its rudimentary intelligence, so that it thinks it's acting but really isn't.

Now suppose the "make" command could, under its own internal program, run through its instructions with a simulated "-n" flag, varying some conditions until the result of the "thinking without doing" satisfied some objective, and then could remove the "-n" flag and actually do what it had just thought about. This "make" would appear to know when it is thinking and when it is acting, because it decided when to think and when to act. In fact, in its diagnostic output it could say first "I am thinking about the following alternative," and then finally say, "The last run looked good, so this time I'm really going to do it."

Not only would it appear to be conscious, but it would be accomplishing a practical purpose in a manner that requires it to distinguish internally between introspection and action. I think that version of "make" would be within the current state of the art of programming, and I would call it conscious. So we're not far from artificial consciousness.

Marty
M. B. Brilliant   (201)-949-1858
AT&T-BL HO 3D-520   houem!marty1

------------------------------

Date: 11 Feb 87 19:44:14 GMT
From: princeton!mind!harnad@rutgers.rutgers.edu (Stevan Harnad)
Subject: Re: Harnad's epiphenomenalism

aweinste@Diamond.BBN.COM (Anders Weinstein) of BBN Labs, Cambridge, MA, writes:

> For any proposed definition of consciousness, [Harnad] will
> ask: "You say consciousness is X, but why couldn't you just as
> well have X WITHOUT consciousness?"
> I think this argument is as question-begging now as it was when Moore
> used it in ethical philosophy.
> The definer is proposing that X is just what consciousness IS.
> Accordingly, he does *not* grant that you could have X without
> consciousness since, on his view, X and consciousness are one and
> the same.

It unfortunately has to be relentlessly reiterated that these matters are not settled by definitions or obiter dicta. It simply won't do to say "On my view, consciousness and X [say, memory, learning, self-referential capacity, linguistic capacity, or what have you] are one and the same." It is perfectly legitimate -- indeed, mandatory, if SOMEONE is going to exercise some self-critical constraints on mentalistic interpretation -- to ask WHY a candidate process should be interpreted as conscious. If all the functional answers to that question -- "it's so it can accomplish X," or "it's so it can accomplish Y this way rather than that way" -- would be the SAME for an unconscious process, then there are indeed strong grounds for supposing that the mentalistic interpretation is methodologically (I might even say, to bait the functionalists more pointedly, "functionally") superfluous. (It's not the skepticism that's question-begging, but the mentalistic interpretation that's supererogatory.)

I have no idea how or why Moore used a similar argument in ethics. My own argument is purely methodological (and functional -- I am a kind of functionalist too): I am concerned with how to get devices we build (and hence understand) to DO what minds can do. These devices may also turn out to BE what minds are (namely conscious), but I do not believe that there is any objective, scientific way to ascertain that. Nor do I think it is methodologically possible or relevant (or, a fortiori, necessary) to do so.
My pointed "why" questions are intended to pare off the unjustified and distracting mentalistic hype and leave a clearer image of just how far we really have or haven't gotten in answering the "how" questions, which are the only scientifically tractable ones in the area of theoretical bioengineering that mind-modeling occupies.

> the materialist is attempting to illuminate the problematic
> common-sense notion of consciousness by showing how it is
> interpretable in naturalistic terms. Obviously the adequacy of
> any proposed definition of consciousness will need to be established;
> the issues to be considered will pertain to whether or not the
> definition does reasonable justice to the pre-analytic application
> of the term, etc. But these issues are just the usual ones for
> inter-theoretical identification, and don't present any special
> problem in the case of mind and brain.

But it is just the question of whether these issues are indeed the "usual" ones in the mind/brain case that is at issue. I've given lots of logical and methodological reasons why they're not. Wishful thinking, hopeful overinterpretation and scientistic dogma seem to be the only rejoinders I'm hearing. (I'm a materialist too; methodological constraints on theoretical inference and its deliverances are what's at issue here.)

> Compare, say, the definition of "gold."...
> growth of scientific knowledge...new definition of gold
> I see no reason to suppose an analogous shift
> couldn't arise out of the study of the mind and brain.

I like the way Nagel handled this old reductionist chestnut: In a chestnut-shell, he pointed out that all of the standard reduction/revision scenarios of science have always consisted of one objective account of an objective phenomenon being superseded or subsumed by another objective account of an objective phenomenon (heat --> mean molecular motion, etc.).
There's nothing in this standard revision-scenario that applies to -- much less can handle -- redefining subjective phenomena objectively. That prominent disanalogy is yet another of the faces of the mind/body problem (that functionalist euphoria sometimes overlooks). As it stands, the faith in an eventual successful "redefinition" is just that: a faith. One wonders why it does not founder in the sea of counter-examples and disanalogies rightly generated by Moore's (if it's really his) method of pointed "why" challenges. But there's no accounting for faith.

> Harnad's "methodological epiphenomenalism" is apparently an
> unavoidable consequence of his philosophy of mind, which seems to
> be epiphenomenalism simpliciter.

No, I'm not an ontological epiphenomenalist (which I suppose is a kind of dualism), just a methodological one. I don't think consciousness can enter into scientific theory-building and theory-testing, for the reasons I've stated. In fact, I think it retards theory-building to try to account for consciousness or to dress theory up with conscious interpretations. (Among other things, it masks the performance work that still remains to be done, and lionizes possible nonstarters.)

However, I have no doubt that consciousness exists, and no serious doubts that organisms are conscious. Moreover, I'm quite prepared to believe the same of devices that pass the TTT, and on exactly the same grounds. These devices may well have "captured" consciousness functionally. Yet not only is there no way of knowing whether or not they really have; it even makes no methodological difference to their functioning or to our theoretical understanding of it whether or not they have really captured consciousness. This is not an ontological issue. The mind/body problem simply represents a methodological constraint on what can be known objectively, i.e., scientifically.
(Note that this constraint is not just the ordinary underdetermination of scientific inferences about unobservables; it's much worse. For, as I've pointed out several times before, although hypothesized entities such as quarks or superstrings are no more observable or "verifiable" than consciousness, it is a methodological fact that the respective theories from which they come cannot account for the objective phenomena without positing their existence, whereas any theory of the objective phenomena of mind -- i.e., I/O performance capacity, perhaps supplemented by structure and function -- will work just as well with or without a mentalistic interpretation.)

> the whole field of cognitive science -- the rehabilitation of
> mentalistic theorizing in psychology -- was inspired by the
> philosophical insight that the functional states of computers
> seemed to have just the right sorts of features we would want for
> psycho-physical identification. Harnad must believe that this
> philosophy has failed, dooming us to return to an uneasy and
> unappealing view: ontological dualism coupled with methodological
> behaviorism -- the worst of both worlds.

I certainly believe that the view has failed methodologically. But I don't think the consequence is ontological dualism (for the reasons I've stated) and it's not clear what "methodological behaviorism" is (or was, I'll return to this important point). Nor do I consider cognitive science to be synonymous with mentalistic theorizing; nor do I consider the field to be inspired by the psycho-physical identificatory hopes aroused by the computer.

If you want to know what I think, it's this: Behaviorism, in a reaction against the sterility of introspectionism, rejected reflecting and theorizing on what went on in the mind, suggesting instead that psychology's task was to study observable behavior. But in its animus against mentalistic theory, behaviorism managed to do in or trivialize theory altogether.
Put another way, not only was behaviorism opposed to (observing or) theorizing about what went on in the MIND, it also opposed theorizing about what went on in the HEAD. As a consequence, behavioristic psychology effectively became a "science" without a theoretical or inferential branch to speak of. Now what I think happened with the advent of cognitive science was that, again, just as unobservable mental processes and unobservable (shall we call them) "internal" processes had been jointly banned from the citadel, they were, with the rise of computer modeling (and neural modeling), jointly readmitted.

The mistake, as I see it, was to embrace indiscriminately BOTH the legitimate right (and need) to make theoretical inferences about the unobservable functional substrates of behavior AND the temptation to make mentalistic interpretations of them. In my view, the first advances empirical progress (in fact is essential for it), the second beclouds and retards it. Cognitive science is (or should be) behaviorism-with-a-theory (or theories) at last. If that's "methodological behaviorism," then it took the computer era to make it so.

> Well, I don't think we ought to give this up so easily.
> I would urge that cognitivists *not* buy into the premise of
> so many of Harnad's replies: the existence of some weird parallel
> universe of subjective experience... conscious experiences are
> either to be identified with functional states of the brain or
> eliminated from our ultimate picture of the world. How this
> reduction is to be carried out in detail is naturally a matter for
> empirical study to reveal, but this should remain one (distant)
> goal of mind/brain inquiry.

Identify it with the functional states if you like. But then FORGET about it until you've GOT the functional states that deliver the performance (TTT) goods.
When you've got those -- i.e., when all the objective questions there are to be answered are answered -- then no harm whatever will be done by an orgy of mentalistic interpretation of the objective story. No "weird parallel universe." Just the familiar subjective one we all know at first hand. Plus the methodological constraint that the complete scientific picture is doomed to fail to account to our satisfaction for the existence, nature, and utility of subjectivity.

-- Stevan Harnad
(609) - 921 7771
{allegra, bellcore, seismo, rutgers, packard} !princeton!mind!harnad
harnad%mind@princeton.csnet

------------------------------

Date: 11 Feb 87 17:39:14 GMT
From: mcvax!ukc!cheviot!rosa@seismo.css.gov (Rosa Michaelson - U of Dundee)
Subject: Re: More on Minsky on Mind(s) (Reply to Davis)

This is really a follow-up to Cuigini but I do not have the moderator's address. Please refer to McCarthy's seminal work "The Consciousness of Thermostats". All good AI believers empathize with thermostats rather than other humans. Thank goodness I do computer science...(:-)

Has Zen and the art of Programming not gone far enough??? Please no more philosophy, I admit it I do Not care about consciousness/Minsky/the mind-brain identity problem....

Is it the cursor that moves, the computer that thinks or the human that controls? None of these, grasshopper, only a small data error on the tape of life.

------------------------------

End of AIList Digest
********************

From in%@vtcs1 Sun Feb 15 00:36:00 1987
Date: Sun, 15 Feb 87 00:35:46 est
From: vtcs1::in%
To: ailist@sri-stripe.arpa
Subject: AIList Digest V5 #44
Status: R

AIList Digest            Saturday, 14 Feb 1987      Volume 5 : Issue 44

Today's Topics:
  AI Methodology - Symbolic Logic vs.
    Analog Representation & Pragmatic Definitions of AI/Cognitive Terms

----------------------------------------------------------------------

Date: 12 Feb 87 17:52:55 GMT
From: vax1!czhj@cu-arpa.cs.cornell.edu (Ted )
Subject: Re: Learning about AI

In article <12992@sun.uucp> lyang%jennifer@Sun.COM (Larry Yang) writes:

> ....
> I was in for a surprise. Based on my experience, if you want
> to learn about hard-core, theoretical artificial intelligence,
> then you must have a strong (I mean STRONG) background in formal
> logic.

This is EXACTLY the problem with AI research as it is commonly done today (and perhaps yesterday as well). The problem is that mathematicians, logicians and computer scientists, with their background in formal logic, have no other recourse than to attack the AI problem using these tools that are available to them. Perhaps this is why the field makes such slow progress?

AI is an ENORMOUS problem, to say the least, and research into it should not be bound by the conventional thinking that is going on. We have to look at the problem in NEW ways in order to make progress. I am strongly under the impression that people with a strict theoretical training will actually HINDER the field rather than advance it because of the constraints on the ideas that they come up with just because of their background. Now, I'm NOT saying that nobody in CS, MATH, or LOGIC is capable of original thought; however, from much of the research that is being done, and from the scope of the discussions on the NET, it seems safe to say that many people of these disciplines discount less formal accounts as frivolous.

But look at the approach that LOGIC gives AI. It is a purely reductionist view, akin to studying global plate motion at the level of sub-atomic particles. It is simply the wrong level at which to approach the problem. A far more RATIONAL approach would be to integrate a number of disciplines towards the goal of understanding intelligence.
COMPUTER SCIENCE has a major role because of the power of computer modeling, efficient data structures and models of efficient parallel computation. Beyond that, it seems that computer science should take a back seat.

LOGIC, well, where would that fit in? Maybe at the very lowest level, but most of that is taken for granted by computer science.

PHILOSOPHY tends to be a DEAD END, as can clearly be noted by the arguments going on on the NET :) Honestly, the philosophy arguments tend to get so jumbled (though logical) that they really add little to the field.

COGNITIVE PSYCHOLOGY is a quickly emerging field that is producing some interesting findings; however, at this stage, it is more descriptive than anything else. There is some interesting speculation into the processes that are going on behind thought in this field, and they should be looked at carefully. However, there is simply so much fluff and so many pointless experiments that it takes quite a while to wade through and get anything significant.

LINGUISTICS is a similar field. The work of Chomsky and others has given us some fascinating ideas and may get somewhere in terms of biological constraints on knowledge and such.

Even NEUROBIOLOGY should get involved. Research in this field gives us more insight into internal constraints. Furthermore, by studying people with brain disorders (both congenital and through accident) we can gain some insight into what types of structures are innate or have a SPECIFIC locus of control.

In sum, I call for using many different disciplines to solve the basic problems in knowledge, learning and perception. No single approach will do.

---Ted Inoue

------------------------------

Date: 13 Feb 87 14:17:24 GMT
From: sher@CS.ROCHESTER.EDU (David Sher)
Subject: Re: Learning about AI

If I didn't respond to this I'd have to work on my thesis so here goes: I think there seems to be something of a misconception regarding the place of logic wrt AI and computer science in general.
To start with I will declare this: Logic is a language for expressing mathematical constructs. It is not a science, and as far as artificial intelligence is concerned the mathematics of logic are not very relevant. Its main feature is that it can be used for precise expression.

So why use logic rather than a more familiar language, like English? One can be precise in English; writers like Edgar Allan Poe, Isaac Asimov, and George Gamow all have written very precise English on a variety of topics. However, the problem is that few of us knowledge engineers have the talent to be precise in our everyday language. There are few great, or even very good, writers among AI practitioners. Thus for decades engineers, scientists, and statisticians have used logic to express their ideas, since even an incompetent speaker can be clear and precise using logical formalisms.

However, like any language with expressive power, one can be totally incomprehensible using logic. I have seen logical expressions that even the author did not understand. Thus logic is not a panacea; it is merely a tool. But it is a very useful and important tool (you can chop down trees with a boy scout knife but I'll take an axe any day, and a chain saw is even better). Also, like English or any other language, the more logic you know the more clearly and compactly you can state your ideas (if you can avoid the temptation to use false erudition and use your document to demonstrate your formal facility rather than what you are trying to say). Thus if you know modal or second-order logics you can express more than you can with simple 1st-order predicate calculus, and you can express it better.

Of course, not everyone's goals are to express themselves clearly. Some people's business is to confuse and obfuscate. While logic can be put to this purpose, it is easier to use English for this task. It takes an uncommon level of expertise to be really confusing without appearing incompetent with logic.
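A stock textbook illustration of the precision at stake (a standard example, not Sher's own): the English sentence "Everyone loves someone" is ambiguous between two readings that first-order notation forces apart:

```latex
% Two non-equivalent readings of "Everyone loves someone":
\forall x \, \exists y \, \mathrm{Loves}(x, y)
   % for each person x there is someone or other whom x loves
\exists y \, \forall x \, \mathrm{Loves}(x, y)
   % there is one particular y whom everyone loves
```

The second reading entails the first, but not conversely; English leaves the quantifier order implicit, while the formalism cannot.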
Note: I am not a logician, but I use a lot of logic in my everyday work, which is probabilistic analysis of computer vision problems (anyone got a job?).

-David Sher
sher@rochester
{allegra,seismo}!rochester!sher

------------------------------

Date: Fri, 13 Feb 87 14:02:51 pst
From: Ray Allis
Subject: Other Minds

Some of you may be after the fame and great wealth associated with AI research, but MY goal all along has been to BUILD an "other mind"; a machine who thinks *at least* as well as I do. If current "expert systems" are good enough for you, please skip this.

Homo Sap.'s distinguished success among inhabitants of this planet is primarily due to our ability to think. We will continue to exist only if we act intelligently, and we can use all the help we can get. I am not convinced that Mutual Assured Destruction is the most intelligent behavior we can come up with. It's clear the planetary population can benefit from help in the management of complexity, and it is difficult for me to imagine a goal more relevant than improving the chances for survival by increasing our ability to act intelligently. However, no machine yet thinks nearly as well as a human, let alone better. I wouldn't trust any computer I know to babysit my child, or my country. Why? Machines don't understand! Anything!

The reason for this poor performance is an inadequate paradigm of human intelligence. The Physical Symbol System Hypothesis does not in fact account for human intelligent behavior. Parenthetically, there's no more excitement in symbol-processing computers; that's what digital computers have been doing right along, taking the symbol for two and the symbol for two, performing the defined operation "ADD" and producing the symbol for four. We may have lost interest in analog systems prematurely. Manipulation of symbols is insufficient by itself to duplicate human performance; it is necessary to treat the perceptions and experiences the symbols *symbolize*.
Put a symbol for red and a symbol for blue in a pot, and stir as you will, there will be no trace of magenta.

I have developed a large suite of ideas concerning symbols and representations, analog and digital "computing", induction and deduction, natural language, consciousness and related concepts which are inextricably intertwined and somewhat radical, and the following is necessarily a too-brief introduction. But maybe it will supply some fuel for discussion.

Definition of terms: By intelligence, I mean intelligent behavior; intelligent is an adjective describing behavior, and intelligence is a name for the ability of an organism to behave in a way we can call intelligent.

Symbols and representations: There are two quite distinct notions denoted by *symbolize* and *represent*. Here is an illustration by example: Voodoo dolls are intended as symbols, not necessarily as faithful images of a person. A photo of your family is representative, not symbolic. A picture of Old Glory *represents* a flag, which in turn *symbolizes* some concepts we have concerning our nation. An evoked potential in the visual cortex *represents* some event or condition in the environment, but does not *symbolize* it. The essence of this notion of symbolism is that humans can associate phenomena "arbitrarily"; we are not limited to representations. Any phenomenon can "stand for" any other. That which any symbol symbolizes is a human experience. Human, because we appear to be the only symbol users on the planet. Experience, because that is symbolism's ultimate referent, not other symbols. Sensory experience stops any recursion. Noises and marks "symbolize" phenomenological experience, independent of whether those noises and marks are "representative".

Consciousness: Consciousness is self-consciousness; you aren't conscious of your environment, you are conscious of your perceptions of your environment. Sensory neurons synapse in the thalamus.
From there, neurons project to the cortex, and from the cortex, other neurons project back to the thalamus, so there, in associative contiguity, lie the input lines and reflections of the results of the perceptive mechanisms. The brain has information as to the effects of its own actions. Whether it is resident in thalamic neurons or distributed throughout the brain mass, that loop is where YOU are, and life experience builds your identity; that hand is part of YOU, that hammer is not. One benefit of consciousness is that it extends an organism's time horizon into the past and the future, improving its chance for survival. Consciousness may be necessary for symbol use.

Natural language: Words, spoken or written, are *symbols*. But human natural language is not a symbol system; there are no useful interactions among the symbols themselves. Human language is evocative; its function is to evoke experiences in minds, including the originating mind. Words do not interact with each other; their connotations, the evoked responses in human minds, interact with each other. Responses are based on human experience; touch, smell, vision, sound, emotional effects. Communication between two minds requires some "common ground"; if we humans are to communicate with the minds we create, we and they must have some experiential "common ground". That's why no machine will "really understand" human natural language until that machine can possess the experiences the symbols evoke in humans.

Induction and deduction: Induction, as defined here, consists in the cumulative effect of experience on our behavior, as implemented by neural structures and components. Induction is the effect on an organism's behavior, not a procedure effected by the organism. That is to say, the "act" of induction is only detectable through its effects. All living organisms' behavior is modified by experience, though only humans seem to be self-aware of the phenomenon.
Induction treats *representations*, rather than *symbols*; the operation is on *representation* of experience, quite different from symbolic deduction. Deduction treats the *relationships among symbols*, that which Hume described as "Relations of Ideas". There is absolute certainty concerning all valid operations, and hence the resulting statements. The intent is to manipulate a specific set of symbols using a specific set of operations in a mechanical way, having made the process sufficiently explicit that we can believe in the results. But deduction is an operation on the *form* of a symbol system, a "formal" operation, and deliberately says nothing at all concerning the content. Deductive, symbolic reasoning may be the highest ability of humans, but there's more to minds than that.

Analogy: One definition of analogy is as the belief that if two objects or events are alike in some observed attributes they are alike in other, unobserved, attributes. It follows that the prime requisite for analogy is the perception of "similarity". It could be argued that the detection of similarity is one of the most basic abilities an organism must have to survive. Similarity and analogy are relationships among *representations*, not among *symbols*. Significant similarities (i.e., analogy and metaphor) are not to be found among the symbols representing mental perceptions, but among the perceptions themselves. Similarity is perceived among experiences, as recorded in the central nervous system. The mechanism is that symbols evoke, through association, the identical effects in the nervous system as are evoked by the environmental senses. Associative memory operates using sensory phenomena; that is, not symbols, but *that which is symbolized* and evoked by the symbols. We don't perceive analogies between symbols, but between the experiences the symbols evoke in our minds.
Analog and digital: The physical substrate supporting intelligent behavior in humans is the central nervous system. The model for understanding the CNS is the analog "gadget" which "solves problems", as in A. K. Dewdney's Scientific American articles, not Von Neumann computers, nor symbol systems of any kind. The "neural net" approaches look promising, if they are considered to be modifiable analog devices, rather than alternative designs for algorithmic digital computers.

Learning and knowledge: Learning is inductive; by definition, the addition of knowledge. "Deductive logic is tautological"; i.e. implications of present knowledge can be made explicit, but no new knowledge is introduced by deductive operations. There is no certainty with induction, though:

  "And this kind of association is not confined to men; in animals also it is very strong. A horse which has been often driven along a certain road resists the attempt to drive him in a different direction. Domestic animals expect food when they see the person who usually feeds them. We know that all these rather crude expectations of uniformity are liable to be misleading. The man who has fed the chicken every day throughout its life at last wrings its neck instead, showing that more refined views as to the uniformity of nature would have been useful to the chicken." [Bertrand Russell. 1912. "On Induction", Problems of Philosophy.]

Thinking systems will be far too complex for us to construct in "mature" form; artificial minds must learn. Our most reasonable approach is to specify the initial conditions in terms of the physical implementation (e.g., sensory equipment and pre-wired associations) and influence the experience to which a mind is exposed, as with our children.

What is meant by "learning"? One operational definition is this: can you apply your knowledge in appropriate ways? Some behavior must be modified.
All through your childhood, all through life, your parents and teachers are checking whether you have learned something by asking you to apply it. As a generalization of applying, a teacher will ask if you can re-phrase or restate your knowledge. This demonstrates that you have internalized it, and can "translate" from internal to external, in symbols or in modified behavior. Language to internalized form, and back to language... if you can do this, you "understand".

Knowledge is the state of the central nervous system, either built in or acquired through experience. Experience is recorded in the CNS paths which "process" it. Recording experience essentially in the same lines which sense it saves space and totally eliminates access time. There is no retrieval problem; re-evocation, re-stimulation of the sensory path is retrieval, and that can be done by association with other experience, or with symbols.

That's probably enough for one shot. Except to say I think the time is ripe for trying some of these ideas out on real machines. A few years ago there was no real possibility of building anything so complex as a Connection Machine or a million-node "neural net", and there's still no chance at constructing something as complex as a baby, but maybe there's enough technology to build something pretty interesting, anyway.
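[The content-addressable retrieval sketched above, a partial cue re-stimulating the very units that store the pattern, has a standard connectionist formalization in the Hopfield-style associative net. The essay does not name that model; the following is just one minimal, illustrative way to make "retrieval is re-stimulation of the storing pathway" concrete. -- Ed.]

```python
# Minimal Hopfield-style associative memory (illustrative only; the essay
# does not specify this model -- it is one standard way to make "retrieval
# by re-stimulation of the storing pathway" concrete).

def train(patterns):
    """Hebbian weights: each pattern is a list of +1/-1 unit states."""
    n = len(patterns[0])
    w = [[0.0] * n for _ in range(n)]
    for p in patterns:
        for i in range(n):
            for j in range(n):
                if i != j:
                    w[i][j] += p[i] * p[j] / len(patterns)
    return w

def recall(w, cue, steps=5):
    """Synchronously update units until the state settles on a stored pattern."""
    s = list(cue)
    for _ in range(steps):
        s = [1 if sum(w[i][j] * s[j] for j in range(len(s))) >= 0 else -1
             for i in range(len(s))]
    return s

stored = [[1, 1, 1, -1, -1, -1], [-1, -1, -1, 1, 1, 1]]
w = train(stored)
noisy = [1, -1, 1, -1, -1, -1]   # corrupted version of the first pattern
print(recall(w, noisy))          # -> [1, 1, 1, -1, -1, -1]
```

The point of the demonstration: nothing is "looked up". The corrupted cue is applied to the same units that store the pattern, and the dynamics drive the state back to the stored memory, so storage and retrieval share the same lines.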
Ray ------------------------------ End of AIList Digest ******************** From in%@vtcs1 Thu Feb 19 00:32:12 1987 Date: Thu, 19 Feb 87 00:32:06 est From: vtcs1::in% To: ailist@sri-stripe.arpa Subject: AIList Digest V5 #45 Status: R AIList Digest Wednesday, 18 Feb 1987 Volume 5 : Issue 45 Today's Topics: Queries - Window System & And/Or Graphs & Organic Microchips & OPS5 in Standard/Cambridge Lisp & Lisp Sources for OPS5 & Legal Reasoning & Parallel Functional Programming Languages, AI Tools - DEC AI Workstation & Common LISP ---------------------------------------------------------------------- Date: Thu, 12 Feb 87 12:15:49 MEZ From: ZZZO%DHVRRZN1.BITNET@wiscvm.wisc.edu Subject: Window System Date: 12 February 1987, 12:07:58 MEZ From: Wolfgang Zocher (0511) 762-3684 ZZZO at DHVRRZN1 To: AILIST at SRI-STRIPE I'm looking for a powerful window system written in LISP (pref. Commonlisp) to support an object-oriented KR-System (TLC-LISP on IBM PC/AT). My major task is the development of KR; windows are only needed for better demonstration... So, I would like public domain sources. Wolfgang Zocher zzzo@dhvrrzn1.bitnet ------------------------------ Date: Thursday, 12 February 1987 17:10:44 EST From: Kenneth.Goldberg@a.gp.cs.cmu.edu Subject: And/Or graphs Two queries concerning And/Or graphs (as opposed to trees): 1) Has anyone published a thorough survey of And/Or graph search algorithms? 2) What is the convention regarding And-nodes? Nilsson (Prob. Solving Methods in AI, pp. 87-88) labels those with incoming And-links as And-nodes. Winston (AI, p. 148) and Pearl (Heuristics, p. 25) label those with outgoing And-links as And-nodes. More importantly, is there a convincing argument for either one?
------------------------------ Date: 12 Feb 87 22:26:26 GMT From: gorin@MEDIA-LAB.MIT.EDU (Amy Gorin) Subject: organic microchips If anybody has any information regarding organic systems for use in AI and computers in general, and especially the work of:
  Arieh Aviram and Philip Seiden of IBM
  Mark Ratner of Northwestern
  Robert Metzger and Charles Panetta of U. of Miss.
  Forest L. Carter, Naval Research Lab
  Pichart Potember, Johns Hopkins
  Tim Posten and F. Eugene Yates, UCLA
please let me know (recent papers and articles, etc.). Thanks, * ARPA: gorin@media-lab.media.mit.edu * It's not who you know, * * UUCP: mit-eddie!mit-amt!gorin * it's whom you know * ------------------------------ Date: 15 Feb 87 19:58:50 GMT From: husc2!chabris@husc6.harvard.edu (chabris) Subject: OPS5 in Standard/Cambridge Lisp? I have the OPS5 source code in Franz Lisp and Common Lisp (as posted in the AI Forum on Compuserve) and am interested in porting it to Cambridge Lisp. Does anyone know if this has already been done, or if there is any OPS5 source in either Standard Lisp or Portable Standard Lisp, the ancestor dialects of Cambridge Lisp? Thank you very much. -- =============================================================================== Christopher F. Chabris Contributing Editor, START Magazine, Antic Publishing [Dunster F-61, Harvard University, Cambridge, MA, 02138 (617) 498-2239] [Permanent: 15 Sterling Road, Armonk, NY, 10504 (914) 273-8828] ARPAnet: chabris@husc4.harvard.edu Compuserve: 73277,305 UUCP: ...harvard!husc4!chabris Bitnet: chabris@harvunxu =============================================================================== ------------------------------ Date: 17 Feb 87 20:12 AST From: AXDRW%ALASKA.BITNET@wiscvm.wisc.edu Subject: Lisp Sources for OPS5 Hello, I have been asked to look for the Lisp source to OPS5. Does anyone out there know where I might get this? I would prefer a net address if possible. Please EMAIL your responses directly to me.
Thank you Don R Withey BITNET: AXDRW@ALASKA.BITNET University of Alaska BIX: dwithey 3221 UAA Drive Anchorage, Alaska 99508 907-786-4851 (work) 907-277-9063 (home) 907-274-6378 (other home) Any expressed opinion is my own, and in no way represents those of my employer, the University of Alaska. ------------------------------ Date: Sun, 15 Feb 87 22:44:17 est From: mayerk@eniac.seas.upenn.edu (Kenneth Mayer) Subject: Legal reasoning Could someone give me some pointers into the literature on legal reasoning? Or, better yet, the name of someone you know whom I could contact. Ken /|---------------------------------------------------------------|\ / | ARPA: mayerk@eniac.upenn.seas.EDU | \ | | USnail: Kenneth Mayer | | | | University of Pennsylvania, Moore School of Eng.| | - | 305 S. 41st St | - | | Philadelphia, PA 19104 | | | | GENIE: MAYERK | | \ | CIS: [73537,3411] | / \|---------------------------------------------------------------|/ "It's a sky-blue sky, "The future is a place, Satellites are out tonite, About 70 miles east of here, Let X = X..." Where it's lighter..." ------------------------------ Date: 17 FEB 87 18:53 GMT From: U06Q%CBEBDA3T.BITNET@wiscvm.wisc.edu Subject: Parallel Functional Programming Languages Hello out there, I'm looking for books, papers, news, etc. about parallel functional programming languages, and especially about possibilities to parallelize LISP (garbage collection, memory management, etc.). Is there anyone out there who has some experience with this subject, or who knows someone who does? I would be glad to receive book titles or addresses of people who are interested in this subject. My network address: U06Q@CBEBDA3T.BITNET Thanks a lot, Rene Rehmann Dept. of Computer Science University of Berne Switzerland ------------------------------ Date: Thu, 12 Feb 87 08:49:03 EST From: yerazuws@csv.rpi.edu (Crah) Subject: Re: DEC AI Workstation In article <8702120856.AA22369@ucbvax.Berkeley.EDU>, DON@atc.bendix.com writes: > ....
Of particular interest to me are > remarks from people who have used the DEC workstation and one of the > standard Lisp workstations (XEROX, Symbolics, LMI, TI, Sun, Apollo). > First the disclaimer - I've worked for DEC over two summers now, and am hoping to work there permanently. However, the opinions below are (I believe) not significantly influenced by that- and I'm also a stockholder in Symbolics, so *there* :-) I've worked with 3600's, SUNs and AI VAXstations. The Symbolics used to be unquestionably superior- now I'm not so sure. Release 7 of Symbolics not only has proprietary code (and new microcode _again_), but now there are two different LISPs (Zeta and Common) and you have to be careful which LISP window you're typing at. The Symbolics machines also carry hefty price tags. The color display is a separate monitor- which takes up a good chunk of space. The tools are great, however. Window Debugger (c-m-W) is still unmatched elsewhere. I wouldn't bother with the SUN, especially in a diskless configuration. I wasted (yes, wasted) nine months trying to develop an architecture simulator on Sun 2's. Little things like a server being slow can completely hang your LISP and your editor - so you sit. And sit. And forget what you were doing... The problem is that when you page on a diskless SUN, you generate I/O requests at a HUGE rate compared to normal file I/O. Hence, a server which is only mildly busy as seen by fileio users is essentially locked up as far as the LISP user is concerned. I don't know if adding huge amounts of memory would help the SUN or not... but see the comments under "memory" below. Just so you understand HOW bad diskless SUNs are- we switched from the SUN workstations to a heavily loaded 4.2 BSD /780 and found that we were getting about ten times as much work done- even though we were sharing the machine with twenty other people. Now, the AI VAXstation. I like it a lot.
I've got the simulator running (in LISP), the compiler for it (a LISP compiler, in LISP, with chunks migrating into OPS5), and most of my thesis written (in TeX). I've got C when I want to do C-like things, and FORTRAN when that's appropriate. I only have the black and white scope- but the color scope is usable without needing a b/w scope also. The LISP on VAXstations can do graphics, too. Very cleanly. I don't bother with the LISP Language Sensitive Editor, having been addicted to EMACS for so long. Sorry, can't help you there. Suggestion- if you buy the VAXstation, get lots of memory. Five megs is not enough if you have a LISP, three EMACSes and a DCL and are using them all- the LISP will thrash when you gc. Get nine megs (the one meg that comes on the CPU card, plus an eight-meg card) and you'll GC in about six seconds- which is much better than the Symbolics' time of ONE HOUR or more. I don't know if going to 16 megs (max addressable in a MicroVAX II) would improve anything- my system rarely pages at all in the above LISP/EMACS/DCL load configuration. I had Ultrix and Xwindows up for a while instead of DCL; I liked UIS better than X, so I accepted the DCL as part of the package. Besides there's a shell around somewhere.... Disclaimer repeated: I have been and hope again to be an employee of DEC. I am a stockholder of record in Symbolics, Inc. My best drinkin' buddy works for SUN Microsystems. -Bill Yerazunis ------------------------------ Date: 13 Feb 87 22:01:14 GMT From: brothers@topaz.rutgers.edu (Laurence R. Brothers) Subject: Re: Against the Tide of Common LISP The fun thing about common lisp, though, is that any given little utility function you care to write probably already exists.... I was working on a project last year that caused me to want to resize an array - I wrote the little routine, then something caused me to look in the arrays section of Steele, and -- lo and behold -- resize-array (or something like that). -- Laurence R. 
Brothers brothers@topaz.rutgers.edu {harvard,seismo,ut-sally,sri-iu,ihnp4!packard}!topaz!brothers "I can't control my fingers -- I can't control my brain -- Oh nooooo!" ------------------------------ End of AIList Digest ******************** From in%@vtcs1 Sat Feb 21 20:44:45 1987 Date: Sat, 21 Feb 87 20:44:39 est From: vtcs1::in% To: ailist@sri-stripe.arpa Subject: AIList Digest V5 #46 Status: RO AIList Digest Thursday, 19 Feb 1987 Volume 5 : Issue 46 Today's Topics: Seminars - A Juggling Robot (SU) & Evaluating Data-Intensive Logic Programs (Rutgers), Meetings - Mid-Atlantic Universities Regional AI Meeting & Midwest AI and CogSci Society, Conferences - Workshops on Database Programming Languages & Change in Cognitive Science Conference, Course - 2nd European Advanced Course in Artificial Intelligence ---------------------------------------------------------------------- Date: 12 Feb 1987 1354-PST (Thursday) From: Grace Schmidt Subject: Seminar - A Juggling Robot (SU) CS 500 Computer Science Colloquium Feb. 17, 4:15 pm, Skilling Auditorium A Juggling Robot - Adventures in Real Time Control by Marc D. Donner IBM, T.J. Watson Research Center The computer community has generally addressed real-time control problems in an ad hoc fashion, using tools designed for information processing. In information processing the important issue is ensuring that the correct calculations are carried out in the correct order. In real-time problems there are deadlines that require the calculations to be completed before a certain time in order to be correct. This class of problems is interesting and is becoming more and more important as we increasingly use computers to control things and not just for information processing. In this talk I will describe work in real-time control at the Thomas J. Watson Research Center in Yorktown Heights and in particular, the juggling machine that we are constructing as a testbed for our ideas.
This talk will cover the engineering of the machine, the design and construction of programming languages and operating systems for real-time control, and interesting problems in the mathematics of juggling. ------------------------------ Date: 16 Feb 87 02:38:15 EST From: KALANTARI@RED.RUTGERS.EDU Subject: Seminar - Evaluating Data-Intensive Logic Programs (Rutgers) RUTGERS COMPUTER SCIENCE AND RUTCOR COLLOQUIUM SCHEDULE - SPRING 1987 Computer Science Department Colloquium : Here is a summary of the speakers:

  Date (Feb), Time, Place (Hill), Speaker, Title
  18, 9:50, 705, Raghu Ramakrishnan, "Evaluating Data Intensive Logic Programs"
  18, 4:30, 525, Allan Borodin, "Parallel Complexity of Algebraic Problems"
  19, 2:50, 705, Victor Pan, "Parallel Nested Dissection for Path Algebra Computations"
  20, 2:50, 705, Ben Cohen, "Knowledge-Based CAD-CAM Software Integration"

DATE : Wednesday, February 18 SPEAKER: Raghu Ramakrishnan AFFILIATION: University of Texas at Austin TITLE: EVALUATING DATA INTENSIVE LOGIC PROGRAMS TIME: 9:50 PLACE: Hill 705 ABSTRACT There has been considerable interest recently in the problem of evaluating @u[logic queries] against relational databases. The evaluation methods that we consider rely upon @i[bottom-up fixpoint computation], which, unlike Prolog's depth-first strategy, is @i[complete]. These methods also take advantage of efficient database join techniques. The major criticism of such methods is that they do not fully utilize the constants in the query to restrict the search space, and thus perform unnecessary computation. Such constants are used by Prolog through a process of @i[sideways information passing], since variable bindings generated in solving a goal restrict the search space in solving subsequent goals. We define @i[sideways information passing] formally.
Given a program, we show that any sideways information passing strategy may be implemented by rewriting the program and evaluating the rewritten program bottom-up, thus answering the above criticism. We describe several rewriting algorithms, generalizing some of the bottom-up methods described in the literature - Magic Sets, Counting, and their variants - to work with arbitrary logic programs. We also present the results of a performance analysis which provides some insight about the relative cost of these methods. ------------------------------ Date: Fri, 13-FEB-1987 13:11 EST From: MILLER%VTCS1.BITNET@wiscvm.wisc.edu Subject: Meeting - Mid-Atlantic Universities Regional AI Meeting ********************Mid-Atlantic Regional AI Meeting******************** The first annual meeting of AISMAS (the AI Society of the Mid-Atlantic States) will be held at Virginia Tech in Blacksburg, Virginia on March 6 and 7. The meeting will include a keynote speech by Prof. Gerry Dejong of the University of Illinois, panels on the value and capabilities of expert systems and AI architectures, and graduate student presentations of current research. As a special inducement towards graduate student attendance/participation, there will be free doughnuts and coffee, and no registration fee. Below is a preliminary schedule of the AISMAS meeting:

Friday, March 6
  8:00pm  Keynote speech: Prof Gerry Dejong, U. of Ill.
  9:30pm  Reception

Saturday, March 7
  8:30am  Grad Student presentations
  10:00am Coffee Break
  10:15am Panel "What Expert Systems Can't Do"
  11:15am Grad Student presentations
  12:00   Lunch & program demos
  1:30pm  Grad Student presentations
  3:00pm  Coffee Break
  3:15pm  Panel "Special AI Architectures"
  4:15pm  Grad Student presentations
  5:00pm  AISMAS Business Meeting

If you are doing AI research and are in the Mid-Atlantic region (or near the Mid-Atlantic region and don't mind a longish trip), then your attendance and/or participation is encouraged.
For more information about AISMAS contact your local AISMAS coordinator or Prof. David Miller Dept of Computer Science, Virginia Tech (703) 961-5605 miller%vtcs1@bitnet-relay.arpa This year's meeting is sponsored by the Automation and Robotics Project at the Jet Propulsion Laboratory and the Virginia Tech Department of Computer Science. ------------------------------ Date: Mon, 16 Feb 87 10:44:51 CST From: Kris Hammond Subject: Meeting - Midwest AI and CogSci Society The First Annual Meeting of The Midwest Artificial Intelligence and Cognitive Science Society April 24th and 25th University of Chicago Department of Computer Science Call for Abstracts Deadline: March 20th. MAICSS is a new organization designed to promote interaction between AI and Cognitive Science groups in the Midwest. Its activity is centered around an annual meeting including talks by both faculty and students. The first meeting is scheduled for April 24th and 25th at the University of Chicago. The emphasis in student talks is work in progress. The idea is to air new work at a time when feedback will be most helpful. Submissions for these talks will take the form of short abstracts (about 3 pages). Each submission should include three copies of the abstract, each with a title page including name, address and affiliation. The deadline for abstracts is March 20th, 1987. There is no registration fee, but we ask that anyone interested in attending please contact us so we can get a correct head count. Send submissions and inquiries to: Kristian Hammond Department of Computer Science University of Chicago 1100 East 58th Street Chicago, IL 60637 Any questions can be sent to me via E-mail addressed to: kris@uchicago.csnet --- for CSnet mail. kris%uchicago.csnet-relay.arpa --- for ARPA mail.
------------------------------ Date: Sun, 15 Feb 87 15:57:49 EST From: Peter Buneman Subject: Conference - Workshops on Database Programming Languages Workshops on Persistent Object Systems and Formal Aspects of Database Programming Languages. Two workshops on these topics are to take place this summer in Europe immediately before and after VLDB. The first, to be held on the West coast of Scotland, August 25-28, will focus on the design and implementation of persistent object systems. The second, in Finistere, France, Sept 7-10, will discuss the relationship between the semantics of databases and programming languages as it appears in data types and data models, object oriented programming, logic programming, higher-order relations etc. The purpose of both workshops is to encourage informal discussions among researchers in these areas and presentations of current research. Attendance at these workshops is limited and will be decided on the basis of abstracts. For more information, send mail to one of the following addresses.

In the US:
  Peter Buneman, CIS, Moore School/D2, University of Pennsylvania, Philadelphia, PA 19104 (Peter@cis.upenn.edu)
  Rishiyur Nikhil, Laboratory for Computer Science, Massachusetts Institute of Technology, 545 Technology Square, Cambridge, MA 02139 (Nikhil@xx.lcs.mit.edu)

In Europe:
  Francois Bancilhon, INRIA, BP 105, 78153 Le Chesnay Cedex, France (bancilhon@inria.uucp)
  Malcolm Atkinson, Department of Computing Science, University of Glasgow, Glasgow, G12 8QQ, Scotland (mpa@cs.glasgow.ac.uk)

------------------------------ Date: Sun, 15 Feb 87 22:52:48 PST From: levin@CS.UCLA.EDU Reply-to: levin@CS.UCLA.EDU (Stuart Levine) Subject: Conference - Change in Cognitive Science Conference I have been asked to post this by Prof. Earl Hunt. Note that there are two changes to the original: a new submission deadline, and info on camera-ready papers.
Cognitive Science Society Announcement of Meeting and Preliminary Call for Papers The Ninth Annual Conference of the Cognitive Science Society will be held on July 16-18, 1987 at the University of Washington. The dates have been chosen to allow people to attend this conference and the conference of the American Association for Artificial Intelligence, which meets in Seattle earlier in the week. The conference will feature symposia and invited speakers on the topics of mental models, educational and industrial applications of cognitive science, discourse comprehension, the relation between cognitive and neural sciences, and the use of connectionist models in the cognitive sciences. The conference schedule will include paper sessions and a poster session, covering the full range of the cognitive sciences. The proceedings of the conference will be published by L Erlbaum Associates. Submitted papers are invited. These should cover original, unreported work, research or analysis related to cognition. All submissions for paper and poster sessions will be refereed.

All submitted papers and posters must include the following:
  Author's name, address, and telephone number.
  Set of four or fewer topic area keywords.
  Four copies of the full paper (4000 words maximum) or poster (2000 words maximum). Each copy should include a 100-250 word abstract.
  Indication of preference for paper or poster session.

All papers MUST adhere to the following rules for preparation of camera-ready copy. NOTE: Papers will NOT be sent back after acceptance for modification. The accepted paper will be sent directly to the publisher.
  1 inch margins on both sides, top, and bottom.
  Single spaced text.
  Figures centered on type page at top or bottom.
  Titles and author's names and institutions centered at top of first page.
  One line between title heading and text.
  Use American Psychological Association publication format.

Authors are responsible for obtaining permission to reprint published material.
Send submissions to Earl Hunt, Department of Psychology, University of Washington, Seattle, WA 98195. Submissions are due by MARCH 16, 1987. NOTE NEW DATE All members of the Cognitive Science Society will receive a further mailing discussing registration, accommodation, and travel. ------------------------------ Date: Mon, 16 Feb 87 18:58:56 +0100 From: aamodt%vax.runit.unit.uninett@NTA-VAX.ARPA Subject: Course - 2nd European Advanced Course in Artificial Intelligence ACAI-87 ECCAI's 2nd Advanced Course in Artificial Intelligence July 28 to August 7, 1987 Oslo, Norway Organizer: NAIS - Norwegian Artificial Intelligence Society Chairman: Rolf Nossum, Computas Expert Systems, N-1322 Hovik The European Coordinating Committee for Artificial Intelligence (ECCAI) organizes biennial Advanced Courses in Artificial Intelligence. This year's course is the second of its kind, following the one held in Vignieu, France, 1985. Despite the spectrum of scientific activities in Artificial Intelligence research, covering such diverse domains as Knowledge Representation, Learning, Natural Language, Robotics, Vision, Program Synthesis, Automated Reasoning, and AI-oriented Programming, there exists a common core of methods and techniques for Symbolic Information Processing. This common formal basis will be treated in depth during the course, and the use of general as well as special techniques in some selected subfields of AI will be the main emphasis. ACAI-87 will not be an introductory course in AI, but is intended to meet the needs of researchers and practitioners in the field.

TOPICS and LECTURERS:
  Inference Methods: Wolfgang Bibel, Germany
  Machine Learning: Alan Bierman, USA
  Expert Systems Methodology: William Clancey, USA
  Qualitative Reasoning: Tony Cohn, England
  Natural Language: Jens-Erik Fenstad, Norway
  Parallel and Rewriting Systems: Phillippe Jorrand, France
  AI Planning: Sam Steel, England
  Knowledge Acquisition: Bob Wielinga, Holland

Fee: approx. $900 (900 ECU).
This covers accommodation, meals and course material. Interested? For more information, please write to: ACAI-87 P.O. Box 5030 Majorstua N-0301 OSLO 2 N o r w a y Specific questions may be sent to the network address below. DEADLINE FOR APPLICATION IS MARCH 1, 1987! Sent by: Agnar Aamodt, Knowledge Engineering Laboratory SINTEF-RUNIT, University of Trondheim, N-7034 Trondheim-NTH ------------------------------ End of AIList Digest ******************** From in%@vtcs1 Sat Feb 21 20:45:11 1987 Date: Sat, 21 Feb 87 20:45:02 est From: vtcs1::in% To: ailist@sri-stripe.arpa Subject: AIList Digest V5 #47 Status: R AIList Digest Thursday, 19 Feb 1987 Volume 5 : Issue 47 Today's Topics: Queries - ISTEL's See Why & Quintus Computer Systems, Cognitive Science - Re: Learing about AI, Games - GO Program on PC (MAC), AI Tools - Rochester Connectionist Simulator, Book - Knowledge Systems and Prolog ---------------------------------------------------------------------- Date: Wed, 18 Feb 87 19:12 EST From: Troy Shinbrot <900380%UMDD.BITNET@wiscvm.wisc.edu> Subject: ISTEL, see why I am interested in comments from anyone who has experience with or information regarding ISTEL Corporation's "See Why" system. It is apparently a Fortran (!) based simulation system with somewhat vague specifications. Thanks in advance. - Troy Shinbrot (aka 900380@umdd.bitnet) ------------------------------ Date: 18 Feb 87 09:47:29 +1000 (Wed) From: "ERIC Y.H. TSUI" Subject: Query - Quintus Computer Systems Has anyone had experience using Quintus Prolog on a Xerox (or other AI) workstation? Does anyone have information about whether Quintus Computer Systems has a node on the network? I would appreciate any useful information on the above questions.
Eric Tsui eric@aragorn.oz ------------------------------ Date: 16 Feb 87 16:52:00 GMT From: necntc!adelie!mirror!gabriel!inmet!sebes@husc6.harvard.edu Subject: Re: Learing about AI Ted Inoue's description of an interdisciplinary effort is essentially a description of cognitive science. I have two points to add to what he said: 1) such an interdisciplinary effort is not new, and has been going on for decades in some circles; it is only now that a broader awareness of the field as a field is developing; 2) Ted's assessments of the various fields that can contribute to cognitive science are rather simplistic and harsh. I think, as many would agree, that each field has important things to offer. Also, there can be varying combinations of the fields for various subjects of inquiry. For example, Stanford's Center for the Study of Language and Information is composed mostly of linguists, computer scientists (both academic and professional), and philosophers; in fact, the philosophers run the show. For further elaboration of these points, I recommend the introduction and first chapter of Howard Gardner's _The Mind's New Science_. -John Sebes ------------------------------ Date: 18 Feb 87 08:31:00 EST From: "CLSTR1::BECK" Reply-to: "CLSTR1::BECK" Subject: GO PROGRAM ON PC (MAC) I BELIEVE THERE WAS AN INQUIRY AS TO GO PROGRAMS ON PCS. I HAVE NOT USED THIS PROGRAM BUT PLAN TO SOMETIME THIS SPRING. Date: Sat, 7 Feb 87 01:23:36 PST From: Reply-to: LOGANJ%BYUVAX.BITNET@forsythe.stanford.edu Subject: Go program, v1.0b2 This is version 1.0B2 of the Go program for the Macintosh. This file is about 137,XXX bytes long. When unhexed it is 98.5K bytes. Recent improvements to the program are as follows: - You can now set the baud rate and other modem port characteristics from within the program, for playing games between two Macs.
If you play through modems over telephone lines, for example, you can communicate by typing on the keyboard - a line of text is sent to the opponent when you hit the return key. - The program will give a short analysis of a board position, showing the number of primary liberties (max about 8), number of secondary liberties (max 8), and the result of a simple ladder. - The program will now display the "Reasons for Computer Moves". Other recent improvements include more reasonable end of game scoring and the ability to add symbols to handicap stones. I have tested the communications between two Macs and it seems to work okay. This is public domain, so you may give it to friends and post to bulletin boards. Regards, Jim [ archived as [SUMEX-AIM.Stanford.EDU]GAME-GO.HQX DoD ] .............................. POSTED TO AI BY .............................. ------------------------------ Date: Tue, 17 Feb 87 22:27:57 -0500 From: goddard@rochester.arpa Subject: Rochester Connectionist Simulator release in April In mid-April we will be releasing a much improved version of the simulator. The Rochester simulator allows construction and simulation of arbitrary networks with arbitrary unit functions. It is designed to be easily extensible both in unit structures and user interface, and includes a facility for interactive debugging during network construction. The simulator is written in C and currently runs here on Suns, Vaxen and the BBN Butterfly multiprocessor (and should run on any UNIX machine). There is a graphics interface package which runs on a Sun under suntools, and is primarily designed for interactive display of the flow of activation during network simulation. The simulator is easy to use for novices, and highly flexible for those with expertise. We are now collecting names and addresses of people and sites interested in receiving a copy of the simulator when released in April. 
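[As a rough sketch of the kind of framework described above (units with arbitrary user-supplied functions, weighted links, synchronous update), here is a toy network simulator. The class and method names are hypothetical illustrations of the model such simulators expose, NOT the Rochester simulator's actual C interface. -- Ed.]

```python
# Toy connectionist-network simulator in the spirit described above:
# units with arbitrary user-supplied unit functions, weighted links,
# and a synchronous update step. Hypothetical sketch, not a real API.

class Unit:
    def __init__(self, name, func, output=0.0):
        self.name = name
        self.func = func      # arbitrary unit function: net input -> output
        self.output = output
        self.inputs = []      # list of (source unit, weight)

class Network:
    def __init__(self):
        self.units = {}

    def add_unit(self, name, func, output=0.0):
        self.units[name] = Unit(name, func, output)

    def connect(self, src, dst, weight):
        self.units[dst].inputs.append((self.units[src], weight))

    def step(self):
        """One synchronous update: every unit recomputes from old outputs."""
        new = {u.name: u.func(sum(s.output * w for s, w in u.inputs))
               for u in self.units.values()}
        for name, value in new.items():
            self.units[name].output = value

net = Network()
net.add_unit("a", func=lambda x: 1.0, output=1.0)   # constant source unit
net.add_unit("b", func=lambda x: 1.0, output=1.0)   # constant source unit
net.add_unit("out", func=lambda x: 1.0 if x > 0.5 else 0.0)  # threshold unit
net.connect("a", "out", 0.4)
net.connect("b", "out", 0.4)
net.step()
print(net.units["out"].output)   # net input 0.4 + 0.4 = 0.8 > 0.5 -> 1.0
```

Because the unit function is an arbitrary callable, the same loop supports threshold units, sigmoids, or any experimental activation rule, which is the flexibility the announcement emphasizes.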
The preferred method for dissemination is via electronic mail, but we will also send tape and possibly disk copies. To get on the distribution list, send mail to costanzo@cs.rochester.edu giving your name and addresses (both physical and electronic). This address is for the distribution list ONLY; for other questions see below. It is possible that there will be some kind of minimal licensing agreement required, for a nominal fee. There are many papers, journal articles and technical reports which give an idea of the connectionist research and philosophy here at Rochester. A complete list of these is in "Rochester Connectionist Papers: 1979-1985", by Feldman, Ballard, Brown and Dell, Computer Science TR 172. For this or any other technical report, write to: Peggy Meeker Department of Computer Science University of Rochester Rochester, NY 14627 The previous version of the simulator with some documentation is available immediately via electronic mail from me (see addresses below). However, you are advised to wait for the April release, as the documentation will be much better. Any other questions about the simulator should also be addressed to me. Nigel Goddard goddard@cs.rochester.edu ...!seismo!rochester!goddard ------------------------------ Date: 13 February 1987, 17:50:57 EST From: Adrian Walker Subject: book announcement - Knowledge Systems and Prolog A new book which may be of interest to readers of AILIST-- KNOWLEDGE SYSTEMS AND PROLOG A LOGICAL APPROACH TO EXPERT SYSTEMS and NATURAL LANGUAGE PROCESSING Adrian Walker (Ed.), Michael McCord, John F. Sowa, Walter G. Wilson Addison-Wesley, 1987 This book introduces Prolog and two important areas of Prolog use-- expert systems and natural language processing systems (together known as knowledge systems).
The book covers basic and more advanced Prolog programming, describes practical expert systems and natural language processing in depth, and provides an introduction to the formal basis in mathematical logic for the meaning of Prolog programs.

HIGHLIGHTS

- Presents significant examples of knowledge systems, with useful parts of actual programs included.
- Describes important research results in expert systems, natural language processing, and logic programming.
- Integrates many trends in knowledge systems by bringing diverse representations of knowledge together in one practical framework.
- Though useful with any Prolog implementation, provides an introductory tutorial followed by advanced programming techniques for IBM Prolog.

TABLE OF CONTENTS

Chapter 1. Knowledge Systems: Principles and Practice (Adrian Walker)
  1.1 What is a Knowledge System?
  1.2 From General to Specific, and Back Again
  1.3 Prolog and Logic Programming
  1.4 Knowledge Representation
  1.5 Getting the Computer to Understand English
  1.6 Some Trends in Knowledge Acquisition
    1.6.1 Learning by Being Told
    1.6.2 Learning by Induction from Examples
    1.6.3 Learning by Observation and Discovery
  1.7 Summary

Chapter 2. A Prolog to Prolog (John Sowa)
  2.1 Features of Prolog
    2.1.1 Nonprocedural Programming
    2.1.2 Facts and Predicates
    2.1.3 Variables and Rules
    2.1.4 Goals
    2.1.5 Prolog Structures
    2.1.6 Built-in Predicates
    2.1.7 The Inference Engine
  2.2 Pure Prolog
    2.2.1 Solving Problems Stated in English
    2.2.2 Subtle Properties of English
    2.2.3 Representing Quantifiers
    2.2.4 Choosing a Data Structure
    2.2.5 Unification: Binding Values to Variables
    2.2.6 List-Handling Predicates
    2.2.7 Reversible Predicates
  2.3 Procedural Prolog
    2.3.1 Backtracking and Cuts
    2.3.2 Saving Computed Values
    2.3.3 Searching a State Space
    2.3.4 Input/Output
    2.3.5 String Handling
    2.3.6 Changing Syntax
  2.4 Performance and Optimization
    2.4.1 Choosing an Algorithm
    2.4.2 Generate and Test
    2.4.3 Reordering the Generate and Test
    2.4.4 Observations on the Method
  Exercises

Chapter 3. Programming Techniques in Prolog (Walter Wilson)
  3.1 How to Structure Prolog Programs
    3.1.1 Logic Programming Development Process
    3.1.2 Declarative Style
    3.1.3 Data Representation
    3.1.4 Structuring and Verifying Recursive Programs
    3.1.5 Control Structures
  3.2 Techniques and Examples
    3.2.1 Meta-level Programming
    3.2.2 Graph Searching
    3.2.3 Balanced Trees
    3.2.4 Playing Games and Alpha-beta Pruning
    3.2.5 Most-Specific Generalizations
  3.3 Summary of Prolog Programming Principles
  Exercises

Chapter 4. Expert Systems in Prolog (Adrian Walker)
  4.1 Knowledge Representation and Use
    4.1.1 Rules
    4.1.2 Frames
    4.1.3 Logic
    4.1.4 Summary
  4.2 Syllog: an Expert and Data System Shell
    4.2.1 Introduction to Syllog
    4.2.2 A Manufacturing Knowledge Base in Syllog
    4.2.3 Inside the Syllog Shell
    4.2.4 Summary of Syllog
  4.3 Plantdoc
    4.3.1 Using Plantdoc
    4.3.2 The Plantdoc Inference Engine
    4.3.3 Weighing the Evidence
    4.3.4 Summary of Plantdoc
  4.4 Generating Useful Explanations
    4.4.1 Explaining Yes Answers, Stopping at a Negation
    4.4.2 Explaining Yes and No Answers, Stopping at a Negation
    4.4.3 Full Explanations of Both Yes and No Answers
  4.5 Checking Incoming Knowledge
    4.5.1 Subject-Independent Checking of Individual Rules
    4.5.2 Subject-Independent Checking of the Knowledge Base
    4.5.3 Subject-Dependent Checking of the Knowledge Base
  4.6 Summary
  Exercises

Chapter 5. Natural Language Processing in Prolog (Michael McCord)
  5.1 The Logical Form Language
    5.1.1 The Formation Rules for LFL
    5.1.2 Verbs
    5.1.3 Nouns
    5.1.4 Determiners
    5.1.5 Pronouns
    5.1.6 Adverbs and the Notion of Focalizer
    5.1.7 Adjectives
    5.1.8 Prepositions
    5.1.9 Conjunctions
    5.1.10 Nonlexical Predicates in LFL
    5.1.11 The Indexing Operator
  5.2 Logic Grammars
    5.2.1 Definite Clause Grammars
    5.2.2 Modular Logic Grammars
  5.3 Words
    5.3.1 Tokenizing
    5.3.2 Inflections
    5.3.3 Slot Frames
    5.3.4 Semantic Types
    5.3.5 Lexical Look-up
  5.4 Syntactic Constructions
    5.4.1 Verb Phrases, Complements, and Adjuncts
    5.4.2 Left Extraposition
    5.4.3 Noun Phrases
    5.4.4 Left-Recursive Constructions
  5.5 Semantic Interpretation
    5.5.1 The Top Level
    5.5.2 Modification
    5.5.3 Reshaping
    5.5.4 A One-Pass Approach
  5.6 Application to Question Answering
    5.6.1 A Sample Database
    5.6.2 Setting up the Lexicon
    5.6.3 Translation to Executable Form
    5.6.4 A Driver for Question Answering
  Exercises

Chapter 6. Conclusions (Adrian Walker)

Appendix A. How to Use IBM Prolog (Adrian Walker & Walter Wilson)
  A.1 A Simple Example
  A.2 Detailed Programming of a Metainterpreter
  A.3 Testing the Metainterpreter at the Terminal
  A.4 VM/Prolog Input and Output
  A.5 VM/Prolog and the VM Operating System
  A.6 Tailoring VM/Prolog
  A.7 Clause Names and Modules
  A.8 Types, Expressions, and Sets
  A.9 MVS/Prolog

Appendix B. Logical Basis for Prolog and Syllog (Adrian Walker)
  B.1 Model Theory Provides the Declarative View
  B.2 Logical Basis for Prolog without Negation
  B.3 Logical Basis for Prolog with Negation
  B.4 Further Techniques for Interpreting Knowledge

Bibliography
Author Index
Subject Index

The book can be ordered direct from Addison-Wesley. In the USA, phone 617-944-3700, ask for the Order Department, and quote title, authors, and Order Number ISBN 09044.

Adrian Walker
IBM T.J. Watson Research Center
PO Box 704
Yorktown Heights NY 10598
Tel: 914-789-7806
Adrian @ IBM.COM

------------------------------

End of AIList Digest
********************

From in%@vtcs1 Mon Feb 23 00:40:26 1987
Date: Mon, 23 Feb 87 00:40:12 est
From: vtcs1::in%
To: ailist@sri-stripe.arpa
Subject: AIList Digest V5 #48
Status: R

AIList Digest            Sunday, 22 Feb 1987      Volume 5 : Issue 48

Today's Topics:
  Scientific Method - Psycho-Physical Measurement

----------------------------------------------------------------------

Date: 15 Feb 87 05:40:19 GMT
From: princeton!mind!harnad@rutgers.rutgers.edu (Stevan Harnad)
Subject: Psycho-Physical Measurement: Reply to Adam Reed

Adam V. Reed (adam@mtund.UUCP) at AT&T ISL, Middletown NJ USA, wrote in support of the following position: Psychophysicists measure conscious experience in the same sense that physicists measure ordinary physical properties. Our senses and central nervous systems are analogous to the physicist's measuring equipment.
If we can assume that this "mental" equipment is similar in all of us, then reports of psychophysical "measurements" of private, conscious experiences are just as objective as reports of physical measurements of physical phenomena, and objective in the same sense (observer-independence).

I will attempt to show why this is incorrect. But first let me say that there is really no reason for a psychophysicist to get embroiled in the mind/body problem (or its "other-minds" variant). In cog-sci there is a real empirical question about what processes and performances are justifiably and relevantly mind-like, because it is mental capacity (or at least its performance manifestations) that one is attempting to capture and model. It MATTERS in cognitive modeling whether you've really captured intelligence, or just a clever toy (partial) look-alike. There is no corresponding problem in psychophysics. The input/output characteristics, detection sensitivities, etc., of human observers have face validity as displayed in their performance data. There is no empirical question affecting the validity (as opposed to the interpretation) of the data that depends on their being a measure of conscious experience rather than merely human receiver I/O characteristics.

For simplicity I will focus on detection performance only, although the same arguments could be applied to discrimination, magnitude judgment, identification, etc. If a subject reports when he detects the presence of a signal, and this relation (signal/detection-report) displays interesting I/O regularities (thresholds, detectabilities, criterial biases, etc.), those regularities are indisputably objective in the same sense that the physicist's (or engineer's) regularities are.
The sticky part comes when one wants to interpret the measurements and their regularities, not as they are objectively -- namely input/output performance regularities of human subjects under certain experimental conditions -- but as measurements of and regularities in conscious experience.

Adam has an "intuition pump" in support of the latter interpretation: He suggests that a subject can compute his own (say) detection thresholds if he receives detection trials plus feedback as to whether or not a stimulus was present. His only performance would be to report, after a long series of trials and private calculations, what his detection threshold was. Since everyone can in principle do this for himself, it is observer-independent, and hence objective. Yet it involves no overt behavior other than the final threshold report; otherwise, it is exactly like a physicist performing an experiment in the privacy of his lab, and then reporting the results, which anyone else can then replicate in the privacy of his own lab. So surely the measurement is not merely of behavioral regularities, but of conscious experience.

There are many directions from which one can attack this argument:

(i) One could call into question the "lab" analogy, pointing out that, in principle, two physicists could check each other's measurements in the same "lab," whereas this is not possible in one-man psychophysics.

(ii) One could question the objectivity of being both subject and experimenter.

(iii) One could question whether the subject is performing a "measurement" at all, in the objective sense of measurement; only the psychophysicist is measuring, with the subject's receiver characteristics under various input conditions being the object of measurement. The subject is detecting and reporting.

(iv) One could point out that one subject's report of his threshold is not subject-independently tested by another subject's report of his own threshold.
(v) One could point out that intersubjective consensus hardly amounts to objectivity, since all subjects could simply share the same subjective bias.

(and so on)

These objections would all trade (validly) on what we really MEAN by the objective/subjective distinction, which is considerably more than consensus among observers. I will focus my rebuttal, however, on Adam's argument, taken more or less on its own terms; I will try to show that it cannot lead to the interpretation he believes it supports.

First, what work are the "covert calculations" really doing in Adam's thought-experiment? What (other than the time taken and the complexity of the task) differentiates a simple, one-trial detection-response from the complex report of a threshold after a series of trials with feedback and internal calculations? My reply is: nothing. Objectively speaking, the normal trial-by-trial method and the long-calculation-with-feedback method are just two different ways of making the same measurement of a given subject's threshold. (And the only one doing the measuring in both cases is the psychophysicist, with the data being the subject's input and output. Not even the subject himself can wear both hats -- objective and subjective -- at one time.)

So let's just talk about a simple one-trial detection, because it shares all the critical features at issue, and is not complicated by irrelevant ones. The question then becomes "What is the objectivity-status of reports of single stimulus-detections from individual subjects?" rather than "How observer-independent is the calculation of detection thresholds after a series of trials with feedback?" The two questions are equivalent in the relevant respects, and they share the same weaknesses. When a subject reports that he has detected a stimulus, and there was in fact a stimulus presented, that's ALL there is, by way of data: Input was the stimulus, output was a positive detection report.
(When I say "behavioral" or "performance" data, I am always referring to such input/output regularities.) Of course, if I'm the subject, I know that there's something it's "like" to detect a stimulus, and that the presence of that sensation is what I'm really reporting. But that's not part of the psychophysical data, at least not an objective part. Because whereas someone else can certainly look at the same stimulus, and experience the sensation for himself, he's not experiencing MY sensation. I believe that he's experiencing the same KIND of sensation. The belief is surely right. But there's certainly no objective basis for it. Consider that no matter how many times the same stimulus is presented to different subjects, and all report detecting it, there is still no objective evidence that they're having the same sensation -- or even that they're having any sensation at all. It is the everyday, informal solution to this "other-minds" problem -- based on the similarity of other subjects' behavior to our own -- that confers on us the conviction that they're experiencing similar things with "similar equipment." But that's no objective basis either. Contrast this psychophysical detection experiment with a PHYSICAL detection experiment. Suppose we're trying to detect an astronomic effect (say, an "alpha") through a telescope. If an astronomer reports detecting an alpha, there is the presumption -- and it can be tested, and confirmed -- that another astronomer could, with similar equipment and under similar conditions, detect an alpha. Not his OWN alpha, but an objective, observer-independent alpha. This would not necessarily be the self-same alpha -- only a token of the same type. Even establishing that it was indeed an instance of the same generic type could be done objectively. 
But none of this carries over to the case of psychophysical detection, where all the weight of our confidence that the sensation exists and is of the same type is borne by our individual, subjective, intuitive solutions to the every-day other-minds problem -- the "common"-sense-experience we all share, if you will. I'm not, of course, claiming that this "common sense" is likely to be wrong; just that it's unique to subjective phenomena and does not amount to objectivity. Nor can it be used as a basis for claiming that psychophysics "measures" conscious experience. Yes, we all have subjective experience of the same kind. Yes, that's what we're reporting when we are subjects in a psychophysical experiment. But, no, that does not make psychophysical data into objective measures of conscious experience. (In fact, "an objective measure of a subjective phenomenon" is probably a contradiction in terms. Think about it.) A third case is worth considering, because it's midway between the physical and the psychophysical detection situation, and more like the latter in the relevant respects: Unlike cognitive science, which is concerned with active information-processing -- learning, memory, language, etc. -- psychophysics is in many ways a calibration science: It's concerned with determining our sensitivities for detection, discrimination, etc. As such, it is really considering us in our capacity as sensory devices -- measuring instruments. So the best analogy would probably be the equivalent kind of investigation on physical measuring devices. If what was at issue was not the astronomer's objectivity in alpha detection but the telescope's, then once again observer-independent conclusions could be drawn. Comparisons between the telescope's sensitivity and that of other alpha-detection devices could be made, etc. Here it would clearly be the device's input/output behavior that was at issue, nothing more. The same seems true of psychophysical detection. 
For although we all know we're having sensations in a detection experiment, the only thing that is being, or can be, objectively measured under such conditions is our sensitivity as detection devices. Nor is more AT ISSUE in psychophysics. In cog-sci, one can say of an input/output device that purports to model our behavior: "But how do you know that's really how I did it? After all, I can do much more (and I do it all consciously), whereas all you have there is a few dumb processes and performances." This is a real issue in cognitive modeling. (The buck stops at the TTT, however, according to my account.) In psychophysics, on the other hand, nobody is going to question the validity of a detection threshold because there's no way to show that it's based on measuring consciousness rather than mere input/output performance characteristics. Before turning to Adam Reed's specific comments, let me reiterate that this analysis is just as applicable, mutatis mutandis, to the more complicated case of threshold calculation after a series of trials with feedback. It's still a matter of input/output characteristics -- this time with a long series of inputs, with instructions -- rather than any "direct, objective measurement of experience." There's just no such thing as the latter, according to the arguments I'm making. [And I haven't even brought up the vexed issue of psycho-physical "incommensurability," namely, that no matter how reliable our psychophysical judgments, and how strong our conviction that they're veridical in our own case, there is no OBJECTIVE measure on which to equate and check the validity of the relation between physical stimulation and sensation. Correlations between input and output are one thing -- but between physical intensity and "experiential intensity"...?] 
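The claim that detection data are pure input/output regularities can be made concrete: the sensitivity index d', which comes up again below, is computed entirely from stimulus/response counts, with no reference to anyone's experience. A minimal sketch in Python (the equal-variance Gaussian model and the log-linear correction for extreme rates are standard signal-detection assumptions of this illustration, not anything stated in the postings):

```python
from statistics import NormalDist

def d_prime(hits, misses, false_alarms, correct_rejections):
    """Detection sensitivity d' = z(hit rate) - z(false-alarm rate),
    computed purely from input/output counts."""
    # Log-linear correction keeps rates away from 0 and 1,
    # where the z-transform would be infinite.
    hit_rate = (hits + 0.5) / (hits + misses + 1.0)
    fa_rate = (false_alarms + 0.5) / (false_alarms + correct_rejections + 1.0)
    z = NormalDist().inv_cdf  # inverse standard-normal CDF
    return z(hit_rate) - z(fa_rate)

# A subject who says "yes" on 90 of 100 signal trials and on 10 of 100
# noise trials is a fairly sensitive detector (d' well above zero):
print(d_prime(90, 10, 10, 90))
```

Whether the subject tallies these counts covertly and reports the final number, or the experimenter tallies them trial by trial, the arithmetic -- and hence the "measurement" -- is the same.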
Adam writes:

> I don't buy the assumption that two must *observe the same
> instance of a phenomenon* in order to perform an *observer-independent
> measurement of the same (generic) phenomenon*. The two physicists can
> agree that they are studying the same generic phenomenon because they
> know they are doing similar things to similar equipment, and getting
> similar results. But there is nothing to prevent two psychologists from
> doing similar (mental) things to similar (mental) equipment and getting
> similar results, even if neither engages in any overt behavior apart
> from reporting the results of his measurements to the other. My point is
> that this constitutes objective (observer-independent) measurement of
> private (no behavior observable by others) mental processes.

Apart from the objections I've already made about the "similar equipment" argument [what, by the way, is "mental equipment"? sounds figurative], about the experimenter as subject, about detection as "measurement," and about the irrelevance of the behavioral covertness to the basic input/output issue, the "generic" question seems problematic. With the alphas, above, we didn't have to observe the same alpha, but we did have to observe the same kind of alpha. Now the "alpha" in the private case is MY sensations, not sensations simpliciter. So you needn't verify, for objectivity's sake, the specific detection sensation I had on trial N, or on any of my trials when I was subject, if you like -- just as long as the generic sensation you do check on is MINE not YOURS. Because otherwise, you see, there's this observer-dependence...

> This objection [that there's no way of checking the correctness of a
> subject's covert calculations] applies with equal force to the
> observation, recording and calculations of externally observable
> behavior. So what?
What I meant here was that, after a long series of detection trials with feedback and covert calculations, there's no way you can check that I calculated MY threshold right except by running the trials on yourself and checking YOUR threshold. But what has that to do with the validity of MY threshold, or its status as a measure of my experience, rather than just my input/output sensitivity after a series of trials with complex instructions? I agree that there is a validity problem with all behavior, by the way, but I think that favors my argument rather than yours.

One way to check the covert calculation is to have a subject do both -- overt detecting AND covert calculations on subsequent feedback. The two thresholds -- one calculated covertly by the subject, the other by the experimenter -- may well agree, but all that shows is that they get the same result when wearing their respective (objective) psychophysicist's hats. What the agreement does not -- and cannot -- show is that the subject was "measuring experience" when he was detecting. It can't even show he was HAVING experience when he was detecting. But that's the whole point about behavioral measures and objectivity. If we're lucky, they'll swing together with conscious experience, but there's no objective basis for counting on it, or checking it. (And, equally important: It makes no methodological difference at all.)

> Yes [there {is} no way of getting any data AT ALL without the subject's
> overt mega-response at the end], but *this is not what is being
> measured*. Or is the subject matter of physics the communication
> behavior of physicists?

The subject may be silent till trial N, but the input/output performance that is being measured is the presentation of N trials followed by a report that stands in a certain relation to the inputs. This is no different from the case of a simple trial, with a single stimulus input, and the simple report "I saw it."
That's not scientific testimony, that's subjective report. The only one who can ever see THAT kind of "it" (namely, yours), is you. (And, as I mentioned, the subject is really switching hats here too.)

> What is objectively different about the human case is that not only is
> the other human doing similar (mental) things, he is doing those
> things to similar (human mind implemented on a human brain) equipment.
> If we obtain similar results, Occam's razor suggests that we explain
> them similarly: if my results come from measurement of subjectively
> experienced events, it is reasonable for me to suppose that another
> human's similar results come from the same source. But a computer's
> "mental" equipment is (at this point in time) sufficiently dissimilar
> [to] a human's that the above reasoning would break down at the point
> of "doing similar things to similar equipment with similar results",
> even if the procedures and results somehow did turn out to be identical.

First, I of course agree that people have similar experiences and similar brains, and that computers differ in both respects. But I don't consider an experience, or the report of an experience, to be a "measurement." If anything, all of me -- rather than part of me, used and experimented on by another part -- is the measuring device when I'm detecting a stimulus. After all, what's happening when I'm detecting an (astronomic) alpha: a measurement of a measurement? (The point about the computer was just meant to remind you that psychophysicists are just doing input/output sensitivity measurements, and that the same data could be generated by a computer-plus-transducer. But the difference between current computers and ourselves touches on more complex issues related to the TTT that needn't be raised here.)
The relevant factors are all there in simple one-trial detection: If I report a detection, there's absolutely no objective test of whether (1) I had a sensation at all, (2) I "measured" it accurately, or even (3) whether it's measurable at all (i.e., whether experience and physical magnitude are commensurable). My detection sensitivity in the face of inputs, on the other hand, is indeed objectively testable. No number of private experiments by experimenter/subjects can make a dent in this epistemic barrier (called the mind/body problem).

> Not true [that what we are actually measuring objectively is merely
> behavior]. As I have shown in my original posting, d' can be measured
> without there *being* any behavior prior to measurement. There is
> nothing in Harnad's reply to refute this.

It can't be done without presenting stimuli and feedback. And "behavior" refers to input/output relations. So there's a long string of real-time input involved in the covert experiment, followed by the report of a d' value. From that we can formulate the following behavioral description: That after so-and-so-many trials of such-and-such stimuli with such-and-such instructions, the subject reports X. Even when I'm myself the subject in such an experiment, that's how I would describe my findings, and those data are behavioral. This is no different, as I suggested, from a single detection trial. And the subject, of course, is switching hats during such an experiment; there's nothing magic about his behavioral silence during the covert calculations, any more than there is in the astronomer's, after he's gotten his telescope readings and performed calculations on them.

> Why [will the testability and validity of these hypotheses always be
> objectively independent of any experiential correlations (i.e., the
> presence or absence of consciousness)]? And how can this be true in
> cases when it is the conscious experience that is being measured?
These input/output sensitivity characteristics of human observers would look the same whether or not human subjects were conscious. They ARE conscious, and they ARE having experiences during the measurements, but it's not their experiences we (or they) are measuring, it's their sensitivity to stimuli. It feels, when I'm the subject, as if there's a close coupling between the two. But who am I to say? That's just a feeling. And feelings also seem, objectively speaking, incommensurable with physical intensities. The astronomer's detection has no such liability (except, of course, its subjective side -- "What it's like to detect an alpha," or what have you). Rather than forcing us to conclude that it's conscious experience that we're measuring in psychophysics, as Adam suggests, I think Occam's Razor (a methodological principle, after all) is dictating precisely the opposite.

> I would not accept as legitimate any psychological theory
> which appeared to contradict my conscious experience, and failed to
> account for the apparent contradiction. As far as I can tell, Steve's
> position means that he would not disqualify a psychological theory just
> because it happened to be contradicted by his own conscious experience.

That depends on what you mean by "contradicted conscious experience." I assume we're both willing to concede on hallucinations and illusions. I also reject radical behaviorism, which says that consciousness is just behavior. (I know that's not true.) I'd reject any theory that said I wasn't conscious, or that there was no such thing, or that it's "really" just something else that I know perfectly well it isn't. I'd also reject a theory that couldn't account for everything I can detect, discriminate, report and describe. But if a theory simply couldn't account for the fact that I have subjective experience at all, it wouldn't be contradicting my experience, it would just be missing it, bypassing it.
That's just what the methodological solipsism I recommend does. It is, in a sense, epistemologically incomplete -- it can't explain everything. Whether it's also ontologically incomplete depends on the (objectively untestable) question of whether the asymptotic model that passes the TTT is or is not really conscious. If it is, then the model has "captured" consciousness, even though the coupling cannot be demonstrated or explicated. If it is not, the model is ontologically incomplete. But, short of BEING that model, there's no way we can ever know. (I also think that turing-indistinguishability is an EXPLANATION of why there's this incompleteness.)

>>[SH:] If I were one of the [psychophysical] experimenters
>>and Adam Reed were the other, how could he get "objective
>>(observer-independent) results" on my experience and I on his? Of
>>course, if we make some (question-begging) assumptions about the fact
>>that the experience of our respective alter egos (a) exists, (b) is
>>similar to our own, and (c) is veridically reflected by the "form" of the
>>overt outcome of our respective covert calculations, then we'd have some
>>agreement, but I'd hardly dare to say we had objectivity.

> [AR:] These assumptions are not "question-begging": they are logically
> necessary consequences of applying Occam's razor to this situation (see
> above). And yes, I would tend to regard the resulting agreement among
> different subjective observers as evidence for the objectivity of their
> measurements.

I guess it'll have to be a standoff then. We disagree on what counts as objective -- perhaps even on what objective means. Also on which way Occam's Razor cuts.

> For measurement to be *measurement of behavior*, the behavior must be,
> in the temporal sequence, prior to measurement. But if the only overt
> behavior is the communication of the results of measurement, then the
> behavior occurs only after measurement has already taken place.
> So the measurement in question cannot be a measurement of behavior,
> and must be a measurement of something else. And the only plausible
> candidate for that "something else" is conscious experience.

If you're measuring, say, detection sensitivity, you're measuring input/output characteristics. It doesn't matter if these are trial-to-trial I/O/I/O etc., or just III...I/O. Only the behaviorists have made a fetish of overt performance. These days, it's safe to say that performance CAPACITY is what we're measuring, and that includes the capacity to do things covertly, as revealed in the final output, and inferrable therefrom. (Suppose you were checking a seismograph by looking at its monthly cumulations only: Would the long behavioral silence make the end-result any less overt and "behavioral"?) As I suggested in another module, cognitive science is just behaviorism-with-a-theory, at last. The theory includes attributing covert, unobservable processes to the head -- but not conscious experiences to the mind. We know those are there too, but for the (Occamian) reasons I've been discussing endlessly, they can't figure in our theories.

> Steve seems to be saying that the mind-body problem constitutes "a
> fundamental limit on objective inquiry", i.e. that this problem is *in
> principle* incapable of ever being solved. I happen to think that human
> consciousness is a fact of reality and, like all facts of reality, will
> prove amenable to scientific explanation. And I like to think that
> this explanation will constitute, in some scientifically relevant sense,
> a solution to the "mind-body problem". So I don't see this problem as a
> "fundamental limit".

I used to have that fond hope too. Now I've seen there's a deep problem inherent in all the existing candidates, and I've gotten an idea of what the problem is in principle (that turing-indistinguishability IS objectivity), so I don't see any basis for hope in the future (unless there is a flaw in my reasoning).
And, as Nagel has shown, the inductive scenario based on our long successful history in explaining objective phenomena simply fails to be generalizable to subjective ones. So I don't see the rational basis for Adam Reed's optimism. On the other hand, methodological epiphenomenalism is not all that bad -- after all, nothing OBJECTIVE is left out.

-- Stevan Harnad
(609) - 921 7771
{allegra, bellcore, seismo, rutgers, packard} !princeton!mind!harnad
harnad%mind@princeton.csnet

------------------------------

End of AIList Digest
********************

From in%@vtcs1 Mon Feb 23 00:40:36 1987
Date: Mon, 23 Feb 87 00:40:24 est
From: vtcs1::in%
To: ailist@sri-stripe.arpa
Subject: AIList Digest V5 #49
Status: R

AIList Digest            Sunday, 22 Feb 1987      Volume 5 : Issue 49

Today's Topics:
  Philosophy - Consciousness & Other Minds
  Scientific Method - Formalization in AI

----------------------------------------------------------------------

Date: 14 Feb 87 03:33:12 GMT
From: well!wcalvin@LLL-LCC.ARPA (William Calvin)
Subject: Re: More on Minsky on Mind(s)

Stevan Harnad replies to my Darwin Machine proposal for consciousness (2256@well.uucp) as follows:

> Summary: No objective account of planning for the future can give an
> independent causal role to consciousness, so why bother?
>
> wcalvin@well.UUCP writes:
>
>> Rehearsing movements may be the key to appreciating the brain
>> mechanisms [of consciousness and free will]
>
> But WHY do the functional mechanisms of planning have to be conscious?
> ...Every one of the internal functions described for a planning,
> past/future-oriented device of the kind Minsky describes (and we too
> could conceivably be) would be physically, causally and functionally EXACTLY
> THE SAME -- i.e., would accomplish the EXACT same things, by EXACTLY the same
> means -- WITHOUT being interpreted as being conscious. So what functional
> work is the consciousness doing? And if none, what is the justification
> for the conscious interpretation of any such processes...?
> Why bother? Why bother to talk about the subject at all? Because one hopes to understand the subject, maybe extend our capabilities a little by appreciating the mechanistic underpinning a little better. I am describing a stochastic-plus-selective process that, I suggest, accounts for many of the things which are ordinarily subsumed under the topic of consciousness. I'd like the reactions of people who've argued consciousness more than I have, who could perhaps improve on my characterization or point out what it can't subsume. I don't claim that these functional aspects of planning (I prefer to just say "scenario-spinning" rather than something as purposeful-sounding as planning) are ALL of consciousness -- they seem a good bet to me, worthy of careful examination, so as to better delineate what's left over after such stochastic-plus-selective processes are accounted for. But to talk about consciousness as being purely personal and subjective and hence beyond research -- that's just a turn-off to developing better approaches that are less dependent on slippery words. That's why one bothers. We tend to think that humans have something special going for them in this area. It is often confused with mere appreciation of one's world (perceiving pain, etc.) but there's nothing uniquely human about that. The world we perceive is probably a lot more detailed than that of a spider -- and even of a chimp, thanks to our constant creation of new schemata via word combinations. But if there is something more than that, I tend to think that it is in the area of scenario-spinning: foresight, "free will" as we choose between candidate scenarios, self-consciousness as we see ourselves poised at the intersection of several scenarios leading to alternative futures.
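The stochastic-plus-selective process described above can be caricatured in a few lines of code. This is a toy sketch of my own, not Calvin's model; the scenario features, the value weights, and the scoring scheme are all invented for illustration. Chance generates candidate scenarios; remembered values select among them.

```python
import random

# Toy sketch of a stochastic-plus-selective chooser (illustrative only;
# the features and memory-derived value weights below are invented).
VALUES = {"safe": 2.0, "novel": 1.0, "costly": -1.5}

def spin_scenario(rng):
    """The stochastic generator of novelty: a random candidate scenario."""
    return frozenset(f for f in VALUES if rng.random() < 0.5)

def evaluate(scenario):
    """Selection by one's memories: score a candidate against stored values."""
    return sum(VALUES[f] for f in scenario)

def choose(n_candidates=20, seed=0):
    """Spin many scenarios; the values, not chance, decide which survives."""
    rng = random.Random(seed)
    candidates = [spin_scenario(rng) for _ in range(n_candidates)]
    return max(candidates, key=evaluate)

best = choose()
print(sorted(best), evaluate(best))
```

The point of the sketch is only that "chance rules" is the wrong reading: the random generator proposes, but the stored values dispose, and the winner may issue in no overt action at all -- just a new memory.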
I have proposed a mechanistic neurophysiological model to get us started thinking about this aspect of human experience; I expect it to pare away one aspect of "consciousness" so as to better define, if anything, what remains. Maybe there really is a little person inside the head, but I am working on the assumption that such distributed properties of stochastic neural networks will account for the whole thing, including how we shift our attention from one thing to another. Even William James in 1890 saw attention as a matter of competing scenarios: "[Attention] is the taking possession by the mind, in clear and vivid form, of one out of what seem several simultaneously possible objects or trains of thought." To those offended by the notion that "chance rules," I would point out that it doesn't: like mutations and permutations of genes, neural stochastic events serve as the generators of novelty -- but it is selection by one's memories (often incorporated as values, ethics, and such) that determines what survives. Those values rule. We choose between the options we generate, and often without overt action -- we just form a new memory, a judgement on file to guide future choices and actions. And apropos chance, I cannot resist quoting Halifax: "He that leaveth nothing to chance will do few things ill, but he will do very few things." He probably wasn't using "chance" in quite the sense that I am, but it's still apt in my stochastic sense too. William H. Calvin BITNET: wcalvin@uwalocke University of Washington USENET: wcalvin@well.uucp Biology Program NJ-15 206/328-1192 or 543-1648 Seattle WA 98195 ------------------------------ Date: 12 Feb 87 19:12:48 GMT From: mcvax!ukc!reading!onion!minster!adt@seismo.css.gov Subject: Re: Harnad's epiphenomenalism In article <4021@quartz.Diamond.BBN.COM> aweinste@Diamond.BBN.COM (Anders Weinstein) writes: >Well, I don't think we ought to give this up so easily.
I would urge that >cognitivists *not* buy into the premise of so many of Harnad's replies: the >existence of some weird parallel universe of subjective experience. >(Actually, *multiple* such universes, one per conscious subject, though of >course the existence of more than my own is always open to doubt.) We should >recognize no such private worlds. The most promising prospect we have is that >conscious experiences are either to be identified with functional states of >the brain or eliminated from our ultimate picture of the world. How this >reduction is to be carried out in detail is naturally a matter for >empirical study to reveal, but this should remain one (distant) goal of >mind/brain inquiry. > >Anders Weinstein aweinste@DIAMOND.BBN.COM >BBN Labs, Cambridge MA Why is it necessary to assert that there are no subjective universes? All that is necessary is that everyone, in their own subjective universe, agrees on the definition of consciousness as they perceive it. Eliminating conscious experiences from our ultimate picture of the world sounds like throwing away half the results so that the theory fits. The analogy of our understanding of gold in terms of its atomic structure is a useful one but does not require the rejection of subjective universes. If objectivism is taken to its limit as above, then surely it must be possible to define "beautiful" in terms of physical states of mind, or "beautiful" should be eliminated from our ultimate picture of the world, OR "beautiful" is not a conscious experience. I would be interested to know which of these possibilities you support. ------------------------------ Date: 14 Feb 87 21:57:45 GMT From: brothers@topaz.rutgers.edu (Laurence R. Brothers) Subject: Submission for mod-ai Path: topaz!brothers From: brothers@topaz.RUTGERS.EDU (Laurence R.
Brothers) Newsgroups: mod.ai Subject: Re: Other Minds Message-ID: <9245@topaz.RUTGERS.EDU> Date: 14 Feb 87 21:57:45 GMT References: <8702132202.AA01947@BOEING.COM> Organization: Rutgers Univ., New Brunswick, N.J. Lines: 49 So...? I think you've basically restated a number of properties of intelligence which AI researchers have been exploring for some time, with varying degrees of success. There are two REAL reasons why you can't build an "intelligent" machine today: 1) Since no one really knows how people think, we can't build machines which accurately model ourselves. 2) Current machines do not have anything like the kind of computing power necessary for intelligence. Ray@Boeing says: >Manipulation of symbols is insufficient by itself to duplicate human >performance; it is necessary to treat the perceptions and experiences the >symbols *symbolize*. Put a symbol for red and a symbol for blue in a pot, >and stir as you will, there will be no trace of magenta. Look, manipulation of symbols by a program is analogous to manipulation of neural impulses by a brain. When you reduce far enough, EVERYTHING is typographical/syntactical. The neat thing about brains is that they manipulate so MANY symbols at once. General arguments against standard AI techniques are all well and good (viz. Hofstadter's position), but keep in mind that while mainstream AI has not produced so much wonderful stuff, the old neural-net research was even less impressive. My own view regarding true machine intelligence is that there is no particular reason why it's not theoretically possible, but given an "intelligent" machine, one should not expect it to be able to do anything weird like passing a Turing Test. The hypothetical intelligent machine won't be anything like a human -- different architecture, different i/o bandwidths, different physical manifestation, so it is philosophically deviant to expect it to emulate a human.
Anyhow, as a putative AI researcher (so I'm only 1st year, so sue me), it seems to me that decades of work have to be done on both hardware and cognitive modeling before we can even set our sights on HAL-9000.... Give me another ring when those terabyte RAM, femtosecond CAML cycle optical computers come out -- until then the entire discussion is numinous.... -- Laurence R. Brothers brothers@topaz.rutgers.edu {harvard,seismo,ut-sally,sri-iu,ihnp4!packard}!topaz!brothers "The future's so bright, I gotta wear shades!" ------------------------------ Date: Mon, 16 Feb 87 18:50:21 n From: DAVIS%EMBL.BITNET@wiscvm.wisc.edu Subject: reply to harnad I'm afraid that Stevan Harnad still appears not to grasp the irrelevance of asking 'WHY' questions about consciousness. > ...I am not asking a teleological question or even an evolutionary one. > [In prior iterations I explained why evolutionary accounts of the origins > and "survival value" of consciousness are doomed: because they're turing- > indistinguishable from the IDENTICAL selective-advantage scenario minus > consciousness.] Oh dear. In my assertion that there is a *biological* dimension to the current existence (or illusion) of consciousness, I had hoped that Harnad would understand the idea of evolutionary events being 'frozen-in'. Sure -- there is no advantage in a conscious system doing what can be done unconsciously. BUT, and it's a big but, if the system that gets to do trick X first *just happens* to be conscious, then all future systems evolving from that one will also be conscious. This is true of all aspects of biological selection, and would be true in any context of natural selection operating on an essentially random feature generator. There need be NO 'why' as to the fact that consciousness is with us now -- there is every reason to suppose that we are looking at a historical accident that is frozen-in by the irreversibility of a system evolving in a biological context.
In fact, it may not even be an accident -- when you consider the sort of complexity involved in building a 'turing-indistinguishable' automaton, versus the slow, steady progress possible with an evolving, conscious system, it may very well be that the ONLY reason for the existence of conscious systems is that they are *easier* to build within an evolutionary, biochemical context. Hence, we have no real reason to suppose that there is a 'why' to be answered, unless you have an interest in asking 'why did my random number generator give me 4.5672?'. Consciousness appears to be with us today - the > justification for the conscious interpretation of the "how" < (Harnad) is simply this: - as individuals we experience self-consciousness, - other systems' behaviour is so similar to our own that we may reasonably make the assumption of consciousness there too, - the *a priori* existence of consciousness is supported by (i) our own belief in our own experience and hence (ii) the evolutionary parallels with other biological features such as the pentadactyl limb, globin and histone structures and the use of DNA. Voila -- Occam's razor meets the blind watchmaker, and gives us conscious machines, not because there is any reason 'why' this should be so, but just because it worked out like that. Like it -- or lump it! As for the question of knowledge & consciousness: I did not intend the word 'know' to be used in its epistemological sense, merely to point out that our VAXcluster has access to information, but (appears not to) KNOW anything. The mystery of the 'C-1' is that we can be aware, that it is 'like something to be us', period. We don't know how yet, and we will probably never know why beyond the likelihood of our ancestral soup bowl being pretty good at coming up with bright ideas, like us! (no immodesty intended here.....) regards, Paul Davis netmail: davis@embl.bitnet wetmail: embl, postfach 10.2209, 6900 heidelberg, west germany petmail: homing pigeons to ......
------------------------------ Date: Sun, 15 Feb 87 17:50:00 EST From: Raul.Valdes-Perez@B.GP.CS.CMU.EDU Subject: Formalization in AI (Not Philosophy) I believe it is wrong to say that the importance of formalization to AI is overstated; formalization is our secret weapon. Let's say that AI is the science of codifying human knowledge in an effective manner, where by effective is meant able to effect a result, rather than, say, listing on paper and hanging in a museum. Our secret weapon is formalization by embedding knowledge in a computer program, in accordance with our theories of how best to organize the embedding. We then run the program to test our theories. This embedding is a formalization; we are able to discover qualitative properties of the knowledge and organization by syntactic manipulation, i.e., execution of the computer program. These qualitative properties would not otherwise be discovered by us because of our limited capacity to sustain complex thought. Programming may not seem formal, because few theorems follow from its exercise. This difficulty is due to our programming languages, which lack useful mathematical properties. Our resulting insights are qualitative; nevertheless they are achieved by formalization. My conclusion is that everyone in AI believes in formalization, whether he knows it or not. -- Raul E. Valdes-Perez -- -- CMU CS -- ------------------------------ Date: Mon, 16 Feb 87 07:41:29 PST From: ames!styx!lll-lcc!ihnp4!hounx!kort@cad.Berkeley.EDU Subject: Re: Other Minds Ray Allis has brought up one of my favorite subjects: the creation of an artificial mind. I agree with Ray that symbol manipulation is insufficient. In last year's discussion of the Chinese Room, we identified one of the shortcomings of the Room: it was unable to learn from experience and tell the stories of its own adventures. The cognitive maps of an artificial mind are the maps and models of the external world.
It is one thing to download a map created by an external mapmaker. It is quite another thing to explore one's surroundings with one's senses and construct an internal representation which is analogically similar to the external world. An Artificial Sentient Being would be equipped with sensors (vision, audition, olfaction, tactition), and would be given the goal of exploring its environment, constructing an internal map or model of that environment, and then using that map to navigate safely. Finally, like Marco Polo, the Artificial Sentient Being would describe to others, in symbolic language, the contents of its internal map: it would tell its life story. I personally would like to see us build an Artificial Sentient Being who was able to do Science. That is, it would observe reality and construct accurate theories (mental models) of the dynamics which governed external reality. Suppose we had two such machines, and we set them to explore each other. Would each build an accurate internal representation of the other? (That is, could a Turing Machine construct a mathematical model of (another) Turing Machine?) Would the Sentient Being recognize the similarity between itself and the Other? And in seeing its soul-mate, would it come to know itself for the first time? Barry Kort --- -- Barry Kort ...ihnp4!houxm!hounx!kort A door opens. You are entering another dementia. The dementia of the mind. ------------------------------ End of AIList Digest ******************** From in%@vtcs1 Mon Feb 23 00:40:58 1987 Date: Mon, 23 Feb 87 00:40:40 est From: vtcs1::in% To: ailist@sri-stripe.arpa Subject: AIList Digest V5 #50 Status: R AIList Digest Sunday, 22 Feb 1987 Volume 5 : Issue 50 Today's Topics: Philosophy - Consciousness & Other Minds ---------------------------------------------------------------------- Date: 17 Feb 87 20:15:29 GMT From: "Col. G. L.
Sicherman" Subject: Re: artificial minds In article <8702132202.AA01947@BOEING.COM>, ray@BOEING.COM (Ray Allis) writes: > ... Homo Sap.'s > distinguished success among inhabitants of this planet is primarily due > to our ability to think. ... Success is relative! Cockroaches are successful too, for quite different reasons. And our own success is questionable, considering how many of us starve to death. Try explaining _that_ to a cockroach. Biologically, our chief advantages over other species are erect posture and prehensile hands. Abstract thought is only ancillary; other species lack it mainly because they cannot use it. > and it is difficult > for me to imagine a goal more relevant than improving the chances for > survival by increasing our ability to act intelligently. Well said. But this is an argument for using computers as tools, and it is seldom true that tools ought to be designed to resemble the human components that they extend. Would you use a hammer that looks like a fist? Or wear a shoe with toes? Why try to endow a lump of inorganic matter with the soul of a human being? You don't yet know what your own mind is capable of. Besides, if you do produce an intelligent computer, it may not like you! -- Col. G. L. Sicherman UU: ...{rocksvax|decvax}!sunybcs!colonel CS: colonel@buffalo-cs BI: colonel@sunybcs, csdsiche@ubvms ------------------------------ Date: 19 Feb 87 17:13:11 GMT From: princeton!mind!harnad@rutgers.rutgers.edu (Stevan Harnad) Subject: More on the functional irrelevance of the brain to mind-modeling "CUGINI, JOHN" wrote on mod.ai: > The Big Question: Is your brain more similar to mine than either > is to any plausible silicon-based device? That's not the big question, at least not mine. Mine is "How does the mind work?" To answer that, you need a functional theory of how the mind works, you need a way of testing whether the theory works, and you need a way of deciding whether a device implemented according to the theory has a mind. 
That's what I proposed the formal and informal TTT for: testing and implementing a functional theory of mind. Cugini keeps focusing on the usefulness of "presence of `brain'" as evidence for the possession of a mind. But in the absence of a functional theory of the brain, its superficial appearance hardly helps in constructing and testing a functional theory of the mind. Another way of putting it is that I'm concerned with a specific scientific (bioengineering) problem, not an exobiological one ("Does this alien have a mind?"), nor a sci-fi one ("Does this fictitious robot have a mind?"), nor a clinical one ("Does this comatose patient or anencephalic have a mind?"), nor even the informal, daily folk-psychological one ("Does this thing I'm interacting with have a mind?"). I'm only concerned with functional theories about how the mind works. > A member of an Amazon tribe could find out, truly know, that light > switches cause lights to come on, with a few minutes of > experimentation. It is no objection to his knowledge to say that he > has no causal theory within which to embed this knowledge, or to > question his knowledge of the relevance of the similarities among > various light switches, even if he is hard-pressed to say anything > beyond "they look alike." Again, I'm not concerned with informal, practical, folk heuristics but with functional, scientific theory. > Now, S. Harnad, upon your solemn oath, do you have any serious > practical doubt, that, in fact, > 1. you have a brain? > 2. that it is the primary cause of your consciousness? > 3. that other people have brains? > 4. that these brains are similar to your own My question is not a "practical" one, but a functional, scientific one, and none of these correlations among superficial appearances help. > how do you know that two performances > by two entities in question (a human and a robot) are relevantly > similar? What is it precisely about the performances you intend to > measure? 
How do you know that these are the important aspects? > ...as I recall, the TTT was a kind > of gestalt you'll-know-intelligent-behavior-when-you-see-it test. > How is this different from looking at two brains and saying, yeah > they look like the same kind of thing to me? Making a brain look-alike is a trivial task (they do it in Hollywood all the time). Making a (TTT-strength) behavioral look-alike is not. My claim is that a successful construction of the latter is as close as we can hope to get to a functional understanding of the mind. There's no "measurement" problem. The data are in. Build a robot that can detect, discriminate, identify, manipulate and describe objects and events and can interact linguistically indistinguishably from the way we do (as ultimately tested informally by laymen) and you'll have the problem licked. As to "relevant" similarities: Perhaps the TTT is too exacting. TOTAL human performance capacity may be more than what's necessary to capture mind (for example, nonhuman species and retarded humans also have minds). Let's say it's there to play it safe; to make sure we haven't left anything relevant out; in any case, there will no doubt be many subtotal way-stations on the long road to the asymptotic TTT. The brain's another matter, though. Its structural appearance is certainly not good enough to go on. And its function is an ambiguous matter. On the one hand, its behavioral capacities are among its functional capacities, so behavioral function is a subset of brain function. But, over and above that, we do not know what implementational details are relevant. The TTT could in principle be beefed up to demand not only behavioral indistinguishability, but anatomical, physiological and pharmacological indistinguishability.
I'd go for the behavioral asymptote first though, as the most likely criterion of relevance, before adding on implementational constraints too -- especially because those implementational details will play no role in our intuitive judgments about whether the device in question has a mind like us, any more than they do now. Nor will they significantly increase the objective validity of the (frail) TTT criterion itself, since brain correlates are ultimately validated against behavioral correlates. My own guess, though, is that our total performance capacity will be as strong a hardware constraint as is needed to capture all the relevant functional similarities. > Just a quick pout here - last December I posted a somewhat detailed > defense of the "brain-as-criterion" position... > No one has responded directly to this posting. I didn't reply because, as I indicated above, you're not addressing the same question I am (and because our exchanges have become somewhat repetitive). -- Stevan Harnad (609) - 921 7771 {allegra, bellcore, seismo, rutgers, packard} !princeton!mind!harnad harnad%mind@princeton.csnet ------------------------------ Date: Fri, 20 Feb 87 15:23:32 pst From: Ray Allis Subject: Other Minds Hello? Where'd everyone go? Was it something I said? I have a couple of things to say, yet. But fear not, this is part 2 of 2, so you won't have me cluttering up your mail again in the near future. This is a continuation of my 2/13/87 posting, in which I am proposing a radical paradigm shift in AI. [The silence on the Arpanet AIList is due to my saving the philosophical messages for a weekly batch mailing. This gives other topics a chance and reduces the annoyance of those who don't care for these discussions. -- KIL] Our common sense thought is based on and determined by those things which are "sensible" (i.e. that we can sense). "The fog comes in on little cat feet" [Sandburg]. Ladies and gentlemen of the AI community, you are not even close!
Let me relax the criteria a little, take this phrase, "a political litmus test". How do you expect a machine to understand that without experience? Nor can you ever *specify* enough "knowledge" to allow understanding in any useful sense. The current computer science approach to intelligence is as futile as the machine translation projects of the 60's, and for the same reason; both require understanding on the part of the machine, and of that there isn't a trace. Obviously symbolic thinking is significant; look at the success of our species. There are two world-changing advantages to symbolic thought. One advantage is the ability to think about the relationships among things and events without the confusing details of real things and events; "content-free" or "context-independent" "reasoning" leading to mathematics and logic and giving us a measure of control over our environment, and our destiny. Symbol systems are tools which assist and enhance human minds, not replacements for those minds. Production rules are an externalization of knowledge. They are how we explain our behavior to other people. The other advantage lies in the fundamental difference between "symbolize" and "represent". Consider how natural language works. Through training, you come to associate "words" with experiences. The immediate motive for this accomplishment is communication; when you can say "wawa" or "no!", the use of language becomes your best tool for satisfying your desires and needs. But a more subtle and significant thing happens. The association between any symbol and that which it symbolizes is arbitrary, and imprecise. Also, in any human experience, there is *so much* context that it is practically the case that every experience is associated with every other, even if somewhat indirectly. So please imagine a brain, in some instantaneous state of excitation due to external stimuli. Part of the "context" (or experience) will be (representations of) symbols previously associated. 
Now imagine the internal loop which presents internal events to the brain as if they were external events, presenting those symbols as if you "saw" or "heard" them. But, since the association is imprecise, the experience evoked by those symbols will very likely not be identical to that which evoked the symbols. A changed pattern of activity in the nervous system will result, possibly with different associated symbols, in which case the cycle repeats. The function of all this activity is to "converge" on the "appropriate" behavior for the organism, which is to say to continue the organism's existence. There is extreme "parallelism"; immense numbers of events are occurring simultaneously, and all associations are stimulated "at once". Also, none of this is "computation" in the traditional sense; it is the operation of an analog "device", which is the central nervous system, in its function of producing "appropriate" behavior. Imagine an experience represented in hundreds of millions of CNS connections. Another experience, whatever the source, (that is from external sensors, from memory or wholly created) will be represented in the same (identical) neurons, in point-for-point registration, all half-billion points at once. Any variation in correspondence will be immediately conspicuous. The field (composite) is available for the same contrast enhancement and figure/ground "processing" as in visual (or any) input. Multiple experiences will reinforce at points of correspondence, and cancel elsewhere. Tiny children are shown instances of things; dogs, kittens, cows, fruits, and expected to generalize and to demonstrate their generalization, so adults can correct them if necessary. Generalization is the shift in figure / ground percentage which comes from "thresholding" out the weaker sensations. The resultant is the "intersection" of qualities of two or more experiences. 
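The overlay-and-threshold generalization just described can be caricatured in a few lines. This is a toy sketch of my own, not Allis's mechanism; the feature names, intensities, and threshold are invented. Several "experiences" are superimposed in point-for-point registration; qualities shared across them reinforce and survive the threshold, while idiosyncratic details are diluted away, leaving the "intersection" of qualities.

```python
# Toy sketch of overlay-and-threshold generalization (illustrative only;
# the feature names, intensities, and threshold below are invented).
# Each "experience" activates the same set of feature detectors
# (point-for-point registration), with intensities in [0, 1].
dog1 = {"four_legs": 0.9, "fur": 0.8, "barks": 0.9, "brown": 0.7}
dog2 = {"four_legs": 0.8, "fur": 0.9, "barks": 0.7, "spotted": 0.6}
dog3 = {"four_legs": 0.9, "fur": 0.7, "barks": 0.8, "small": 0.5}

def generalize(experiences, threshold=0.5):
    """Superimpose experiences, then threshold out the weak points."""
    features = set().union(*experiences)
    n = len(experiences)
    # Qualities present in every experience reinforce; the rest dilute.
    composite = {f: sum(e.get(f, 0.0) for e in experiences) / n
                 for f in features}
    return {f for f, strength in composite.items() if strength >= threshold}

concept_dog = generalize([dog1, dog2, dog3])
print(sorted(concept_dog))  # → ['barks', 'four_legs', 'fur']
```

In the sketch, "brown", "spotted", and "small" each appear in only one experience, so they fall below threshold and drop out of the generalization -- the "figure" that remains is just what the instances share.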
This whole operation, comparing millions of sensation details with corresponding sensation details in another experience, can happen in parallel in a very few cycles or steps. Informed by Maturana's ideas of autopoietic systems, mind can be considered as an emergent phenomenon of the complexity which has evolved in the central nervous systems of Terrestrial organisms (that's us). This view has fundamental philosophical implications concerning whether minds are likely to exist elsewhere in the Universe due to "natural causes", and whether we can aspire to create minds. Much "thinking" is of the sort described by the Nobel Prize winner in "The Search for Solutions" who thinks of DNA as a rope which, when stretched, will break at certain "weak" points. That "tool", the visualization, is guided by physical experience, his personal experience of ropes and their behavior. Einstein said he often thought in images; certainly his thought was guided, and perhaps the results judged, by his personal experience with the things represented. We also need "... the ability to generalize, the ability to strip to the essential attributes of some actor in the process..." "We are not ready to write equations, for the most part, and we still rely on mechanical and chemical or other physical models." Joshua Lederberg - Nobel Prize geneticist - President of Rockefeller U. "The Search for Solutions". The internal loop can use motor action (intents) to re-stimulate associated sensory input (results), and entire sequences of sensory input to motor output to sensory input can occur without interacting with the external environment. Here is the basis for imagination and planning. Experiences need not be original; they may be created entirely from abstractions. And this is called *imagination*. The ability to construct internal imaginary events and situations is fundamental to symbolic communication: where symbols evoke and are derived from internal state.
Planning is the process of reviewing a set of experiences, which may be recalled, or may be constructed imaginary experiences. Planning requires imagination (see above) of actions and consequences. The success and effectiveness of the resulting plan depends on the quality and quantity of experiences available to the planner. He benefits from a rich repertoire of experience from which to choreograph his dance of events. The novelty in the present theory is that most of the planning process is essentially and necessarily analog in nature, and symbol processing is only part of it. Symbols are critical to make the process explicit, but the planning process itself is not only, or even primarily, symbol processing. If we agree that our minds are an effect of our CNS, then we must accept that the structure of our mind is determined by the structure of our CNS. Sure there's a "deep structure" in linguistic ability; it's our physical implementation (embodiment). The "meaning" of language is that state which it evokes in us. "A new meaning is born whenever the mind uses a word or other symbol in a new way. If you think of a key as something to open a lock and then speak of hard work as the key to success, you are using the word key in a new way. It no longer means simply a metal implement for opening a lock; it has acquired a much richer sense in your mind: "necessary prerequisite for attaining a desired goal." If the word key were not free to shift its sense, the new concept probably could not emerge. All thinkers, whether artists, philosophers, scientists, businessmen, or laborers, can create new thoughts if they use words in new ways." ["The Mind Builder", Richard W. Samson, 1965.] Samson identified seven mental "faculties" which make an interesting list of target capabilities for "intelligent machines". These are: 1. Words: We let words (together with numbers and other symbols) mean things. 2. Thing Making: We make mental pictures of things when we interpret sensations. 
3. Qualification: We notice the qualities of things: how things are alike and how they differ. 4. Classification: We mentally sort things into classes, types or families. 5. Structure Analysis: We observe how things are made: break structural wholes into component parts. 6. Operation Analysis: We notice how things happen: in what successive stages. 7. Analogy: We see how seemingly unconnected situations are alike, forming parallel relations in different "worlds of thought". When you are ready, try your system on the SAT test: Which word (a, b, c, or d) best completes the sentence, in your opinion? There is no "right" answer; pick the word which seems best to you. Poverty and hatred are ---------- of war. (a) roots (b) leaves (c) seeds (d) fruits We might be well advised to imitate a real example of intelligence (ours). Later we can improve on the implementation, and possibly the performance. Certainly we will use mathematics to analyze and predict the system's behavior; or rather subsets and abstractions, models of the system. But we may not be able to construct any model less complex than the system itself, which will produce the desired behavior; its behavior must be understood through simulation. "Computational irreducibility is a phenomenon that seems to arise in many physical and mathematical systems. The behavior of any system can be found by explicit simulation of the steps in its evolution. When the system is simple enough, however, it is always possible to find a short cut to the procedure: once the initial state of the system is given, its state at any subsequent step can be found directly from a mathematical formula." "For a system such as (illus.), however, the behavior is so complicated that in general no short-cut description of the evolution can be given. Such a system is computationally irreducible, and its evolution can effectively be determined only by the explicit simulation of each step.
It seems likely that many physical and mathematical systems for which no simple description is now known are in fact computationally irreducible. Experiment, either physical or computational, is effectively the only way to study such systems." [Stephen Wolfram, Computer Software in Science and Mathematics, Scientific American, Sept., 1984] A mind is an effect which probably cannot be sustained at a lesser level of complexity than in our own case; any abstraction which simplifies will also destroy the very capabilities we wish to understand. There are trillions of components and connections in the human brain. No reasonable person can expect to model a mind in any significant way using a few tens or hundreds of components. Since there is a threshold of complexity below which the behavior of interest will not occur, and the complexity of models is generally deliberately reduced below this level, models will not produce the phenomena of interest. "Yet recall John von Neumann's warning that a complete description of how we perceive may be far more complicated than this complicated process itself - that the only way to explain pattern recognition may be to build a device capable of recognizing pattern, and then, mutely, point to it. How we think is still harder, and almost certainly we are not yet breaking this problem down in solvable form." Horace Freeland Judson, "The Search for Solutions", 1980. In spite of the tone of that last quote, I believe we can and should build, now, things which will prove or disprove these ideas, so we can either quit wasting energy or get going on building other minds. I'm not going to be at this mail address after March 1, but probably someone will forward my mail. The Boeing Advanced Technology Center just closed down all its robotics projects, including mobility and stereo vision, my work in induction, and all other work not "directly supporting Boeing programs". So twenty-plus of us are scrambling to find other places to work. 
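[Editorial illustration, not part of the original posting: the computational irreducibility Wolfram describes is easy to see with an elementary cellular automaton. In the sketch below, which assumes nothing beyond the quoted description, the state at step t is obtained only by running all t steps explicitly; the rule number (30) and grid width are arbitrary choices for the demo.]

```python
def step(cells, rule=30):
    """One step of an elementary cellular automaton (wrap-around edges)."""
    n = len(cells)
    out = []
    for i in range(n):
        left, center, right = cells[i - 1], cells[i], cells[(i + 1) % n]
        neighborhood = (left << 2) | (center << 1) | right  # value 0..7
        out.append((rule >> neighborhood) & 1)              # look up rule bit
    return out

def evolve(cells, steps, rule=30):
    """Explicit simulation: in general the only way to reach step `steps`."""
    for _ in range(steps):
        cells = step(cells, rule)
    return cells

# Single live cell in the middle of a 31-cell row, evolved 15 steps.
row = [0] * 31
row[31 // 2] = 1
print("".join("#" if c else "." for c in evolve(row, 15)))
```

For simple rules a closed-form shortcut exists (an all-zero row stays all zero forever), but for rules like 30 no such formula is known; one simulates.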
I don't know what access to any networks I might have next month. Ray ------------------------------ End of AIList Digest ******************** From in%@vtcs1 Tue Feb 24 00:40:45 1987 Date: Tue, 24 Feb 87 00:40:38 est From: vtcs1::in% To: ailist@sri-stripe.arpa Subject: AIList Digest V5 #51 Status: R AIList Digest Monday, 23 Feb 1987 Volume 5 : Issue 51 Today's Topics: News - Impact of Artificial Intelligence, Philosophy - Design Stance on Consciousness, Review - Society of Mind ---------------------------------------------------------------------- Date: Sat, 21 Feb 1987 09:59 CST From: Laurence L. Leff Subject: Impact of Artificial Intelligence "Artificial Intelligence" is the computer journal with the highest impact factor according to the latest issue of the Journal Citation Reports put out by the Institute for Scientific Information. It beats the runner-up, IEEE Transactions on Pattern Analysis, by a good margin. The Impact Factor is a measure of how often people cite the journal and is proportional to the number of citations per article published. I.e., we are looking at how often someone deems an article in "Artificial Intelligence" of sufficient significance to cite it in their own article. To put this in perspective, here are the numbers for some familiar computer journals.

  Artificial Intelligence    3.914
  IEEE T Pattern Anal        2.374
  IEEE T Computers           1.654
  Comput Surv                1.545
  Commun ACM                 1.528
  SIAM J Comput              1.349
  Int J Robot Res            1.314
  J Assoc Comput Mach        1.282
  Comput Vision Graph        1.170
  IEEE T Syst Man Cyb        1.168
  Computer                   1.161
  Pattern Recogn             1.092
  IBM J Res Dev              1.087
  IEEE T Software Eng        0.963
  Acta Inform                0.627
  J Comput Syst Sci          0.613
  J Robotic Syst             0.600
  Int J Syst Sci             0.428
  Software Pract Exper       0.253
  Kybernetika                0.171
  AT&T Tech Journal          0.080

So who is citing "Artificial Intelligence", you might ask.
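[Editorial illustration, not part of the original posting: the impact factor is just citations per article over a two-year window. The counts below are invented for the example, not ISI's actual data.]

```python
def impact_factor(citations_to_prev_two_years, articles_prev_two_years):
    """Citations in a given year to a journal's articles from the two
    preceding years, divided by the number of such articles."""
    return citations_to_prev_two_years / articles_prev_two_years

# Hypothetical journal: 46 articles published in 1983-84, cited 180
# times during 1985.
print(round(impact_factor(180, 46), 3))  # -> 3.913
```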
Of a total of 924 citations in 1985 to Artificial Intelligence, here is a breakdown of some of the frequent and interesting citing journals:

  Artificial Intelligence   103
  IEEE T Pattern Analysis    62
  Comput Vision Graph        50
  Int J Man Mach Stud        37
  Comput Aided Design        29
  P Soc Photo-Opt Inst       25
  TSI-Tech Science Inf       23
  Comput Math Appl           20
  IEEE T Syst Man Cybern     19
  Lect Notes Comput Sc       18
  Comput Surv                15
  J Assoc Comput Mach        12
  J Symb Comput               7
  Environ Plann B             6

It is also interesting to note whom authors publishing in "Artificial Intelligence" cite. When one compares the list of items cited within "Artificial Intelligence" to that in other fields, one is impressed by the importance of the conference literature to artificial intelligence. Of a total of 997 citations by "Artificial Intelligence" articles, here are the numbers for some of the more noteworthy sources of these citations:

  Artificial Intelligence       103
  IJCAI and AAAI conferences     76
  P Int S Robotics Res           33
  Mach Intell                    23
  Commun ACM                     16
  Cognitive Science              15
  Comput Surv                     7
  Handbook of Artificial Int.     6

------------------------------ Date: 22 Feb 87 1150 PST From: John McCarthy Subject: consciousness This discussion of consciousness considers AI as a branch of computer science rather than as a branch of biology or philosophy. Therefore, it concerns why it is necessary to provide AI programs with something like human consciousness in order that they should behave intelligently in certain situations important for their utility. Of course, human consciousness presumably has accidental features that there would be no reason to imitate and other features that are perhaps necessary consequences of its having evolved that aren't necessary in programs designed from scratch. However, since we don't yet understand AI very well, we shouldn't jump to conclusions about what features of consciousness are unnecessary in order to have the intellectual capabilities humans have and that we want our programs to have.
Consciousness has many aspects, and here are some. 1. We think about our bodies as physical objects to which the same physical laws apply as apply to other physical objects. This permits us to predict the behavior of our bodies in certain situations, e.g. what might break them, and also permits us to predict the behavior of other physical objects, e.g. we expect them to have similar inertia. AI systems should apply physics to their own bodies to the extent that they have them. Whether they will need to use the analogy may depend on what knowledge we choose to build in and what we will expect them to learn from experience. 2. We can observe in a general way what we have been thinking about and draw conclusions. For example, I have been thinking about what to say about consciousness in this forum, and at present it seems to be going rather well, so I'll continue composing my comment rather than think about some specific aspect of consciousness. I am, however, concerned that when I finish this list I may have left out important aspects of consciousness that we shall want in our programs. This kind of general observation of the mental situation is important for making intellectual plans, i.e. deciding what to think about. Very intelligent computer programs will also need to examine what they have been thinking about and reason about this information in order to decide whether their intellectual goals are achievable. Unfortunately, AI isn't ready for this yet, because we must solve some conceptual problems first. 3. We compare ourselves intellectually with other people. The concepts we use to think about our own minds are mainly learned from other people. As with information about our bodies, we infer from what we observe about ourselves to the mental qualities of other people, and we also learn about ourselves from what we learn about others. Insofar as programs are made similar to people or other programs, they may also have to learn from interaction. 4.
We have goals about our own mental functioning. We would like to be smarter, nicer and more content. It seems to me that programs should also have such meta-goals, but I don't see that we need to make them the same as people's. Consider that many people have the goal of being more rational, e.g. less driven by impulses. When we find ourselves with circular preferences, e.g. preferring A to B, B to C and C to A, we chide ourselves and try to change. A computer program might well discover that its heuristics give rise to circular preferences and try to modify them in service of its grand goals. However, while people are originally not fully rational, because our heritage provides direct connections between our disparate drives and the actions that achieve the goals they generate, there seems to be no reason to imitate all these features in computer programs. Thus our programs should be able to compare the desirability of future scenarios more readily than people do. 5. Besides our direct observations of our own mental states, we have a lot of general information about them. We can predict whether problems will be easy or difficult for us and whether hypothetical events will be pleasing or not. Programs will require similar capabilities. Finally, it seems to me that the discussion of consciousness in this digest has been too much an outgrowth of the ordinary traditional philosophical discussions of the subject. It hasn't sufficiently been influenced by Dennett's "design stance". I'm sure that more aspects of human consciousness than I have been able to list will require analogs in robotic systems. We should also be alert to provide forms of self-observation and reasoning about the program's own mental state that go beyond those evolution has given us. ------------------------------ Date: Sun, 22 Feb 87 20:21 EST From: ANK%CUNYVMS1.BITNET@wiscvm.wisc.edu Subject: N.Y.Times review of SOCIETY OF MIND Today's (22 Feb.
87) New York Times Book Review section carried a full-page review of Minsky's "Society of Mind" {339 pp., Simon & Schuster, $19.95} by James W. Lance, a professor of neurology from Australia. Since the beginning of this year, over a score of people have devoted a cumulative 100+ hours debating Marvin's comments on consciousness. With that as a backdrop, I wanted to see what Dr. Lance had to say! Well, nothing much that readers of AI-Digest do not already know. In all fairness to the reviewer, I must say he did a good job of filling a page with bits and pieces from the book. But what he did not accomplish is to critique the book as a scholarly work (am I right?... well, many may think not). The New York Times, I must complain, has not been very serious in the past two years when it comes to reviews of such topics, in comparison to other scientific books that pass through their tables. What then is my gripe? I think "consciousness" is a very serious matter. Furthermore, the classical mind-body question will always recur every few decades in the light of a new philosophical construct. Therefore to attribute the onus of assigning the *definition* of "consciousness" to Minsky's posting in AI Digest is wrong. I did not see much debate when PDP was published by M.I.T. Press. Listen folks! I think there is more mileage to be got from the two volumes of Parallel Distributed Processing than in "Society....". I rather suspect that we in academia expect great architecture every Monday morning. Similarly, Minsky's book is *not* supposed to be taken as the final word or *official* pronouncement on the "mind-brain debate". The purpose, as I understood it from reading the book, was to generate ideas and reflect on the homilies and aphorisms that the book is so full of. It is true that many common-day phenomena relating to memory are outside many models of memory. Let me illustrate: "I forgot my telephone number of two years ago in Cambridge....
and last week, right in the middle of Fifth Ave. and 42nd, it came as a flash.." I do not think many theories of memory explain more than one or the other of these problems, and none, in the classical sense, addresses all the issues. (Yes! Not even the latest theories. That's the complexity of studying Man and his mind using empirical tools.) The point I wish to make is simple. Many of us (graduate-level students) could get many a germ of an idea from his book. Let's keep it at that. All many of us need is a metaphor or a notion, and off we go. His book does that rather neatly. It should be required reading, along with Dreyfus's, if we are to go beyond satisfying our Ph.D. requirements. Let me paraphrase the last paragraph of Lance's review: "This is a disturbing book for a neurologist to read, because the summation of mathematics + psychology + philosophy still does not approach the complexities of neurology. And yet the text pursues an exciting trail to the elusive goal." Sure enough, I guess Minsky did not expect to give one either (or so I presume..). I'm sure it is easy for Harnad to reduce all "books in Psychology, Philosophy, Biology....theatre, music..." to the MIND-BODY problem. Not that I personally mind, but it is better that we limit the domain. Finally, I wonder whether an *intensional realist* like Harnad (maybe I'm wrong) really has a plausible model of the mind? Anil Khullar {Ph.D. Program in Psychology, C.U.N.Y. Graduate Center, New York, NY 10036} ank%cunyvms1.BITNET@wiscvm.edu BITNET: ank@cunyvms1 PS: I personally think Harnad has given me enough insights for my term paper.......
------------------------------ End of AIList Digest ******************** From in%@vtcs1 Tue Feb 24 00:41:00 1987 Date: Tue, 24 Feb 87 00:40:50 est From: vtcs1::in% To: ailist@sri-stripe.arpa Subject: AIList Digest V5 #52 Status: R AIList Digest Monday, 23 Feb 1987 Volume 5 : Issue 52 Today's Topics: Seminars - Boolean Concept Learning (CMU) & Knowledge-Based CAD-CAM Software Integration (Rutgers) & Parallel Techniques in Computer Algebra (SMU) & A Picture Theory of Mental Images (SUNY) & Minds, Machines, and Searle (Rutgers) ---------------------------------------------------------------------- Date: 20 Feb 87 11:41:42 EST From: Marcella.Zaragoza@isl1.ri.cmu.edu Subject: Seminar - Boolean Concept Learning (CMU) THEORY SEMINAR Lenny Pitt Wednesday, 25 Feb. 3:30 WeH 5409 Recent results on Boolean concept learning. Lenny Pitt U. Illinois at Urbana-Champaign In "A Theory of the Learnable" (Valiant, 1984), a new formal definition for concept learning from examples was proposed. Since then a number of interesting results have been obtained giving learnable classes of concepts. After motivating and explaining Valiant's definition of probabilistic and approximate learning, we show that even some apparently simple types of concepts (e.g. Boolean trees, disjuncts of two conjuncts) cannot be learned (assuming P not equal NP). The reductions used suggest an interesting relationship between learnability problems and the approximation of combinatorial optimization problems. This is joint work with Leslie G. Valiant. This talk will be of interest to both Theory and AI people. To schedule an appointment to meet with him on Wednesday, send mail to stefanis@g. 
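[Editorial illustration, not part of the announcement above: alongside the hardness results Pitt describes, Valiant's framework also admits efficiently learnable classes. A standard textbook example, not taken from the talk, is the elimination algorithm for monotone conjunctions; the target concept and example data below are invented.]

```python
def learn_conjunction(positive_examples, n):
    """Elimination algorithm: learn a monotone conjunction over n Boolean
    variables from positive examples only. Start with every variable in
    the hypothesis, then drop any variable that is 0 in some positive
    example (it cannot belong to the target conjunction)."""
    hypothesis = set(range(n))
    for example in positive_examples:          # each example: tuple of 0/1
        hypothesis -= {i for i in hypothesis if example[i] == 0}
    return hypothesis                          # indices of retained variables

# Target concept x0 AND x2 over 4 variables; all examples satisfy it.
positives = [(1, 0, 1, 0), (1, 1, 1, 0), (1, 0, 1, 1)]
print(sorted(learn_conjunction(positives, 4)))  # -> [0, 2]
```

With enough random positive examples, the surviving hypothesis is, with high probability, a close approximation to the target, which is the sense of "probabilistic and approximate learning" in Valiant's definition.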
------------------------------ Date: 18 Feb 87 13:01:42 EST From: KALANTARI@RED.RUTGERS.EDU Subject: Seminar - Knowledge-Based CAD-CAM Software Integration (Rutgers) RUTGERS COMPUTER SCIENCE AND RUTCOR COLLOQUIUM SCHEDULE - SPRING 1987 Computer Science Department Colloquium : This talk has already been announced without an abstract, which is given below --------------------------------------- DATE: Friday February 20, 1987 SPEAKER: Dr. Benjamin Cohen AFFILIATION: RCA Princeton Labs. TITLE: "Knowledge-Based CAD-CAM Software Integration." TIME: 2:50 (Coffee and Cookies will be set up at 2:30) PLACE: Hill Center, Room 705 ABSTRACT How to integrate large, distributed, heterogeneous CAD/CAM applications to support data sharing and data integrity is a major software engineering challenge. One of the key elements in a solution to the integration problem is the use of knowledge-based techniques and AI languages. A tutorial overview of the potential role of knowledge-based techniques in integrating distributed, heterogeneous databases will be presented. We also illustrate the use of knowledge-based techniques for process and data integration with a case study of the CAPTEN [Computer Assisted Picture Tube Engineering] Project underway at the David Sarnoff Research Center. ------------------------------ Date: Sat, 21 Feb 1987 12:49 CST From: Laurence L. Leff Subject: Seminar - Parallel Techniques in Computer Algebra (SMU) Seminar Announcement, Friday February 27, 1987, 315 SIC, 1:30 PM, Southern Methodist University Stephen Watt ABSTRACT: PARALLEL TECHNIQUES IN COMPUTER ALGEBRA This talk presents techniques for exploiting parallel processing in symbolic mathematical computation. We examine the use of high-level parallelism when the number of processors is fixed and independent of the problem size, as in existing multiprocessors.
Since seemingly small changes to the inputs can cause dramatic changes in the execution times of many algorithms in computer algebra, it is not generally useful to use static scheduling. We find it is possible, however, to exploit the high-level parallelism in many computer algebra problems using dynamic scheduling methods in which subproblems are treated homogeneously. An OR-parallel algorithm for integer factorization will be presented along with AND-parallel algorithms for the computation of multivariate polynomial GCDs and the computation of Groebner bases. A portion of the talk will be used to present the design of a system for running computer algebra programs on a multiprocessor. The system is a version of Maple able to distribute processes over a local area network. The fact that the multiprocessor is a local area network need not be considered by the programmer. ------------------------------ Date: Thu, 19 Feb 87 09:57:35 EST From: "William J. Rapaport" Subject: Seminar - A Picture Theory of Mental Images (SUNY) STATE UNIVERSITY OF NEW YORK AT BUFFALO GRADUATE GROUP IN COGNITIVE SCIENCE MICHAEL J. TYE Department of Philosophy Northern Illinois University A PICTURE THEORY OF MENTAL IMAGES The picture theory of mental images has become a subject of hot debate in recent cognitive psychology. Some psychologists, notably Stephen Kosslyn, have argued that the best explanation of a variety of experiments on imagery is that mental images are pictorial. Although Kosslyn has valiantly tried to explain just what the basic thesis of the pictorial approach (as he accepts it) amounts to, his position remains difficult to grasp. As a result, I believe, it has been badly misunderstood, both by prominent philosophers and by prominent cognitive scientists.
My aims in this paper are to present a clear statement of the picture theory as it is understood by Kosslyn, to show that this theory presents no threat to the dominant digital-computer model of the mind (contrary to the claims of some well-known commentators), and to argue that the issue of imagistic indeterminacy is more problematic for the opposing linguistic or descriptional view of mental images than it is for the picture theory. Monday, March 9, 1987 3:30 P.M. Park 280, Amherst Campus Co-sponsored by: Department of Philosophy Informal discussion at 8:00 P.M. at a place to be announced. Call Bill Rapaport (Dept. of Computer Science, 636-3193 or 3181) or Gail Bruder (Dept. of Psychology, 636-3676) for further information. William J. Rapaport Assistant Professor Dept. of Computer Science, SUNY Buffalo, Buffalo, NY 14260 (716) 636-3193, 3180 uucp: ..!{allegra,boulder,decvax,mit-ems,nike,rocksanne,sbcs,watmath}!sunybcs!rapaport csnet: rapaport@buffalo.csnet bitnet: rapaport@sunybcs.bitnet ------------------------------ Date: 20 Feb 87 02:01:33 GMT From: chandros@topaz.RUTGERS.EDU (Jonathan A. Chandross) Subject: Seminar - Minds, Machines, and Searle (Rutgers) USACS is pleased to announce a talk by Stevan Harnad on Minds, Machines, and Searle Tuesday, February 24th Hill Center Room 705 at 5:30 PM For those of you who aren't familiar with Stevan Harnad, he is the editor of the Behavioral and Brain Sciences journal (where Searle's Chinese Room argument first appeared), as well as a regular poster to mod.ai. If you would like to come to dinner with us please send mail to: rutgers!topaz!chandross. I need to know by Monday (2/23) at the latest to make reservations. For further information, or a transcript of the talk, send email. SUMMARY AND CONCLUSIONS: Searle's provocative "Chinese Room Argument" attempted to show that the goals of "Strong AI" are unrealizable.
Proponents of Strong AI are supposed to believe that (i) the mind is a computer program, (ii) the brain is irrelevant, and (iii) the Turing Test is decisive. Searle's point is that since the programmed symbol-manipulating instructions of a computer capable of passing the Turing Test for understanding Chinese could always be performed instead by a person who could not understand Chinese, the computer can hardly be said to understand Chinese. Such "simulated" understanding, Searle argues, is not the same as real understanding, which can only be accomplished by something that "duplicates" the "causal powers" of the brain. In the present paper the following points have been made: 1. Simulation versus Implementation: Searle fails to distinguish between the simulation of a mechanism, which is only the formal testing of a theory, and the implementation of a mechanism, which does duplicate causal powers. Searle's "simulation" only simulates simulation rather than implementation. It can no more be expected to understand than a simulated airplane can be expected to fly. Nevertheless, a successful simulation must capture formally all the relevant functional properties of a successful implementation. 2. Theory-Testing versus Turing-Testing: Searle's argument conflates theory-testing and Turing-Testing. Computer simulations formally encode and test models for human perceptuomotor and cognitive performance capacities; they are the medium in which the empirical and theoretical work is done. The Turing Test is an informal and open-ended test of whether or not people can discriminate the performance of the implemented simulation from that of a real human being. In a sense, we are Turing-Testing one another all the time, in our everyday solutions to the "other minds" problem. 3. The Convergence Argument: Searle fails to take underdetermination into account. All scientific theories are underdetermined by their data; i.e., the data are compatible with more than one theory. 
But as the data domain grows, the degrees of freedom for alternative (equiparametric) theories shrink. This "convergence" constraint applies to AI's "toy" linguistic and robotic models as well, as they approach the capacity to pass the Total (asymptotic) Turing Test. Toy models are not modules. 4. Brain Modeling versus Mind Modeling: Searle also fails to note that the brain itself can be understood only through theoretical modeling, and that the boundary between brain performance and body performance becomes arbitrary as one converges on an asymptotic model of total human performance capacity. 5. The Modularity Assumption: Searle implicitly adopts a strong, untested "modularity" assumption to the effect that certain functional parts of human cognitive performance capacity (such as language) can be successfully modeled independently of the rest (such as perceptuomotor or "robotic" capacity). This assumption may be false for models approaching the power and generality needed to pass the Total Turing Test. 6. The Teletype versus the Robot Turing Test: Foundational issues in cognitive science depend critically on the truth or falsity of such modularity assumptions. For example, the "teletype" (linguistic) version of the Turing Test could in principle (though not necessarily in practice) be implemented by formal symbol-manipulation alone (symbols in, symbols out), whereas the robot version necessarily calls for full causal powers of interaction with the outside world (seeing, doing AND linguistic understanding). 7. The Transducer/Effector Argument: Prior "robot" replies to Searle have not been principled ones. They have added on robotic requirements as an arbitrary extra constraint. A principled "transducer/effector" counterargument, however, can be based on the logical fact that transduction is necessarily nonsymbolic, drawing on analog and analog-to-digital functions that can only be simulated, but not implemented, symbolically. 8.
Robotics and Causality: Searle's argument hence fails logically for the robot version of the Turing Test, for in simulating it he would either have to USE its transducers and effectors (in which case he would not be simulating all of its functions) or he would have to BE its transducers and effectors, in which case he would indeed be duplicating their causal powers (of seeing and doing). 9. Symbolic Functionalism versus Robotic Functionalism: If symbol-manipulation ("symbolic functionalism") cannot in principle accomplish the functions of the transducer and effector surfaces, then there is no reason why every function in between has to be symbolic either. Nonsymbolic function may be essential to implementing minds and may be a crucial constituent of the functional substrate of mental states ("robotic functionalism"): In order to work as hypothesized, the functionalist's "brain-in-a-vat" may have to be more than just an isolated symbolic "understanding" module -- perhaps even hybrid analog/symbolic all the way through, as the real brain is. 10. "Strong" versus "Weak" AI: Finally, it is not at all clear that Searle's "Strong AI"/"Weak AI" distinction captures all the possibilities, or is even representative of the views of most cognitive scientists. Hence, most of Searle's argument turns out to rest on unanswered questions about the modularity of language and the scope of the symbolic approach to modeling cognition. If the modularity assumption turns out to be false, then a top-down symbol-manipulative approach to explaining the mind may be completely misguided because its symbols (and their interpretations) remain ungrounded -- not for Searle's reasons (since Searle's argument shares the cognitive modularity assumption with "Strong AI"), but because of the transducer/effector argument (and its ramifications for the kind of hybrid, bottom-up processing that may then turn out to be optimal, or even essential, in between transducers and effectors).
What is undeniable is that a successful theory of cognition will have to be computable (simulable), if not exclusively computational (symbol-manipulative). Perhaps this is what Searle means (or ought to mean) by "Weak AI." ------------------------------ End of AIList Digest ******************** From in%@vtcs1 Tue Feb 24 00:41:10 1987 Date: Tue, 24 Feb 87 00:41:03 est From: vtcs1::in% To: ailist@sri-stripe.arpa Subject: AIList Digest V5 #53 Status: R AIList Digest Monday, 23 Feb 1987 Volume 5 : Issue 53 Today's Topics: Conferences - Possible Workshop on Real-Time at AAAI-87 & 12th IMACS, 2nd-Generation Expert Systems & Workshop on Computer Architecture for PAMI ---------------------------------------------------------------------- Date: 16 Feb 1987 19:40-EST From: cross@wpafb-afita Subject: Conference - Possible Workshop on Real-Time at AAAI-87 I am proposing a workshop on real-time processing in knowledge-based systems to be held at AAAI-87. At the present time I am looking for suggestions about the workshop content and format (draft announcement follows). Pending approval from the AAAI-87 Workshop Committee, I'll place a formal announcement on AILIST and solicit participation. Thanks in advance for your input. Steve Cross ************************************************************************ Workshop on Real-Time Processing in Knowledge-Based Systems AI techniques are maturing to the point where application in knowledge intensive, but time constrained situations is desired. Examples include monitoring large dynamic systems such as nuclear power plants; providing timely advice based on time varying data bases such as in stock market analysis; sensor interpretation and management in hospital intensive care units, or in military command and control environments; and diagnoses of malfunctions in airborne aircraft. 
The goal of the workshop is to gain a better understanding of the fundamental issues that now preclude real-time processing and to provide a focus for future research. Specific issues that will be discussed include: Pragmatic Issues: What is real-time performance? What metrics are available for evaluating performance? Parallel Computation: How can parallel computation be exploited to achieve real-time performance? What performance improvements can be gained by maximizing and integrating the inherent parallelism at all levels in a knowledge-based system (e.g., application through the hardware levels)? Meta-Level Problem Solving: How can intelligent problem solving agents reason about and react to varying time-to-solution resources? What general purpose or domain specific examples exist of problem solving strategies employed under different time-to-solution constraints? What are the tradeoffs in terms of space, quality of solution, and completeness of solution? Complexity Issues: How can an intelligent agent reason about the inherent complexity of a problem? Algorithm Issues: What novel problem solving methods can be exploited? How can specialized hardware (for example, content-addressable memories) be exploited? To encourage vigorous interaction and exchange of ideas between those attending, the workshop will be limited to approximately 30 participants (and only two from any one organization). The workshop is scheduled for July xx, 1987, as a parallel activity during AAAI 87, and will last for a day. [Because of planning conflicts, we may meet on one evening and lay plans for a more involved workshop in August or September]. All participants are required to submit an abstract (up to 1000 words) and a proposed list of discussion questions. Five copies should be submitted to the workshop chairman by May 1, 1987. The discussion questions will help the workshop participants focus on the fundamental issues in real-time AI processing.
Because of the brief time involved for the workshop, participants will be divided into several discussion groups. A group chairman will present a 30-minute summary of his group's abstracts during the first session. In addition, the committee reserves the right to arrange for invited presentations. Each group will be assigned several questions for discussion. Each group will provide a summary of its discussion. The intent of the workshop is to promote creative discussion which will spawn some exciting ideas for research. Workshop Chairman: Stephen E. Cross, AFWAL/AAX, Wright-Patterson AFB OH 45433-6583, (513) 255-5800. arpanet: cross@wpafb-afitab.arpa ------------------------------ Date: Thu, 19 Feb 87 17:59:19 From: mcvax!crcge1!david@seismo.CSS.GOV (Jean-Marc David) Subject: Conference - 12th IMACS, 2nd-Generation Expert Systems 12th IMACS WORLD CONGRESS' 88 PARIS - July 18 - 22, 1988 CALL FOR PAPERS =================

  ********************************************************
  *                                                      *
  *          SECOND GENERATION EXPERT SYSTEMS            *
  *                                                      *
  *     Reasoning with Heuristic and Deep Knowledge      *
  *                                                      *
  ********************************************************

The 12th IMACS WORLD CONGRESS' 88 on Scientific Computation will be held in Paris, France (July 18-22, 1988). A one-day session of the Congress will be devoted to SECOND GENERATION EXPERT SYSTEMS. Authors are invited to submit papers describing Expert Systems reasoning with Deep Knowledge, or any aspect of deep reasoning; covered topics include:
- Model-Based Reasoning
- Qualitative Physics
- Multi-Level / Multi-Model Reasoning
- Reasoning from Structure, Behavior and Function
- Causal Reasoning
Papers can deal with both theoretical aspects of deep reasoning and applications (diagnosis, process control, simulation ...). Emphasis will be put on work describing cooperation between Heuristic and Deep Reasoning.
Submission Information:
=======================
Submit three copies of a 1000-word abstract by August 1, 1987 to the Session Chairman. Papers will be accepted on the basis of submitted abstracts. Notifications of acceptance will be mailed by December 1, 1987. Accepted papers will be either original contributions or important survey papers.

Timetable:
==========
- abstract submission: August 1, 1987
- notification of acceptance: December 1, 1987
- full paper submission: February 15, 1988

Submissions and inquiries about the Second Generation Expert Systems Session should be sent to the Session Chairman:

Jean-Marc DAVID
IMACS '88
Laboratoires de Marcoussis
Computer Science Division
Route de Nozay
91460 - Marcoussis
FRANCE

Other inquiries should be directed to the Congress Secretariat:

Secretariat IMACS WORLD CONGRESS '88
I.D.N. BP 48
59651 - Villeneuve d'Ascq Cedex
FRANCE

------------------------------ Date: Thu, 19 Feb 87 16:45:36 CST From: dyer@stilton.wisc.edu (Chuck Dyer) Subject: Conference - Workshop on Computer Architecture for PAMI

CALL FOR PAPERS

1987 WORKSHOP ON COMPUTER ARCHITECTURE FOR PATTERN ANALYSIS AND MACHINE INTELLIGENCE
Seattle, Washington, October 5-7, 1987

CAPAMI-87 will focus on new architectures and associated algorithms designed for artificial intelligence applications. This workshop is a successor to the Computer Architecture for Pattern Analysis and Image Database Management workshops, which were held in '81, '83, and '85. The emphasis of the program will be the presentation of significant new contributions plus panel and discussion sessions in which attendees can actively compare and contrast their methods. Papers will be reviewed by the Program Committee. No parallel sessions are planned.
TOPICS
* Computer Vision and Image Processing Architectures
* Architectures for Inference Engines and Rule-Based Systems
* Knowledge-Based Machines and Systems
* Neural Network based Architectures
* VLSI and Systolic Implementations
* Parallel Algorithms for AI Problems on these Architectures
* Parallel Matching and Reasoning Algorithms

PAPER SUBMISSION INSTRUCTIONS
Authors should submit four (4) copies of a complete paper by APRIL 15, 1987 to:

Charles R. Dyer
Department of Computer Science
University of Wisconsin
1210 W. Dayton St.
Madison, WI 53706

Authors will be notified of the acceptance of their papers by June 1, 1987. Final camera-ready papers are due by July 15, 1987.

WORKSHOP ORGANIZATION
Workshop Chair: Steven L. Tanimoto
Program Chair: Charles R. Dyer
Program Committee: Christopher M. Brown, James J. Little, Michael J. B. Duff, Azriel Rosenfeld, Robert M. Haralick, Jorge L. C. Sanz, Ramesh Jain, Leonard M. Uhr, John R. Kender, Jon A. Webb, H. T. Kung

------------------------------ End of AIList Digest ******************** From in%@vtcs1 Thu Mar 5 16:39:12 1987 Date: Thu, 5 Mar 87 16:39:06 est From: vtcs1::in% To: ailist@sri-stripe.arpa Subject: AIList Digest V5 #54 Status: R AIList Digest Tuesday, 24 Feb 1987 Volume 5 : Issue 54 Today's Topics: Administrivia - Problem with Issue 51, Queries - Financial Expert Systems & Real-Time AI & Recognition of Text Written by Hand & Automatic Theorem Proving & Network Complexity & DBMS Issues in KBs ---------------------------------------------------------------------- Date: Mon 23 Feb 87 10:46:16-PST From: Ken Laws Reply-to: AIList-Request@SRI-AI.ARPA Subject: Problem with Issue 51 Some of you may have found only a single message in Volume 5, No. 51, the analysis of Artificial Intelligence citations (by Lawrence Leff). That issue also contained a discussion by John McCarthy on taking a "design stance" towards the cognition problem and a review of a review of Minsky's new book.
Unfortunately, the first message ended with a line containing a single period. This is taken as an end-of-message flag by some mailers, and so the second and third messages were lost or hidden. If you need a remailing of the issue, with the offending line removed, just send a request to AIList-Request@SRI-STRIPE.ARPA. -- Ken Laws ------------------------------ Date: Mon 23 Feb 87 09:28:41-PST From: Frances Borison Subject: Financial Expert Systems Does anyone know of any financial expert systems that are commercially available for purchase and that run on either general-purpose computers or AI workstations? I am aware of Planpower by Applied Expert Systems (APEX) and Financial Advisor by Palladian Software. Any assistance would be appreciated. Frances Borison. ------------------------------ Date: 23 Feb 87 18:35:44 GMT From: teknowledge-vaxc!rburns@SRI-UNIX.ARPA (Randy Burns) Subject: Wanted: speaker familiar with financial applications of AI A friend of mine is involved with an IEEE group which is sponsoring a seminar on financial applications of Artificial Intelligence. I would appreciate it if anyone with experience in this area would contact me so I can forward your name to him. Randy Burns Teknowledge Inc. 415-424-0500 x543 ------------------------------ Date: 17 Feb 87 12:26:52 GMT From: mcvax!ukc!tcom!idec!camcon!ijd@seismo.css.gov (Ian Dickinson) Subject: Research in Real-Time A.I. I'm currently engaged in a small research project looking at the problems of updating AI databases in real time. [By database, I mean that collection of information that I am currently reasoning against - not the large commercial variety.] A typical problem in the domain is that you have N (where large(N)) sensors attached to a process plant all throwing lots of data with low information content at an intelligent fault diagnosis system.
The system has to cope with contradictory data, have good coverage of the incoming signals, but still be able to respond quickly to high-priority situations. The particular issues that I am concerned with are: (1) what are the representational inadequacies of current AI notations that are suited to doing real-time problems and/or handling noisy and contradictory data? (2) what are the computational costs of using such notations? Primary choices for handling mucky, changing data are the RMS family (Doyle, de Kleer, etc.) and other non-monotonic logics, so these are typical of the notations that I am referring to. So the issues become: what can't you do with them, and what would it cost anyway? The questions I would like to submit to net.land are:
o anybody doing any work on extending non-monotonic notations in weird directions (e.g. integrating them with uncertain inference techniques)?
o anybody got any pet real-time AI problems?
o anyone else working in the real-time field?
Please mail responses directly to me, and I will post a summary for discussion later. Thanks in advance, Ian. -- !! Ian Dickinson Cambridge Consultants Ltd, AI group !! !! Voice: (0223) 358855 [U.K.] Email: ijd%camcon.co.uk !! !! uucp: ...!seismo!mcvax!ukc!camcon!ijd or: ijd%camcon.uucp !! >> Disclaimer: All opinions expressed are my own (surprise!). << ------------------------------ Date: Thu, 19 Feb 87 22:45:18 +0100 From: Hakon Styri Subject: Recognition of text written by hand Is there anybody with knowledge about work going on in the field of machine recognition of text written by hand? I'm not interested in "understanding" the text, just converting it into machine-readable form. And the text is already written, so a special pen and paper are no good.
------------------------------ Date: 20 Feb 87 04:32:09 GMT From: cartan!brahms.Berkeley.EDU!cotner@ucbvax.Berkeley.EDU (Carl Cotner) Subject: automatic theorem proving Can anyone on the net recommend any books or articles about automatic theorem proving? I am interested (I think) in the subject, but know almost nothing about it. Any references would be very welcome. Thanks. ucbvax!brahms!cotner Carl Cotner/UCB Math Dept/Berkeley CA 94720 ------------------------------ Date: 19 Feb 87 22:47:52 GMT From: ihnp4!ihnp3!mth@ucbvax.Berkeley.EDU (Mark Horbal) Subject: Network Complexity (sorry if this is a duplicate posting, but my machine burped) I am in the process of putting together a paper which attempts to motivate planning and development of software-based Network Management tools. In general, such tools would be both STRATEGIC, e.g., network topology planning from the perspective of capacity, security, fault tolerance, etc., and TACTICAL, such as visualization of network activity, dynamic routing, fault recovery, congestion avoidance, etc. Clearly, this field is ready for and in desperate need of AI, which is why I'm addressing it to this group. Now, my intuition tells me that as these networks become more complicated, we'll realize that the seat-of-the-pants network management we're used to is inadequate, and we'll wish that we had spent time developing the right tools to do the job. I envision the complexity of our computer networks to be exploding at some exponential rate, while our ability to understand and control them is falling behind, growing relatively slowly. This brings me to my QUESTION: If we define the "complexity" of a computer network as a measure of difficulty in observing, understanding, and exercising a modicum of control over it, how is this "complexity" estimated? If we further choose a simple but intuitive way of representing a computer network by a graph, how do we quantify this "complexity" with respect to the graph's topology?
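[As a starting point for discussion, the raw topological quantities such a measure might be built from are easy to compute. The Python sketch below is purely illustrative and assumed by the editor - it is not drawn from any of the work mentioned in this thread - and, as the follow-up paragraph notes, none of these numbers alone captures the combinatorial explosion in question. - Ed.]

```python
# Illustrative only: raw graph statistics sometimes proposed as
# ingredients of a network "complexity" measure.
from collections import deque

def graph_stats(nodes, edges):
    """nodes: list of labels; edges: list of (u, v) pairs (undirected)."""
    adj = {v: set() for v in nodes}
    for u, v in edges:
        adj[u].add(v)
        adj[v].add(u)
    # count connected components by breadth-first search
    seen, components = set(), 0
    for start in nodes:
        if start in seen:
            continue
        components += 1
        seen.add(start)
        queue = deque([start])
        while queue:
            for w in adj[queue.popleft()] - seen:
                seen.add(w)
                queue.append(w)
    n, e = len(nodes), len(edges)
    return {
        "nodes": n,
        "edges": e,
        "components": components,
        # circuit rank (cyclomatic number): independent cycles in the graph
        "circuit_rank": e - n + components,
        "avg_degree": 2.0 * e / n if n else 0.0,
    }

# A triangle plus one isolated node: tiny, yet it already has a cycle.
print(graph_stats(["a", "b", "c", "d"],
                  [("a", "b"), ("b", "c"), ("c", "a")]))
```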
Clearly, metrics such as the number of nodes, edges, circuits, etc. have intuitive appeal, but do not individually seem to convey the underlying combinatorial explosion that, I believe, lurks underneath. Are you aware of any analytic, graph-theoretical, heuristic, empirical, or otherwise useful metrics of such "complexity"? I am not necessarily looking for some absolute measure of the thing, but general concepts. Any facts, comments, opinions and thoughts will be most appreciated. M. Horbal @ Bell Labs ihnp4!ihnp3!mth (312) 979-6496 ------------------------------ Date: Mon, 23 Feb 87 13:01:35 EST From: tim@linc.cis.upenn.edu (Tim Finin) Subject: DBMS issues in KBs A colleague is interested in what work has been done involving the traditional concerns of a DBMS for Knowledge Bases (e.g. concurrency, security, query optimization, etc). I don't think that these issues have been addressed in any systematic way. Can anyone offer any references to work regarding such things? From: Susan Davidson Date: Thu, 19 Feb 87 15:00 EST Can you point me to any papers that speak to the exact problems of updating, concurrency, optimization, etc. in knowledge bases? sbd ------------------------------ End of AIList Digest ******************** From in%@vtcs1 Thu Mar 5 16:39:23 1987 Date: Thu, 5 Mar 87 16:39:16 est From: vtcs1::in% To: ailist@sri-stripe.arpa Subject: AIList Digest V5 #55 Status: R AIList Digest Tuesday, 24 Feb 1987 Volume 5 : Issue 55 Today's Topics: AI Tools - Language Comparisons & Prolog & DEC AI Workstation, Application - Legal Reasoning, Literature - Learning about AI & Automatic Theorem Proving ---------------------------------------------------------------------- Date: 19 Feb 87 12:31:00 EST From: "CUGINI, JOHN" Reply-to: "CUGINI, JOHN" Subject: speaking of language comparisons I just finished up a report, published by the Institute for Computer Sciences and Technology / National Bureau of Standards, comparing Common Lisp, C-Prolog, and OPS5.
This is a nitty-gritty comparison-of-features type of paper, 70 pages. It's targeted at programmers who are just entering the wonderful world of knowledge-based systems. The grizzled AI veteran will probably not find a wealth of new insights. Anyway, for those interested, it's: NBS Special Publication 500-145, Programming Languages for Knowledge-Based Systems. Order from: Superintendent of Documents, US Government Printing Office, Washington DC 20402. GPO stock number: 003-003-02783-9; price: $4.00. John Cugini ------------------------------ Date: 11 Feb 87 12:30:00 GMT From: mcvax!unido!ztivax!steve@seismo.css.gov Subject: Re: prolog information wanted - (nf) >I am looking for a good book or two about frame based systems implemented >in Prolog. I am especially interested in examples of code and data >structures. If you know of any such books, please send the name, etc., >to this login. Thanks in advance. > > Lance > ihnp4!ihuxj!lance I thought Prolog didn't have any data structures :-) ------------------------------ Date: 18 Feb 87 21:29:42 PST (Wed) From: spar!malcolm@decwrl.DEC.COM Subject: Re: DEC AI Workstation In article <8702121349.AA14488@csv.rpi.edu> yerazuws@CSV.RPI.EDU (Crah) writes: > I wouldn't bother with the SUN, especially in a diskless >configuration. I wasted (yes, wasted) nine months trying to develop >an architecture simulator on Sun 2's. Little things like a server >being slow can completely hang your LISP and your editor - so you sit. >And sit. And forget what you were doing... You're right....don't even think about running Lisp on a Sun-2. On the other hand Sun-3's (which are three times faster in general than Sun-2's) make a fast lisp workstation. BUT, you must have enough memory on the system to make sure that you don't page when you garbage collect. I work with both Franz and Lucid Common Lisp and they both copy the workspace to garbage collect.
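[What "copy the workspace to garbage collect" means can be sketched as a toy two-space collector. The Python below is an assumed, minimal illustration of Cheney's algorithm, not the actual Franz or Lucid implementation: on every collection, each live word is read from one semispace and written into the other, so the collector's working set is roughly twice the live data - exactly the access pattern that thrashes a paging system. - Ed.]

```python
# Illustrative sketch (simplified): a Cheney-style semispace copying
# collector for a tiny cons-cell heap. Real Lisp collectors are far more
# elaborate; the point is only that every live word is copied into the
# other semispace on each collection.

HEAP_WORDS = 32                      # words per semispace (tiny, for demo)

class Heap:
    def __init__(self):
        self.from_space = [None] * HEAP_WORDS   # allocation arena
        self.to_space = [None] * HEAP_WORDS     # copy target during GC
        self.free = 0                           # next free word

    def cons(self, car, cdr, roots):
        """Allocate a 2-word cell; car/cdr are strings (immediates) or
        integer cell addresses (pointers). roots is updated in place."""
        if self.free + 2 > HEAP_WORDS:
            self.collect(roots)
            if self.free + 2 > HEAP_WORDS:
                raise MemoryError("heap exhausted even after GC")
        addr = self.free
        self.from_space[addr] = car
        self.from_space[addr + 1] = cdr
        self.free = addr + 2
        return addr

    def collect(self, roots):
        # Breadth-first copy of everything reachable from the roots.
        scan = self.next = 0
        forwarded = {}                          # old address -> new address
        for i, r in enumerate(roots):
            roots[i] = self._copy(r, forwarded)
        while scan < self.next:                 # fix up pointers in copies
            self.to_space[scan] = self._copy(self.to_space[scan], forwarded)
            scan += 1
        self.from_space, self.to_space = self.to_space, self.from_space
        self.free = self.next                   # everything else is reclaimed

    def _copy(self, ref, forwarded):
        if not isinstance(ref, int):            # immediate, not a pointer
            return ref
        if ref not in forwarded:                # evacuate the 2-word cell
            forwarded[ref] = self.next
            self.to_space[self.next] = self.from_space[ref]
            self.to_space[self.next + 1] = self.from_space[ref + 1]
            self.next += 2
        return forwarded[ref]

heap, roots = Heap(), []
roots.append(heap.cons("x", "nil", roots))      # one live cell
for _ in range(100):
    heap.cons("junk", "nil", roots)             # garbage, reclaimed by GC
print(heap.from_space[roots[0]])                # live cell survives: "x"
```

[Here the semispaces are Python lists; on a real Lisp system they are megabytes of virtual memory, and touching all of it at once is what makes collection on a paging workstation so painful. - Ed.]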
When you have to go to the disk (or network) every time you want to garbage collect then you lose big. And then you finish GC and start doing real work again and all the pages you want have already been flushed. I suspect that the reason the limited memory isn't as much a factor with Symbolics workstations is because they do incremental garbage collection. When you are working normally everything works fast.....but go away for a while and come back and watch the swap bar turn solid black for a few seconds. Franz Common Lisp can run quite nicely in 9M of memory. Memory is real cheap these days. I have 16M on my desk and I almost never page while switching back and forth between Lisp and other windows I am using. As far as performance goes, I have seen a Sun3/160 running anywhere between .5 and 4 times a Symbolics 3600. Moving to a Sun3/260 gives you another factor of two performance improvement. Sun's can match the speed of a Symbolics workstation....now if they can just make the environment as nice. Cheers. Malcolm ------------------------------ Date: Thu, 19 Feb 87 09:50:27 est From: mnetor!lsuc!dave@seismo.CSS.GOV Subject: Re: Legal reasoning To: watmath!clyde!cbatt!ucbvax!ENIAC.SEAS.UPENN.EDU!mayerk Subject: Re: Legal reasoning Newsgroups: mod.ai In-Reply-To: <8702160344.AA01571@eniac.seas.upenn.edu> Organization: Law Society of Upper Canada, Toronto Cc: mnetor!seismo!sri-stripe.arpa!ailist In article <8702160344.AA01571@eniac.seas.upenn.edu> you write: > >Could someone give some pointers into the literature about legal >reasoning. Or better yet, someone you know whom I could contact. There's a conference coming up in May at Northeastern University in Boston, the First International Conference on Artificial Intelligence and Law. Contact Carole Hafner at Northeastern or Thorne McCarty at Rutgers (mccarty@rutgers.edu). Major projects which have been undertaken include McCarty's TAXMAN system, Kowalski & Sergot's work in Prolog at Imperial College (Univ. 
of London), Jim Sprowl's ABF Processor, Layman Allen & Charles Saxon's work at U of Michigan, and many others. Check the Rutgers Journal of Computers, Technology & the Law; also law periodical indexes under "automation". There have been two conferences on Law & Computers at the Univ of Houston, organized by Charles Walter. The 1984 conference papers were published as a book, "Computing Power and Legal Reasoning", published by West Publishing Co (St. Paul, MN), ISBN 0-314-96670-4. The 1985 papers haven't yet been published, as far as I know. Both had papers from just about everyone working in this field in North America, as well as a few from Europe. I recently completed an LL.M. thesis, "Blueprint for a Computer-Based Model of the Income Tax Act of Canada", at Osgoode Hall Law School (York University, Toronto), which contains an implementation of tax law in Prolog and surveys previous work. (I've also submitted a condensed version as a paper to the AI & Law conference.) I can send you a copy if you like. David Sherman The Law Society of Upper Canada Osgoode Hall Toronto, Canada M5B 2N6 (416) 947-3466 dave@lsuc.UUCP { seismo!mnetor cbosgd!utgpu watmath decvax!utcsri ihnp4!utzoo } !lsuc!dave ------------------------------ Date: 17 Feb 87 17:27:59 GMT From: ubc-vision!ubc-cs!andrews@BEAVER.CS.WASHINGTON.EDU (Jamie Andrews) Subject: Re: Learning about AI In article <278@vax1.ccs.cornell.edu> czhj@vax1.UUCP (Ted Inoue) writes: >But look at the approach that LOGIC gives AI. It is a purely reductionist >view, akin to studying global plate motion at the level of sub-atomic >particles. It is simply the wrong level at which to approach the problem. This is too generalized. There are good applications of logic to AI, and there are bad ones. Only by knowing a lot about logic *and* the structure of the problem domain can you tell which is which.
I would agree that predicate logic techniques have often been applied to problems in a way that leaves out inordinately large chunks of the domain. However, the same could be said about most AI techniques. --Jamie. ...!seismo!ubc-vision!ubc-cs!andrews "Take my shoes off & throw them in the lake" ------------------------------ Date: 21 Feb 87 22:07:01 GMT From: ihnp4!chinet!nucsrl!ragerj@ucbvax.Berkeley.EDU (John Rager) Subject: Re: Learning about AI (was Re: A List of AI Books (for beginners)) from: / sher@rochester.ARPA (David Sher) / 8:17 am Feb 13, 1987 / >I think there seems to be something of a misconception regarding the >place of logic wrt AI and computer science in general. To start with >I will declare this: > Logic is a language for expressing mathematical constructs. >It is not a science and as far as artificial intelligence is concerned >the mathematics of logic are not very relevant. Its main feature >is that it can be used for precise expression. Logic is a branch of mathematics. The last time I checked, mathematics was a science. Its relevancy to AI is a matter of opinion. > So why use logic rather than a more familiar language, like english. > ... > However the problem is that few of us knowledge > engineers have the talent to be precise in our everyday language. >Thus for decades engineers, scientists, and statisticians have used >logic to express their ideas since even an incompetent speaker can be >clear and precise using logical formalisms. However like any language >with expressive power one can be totally incomprehensible using logic. First, you aren't talking about Logic, you are talking about mathematical notation. Second, the reason that mathematicians use this notation has nothing to do with their inability to express concepts in English. It has to do with the inexpressibility of the concepts in 'ordinary English'. Mathematical notation is the specialized language of the mathematical disciplines.
All disciplines have a specialized language, a set of terms with precise meanings in that field. A good philosopher, writing a good paper, writes in what seems to be ordinary English. It is not. It is English augmented by the argot of the field. This is true even though the difference may not be obvious to an outsider, since it looks like English. The language of mathematics doesn't look like English. So where does this leave English? It is a wonderful language and I love it. It is not a specialized tool for working in a particular discipline. It is a means of everyday communication, an amazing miracle of generality. It does not have the expressive power of logic. Do not try to use it for what it is not suited for. (Before anyone says anything, when the first English grammars were devised, they were modeled on those of Latin, in which language one does not end sentences with prepositions. It has always been common practice to use prepositions as sentence-ending particles in English.) >Note: I am not a logician but I use a lot of logic in my everyday >work which is probabilistic analysis of computer vision problems >-David Sher When you say you use a lot of logic, do you really mean it? Recursive function theory? Saturated model theory? Or do you mean you use the vernacular of the mathematician? John Rager sher@rochester {allegra,seismo}!rochester!sher ------------------------------ Date: 23 Feb 87 16:56:02 GMT From: jbn@glacier.stanford.edu (John B. Nagle) Subject: Re: automatic theorem proving The best thinking on the subject is in "A Computational Logic", by Robert Boyer and Jay Moore (Academic Press, 1979, ISBN 0-12-122950-5). The field has regressed somewhat since then.
John Nagle ------------------------------ End of AIList Digest ******************** From LAWS@KL.SRI.COM Tue Dec 15 07:09:22 1987 Mail-From: LAWS created at 24-Feb-87 20:51:46 Date: Tue 24 Feb 1987 20:48-PST From: AIList Moderator Kenneth Laws Reply-To: AIList@SRI-STRIPE.ARPA Us-Mail: SRI Int., 333 Ravenswood Ave., Menlo Park, CA 94025 Phone: (415) 859-6467 Subject: AIList Digest V5 #56 To: AIList@SRI-STRIPE.ARPA Resent-Date: Mon 14 Dec 87 22:44:48-PST Resent-From: Ken Laws Resent-To: isr@vtopus.CS.VT.EDU Resent-Message-Id: <12358590427.16.LAWS@KL.SRI.COM> Status: R AIList Digest Wednesday, 25 Feb 1987 Volume 5 : Issue 56 Today's Topics: Query - Parallel Alpha-Beta Search, Logic - Automated Deduction References, News - IJCAI-87 Computers and Thought Award, Discussion - Programming Metaphors & Logic in AI & Intelligence ---------------------------------------------------------------------- Date: Tue, 24 Feb 87 17:11:03 EST From: "Neil B. Cohen" Subject: Parallel alpha-beta search I was told to contact you about a paper that has been recently written on the subject of parallel alpha-beta tree searching. Can you tell me if such a paper was recently published, and if so, where I can get a copy of it? I am very interested in trying to apply such a technique on the BBN Butterfly computer. Thanks in advance for any help you can give me. Neil B. Cohen (nbc@bbn.com) ------------------------------ Date: Tue, 24 Feb 87 13:16:45 pst From: ladkin@kestrel.ARPA (Peter Ladkin) Subject: automated deduction references Some of the best sources for Automated Theorem Proving are conference proceedings and journal articles. Springer publishes the proceedings of the Conference on Automated Deduction, held in even years. The last is Lecture Notes in Computer Science 230, CADE-8 (ed. Siekmann). Also LNCS 232, Fundamentals of Artificial Intelligence (ed. Bibel and Jorrand), has good material on Automated Deduction. LNCS 202 is Rewriting Techniques and Applications (ed.
Jouannaud), another European conference on theorem proving by algebraic term rewriting systems. The Journal of Automated Reasoning and the Journal of Symbolic Computation have been started in the last couple of years. This is by no means a complete list. Taking the transitive closure of the `referenced' relation on this material will probably lead to a complete list. peter ladkin ladkin@kestrel.arpa ------------------------------ Date: Tue, 24 Feb 87 10:29:40 GMT From: Alan Bundy Subject: IJCAI-87 Computers and Thought Award THE 1987 COMPUTERS AND THOUGHT AWARD It is my great pleasure to announce that the winner of the 1987 Computers and Thought Award is Johan de Kleer of Xerox Palo Alto Research Center. The Award is in recognition of his fundamental contributions to artificial intelligence research in the areas of: qualitative reasoning, truth maintenance, constraint propagation and explicit control of reasoning. The Computers and Thought Lecture is given at each International Joint Conference on Artificial Intelligence by an outstanding young scientist in the field of artificial intelligence. The Award carries with it a certificate and the sum of $2,000 plus travel and subsistence expenses for the IJCAI. The Lecture is one evening during the Conference, and the public is invited to attend. The Lecturer is invited to publish the Lecture in the conference proceedings. The Lectureship was established with royalties received from the book Computers and Thought, edited by Feigenbaum and Feldman; it is currently supported by income from IJCAI funds. Nominations for The 1987 Computers and Thought Award were invited from all in the artificial intelligence international community. The award selection committee was the union of the Programme, Conference and Advisory Committees of IJCAI-87 and the Board of Trustees of IJCAII, with nominees excluded. 
Past recipients of this honour have been Terry Winograd (1971), Patrick Winston (1973), Chuck Rieger (1975), Douglas Lenat (1977), David Marr (1979), Gerald Sussman (1981), Tom Mitchell (1983) and Hector Levesque (1985). Alan Bundy IJCAI-87 Conference Chair ------------------------------ Date: 23 Feb 87 20:00:41 EST From: Raul.Valdes-Perez@b.gp.cs.cmu.edu Subject: Prog. Lang. Metaphors [Forwarded from the CMU bboard by Laws@SRI-STRIPE.] This posting is the result of the query for metaphors that underlie programming languages. Everything onward from (14) was compiled from suggestions. The items up to (13) were the original ideas.

METAPHOR -- LANGUAGE
1. function application (lambda calculus) -- Pure Lisp
2. variable assignment -- Fortran
3. message-passing -- Smalltalk
4. set manipulation -- SETL, relational databases
5. modus ponens -- Prolog
6. array manipulation -- APL
7. constraints -- spreadsheets
8. rewriting -- production systems
9. window manipulation -- window managers
10. algorithm manipulation (?!) -- Lenat's dissertation (AI)
11. resolution -- resolution theorem provers
12. string manipulation -- SNOBOL
13. states (and transitions)? graphs?
14. List processing is a metaphor for Lisp & IPL-V
15. Does LOGO have a metaphor? [A metaphor for the graphical part of LOGO is a moving turtle. - RVP]
16. "... metaphors involved in user interactions and the influence they have on system design. (There is a session on metaphors at the SIGCHI in April)"
    Desktop metaphor
    Electronic Book metaphor
    Rooms metaphor (a method of organizing windows dealing with a particular application into a class of windows, e.g. mailroom)
    Overlay or transparency metaphor
    Hierarchical metaphor
    Network metaphor (Zog or Notecards are systems that use this metaphor)
17. File [stream?] manipulation is a metaphor in UNIX.
18. Patterns in Snobol-4.
19. Type inference -- ML
    Dataflow -- ID (Arvind)
    Concurrent programming -- CSP, Linda, Multilisp
    Structured programming (iffy) -- Pascal
20.
Couple of relevant papers: "The Scientific Community Metaphor" by Kornfeld & Hewitt, IEEE Trans. on Systems, Man, & Cybernetics, Jan 81; "Metaphor and the Cognitive Representation of Computing Systems" by Carroll & Thomas, same journal, Mar/Apr 82. ------------------------------ Date: Tue, 24 Feb 87 13:47:23 pst From: ladkin@kestrel.ARPA (Peter Ladkin) Subject: logic in ai david sher said: >Note: I am not a logician but I use a lot of logic in my everyday >work which is probabilistic analysis of computer vision problems john rager replied: >When you say you use a lot of logic, do you really mean it? Recursive >function theory? Saturated model theory? Rager asks whether Sher uses infinitary methods in what seems to be a finitary context. The answer is obviously no, and I wonder why he would ask the question. Maybe he thinks that all logic is infinitary? Meanwhile, he seems to have forgotten that inference is the basis of logic, and most of us use that in one form or another. peter ladkin ladkin@kestrel.arpa ------------------------------ Date: Tue, 24 Feb 87 10:19 EST From: Seth Steinberg Subject: A defense of the vulgar tongue. I am going to ignore the "Is mathematics a science?" argument and get right down to why I think mathematical and logical notation are overused in computer science presentations. The problem has very little to do with precision and a lot to do with class, clarity, and vulgarity. By class, I am referring to a set of societal distinctions which have been handed down in our society and are quite extant in our modern academic community. By clarity, I am referring to the ability to communicate ideas both within and outside the community. By vulgarity, I am referring to the use of the vulgar tongue - which in this case is not English but the programming language of choice. Class: Much as a restaurant will have a menu written in French to impress the diner, many authors feel obligated to use logical notation to make their paper seem more "scientific".
Walt Kelly once had a character ask "I wonder what language the Romans used for the old 24 karat bamboozle." They used Greek, and a lot of our prejudices come from the Greeks. Somehow or another, arguing at a high level of abstraction makes the argument more precise, general, cogent, powerful or what not. Sometimes this is true, sometimes it isn't. Abstraction is often a major obstacle in the search for the truth. Clarity: Chemists use chemical notation and scratchy looking stereo diagrams. Philologists use cryptic phonetic notation. Geneticists use long lists of upper case letters and funny three letter combinations. Vintners use a full set of common adjectives with very precise but not always obvious meanings. It is quite possible to be clear, precise and understood without resorting to mathematical or logical notation. Each of these notations was chosen because it concisely describes commonly discussed phenomena. Architects do not express buildings in mathematical notation when they talk to contractors but the latter can usually come up with a cost estimate anyway. Vulgarity: Programmers spend a lot of time discussing the behavior of computers. Specialized terms like "barf" and "lossage", while evocative, are not particularly precise. Whenever two programmers get into an argument about what a program does, they don't sit down and write up a proof, they look at the code. They might prove something about the problem domain. What they usually do is "desk check" the code, or maybe even go into the debugger and make the stupid computer "desk check" the code for them. Programs are the common language of programmers. They are precise; they can be used as a reasoning aid; they are widely understood. I have read too many papers in which mathematical notation is gratuitously introduced. I have seen this reaching for abstraction hide obvious inferences from the author. With certain notable exceptions, too many authors reach for the wrong tools too soon.
Seth Steinberg ------------------------------ Date: 23 Feb 87 12:12:02 GMT From: mcvax!ukc!warwick!gordon@seismo.css.gov (Gordon Joly) Subject: What is this "INtelliGenT"? "Intelligent" and "intelligence" are somewhat overused, I feel. Consider the browser which was described as "semi-intelligent" and the "intelligent terminal". No wonder there is some confusion as to the possible meaning of the term A.I. Gordon Joly -- {seismo,ucbvax,decvax}!mcvax!ukc!warwick!gordon ------------------------------ End of AIList Digest ******************** From in%@vtcs1 Thu Mar 5 17:21:27 1987 Date: Thu, 5 Mar 87 17:21:16 est From: vtcs1::in% To: vpi-ailist@vtcs1.cs.vt.edu Subject: Status: RO Received: from relay.cs.net by vtcs1.cs.vt.edu; Wed, 4 Mar 87 19:36 EST Received: from relay.cs.net by RELAY.CS.NET id ai19280; 26 Feb 87 13:35 EST Received: from sri-stripe.arpa by RELAY.CS.NET id aa04998; 26 Feb 87 13:32 EST Date: Thu 26 Feb 1987 09:19-PST From: AIList Moderator Kenneth Laws Reply-to: AIList@sri-stripe.arpa US-Mail: SRI Int., 333 Ravenswood Ave., Menlo Park, CA 94025 Phone: (415) 859-6467 Subject: AIList Digest V5 #57 To: AIList@sri-stripe.arpa AIList Digest Thursday, 26 Feb 1987 Volume 5 : Issue 57 Today's Topics: Seminars - Planning Robotic Manipulation Strategies (UPenn) & Reasoning and Planning in Dynamic Domains (CSLI) & Expert Systems in Manufacturing (SU) & Representing Defaults with Epistemic Concepts (SU), Journal Issue - Financial Applications, IEEE Expert, Conference - SUNY Buffalo Comp. Sci. Grad. Student Conference ---------------------------------------------------------------------- Date: Mon, 23 Feb 87 11:22:20 EST From: tim@linc.cis.upenn.edu (Tim Finin) Subject: Seminar - Planning Robotic Manipulation Strategies (UPenn) Computer and Information Science University of Pennsylvania 307 Towne Building 10:30 February 25, 1987 Planning Robotic Manipulation Strategies Michael A.
Peshkin Carnegie Mellon University Automated planning of grasping or manipulation requires an understanding of the physics and the geometry of objects in contact. Sliding figures prominently, but since the pressure distribution between the surfaces in contact is unknown, deterministic solution for the motion is impossible. I have found the locus of motions over all distributions. Strategies based on these results succeed despite unknown pressure distribution. We also desire strategies which succeed despite uncertain initial position of a workpiece. Configuration maps are introduced, mapping all configurations of a part before an elementary operation onto all possible outcomes. Products of configuration maps are used to synthesize complex strategies which succeed for a wide range of initial positions of the workpiece. ------------------------------ Date: Wed 25 Feb 87 17:18:14-PST From: Emma Pease Subject: Seminar - Reasoning and Planning in Dynamic Domains (CSLI) Reading: "Reasoning and Planning in Dynamic Domains: An Experiment with a Mobile Robot" by Michael Georgeff, Amy Lansky, and Marcel Schoppers discussion led by Amy Lansky Ventura Hall, March 5, 12:00 noon Both Georgeff and Lansky will be present to discuss their recent paper on using their Procedural Reasoning System (PRS) to control SRI's robot, Flakey. The PRS architecture has been one of the focuses of RATAG group discussions. This paper describes progress made toward having the mobile robot reason and plan complex tasks in real-world environments. To cope with the dynamic and uncertain world, they use a highly reactive system to which is attributed the attitudes of belief, desire, and intention. Because these attitudes are explicitly represented, they can be manipulated and reasoned about, resulting in complex goal-directed and reflective behaviors. Unlike most planning systems, the plans or intentions formed by the system need only be partly elaborated before it decides to act. 
This allows the system to avoid overly strong expectations about the environment, overly constrained plans of action, and other forms of over-commitment common to previous planners. In addition, the system is continuously reactive and has the ability to change its goals and intentions as situations warrant. Thus, while the system architecture allows for reasoning about means and ends in much the same way as traditional planners, it also possesses the reactivity required for survival in complex, dynamic domains. ------------------------------ Date: Wed 25 Feb 87 15:49:35-PST From: Automation & Manufacturing Subject: Seminar - Expert Systems in Manufacturing (SU) Jane Frederick Friday 27 February G.E. Industrial Automation Systems Terman 556 1:30-3:00pm "Expert Systems in Electronic Manufacturing" Manufacturing appears to be one of the fertile fields for expert system applications. The tasks are bounded and repetitive in nature. There exists a set of experts which regularly perform the tasks. These tasks can be defined in process steps and last but not least, manufacturing is a direct pay point. The payback for quality and productivity improvements can be specifically determined. This last issue is very important and often overlooked, but expert systems development and implementation is an expensive and ongoing process. Therefore, one of the challenges for expert systems in manufacturing is selecting the correct application and the one with the greatest payback. Refreshments will be served. ------------------------------ Date: 23 Feb 87 1047 PST From: Vladimir Lifschitz Subject: Seminar - Representing Defaults with Epistemic Concepts (SU) Commonsense and Nonmonotonic Reasoning Seminar REPRESENTING DEFAULTS WITH EPISTEMIC CONCEPTS Kurt Konolige, SRI International Karen Myers, Stanford Thursday, February 26, 4pm Bldg. 
160, Room 161K Reasoning about defaults --- implications that typically hold, but which may have exceptions --- is an important part of commonsense reasoning. We present some parts of a theory of defaults, concentrating on distinctions between various subtle ways in which defaults can be defeated, and on the adjudication of conflicting defaults under hierarchic inheritance. In order to represent this theory in a formal system, it seems necessary to use the epistemic concept of self-belief. We show how to express the theory by an almost-local translation into autoepistemic logic, which contains the requisite epistemic operators. Just to be controversial, we also argue that circumscription (pointwise, schematic, prioritized, or otherwise) is insufficient for this task. ------------------------------ Date: 24 February 1987, 11:34:26 EST From: "Chidanand V. Apte" Subject: Journal Issue - Financial Applications, IEEE Expert CALL FOR PAPERS --------------- IEEE EXPERT Special Issue - Fall 1987 AI Applications in Financial Expert Systems The Fall 1987 issue of IEEE EXPERT will be devoted to papers that discuss the technical requirements imposed upon AI techniques for building intelligent systems for financial applications and the methodologies employed for the construction of such systems. Requirements for submission of papers ------------------------------------- Authors should submit their papers to the guest editors no later than APRIL 1, 1987. Each submission should include one cover page and five copies of the complete manuscript. The one cover page should include Name(s), affiliation(s), complete address(es), identification of principal author and telephone number. The five copies of the complete manuscript should each include: Title and abstract page: title of paper, 100 word abstract indicating significance of contribution, and The complete text of the paper in English, including illustrations and references, not exceeding 5000 words. 
Topics of interest ------------------ Authors are invited to submit papers describing recent and novel applications of AI techniques in the research and development of financial expert systems. Topics (in the context of the domain) include, but are not limited to: Automated Reasoning, Knowledge Representations, Inference Techniques, Problem Solving Control Mechanisms, Natural Language Front Ends, User Modeling, Explanation Methodologies, Knowledge Base Debugging, Validation, and Maintenance, and System Issues in Development and Deployment. Guest Editors -------------- Chidanand Apte (914-945-1024, Arpa: apte@ibm.com) John Kastner (914-945-3821, Arpa: kastner@ibm.com) IBM Thomas J. Watson Research Center P.O. Box 218 Yorktown Heights, New York 10598 ------------------------------ Date: Tue, 24 Feb 87 09:37:46 EST From: "William J. Rapaport" Subject: Conference - SUNY Buffalo Comp. Sci. Grad. Student Conference STATE UNIVERSITY OF NEW YORK AT BUFFALO DEPARTMENT OF COMPUTER SCIENCE UBGCCS-87 SECOND ANNUAL GRADUATE CONFERENCE ON COMPUTER SCIENCE Topics: Artificial Intelligence--Parallel Program Debugging Visual Knowledge Representation--Hypercube Algorithms--Naive Physics Model-Based Diagnosis--Computer Vision--Natural Language Understanding Tuesday, March 10, 1987 8:00 A.M. - 5:00 P.M. Center For Tomorrow Amherst Campus, SUNY Buffalo Program: Ted F. Pawlicki SUNY Buffalo "The Representation of Visual Knowledge" John M. Mellor-Crummey University of Rochester "Parallel Program Debugging with Partial Orders" Susan J. Wroblewski SUNY Buffalo "Efficient Trouble Shooting in an Industrial Environment" Ching-Huei Wang SUNY Buffalo "ABLS: An Object Recognition System for Locating Address Blocks on Mail Pieces" Diane Horton University of Toronto "Presuppositions as Beliefs: A New Approach" Norman D. 
Wahl SUNY Buffalo "Hypercube Algorithms to Determine Geometric Properties of Digitized Images" Ganapathy Krishnan SUNY Buffalo "Bottom-Up Image Analysis for Color Separation" Bart Selman University of Toronto "Vivid Representations and Analogues" Soteria Svorou SUNY Buffalo "The Semantics of Spatial Extension Terms in Modern Greek" Hing Kai Hung SUNY Buffalo "Semantics of a Recursive Procedure with Parameters and Aliasing" Josh D. Tenenberg University of Rochester "Naive Physics and the Control of Inference" Zhigang Xiang SUNY Buffalo "Multi-Level Model-Based Diagnostic Reasoning" Registration begins at 8 A.M. (free) First talk starts at 8:45 A.M. Optional Buffet Luncheon ($5) For program and registration information, please contact: Lynda Spahr (716) 636-2464 ubg-ccs%buffalo UBGCCS-87 226 Bell Hall SUNY at Buffalo Buffalo, New York 14260 Sponsored by: SUNY Buffalo Computer Science Graduate Student Association SUNY Buffalo Department of Computer Science SUNY Buffalo Graduate Student Association ------------------------------ End of AIList Digest ******************** From in%@vtcs1 Thu Mar 5 17:22:40 1987 Date: Thu, 5 Mar 87 17:22:24 est From: vtcs1::in% To: vpi-ailist@vtcs1.cs.vt.edu Subject: Status: RO Received: from relay.cs.net by vtcs1.cs.vt.edu; Wed, 4 Mar 87 19:58 EST Received: from relay.cs.net by RELAY.CS.NET id aa12013; 1 Mar 87 1:57 EST Received: from sri-stripe.arpa by RELAY.CS.NET id aa02057; 1 Mar 87 1:46 EST Date: Sat 28 Feb 1987 22:18-PST From: AIList Moderator Kenneth Laws Reply-to: AIList@sri-stripe.arpa US-Mail: SRI Int., 333 Ravenswood Ave., Menlo Park, CA 94025 Phone: (415) 859-6467 Subject: AIList Digest V5 #58 To: AIList@sri-stripe.arpa AIList Digest Sunday, 1 Mar 1987 Volume 5 : Issue 58 Today's Topics: Philosophy & AI Methodology - Consciousness ---------------------------------------------------------------------- Date: 23 Feb 87 04:14:52 GMT From: princeton!mind!harnad@rutgers.rutgers.edu (Stevan Harnad) Subject: Evolution of consciousness 
DAVIS%EMBL.BITNET@wiscvm.wisc.edu wrote on mod.ai: > Sure - there is no advantage in a conscious system doing what can > be done unconsciously. BUT, and it's a big but, if the system that > gets to do trick X first *just happens* to be conscious, then all > future systems evolving from that one will also be conscious. I couldn't ask for a stronger concession to methodological epiphenomenalism. > In fact, it may not even be an accident - when you > consider the sort of complexity involved in building a `turing-indistinguishable' automaton, versus the slow, steady progress possible > with an evolving, conscious system, it may very well be that the ONLY > reason for the existence of conscious systems is that they are > *easier* to build within an evolutionary, biochemical context. Now it sounds like you're taking it back. > Hence, we have no real reason to suppose that there is a 'why' to be > answered. You'll have to make up your mind. But as long as anyone proposes a conscious interpretation of a functional "how" story, I must challenge the interpretation by asking a functional "why?", and Occam's razor will be cutting with me, not with my opponent. It is not the existence of consciousness that's at issue (of course it exists) but its functional explanation and the criteria for inferring that it is present in cases other than one's own. -- Stevan Harnad (609) - 921 7771 {allegra, bellcore, seismo, rutgers, packard} !princeton!mind!harnad harnad%mind@princeton.csnet ------------------------------ Date: 24 Feb 87 08:41:00 EST From: "CUGINI, JOHN" Reply-to: "CUGINI, JOHN" Subject: epistemology vs. functional theory of mind > > Me: The Big Question: Is your brain more similar to mine than either > > is to any plausible silicon-based device? > > SH: that's not the big question, at least not mine. Mine is "How does the > mind work?"
To answer that, you need a functional theory of how the > mind works, you need a way of testing whether the theory works, and > you need a way of deciding whether a device implemented according to > the theory has a mind. > Cugini keeps focusing on the usefulness of "presence of `brain'" > as evidence for the possession of a mind. But in the absence of a > functional theory of the brain, its superficial appearance hardly > helps in constructing and testing a functional theory of the mind. > > Another way of putting it is that I'm concerned with a specific > scientific (bioengineering) problem, not an exobiological one ("Does this > alien have a mind?"), nor a sci-fi one ("Does this fictitious robot > have a mind?"), nor a clinical one ("Does this comatose patient or > anencephalic have a mind?"), nor even the informal, daily folk-psychological > one ("Does this thing I'm interacting with have a mind?"). I'm only > concerned with functional theories about how the mind works. How about the epistemological one (philosophical words sound so, so... *dignified*): Are we justified in believing that others have minds/consciousness, and if so, on what rational basis? I thought that was the issue we (you and I) were mostly talking about. (I have the feeling you're switching the issue.) Whether detailed brain knowledge will be terribly relevant to building a functional theory of the mind, I don't know. As you say, it's a question of the level of simulation. My hunch is that the chemistry and low-level structure of the brain are tied very closely to consciousness, simpliciter. I suspect that the ability to see red, etc (good ole C-1) will require neurons. (I take this to be the point of Searle's remark somewhere or other that consciousness without a brain is as likely as lactation without mammary glands). On the other hand, integer addition clearly is implementable without wetware.
But even if a brain isn't necessary for consciousness, it's still good strong evidence for it, as long as one accepts the notion that brains form a "natural kind" (like stars, gold, electrons, light switches). As I'm sure you know, there's a big philosophical problem with natural kinds, struggled with by philosophers from Plato to Goodman. My point was that it's no objection to brain-as-evidence to drag in the natural-kinds problem, because that is not unique to the issue of other minds. And it seems to me that's what you are (were?) guilty of when you challenge the premise that our brains are relevantly similar, the point being that if they are similar, then the my-brain-causes-consciousness-therefore-so-does-yours reasoning goes through. John Cugini ------------------------------ Date: 24 Feb 87 16:28:46 GMT From: clyde!burl!codas!mtune!mtuxo!houxm!houem!marty1@rutgers.rutgers.edu (M.BRILLIANT) Subject: Re: Evolution of consciousness I'm sorry if it's necessary to know the technical terminology of philosophy to participate in discussions of engineering and artifice. I admit my ignorance and proceed to make my point anyway. In article <552@mind.UUCP>, harnad@mind.UUCP (Stevan Harnad) writes (I condense and paraphrase): > DAVIS%EMBL.BITNET@wiscvm.wisc.edu wrote on mod.ai: > > ... if the system that [does] X first [is] conscious, then all > > future systems evolving from that one will also be conscious. > I couldn't ask for a stronger concession to methodological epiphenomenalism. In 25 words or less, what's methodological epiphenomenalism? > > In fact ... [maybe] conscious systems ... are > > *easier* to build within an evolutionary, biochemical context. > Now it sounds like you're taking it back. I think DAVIS is just suggesting an alternative hypothesis. > > Hence, we have no real reason to suppose that there is a 'why' to be > > answered. Then why did DAVIS propose that "easier" is "why"? Let me propose another "why."
Not long ago I suggested that a simple unix(tm) command like "make" could be made to know when it was acting, and when it was merely contemplating action. It would then not only appear to be conscious, but would thereby work more effectively. Let us go further. IBM's infamous PL/I Checkout Compiler has many states, in each of which it can accept only a limited set of commands and will do only a limited set of things. As user, you can ask it what state it's in, and it can even tell you what it can do in that state, though it doesn't know what it could do in other states. But you can ask it what it's doing now, and it will tell you. It answers questions as though it were very stupid, but dimly conscious. Of course, the "actuality" of consciousness is private, in that the question of whether X "is conscious" can be answered only by X. An observer of X can only tell whether X "acts as though it were conscious." If the observer empathizes with X, that is, observes him/her/it-self as the "same type of being" as X, the "appearance" of consciousness becomes evidence of "actuality." I propose that we pay less attention to whether we are the "same type of being" as X and more attention to the (inter)action. If expert systems can be written to tell you an answer, and also tell you how they got the answer, it should not be hard to write a system like the Checkout Compiler, but with a little more knowledge of its own capabilities. That would make it a lot easier for an inexpert user to interact with it. Consider also the infamous "Eliza" as a system that is not conscious. At first it appears to interact much as a psychotherapist would, but you can test it by pulling its leg, and it won't know you're pulling its leg; a therapist would notice and shift to another state. You can also make a therapist speak to you non-professionally by a verbal time-out signal, and then go back to professional mode. 
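[Editor's illustration] Brilliant's description above of a state-aware program — one that knows which state it is in, which commands that state admits, and what it is currently doing — can be sketched as a small finite-state command interpreter. This is a minimal sketch in modern Python, not the actual PL/I Checkout Compiler; the state and command names are invented for the example.

```python
# A command interpreter that, like the compiler described above, has
# distinct states, accepts only a limited command set in each state,
# and can report on itself when asked. Names are hypothetical.

class StatefulInterpreter:
    # Map each state to the commands it accepts and where each leads.
    TRANSITIONS = {
        "editing":  {"check": "checking", "status": "editing"},
        "checking": {"run": "running", "status": "checking"},
        "running":  {"stop": "editing", "status": "running"},
    }

    def __init__(self):
        self.state = "editing"

    def what_state(self):
        # "You can ask it what state it's in..."
        return self.state

    def what_can_i_do(self):
        # "...and it can even tell you what it can do in that state."
        return sorted(self.TRANSITIONS[self.state])

    def do(self, command):
        # Commands outside the current state's repertoire are rejected,
        # mirroring the limited command set per state.
        if command not in self.TRANSITIONS[self.state]:
            raise ValueError(f"cannot '{command}' while {self.state}")
        self.state = self.TRANSITIONS[self.state][command]
        return self.state
```

An inexpert user (or another program) can always recover by asking what_state() and what_can_i_do() — exactly the dimly "conscious-seeming" interaction described above, with no claim about actual consciousness.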
But Eliza has only one functional state, and hence neither need nor capacity for consciousness. Thus, the evolutionary advantage of consciousness in primates (the actuality as well as the appearance) is that it facilitates such social interactions as communication and cooperation. The advantage of building consciousness into computer programs (now I refer to the appearance, since I can't empathize with a computer program) is the same: to facilitate communication and cooperation. I propose that we ignore the philosophy and get on with the engineering. We already know how to build systems that interact as though they were conscious. Even if a criterion could be devised to tell whether X is "actually" conscious, not just "seemingly" conscious, we don't need it to build functionally conscious systems. Marty M. B. Brilliant (201)-949-1858 AT&T-BL HO 3D-520 houem!marty1 ------------------------------ Date: 25 Feb 87 14:32:04 GMT From: princeton!mind!harnad@rutgers.rutgers.edu (Stevan Harnad) Subject: Re: Evolution of consciousness M. B. Brilliant (marty1@houem.UUCP) of AT&T-BL HO 3D-520 asks: > In 25 words or less, what's methodological epiphenomenalism? Your own reply (less a few words) defines it well enough: > I propose that we ignore [the philosophy] and get on with the > engineering. [We already know how] to build systems that interact as > though they were conscious. Even if a criterion could be devised to > tell whether X is "actually" conscious, not just "seemingly" conscious, > we don't need it to build [functionally] conscious systems. Except that we DON'T already know how. This ought to read: "We should get down to trying" to build systems that can pass the Total Turing Test (TTT) -- i.e., are completely performance-indistinguishable from conscious creatures like ourselves. Also, there is (and can be) no other functional criterion than the TTT, so "seemingly" conscious is as close as we will ever get. 
Hence there's nothing gained (and a lot masked and even lost) from focusing on interpreting trivial performance as conscious instead of on strengthening it. What we should ignore is conscious interpretation: That's a good philosophy. And I've dubbed it "methodological epiphenomenalism." > Thus, the evolutionary advantage of consciousness in primates (the > actuality as well as the appearance) is that it facilitates such social > interactions as communication and cooperation. The advantage of > building consciousness into computer programs (now I refer to the > appearance, since I can't empathize with a computer program) is the > same: to facilitate communication and cooperation. This simply does not follow from the foregoing (in fact, it's at odds with it). Not even a hint is given about the FUNCTIONAL advantage (or even the functional role) of either actually being conscious or even of appearing conscious. "Communication-and-cooperation" -- be it ever as "seemingly conscious" as you wish -- does not answer the question about what functional role consciousness plays, it simply presupposes it. Why aren't communication and cooperation accomplished unconsciously? What is the FUNCTIONAL advantage of conscious communication and cooperation? How we feel about one another and about the devices we build is beside the point (except for the informal TTT). It concerns the phenomenological and ontological fact of consciousness, not its functional role, which (if there were any) would be all that was relevant to mind engineering. That's methodological epiphenomenalism. 
-- Stevan Harnad (609) - 921 7771 {allegra, bellcore, seismo, rutgers, packard} !princeton!mind!harnad harnad%mind@princeton.csnet ------------------------------ End of AIList Digest ******************** From in%@vtcs1 Thu Mar 5 17:22:23 1987 Date: Thu, 5 Mar 87 17:22:06 est From: vtcs1::in% To: vpi-ailist@vtcs1.cs.vt.edu Subject: Status: RO Received: from relay.cs.net by vtcs1.cs.vt.edu; Wed, 4 Mar 87 19:55 EST Received: from relay.cs.net by RELAY.CS.NET id ac11929; 1 Mar 87 1:51 EST Received: from sri-stripe.arpa by RELAY.CS.NET id aa02080; 1 Mar 87 1:51 EST Date: Sat 28 Feb 1987 22:32-PST From: AIList Moderator Kenneth Laws Reply-to: AIList@sri-stripe.arpa US-Mail: SRI Int., 333 Ravenswood Ave., Menlo Park, CA 94025 Phone: (415) 859-6467 Subject: AIList Digest V5 #59 To: AIList@sri-stripe.arpa AIList Digest Sunday, 1 Mar 1987 Volume 5 : Issue 59 Today's Topics: Policy - Hardware vs. AI, Queries - AI in Network Protocols & Best LISPM/WorkStation & Legal Reasoning and AI & Completeness and Consistency of Rule Bases, Application - Network Complexity ---------------------------------------------------------------------- Date: Wed, 25 Feb 87 10:41:16 -0800 From: Amnon Meyers Subject: Hardware vs. AI I would like to suggest that the AILIST be more open to hardware topics that are related to WORKING in Artificial Intelligence, for several reasons: This would be a valuable service to people working in AI who are having hardware problems. They can draw upon the solutions of others who have solved the same problems, and upon knowledge of AI facilities with hardware experts. Even though there are bulletin boards for specific hardware, it would be useful to organize the AI community's hardware and environment problems as a single list. If this is too much for AILIST then perhaps an AI-HARDWARE list is called for. I disagree with the notion that hardware problems have 'nothing to do with AI'.
While discussions of LISP and PROLOG dialects are interesting, they appear to me to have no more relevance to 'AI' than do hardware issues. Likewise discussion of the operation and environment provided by LISP machines and other workstations. Likewise philosophical discussions of the mind. My point is that it is not useful to try to define AI too narrowly. There is a theory and practice of AI, and AILIST seems to stress the theory. It would be nice if the 'practice' were taken up somewhere as well. I can certainly understand that the AILIST is already overburdened, and that the moderator already does too much work (and a fine job as well). THOSE should be the reasons for excluding hardware issues, not arbitration about what is and is not relevant to AI. Amnon Meyers meyers@ics.uci.edu P.S. I often wonder where people are writing from, so... Irvine Computational Intelligence Project ICS Department University of California Irvine, California 92717 (714) 856-4840 ------------------------------ Date: 28 Feb 87 08:10:06 GMT From: ramarao@umn-cs.UUCP Subject: AI in Network Protocols. topic : EXPERT SYSTEMS OR AI IN NETWORKS AND PROTOCOLS I am trying to find out if there has been any attempt at applying AI techniques, AI languages to the field of network protocols. Can anyone give me some references. I would like to know why it is not feasible to implement network protocols, etc. in a non-procedure based approach. I am trying to find out if it is feasible to design network protocols in LISP or Prolog or any AI languages. -Bindu Rama Rao (ramarao@umn-cs.arpa) (612)-625-9637 **** Keep smilin (-: ------------------------------ Date: Thu, 26 Feb 87 14:27:51 PST From: TAYLOR%PLU@ames-io.ARPA Subject: Best LISPM/WorkStation? 
NOTE: This posting is being sent to the AILIST, SLUG, TI, XEROX, SUN and WorkStation bulletin boards Here at the AI Research & Applications Branch - NASA Ames Research Center, we are planning to buy several Lisp or possibly non-Lisp workstations in the near future and want to look at alternatives to Symbolics, of which we have 7 + a 3600 file server at the present time. Possible alternatives are (in no particular order): Explorer Xerox 1186 Sun Vax station LMI Apollo Several things that concern us are: Are we maximizing productivity and minimizing cost in our current environment ? How can we accomplish these goals in the future ? Is our current environment of Lisp Machine workstations going to continue to offer us the best development environment ? General purpose workstations offering Lisp, Prolog, Pascal, FORTRAN, C, etc, are coming on strong. We will be supporting outside users who have non-Symbolics equipment; what is the most portable development/delivery environment that we could have, consistent with our software requirements ? (see below) If we move to a non-Symbolics environment, what environment will minimize the portability costs ? Our software requirements are object oriented Lisp, Prolog, two-way calling interface between Lisp & Prolog, rich window system/graphics (monochrome and color) facilities and a productive development environment. We would appreciate any comments, experiences and recommendations of people who have used two or more of the above Lispms/workstations. We are familiar with two Lispm comparisons which have appeared on bboards: Dandelion vs Symbolics, 17 Sep 86, steve@siemens.UUCP Explorer vs Symbolics, 23 Oct 86, miller@ur-acorn.ARPA In order to liven up this discussion, we thought the repetition of some previous bboard claims about Lispm/workstation capabilities would elicit honest, deeply-held opinions ! Here goes: 1. The Symbolics window debugger is unmatched anywhere. 2.
Symbolics' on-line documentation is much better than TI's BUT TI's suggestion system is much better than Symbolics'. 3. Symbolics' networking is much better than TI and better in general. 4. With Symbolics GC, must boot ea. 14 days. With TI GC (no ephemeral exists) must boot ea. 0.5 day 5. Symbolics and TI are so similar that it is easy to carry skills back and forth. 6. Xerox's window system is easy to use but less powerful than Symbolics. 7. Xerox's GC is really a 'reference counter' and therefore CAN'T reclaim circular lists. Other than that, however, Xerox's GC is much better than Symbolics. 8. VAX's GC takes 6 sec (with 9 meg) while Symbolics' takes 1 hr. 9. VAX must have >5 Meg to be useful. 10. VAX's LISP Language Sensitive Editor is about as useful as EMACS. 11. A SUN without disks is useless. Furthermore, here are a few issues to flame on - - hardware - failure rates, ease of fault analysis - window systems - networking - namespace - garbage collection - Initial ease of use / overall user interface. - Power for highly trained user - editors - online documentation - completeness, clarity - performance metering - debugging tools - maximum paging space - speed To try to keep this discussion in one central place and since I do not subscribe to all the bboards to which this is being posted, I would suggest (subject to Ken Laws veto) that all responses be posted to the AIList (AIList@sri-stripe.ARPA). However e-mail to me if you have any problems with that proposal. -------------------------------------------------------------------------- Will Taylor - Sterling Software, MS 244-17, NASA-Ames Research Center, Moffett Field, CA 94035 arpanet: taylor@ames-pluto.ARPA usenet: ..!ames!plu.decnet!taylor phone : (415)694-6525 ------------------------------ Date: 26 Feb 87 17:28:16 GMT From: Jim Stewart Subject: Re: Legal Reasoning and AI I came across an announcement regarding A.I. applications for legal reasoning and a conference in May.
This interests me as I am one of the few who happen to be a law student and a software engineer/technical writer. (Engineer/writer by day, student by night). I have a strong interest in legal research and A.I. Here at Lewis & Clark's Northwestern School of Law (Portland, Oregon USA) a small group of students is forming with the support of the administration to continue research in the areas of computer applications for legal research and reasoning. Unfortunately, notwithstanding our proximity to the "Silicon Forest" here in Oregon, we are somewhat disconnected from the mainstream activities in this area. I am interested in learning who else is out there with net access, and happens to be a law student as well as a technical professional. Is there a sub news-group of A.I., or is this news-group appropriate for such an exchange? Thanks Gregory Miller Technical Staff Computervision Electronics CAE Development Center (cvedc) P.O. Box 959 Hillsboro, Oregon 97123 (503) 645-2410 Northwestern School of Law @ Lewis & Clark College ------------------------------ Date: Wed, 25 Feb 87 16:59 N From: MFMISTAL%HMARL5.BITNET@wiscvm.wisc.edu Subject: Completeness and consistency of rule bases I'm interested in computer (assisted) completeness and consistency checking of rule bases. Is there someone on the net who could provide me with some references to the literature on these subjects? References on both theoretical and practical issues are welcome. Please send them to me directly; I will compile a complete list for posting on the net. Jan L.
Talmon Department of Medical Informatics and Statistics University of Limburg PO Box 616 6200 MD Maastricht The Netherlands EARN/BITNET: MFMISTAL@HMARL5 ------------------------------ Date: 25 Feb 87 03:56:45 GMT From: belmonte@svax.cs.cornell.edu (Matthew Belmonte) Subject: Re: Network Complexity In article <292@ihnp3.UUCP> mth@ihnp3.UUCP (Mark Horbal) writes: > If we define the "complexity" of a computer network as a > measure of difficulty in observing, understanding, and > exercising a modicum of control over it, how is this > "complexity" estimated? > > If we further choose a simple but intuitive way of representing > a computer network by a graph, how do we quantify this "complexity" > with respect to the graph's topology? I believe there might be another area which is relevant to the problem you mention in the second statement above, but not the first. A year ago I was doing an internship at NRL implementing a transition-network parser for some context-free grammars which mimicked *small* subsets of English. The question occurred to me, "how does one quantify the complexity of the transition networks we generate?" (By "complexity" here I mean topics such as: Do we have a lot of long paths consisting of nonterminals which will result in many failed parses? Do we have many null transitions that we can't squeeze out by munging adjacent states together? etc.) The answer I got was, well, we don't really know of any method of completely characterising such complexities. Is this the same sort of problem as mentioned above, or am I completely off-base? Disclaimer: Yes, I know I'm extraordinarily weak on theory, but I'm a lowly, simple-minded freshman, so I have an excuse. -- "When you've got them by the balls, their hearts and minds will follow."
-- a member of the Nixon administration Matthew Belmonte Internet: BITNET: UUCP: ..!decvax!duke!duknbsr!mkb ------------------------------ End of AIList Digest ******************** From in%@vtcs1 Thu Mar 5 17:23:10 1987 Date: Thu, 5 Mar 87 17:22:47 est From: vtcs1::in% To: vpi-ailist@vtcs1.cs.vt.edu Subject: Status: RO Received: from relay.cs.net by vtcs1.cs.vt.edu; Wed, 4 Mar 87 20:00 EST Received: from relay.cs.net by RELAY.CS.NET id ah12024; 1 Mar 87 2:03 EST Received: from sri-stripe.arpa by RELAY.CS.NET id aa02165; 1 Mar 87 2:01 EST Date: Sat 28 Feb 1987 22:39-PST From: AIList Moderator Kenneth Laws Reply-to: AIList@sri-stripe.arpa US-Mail: SRI Int., 333 Ravenswood Ave., Menlo Park, CA 94025 Phone: (415) 859-6467 Subject: AIList Digest V5 #60 To: AIList@sri-stripe.arpa AIList Digest Sunday, 1 Mar 1987 Volume 5 : Issue 60 Today's Topics: Bibliography - Leff ai.bib44C ---------------------------------------------------------------------- Date: Sat, 28 Feb 1987 13:19 CST From: Leff (Southern Methodist University) Subject: ai.bib44C %A S. V. Shil'man %T Adaptive-Optimal Filtering in Random Processes %J MAG95 %P 249-261 %K O06 AI06 %A O. Yu. Pershin %T A Class of Extremal Combinatorial Problems for Multicomponent Network Design %J MAG95 %P 262-269 %K AI03 O06 %A V. I. Borzenko %T Extrapolation of a System of Classifications %J MAG95 %P 270-275 %K O06 %A V. V. Mottl %A I. B. Muchnik %T Algorithm for Recognition of a Stream of Random Events %J MAG95 %P 276-278 %K AI06 O06 %A Dominique Perrin %A Jean-Eric Pin %T First-Order Logic and Star-Free Sets %J Journal of Computer and System Sciences %V 32 %N 3 %D JUN 1986 %P 393-406 %K AI11 %A J. A. Bergstra %A J. W. Klop %T Conditional Rewrite Rules: Confluence and Termination %J Journal of Computer and System Sciences %V 32 %N 3 %D JUN 1986 %P 323-362 %K AI14 %A M. S. Esparz %T High-Priced Lisp Hardware Obsolete in Near Future, Says Study %J InfoSystems %V 33 %N 9 %D SEP 1986 %P 16 %K H02 %A D. H.
Freedman %T Expert Systems Moving from Glamour Technology to Workhorse %J InfoSystems %V 33 %N 9 %D SEP 1986 %P 14-15 %K AI01 %A Ewa Orlowska %T Semantic Analysis of Inductive Reasoning %J Theoret. Comput. Sci. %V 43 %D 1986 %N 1 %P 81-89 %K AI16 %A Francoise Bellegarde %T Convergent Term Rewriting Systems can be Used for Program Transformation %B Programs as Data Objects %P 24-41 %S Lecture Notes in Computer Science %V 217 %I Springer-Verlag %C Berlin-New York %D 1986 %K AA08 AI14 %A Adolfo Lagomasino %A Andrew P. Sage %T Imprecise Knowledge Representation in Inferential Activities %B BOOK57 %P 473-497 %K O04 %A Murray Eden %A Michael Unser %A Riccardo Leonardi %T Polynomial Representation of Pictures %J Signal Process. %V 10 %D 1986 %N 4 %P 385-393 %K AI06 %A Christian Ronse %T Definitions of Convexity and Convex Hulls in Digital Images %J Bull. Soc. Math. Belg. Ser. B %V 37 %D 1986 %N 2 %P 71-85 %K AI06 %A David C. Rine %T Some Applications of Multiple-Valued Logic and Fuzzy Logic to Expert Systems %B BOOK57 %P 407-434 %K O04 AI01 %A Hung T. Nguyen %A Irwin R. Goodman %T On Foundations of Approximate Reasoning %B BOOK57 %P 47-59 %K AI16 O04 AI01 %A S. T. Wierzchon %T Mathematical Tools for Knowledge Representation %B BOOK57 %P 61-69 %K AI16 O04 AI01 %A A. Lananer %T Associative Processing in Brain Theory and Artificial Intelligence %B Brain Theory %P 193-210 %I Springer-Verlag %C Berlin-New York %D 1986 %K AI08 AI16 %A Lawrence R. Rabiner %A Jay G. Wilpon %A Biing-Hwang Juang %T A Segmental k-Means Training Procedure for Connected Word Recognition %J AT&T Technical Journal %V 65 %N 3 %D MAY-JUN 1986 %P 21-40 %K AI04 AI05 %A A. A. Natan %A A. I. Samylovskiy %T Recognition of Gaussian Random Processes by Local Analysis of Their Properties %J MAG95 %P 128-135 %K AI06 %A M. V. Fomina %T Methods for Successive Construction of a Hierarchical Representation of the States of a Complex Object %J MAG95 %P 136-145 %K AI16 %A L. B.
Groysberg %T Planning of Component Tests for Confirmation of System Reliability %J MAG95 %P 146-153 %K AI16 AA05 %A E. Vidal Ruiz %T An Algorithm for Finding Nearest Neighbors in (Approximately) Constant Average Time %J MAG96 %P 145-158 %K AI06 O06 %A S. K. Pal %A P. K. Pramanik %T Fuzzy Measures in Determining Seed Points in Clustering %J MAG96 %P 159-164 %K O06 %A G. T. Toussaint %T Interactive Curve Drawing by Segmented Bezier Approximation with a Control Parameter %J MAG96 %P 171-176 %K O06 %A A. Rosenfeld %T Continuous Functions on Digital Pictures %J MAG96 %P 177-184 %K AI06 %A E. R. Davies %T Image Space Transforms for Detecting Straight Edges in Industrial Images %J MAG96 %P 185-192 %K AI06 %A S. K. Parui %A S. Eswara Sarma %A D. Dutta Majumder %T How to Discriminate Shapes Using Shape Vector %J MAG96 %P 201-204 %K AI06 %A A. Schening %A H. Niemann %T Computing Depth from Stereo Images by Using Optical Flow %J MAG96 %P 205-212 %K AI06 %A T. H. Phillips %A A. Rosenfeld %T A Simplified Method of Detecting Structure in Glass Patterns %J MAG96 %P 213 %K AI06 %A J. K. Mattila %T On Some Logical Points of Fuzzy Conditional Decision Making %J MAG97 %P 137-146 %K O04 %A K. Nakamura %T Preference Relations on a Set of Fuzzy Utilities as a Basis for Decision Making %J MAG97 %P 147-162 %K AI13 O04 %A M. R. Casals %A M. A. Gil %A P. Gil %T On the Use of Zadeh's Probabilistic Definition for Testing Statistical Hypothesis from Fuzzy Information %J MAG97 %P 175-190 %K O04 %A P. Dallant %A A. Meunier %A P. S. Christel %A L. Sedel %T Semi-automatic Image-Analysis Applied to the Quantification of Bone Microstructure %J Journal of Biomedical Engineering %V 8 %N 4 %D OCT 1986 %P 320-328 %K AA01 AI06 %A C. E. Riese %A S. M. Zubrick %T Using Rule Induction to Combine Declarative and Procedural Knowledge Representations %J MAG94 %P 603-606 %K AI16 AI04 %A D. S. Prerau %A A. S. Gunderson %A R. E. Reinke %A S. K.
Goyal %T The COMPASS Expert System: Verification, Technology Transfer, and Expansion %J MAG94 %P 597-602 %K AI01 %A B. Pinkowski %T A Lisp-based System for Generating Diagnostic Keys %J MAG94 %P 592-596 %K T01 %A S. R. Mukherjee %A M. Sloan %T Positional Representation of English Words %J MAG94 %P 587-591 %K AI02 %A J. H. Martin %T Knowledge Acquisition Through Natural Language Dialogue %J MAG94 %P 582-586 %K AI02 %A D. M. Mark %T Finding Simple Routes: 'Ease of Description' as an Objective Function in Automated Route Selection %J MAG94 %P 577-581 %A S. Mahalingam %A D. D. Sharma %T WELDEX - An Expert System for Nondestructive Testing of Welds %J MAG94 %P 572-576 %K AA05 AI01 %A J. Liebowitz %T Evaluation of Expert Systems: An Approach and Case Study %J MAG94 %P 564-571 %K AI01 %A S. J. Laskowski %A H. J. Antonisse %A R. P. Bonasso %T Analyst II: A Knowledge-Based Intelligence Support System %J MAG94 %P 558-563 %K AA18 %A D. A. Krawczak %A P. J. Smith %A S. J. Shute %A M. Chignell %T EP-X: A Knowledge-Based System to Aid in Search of the Environmental Pollution Literature %J MAG94 %P 552-557 %K AA14 AI01 AA10 %A E. Y. Kandrashina %A O. N. Ochakovskaja %A Y. A. Zagorulko %T Time-1: Semantic System for Dynamic Object Domain %J MAG94 %P 548-551 %K AI16 %A C. I. Kalme %T A General Purpose Language for Coupled Expert Systems %J MAG94 %P 539-547 %K T03 H03 AI01 %A J. R. James %A P. P. Bonissone %A D. K. Frederick %A J. H. Taylor %T A Retrospective View of CACE-III: Considerations in Coordinating Symbolic and Numeric Computation in a Rule-Based Expert System %J MAG94 %P 532-538 %K T03 AI14 AI01 %A R. T. Hartley %T Representation of Procedural Knowledge for Expert Systems %J MAG94 %P 526-531 %K AI16 AI01 %A J. J. Hannan %A P. Politakis %T ESSA: An Approach to Acquiring Decision Rules for Diagnostic Expert Systems %J MAG94 %P 520-525 %K AA21 %A K. Hammer %A J. Hardin %A D. Rudisill %A A. 
Goldfein %T Using a Predictive Parse to Create a Modeless Editor %J MAG94 %P 514-519 %K AA15 %A R. L. Constable %T Implementing Mathematics with the Nuprl Proof Development System %I Prentice-Hall %C Englewood Cliffs, NJ %D 1986 %K AI11 AA13 %X 299 pages $21.95 %A L. O. Hall %A W. Bandler %T Relational Knowledge Acquisition %J MAG94 %P 509-513 %K AI16 %A W. D. Hagament %A M. Gardy %T MEDCAT/CATS: Two Contrasting Artificial Intelligence Applications in Medical Education %J MAG94 %P 503-508 %K AA07 AA01 %A J. F. Gilmore %A K. Pulaski %T A Survey of Expert System Tools %J MAG94 %P 498-502 %K T03 %A A. Garcia-Ortiz %T Computer Algebra Applied to the Design of Optical Sensor Platforms %J MAG94 %P 493-497 %K AI14 AA16 %A B. R. Fox %A K. G. Kempf %T Complexity, Uncertainty, and Opportunistic Scheduling %J MAG94 %P 487-492 %K O04 AA05 AI16 O06 AI03 %A M. E. Cohen %A D. L. Hudson %A N. Gitlin %A L. T. Mann %A J. Van den Bogaerde %A L. Leal %T Knowledge Representation and Classification of Chromatographic Data for Diagnostic Medical Decision Making %J MAG94 %P 481-486 %K AA02 AA01 %A F. Brundick %A J. Dumer %A T. Hanratty %A P. Tanenbaum %T GENIE: An Inference Engine with Diverse Applications %J MAG94 %P 473-480 %K T03 %A H. Winter %T Artificial Intelligence in Man-Machine Systems %B BOOK59 %P 1-22 %K AA15 %A J. Mylopoulos %A A. Borgida %A S. Greenspan %A C. Meghini %A B. Nixon %T Knowledge Representation in the Software Development Process - A Case Study %B BOOK59 %P 23-44 %K AA08 %A B. Radig %T Design and Applications of Expert Systems %B BOOK59 %P 45-61 %K AA08 %A W. Wahlster %T The Role of Natural Language in Advanced Knowledge Based Systems %B BOOK59 %P 62-83 %K AI02 AA15 %A G. Fischer %T Cognitive Science - Information Processing in Humans and Computers %B BOOK59 %P 84-111 %K AI08 %A A. Meystel %T Knowledge-Based Controller for Intelligent Mobile Robots %B BOOK59 %P 112-140 %K AI07 AA19 %A S. E. Cross %A R. B. Bahnij %A D. O.
Norman %T Knowledge-Based Pilot Aids - A Case Study in Mission Planning %B BOOK59 %P 141-174 %K AA19 %A U. Volckers %T Dynamic Planning and Time-Conflict Resolution in Air Traffic Control %B BOOK59 %P 175-197 %K AI09 O03 %A L. A. Zadeh %T Outline of a Computational Approach to Meaning and Knowledge Representation Based on the Concept of a Generalized Assignment Statement %B BOOK59 %P 198 %K O04 AI16 %A J. K. Kastner %T Continuous Real-Time Expert System for Computer Operations %J Data Processing %V 28 %N 8 %D OCT 1986 %P 411-425 %K O03 AA08 AI01 %A Keith Clark %A Steve Gregory %T PARLOG: Parallel Programming in Logic %J ACM Transactions on Programming Languages and Systems %V 8 %N 1 %D JAN 1986 %P 1-49 %K AI10 H03 %A G. I. Janbykh %T Optimization of the Structure of Computer Networks Using Branch and Bound %J Avtomatika i Vychislitel'naya Tekhnika %N 5 %D SEP-OCT 1986 %P 3-13 %K AA08 AI03 %A W. Rauch-Hindin %T Software Integrates AI, Standard Systems %J Mini-Micro Systems %V 19 %N 12 %D OCT 1986 %P 69-86 %A Dragan Kolar %A Vojislav Stojkovic %T The Implementation of CF Grammars by PROLOG Language %J Univ. u Novom Sadu Zb. Rad. Prirod. Mat. Fak. Ser. Mat. %V 15 %N 1 %P 245-252 %K T02 %A Krzysztof R. Apt %A Dexter C. Kozen %T Limits for Automatic Verification of Finite-State Concurrent Systems %J Inform. Process. Lett. %V 22 %D 1986 %N 6 %P 307-309 %K AA08 %A Robert L. Constable %T Constructive Mathematics as a Programming Logic I. Some Principles of Theory %B BOOK60 %P 21-37 %K AI10 AA13 %A H. Langmaack %T A New Transformational Approach to Partial Correctness Proof Calculi for ALGOL 68-Like Programs with Finite Modes and Simple Side Effects %B BOOK60 %P 73-102 %D 1985 %A Philippe Devienne %A Patrick Lebegue %T Weighted Graphs: A Tool for Logic Programming %B BOOK61 %P 100-111 %K AI10 %A James S.
Royer %T Inductive Inferences of Approximations %J Information and Control %V 70 %N 2-3 %D AUG-SEP 1986 %P 156-178 %K AI03 %A Sergiu Hart %A Micha Sharir %T Probabilistic Propositional Temporal Logics %J Information and Control %V 70 %N 2-3 %D AUG-SEP 1986 %K AI10 AI16 O04 AI11 %A K. Yalumov %T KET: A Knowledge Engineering Tool %J Computers in Industry %V 7 %N 5 %D OCT 1986 %P 417-426 %K T03 %A S. F. Bocklisch %T A Diagnosis System Based on Fuzzy Classification %J Computers in Industry %V 7 %N 1 %D FEB 1986 %P 73-82 %K AI01 O04 %A Justin R. Smith %T Parallel Algorithms for Depth-first Searches I. Planar Graphs %J SIAM J. Comput. %V 15 %D 1986 %N 3 %P 814-830 %K AI03 O06 %A N. N. Nepievoda %T Deductions in the Form of Graphs %J Semiotics and Information Science %N 26 %P 52-82 %D 1985 %X Akad. Nauk SSSR, Vsesoyuz. Inst. Nauchn. i Tekhn. Inform., Moscow (in Russian) %A Yu. I. Petunin %A G. A. Shuldeshov %T Calculation of a Plane Figure from its Discretized Image %J Kibernetika (Kiev) %D 1986 %N 2 %P 1-7 %K AI06 %X Russian with English Summary %A Shuro Nagata %A Takeshi Oshiba %A Sakae Funahashi %T An Implementation of a Validity Checking Program by Using N-set Partitions %J Bull. Nagoya Inst. Tech. %V 37 %D 1985 %P 111-116 %K AA08 %X Japanese with English Summary %A Alex Pelin %A Jean H. Gallier %T Exact Computation Sequences %B BOOK61 %P 45-59 %A Henri Prade %T Corrections to: "A Simple Inference Technique for Dealing with Uncertain Facts in Terms of Possibility" (Kybernetes 15 (1986), no. 1, 19-24) %J Kybernetes %V 15 %N 3 %P 214 %K O04 AT13 %A Ronald R. Yager %T A Note on Projections of Conditional Possibility Distributions in Approximate Reasoning %J Kybernetes %V 15 %N 3 %P 185-187 %K O04 %A R. I. Podlovchenko %T Investigation of s-models of programs from the standpoint of constructing canonization algorithms for them.
%J Programmirovanie %D 1986 %N 2 %P 3-13 %K AA08 %X Russian %A Manfred Broy %A Bernhard Moller %A Peter Pepper %A Martin Wirsing %T Algebraic Implementations Preserve Program Correctness %J Sci Comput. Programming %V 7 %D 1986 %N 1 %P 35-53 %K AA08 %A Yu Qi Guo %A Lian Li %A Gang Wu Xu %T On the Disjunctive Structure of Dense Languages %J Sci. Sinica Ser. A %V 28 %D 1985 %N 12 %P 1233-1238 %A Thomas A. Joseph %A Thomas Rauchle %A Sam Toueg %T State Machines and Assertions: An Integrated Approach to Modeling and Verification of Distributed Systems %J Sci. Comput. Programming %V 7 %D 1986 %N 1 %P 1-22 %A Takeshi Shinohara %T Inductive Inference of Formal Systems From Positive Data %J Bull Inform. Cybernet. %V 22 %D 1986 %N 1-2 %P 9-18 %K AI04 %A Moshe Y. Vardi %T Automata-Theoretic Techniques for Modal Logics of Programs %J J. Comput. System Sci. %V 32 %D 1986 %N 2 %P 183-221 %A John N. Martin %T Some Formal Properties of Indirect Semantics %J Theoret. Linguist %V 12 %D 1985 %N 1 %P 1-32 %K AI02 AI16 %A Makoto Haraguchi %T Analogical Reasoning Using Transformation of Rules %J Bull. Inform. Cybernet. %V 22 %D 1986 %N 1-2 %P 1-8 %K AI16 %A Takahashi Yokomori %T Representation Theorems and Primitive Predicates for Logic Programs %J Bull. Inform. Cybernet. %V 22 %D 1986 %N 1-2 %P 19-37 %K AI11 %A Matthias Baaz %A Alexander Leitsch %T The Application of Strong Reduction Rules in Automatic Proofs %J Osterreich Akad. Wiss. Math.-Natur. KL Sitzungsber. II %V 194 %D 1985 %N 4-10 %P 287-307 %K AA08 %A Michael Leyton %T A Theory of Information I. General Principles %J J. Math. Psych. %V 30 %D 1986 %N 2 %P 103-160 %K AI16 AI08 %A J. J. Harvey %T Expert Systems: An Introduction %J MAG98 %P 100-108 %K AI01 AT08 %A J. J. Harvey %T ESSAI Expert Systems Toolkit %J MAG98 %P 109-114 %K AI01 T03 %A M. A. Newstead %A R. Pettipher %T Knowledge Acquisition for Expert Systems %J MAG98 %P 115-121 %K AI01 %A G. Jones %A R. Nuttall %A K. 
Stone %T Integrating Multiple Control Schemes %J MAG98 %P 122-127 %K AI01 %A R. Gunhold %A J. Zettel %T System 12 In-Factory Testing %J MAG98 %P 128-134 %K AA04 AI01 %A H. Schelfhout %T Customer Application Engineering for System 12 Hardware %J MAG98 %P 135-140 %K AA04 AI01 %A N. Theuretzbacher %T Expert System Technology for Safety-Critical Real-Time Systems %J MAG98 %P 147-153 %K AI01 O03 %A M. Thandasseri %T Expert Systems Application for TXE4A Exchanges %J MAG98 %P 154-161 %K AI01 AA04 %A P. Benson %T Artificial Intelligence Assisted Packet Radio Connectivity %J MAG98 %P 162-167 %K AI01 AA04 %A E. Gaudry %T Electronic Warfare Application for Expert Systems %J MAG98 %P 168-173 %K AA18 %A M. E. Atwood %A E. R. Radlinski %T Diagnostic System Architecture %J MAG98 %P 174-179 %K AA21 %A M. E. Atwood %A R. Brooks %A E. R. Radlinski %T Causal Models: The Next Generation of Expert Systems %J MAG98 %P 180-184 %K AI01 AI16 %A D. Neiman %T Technological Considerations for Industrial Expert Systems Applications %J MAG98 %P 185 %K AI01 %A M. Chester %T The Military Reconnoiters Neural Systems %J Electronics Product Magazine %V 29 %N 10 %D OCT 15 1986 %P 78-82 %K AI12 AA18 %A Dennis de Champeaux %T Subproblem Finder and Instance Checker, Two Cooperating Modules for Theorem Provers %J JACM %V 33 %N 4 %D OCT 1986 %P 633-657 %K AI11 %A W. Eric L. Grimson %T The Combinatorics of Local Constraints in Model-Based Recognition and Localization from Sparse Data %J JACM %V 33 %N 4 %D OCT 1986 %P 658-686 %K AI06 %A B. Ramamurthi %A A. Gersho %T Classified Vector Quantization of Images %J IEEE Transactions on Communications %V 34 %N 11 %D NOV 1986 %P 1105-1115 %K AI06 %A R. Buhr %T Front-face Analysis and Classification %J ntzArchiv %V 8 %N 10 %D OCT 1986 %P 245-256 %K AI06 %A E. M. Clarke %A E. A. Emerson %A A. P.
Sistla %T Automatic Verification of Finite-State Concurrent Systems Using Temporal Logic Specifications %J ACM Transactions on Programming Languages and Systems %V 8 %N 2 %D APR 1986 %P 244-265 %K AA08 %A R. Narasimhan %T Artificial Intelligence in 5th-Generation Computers %J MAG99 %P 71-84 %K AI16 %A P. V. S. Rao %A K. K. Paliwal %T Automatic Speech Recognition %J MAG99 %P 85-120 %K AI05 %A D. D. Majumder %T Pattern Recognition, Image Processing and Computer Vision in 5th Generation Computer Systems %J MAG99 %P 139 %K AI06 %A A. Victor Cabot %A S. Selcuk Erenguc %T A Branch and Bound Algorithm for Solving a Class of Nonlinear Integer Programming Problems %J Naval Research Logistics Quarterly %P 559-568 %K AI03 %A Terry Winograd %A Fernando Flores %T Understanding Computers and Cognition %I Ablex Publishing Corporation %C Norwood, NJ %D 1986 %K AT15 AI08 AI16 %X 224 pages ISBN 0-89391-050-3 $24.95 %A Tsuyoshi Yamamoto %T An Application of List Processing Artificial Intelligence to Computer Graphics and CAD %J Pixel %N 40 %P 80-85 %D 1986 %K AA04 UNIX graphics T01 %A R. Hausser %T NewCAT: Parsing Natural Language Using Left-Associative Grammar %S Lecture Notes in Computer Science %I Springer-Verlag %V 231 %D 1986 %K AT15 AI02 %X 540 pages Figures, $34.80 ISBN 3-540-16781-1 %A T. Samad %T Natural Language Interface for Computer-Aided Design %S Kluwer International Series in Engineering and Computer Science %V 14 %D 1986 %I Kluwer Academic Publishers %X 188 pages, $38.95, ISBN 0-89838-222-X %A P. E. Utgoff %T Machine Learning of Inductive Bias %S Kluwer International Series in Engineering and Computer Science %V 15 %D 1986 %I Kluwer Academic Publishers %X 165 pages, $37.50, ISBN 0-89838-223-8 %A S. P. Dutta %A R. S. Lashkari %A G. Nadoli %A T.
Ravi %T A Heuristic Procedure for Determining Manufacturing Families from Design-Based Grouping for Flexible Manufacturing Systems %J Computers and Industrial Engineering %V 10 %N 3 %D 1986 %P 193-202 %K AA26 %A Efraim Turban %T Expert Systems - Another Frontier for Industrial Engineering %J Computers and Industrial Engineering %V 10 %N 3 %D 1986 %P 227-236 %K AI01 %A Michael M. Skolnick %T Application of Morphological Transformation to the Analysis of Two-Dimensional Electrophoretic Gels of Biological Materials %J MAG100 %P 306-332 %K AA10 AI06 %A Stanley R. Sternberg %T Grayscale Morphology %J MAG100 %P 333-354 %K AI06 %A Fernand Meyer %T Automatic Screening of Cytological Specimens %J MAG100 %P 356-369 %K AA10 AI06 %A Xinhua Zhuang %A Robert M. Haralick %T Morphological Structuring Element Decomposition %J MAG100 %P 370-382 %K AI06 %A Leonardo C. Topa %A Robert J. Schalkoff %T An Analytical Approach to the Determination of Planar Surface Orientation Using Active-Passive Image Pairs %J MAG100 %P 404 %K AI06 %A Akira Shiozaki %T Edge Extraction Using Entropy Operator %J MAG101 %P 1-9 %K AI06 %A Son Pham %T Digital Straight Segments %J MAG101 %P 10-30 %K AI06 %A Hussein A. H. Ibrahim %A John R. Kender %A David Elliot Shaw %T On the Application of Massively Parallel SIMD Tree Machines to Certain Intermediate-Level Vision Tasks %J MAG101 %P 42-52 %K H03 AI06 %A Marijke F. Augusteijn %A Charles R. Dyer %T Recognition and Recovery of the Three-Dimensional Planar Point Patterns %J MAG101 %P 76-99 %K AI06 %A John Tyler %T Speech Recognition System Using Walsh Analysis and Dynamic Programming %J Microprocessors and Microsystems %V 10 %N 8 %D OCT 1986 %P 427-433 %K AI05 H01 %A N. Rushby %T A Knowledge-Engineering Approach to Instructional Design %J MAG102 %P 385-389 %K AA07 %A H. Barringer %A I. Mearns %T A Proof System for ADA Tasks %J MAG102 %P 404-415 %K AA08 AI11 %A J. M. Hoc %T Review of Introduction to Expert Systems by M. Gondran %J MAG103 %P 278 %K AT15 AI01 %A J. M.
Hoc %T Review of Man Faced with Artificial Intelligence by J. D. Warnier %J MAG103 %P 280 %K AT15 %A E. Schuster %A P. Knoflach %A K. Huber %A G. Grabner %T An Interactive Processing System for Ultrasonic Compound Imaging, Real-Time Image Processing and Texture Analysis %J Ultrasonic Imaging %D 1986 %V 8 %N 2 %P 131 %K AA01 AI06 %A R. Opie %T Expert Systems Developing Applications %J Control and Instrumentation %V 18 %N 10 %D 1986 %P 57-60 %K AI01 ------------------------------ End of AIList Digest ******************** From in%@vtcs1 Thu Mar 5 17:31:52 1987 Date: Thu, 5 Mar 87 17:31:46 est From: vtcs1::in% To: vpi-ailist@vtcs1.cs.vt.edu Subject: Status: RO Received: from relay.cs.net by vtcs1.cs.vt.edu; Wed, 4 Mar 87 19:30 EST Received: from relay.cs.net by RELAY.CS.NET id ah20710; 4 Mar 87 2:25 EST Received: from sri-stripe.arpa by RELAY.CS.NET id aa24808; 4 Mar 87 2:23 EST Date: Sun 1 Mar 1987 18:57-PST From: AIList Moderator Kenneth Laws Reply-to: AIList@sri-stripe.arpa US-Mail: SRI Int., 333 Ravenswood Ave., Menlo Park, CA 94025 Phone: (415) 859-6467 Subject: AIList Digest V5 #61 To: AIList@sri-stripe.arpa AIList Digest Monday, 2 Mar 1987 Volume 5 : Issue 61 Today's Topics: Seminars - Motion Planning in Time-Varying Environment (UPenn) & A Step Toward a Logic Machine (SMU) & VLSI Approach to the ELIS Lisp Machine (SU), Conference - Columbia AI Symposium ---------------------------------------------------------------------- Date: Sat, 28 Feb 87 14:01:15 EST From: tim@linc.cis.upenn.edu (Tim Finin) Subject: Seminar - Motion Planning in Time-Varying Environment (UPenn) COLLOQUIUM Computer and Information Science University of Pennsylvania Philadelphia PA 10:30am 3/2/87, 307 Towne Bldg. 
Motion Planning in Time-varying Environment Kamal Kant Gupta Computer Vision and Robotics Lab McGill University The Motion Planning Problem is to determine the motion of an object, from a start position to a goal position, while avoiding collision with other objects (obstacles) in its environment. Most Motion Planning research, up until very recently, has considered static obstacles, i.e., planning the path to avoid the static obstacles, called the path planning problem, or the PPP. We consider the problems of planning collision-free trajectories (path as a function of time) for an object among moving as well as static obstacles. We call it the Trajectory Planning Problem (TPP) in time-varying environments. Our approach to formulating the TPP is to consider space-time, where time is represented explicitly. Such a representation leads to a geometric view of a trajectory, as a curve in space-time, and lends itself to the use of computational geometric techniques in space-time. Such techniques are quite novel in the sense that they do not occur in the case of only static obstacles. We propose a heuristic but natural decomposition of the TPP into two sub-problems: (i) plan a path to avoid the static obstacles, i.e. solve the PPP, and (ii) plan the velocity of the robot along the path to avoid collision with the moving obstacles. We call the second sub-problem the velocity planning problem, the VPP. The main motivation behind the decomposition is to reduce the complexity of the full problem, and present efficient algorithms for collision-free trajectories. Standard algorithms may be used to solve the PPP. We then present fast, efficient and complete algorithms to solve the VPP. The essence of these algorithms lies in formulating the VPP in 2-dimensional path-time. In the process, we also explore some properties of the path-time space. These algorithms have applications in several domains of robotics.
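[A toy illustration of the PPP/VPP decomposition described above: once a path is fixed, velocity planning becomes a search in a 2-dimensional path-time grid. The function `plan_velocity`, the unit-speed discretization, and the obstacle encoding are invented for this sketch and are not the speaker's algorithms.]

```python
# Sketch of a velocity planning problem (VPP) solver: breadth-first
# search over a discretized path-time grid, after the path itself
# (the PPP) has already been solved.
from collections import deque

def plan_velocity(path_len, horizon, blocked, max_step=1):
    """path_len -- number of cells along the precomputed path
    horizon  -- latest time step considered
    blocked  -- set of (position, time) cells swept by moving obstacles
    Returns a schedule [(position, time), ...] or None if none exists."""
    start = (0, 0)
    parent = {start: None}
    queue = deque([start])
    while queue:
        s, t = queue.popleft()
        if s == path_len:                   # goal reached: rebuild schedule
            schedule, node = [], (s, t)
            while node is not None:
                schedule.append(node)
                node = parent[node]
            return list(reversed(schedule))
        if t >= horizon:
            continue
        for ds in range(max_step + 1):      # ds == 0 means wait in place
            nxt = (min(s + ds, path_len), t + 1)
            if nxt not in parent and nxt not in blocked:
                parent[nxt] = (s, t)
                queue.append(nxt)
    return None                             # no collision-free velocity profile

# A moving obstacle crosses path cell 2 during time steps 1-3;
# the planner slows down so as to enter cell 2 only after it clears.
blocked = {(2, 1), (2, 2), (2, 3)}
schedule = plan_velocity(4, 10, blocked)
```

The geometry matches the abstract's picture: a trajectory is a monotone curve in path-time, and moving obstacles become forbidden regions that the curve must thread around, by waiting (a horizontal step) if necessary.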
In particular, we shall illustrate the use of these algorithms in two domains: i) for autonomous navigation of a mobile robot, and ii) for motion co-ordination of multiple robots. ------------------------------ Date: Sat, 28 Feb 1987 13:19 CST From: Leff (Southern Methodist University) Subject: Seminar - A Step Toward a Logic Machine (SMU) Seminar Announcement, Computer Science and Engineering, Southern Methodist University, Monday, March 2, 1987, Room 315 SIC, 1:30PM A Step Toward a Logic Machine, C. S. Tang, Carnegie-Mellon University XYZ is a software development support system to unify various ways of programming: programming with HLL (Pascal, Ada, Fortran etc.), programming with abstract specification (temporal logic, production systems, Prolog, pre-post condition specification, etc.) and programming with graphics (structured flow charts, Petri Nets, Data Flow Diagrams, etc.) It is based on a linear-time temporal logic language with a uniform framework of programs, to combine the abstract behavior description with dynamic state transition. It could be used to represent the dynamic semantics of HLLs, which could serve as the basis of a semantics-directed compiler generation and source-to-source transformation system, and also to represent different layers of abstract specification, from the very abstract level down to the assembly-like efficiently executable level, such that a method is introduced for programming by decompositional specification and verification within this identical framework. And on this basis, related to programming with Data Flow Diagrams, an approach to connect the informal methodology of system design with the formal methods of programming such as specification, verification, program decomposition and transformation is suggested.
It is considered a step toward a model of an architecture really based on logic, which could do logic reasoning and abstract specification conveniently and is still able to execute conventional programs as efficiently as on conventional Von Neumann computers. This "uniform program framework" approach is different from those that express program state transition by introducing new logic variants such as interval logic or branching logic in that: 1) This approach can avoid the task of building new metamathematical foundations and is easier to understand and use; 2) It could be implemented efficiently; 3) Prolog-like production systems are its sublanguage, so it is different from those systems that extend Prolog with temporal logics. The latter could not execute algorithmic programs efficiently; 4) It is even more expressive. ------------------------------ Date: 28 Feb 87 1027 PST From: Carolyn Talcott Subject: Seminar - VLSI Approach to the ELIS Lisp Machine (SU) Title: VLSI approach to the ELIS Lisp machine Speaker: Yasushi Hibino Director of Second Research Section NTT Basic Research Laboratories Nippon Telegraph and Telephone Time: Monday March 2, 3:30pm Place: 352 Margaret Jacks Abstract: The LISP Machine ELIS was designed to achieve a comfortable interactive programming environment by a fast microcoded LISP interpreter. ELIS is a microprogram control machine with a 32K 64-bit-word writable control store. ELIS also has a 32K word hardware stack and special memory interface registers. The VLSI ELIS chip was developed in two-micron double-metal-layer CMOS technology. The VLSI ELIS is compatible with an ELIS breadboard machine at the microcode level. Therefore, TAO Lisp, which is a dialect of CommonLisp and assimilates object-oriented programming, logic programming and concurrent programming within the Lisp world, is running on the VLSI ELIS. The speed of interpreted codes in TAO is comparable to that of compiled codes of MIT's Lisp machines.
This good performance is attained by a simple internal bus structure and a design of function blocks with iterative circuit structures. In my talk, the architecture of ELIS is briefly introduced and a VLSI approach for it is discussed. The approach is not like Mead and Conway's. It is a rather orthodox approach, because in the case of a dedicated machine it is not desirable that VLSI design methodology restrict the architecture of the machine. [CLT -- Sorry for the short notice, please pass this on to anyone you think might be interested.] ------------------------------ Date: Sun 1 Mar 87 18:46:18-EST From: Michael Lebowitz Subject: Conference - Columbia AI Symposium ARTIFICIAL INTELLIGENCE DAY SPONSORED BY DEPT. OF COMPUTER SCIENCE COLUMBIA UNIVERSITY MARCH 6, 1987 DAG ROOM, SCHOOL OF INTERNATIONAL AFFAIRS 10:00 Brian Reiser "An Intelligent Tutoring System" Princeton Univ. 11:00 Coffee Break 11:30 Edward H. Shortliffe "Graphical Access to an Expert System" Stanford Univ. 2:00 Carl Hewitt "Due Process" MIT 3:00 Ruzena Bajcsy "Errors and Mistakes in Sensory Programming" Univ. of Pennsylvania 4:00 Reception Computer Science Lounge ------------------------------ End of AIList Digest ******************** From in%@vtcs1 Thu Mar 5 17:23:06 1987 Date: Thu, 5 Mar 87 17:22:59 est From: vtcs1::in% To: vpi-ailist@vtcs1.cs.vt.edu Subject: Status: RO Received: from relay.cs.net by vtcs1.cs.vt.edu; Wed, 4 Mar 87 20:09 EST Received: from relay.cs.net by RELAY.CS.NET id ab03360; 1 Mar 87 22:34 EST Received: from sri-stripe.arpa by RELAY.CS.NET id aa03918; 1 Mar 87 22:29 EST Date: Sun 1 Mar 1987 19:01-PST From: AIList Moderator Kenneth Laws Reply-to: AIList@sri-stripe.arpa US-Mail: SRI Int., 333 Ravenswood Ave., Menlo Park, CA 94025 Phone: (415) 859-6467 Subject: AIList Digest V5 #62 To: AIList@sri-stripe.arpa AIList Digest Monday, 2 Mar 1987 Volume 5 : Issue 62 Today's Topics: Query - Expert System Definition, Discussion - Expert System Definition & Intelligence & Logic in AI & A Defence of Vulgar Tongue, News - AI Software Revenues, Review - Spang Robinson Report 3.2 ---------------------------------------------------------------------- Date: 28 Feb 87 00:15:36 GMT From: ihnp4!alberta!calgary!arcsun!roy@ucbvax.Berkeley.EDU (Roy Masrani) Subject: dear abby.... Dear Abby. My friends are shunning me because I think that to call a program an "expert system" it must be able to explain its decisions. "The system must be able to show its line of reasoning", I cry. They say "Forget it, Roy... an expert system need only make decisions that equal human experts'. An explanation facility is optional". Who's right? Signed, Un*justifiably* Compromised Roy Masrani, Alberta Research Council 3rd Floor, 6815 8 Street N.E.
Calgary, Alberta CANADA T2E 7H7 (403) 297-2676 UUCP: ...!{ubc-vision, alberta}!calgary!arcsun!roy CSNET: masrani%noah.arc.cdn@ubc.csnet ------------------------------ Date: Sun 1 Mar 87 10:48:43-PST From: Ken Laws Subject: Definition of Expert System (Re: Dear Abby) Why must an expert system explain its reasoning? 1) To aid system building and debugging; 2) to convince users that the reasoning is correct; and 3) to force conformance to a particular model of human reasoning. Reason 1 is hardly a sine qua non. It is necessary that the line of reasoning be debuggable, of course, but that can be done with checkpoints, execution traces, and other debugging tools. Forcing the system to "explain" its own reasoning adds to the complexity of the system without directly improving performance. An explanation capability may reduce the time, effort, and expertise required to build and maintain or modify the system -- particularly if domain experts instead of programmers are doing the work -- but the real issue is what knowledge is encoded and how it is used. We have been guilty of defining the field by the things that happened to be easy to implement in a few early programs, just as we sometimes define AI as that which is easy to do in LISP. Reason 2, convincing the user, is a worthy goal and perhaps necessary in consulting applications, but contains some traps. The real test of a system is its performance. If adequate (or exceptional) performance can be documented, many customers will have no interest in what goes on in the black box. If performance is documentably poor, adding an explanatory mechanism is just a marketing gimmick: an expert con. The explanations are really only needed if some of the decisions are faulty and it is possible to recognize which ones from the explanation. Further, there are different types of explanation that should be considered. The traditional form is basically a trace of how a particular diagnosis was reached.
This is only appropriate when the reasoning is sequential and depends strongly on a few key facts, the kind of reasoning that humans are able to follow and "desk check". Reasoning that is strongly parallel, non-deterministic, or dependent on subtle data distinctions (without linguistic names) is not amenable to such explanations. This sort of problem often arises in pattern recognition. In image segmentation, for instance, it is typically unreasonable (for anyone but a programmer) to ask the system "By what sequence of operations did you extract this region?". It is reasonable, however, to ask how the target region differs from each of its neighbors, and how it might now be extracted easily given that one knows its distinguishing characteristics. In other words, the system should answer questions in light of its current knowledge instead of trying to backtrack to its knowledge at the time it was making decisions. The system's job is to extract structure from chaos, and explanations in terms of half-analyzed chaos are not helpful. Reason 3, adherence to a particular knowledge engineering methodology, is really the sticking point. Some would claim that rule-based reasoning and its attendant explanatory capability is fundamentally different from other paradigms and perhaps even fundamental to human reasoning; it therefore deserves a special name ("expert system"). Others would claim that rule-based systems are only one model of expert reasoning and that the name should apply to any attempt at a psychologically based or knowledge-based program. A third group, mostly those selling software, claims performance alone as the criterion. I believe that explanatory capability, as currently feasible, is a correlate of the rule-based approach and is not central in theory; it may, however, be the key ingredient in making a particular application feasible or marketable. I don't believe that every optimal algorithm is AI, so I reject the pure performance criterion for expert systems.
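[The "trace of how a particular diagnosis was reached" discussed above can be seen in miniature with a few lines of code. This is an illustrative toy, not any particular shell: all rule and fact names are invented, and a forward chainer simply records which rule and premises produced each conclusion.]

```python
# Toy forward chainer that records support for each derived fact, so a
# trace-style "how" explanation can be printed afterwards.
def forward_chain(facts, rules):
    """rules: list of (name, premises, conclusion) triples over fact strings.
    Returns (all_facts, support), where support maps each derived fact
    to the (rule_name, premises) that produced it."""
    facts = set(facts)
    support = {}
    changed = True
    while changed:                      # fire rules until a fixed point
        changed = False
        for name, premises, conclusion in rules:
            if conclusion not in facts and all(p in facts for p in premises):
                facts.add(conclusion)
                support[conclusion] = (name, premises)
                changed = True
    return facts, support

def explain(fact, support, depth=0):
    """Print the chain of rule firings behind `fact`, premises indented."""
    if fact not in support:
        print("  " * depth + fact + " (given)")
    else:
        rule, premises = support[fact]
        print("  " * depth + fact + " (concluded by rule " + rule + ")")
        for p in premises:
            explain(p, support, depth + 1)

rules = [("R1", ["fever", "rash"], "measles-suspected"),
         ("R2", ["measles-suspected", "unvaccinated"], "refer-to-specialist")]
facts, support = forward_chain(["fever", "rash", "unvaccinated"], rules)
explain("refer-to-specialist", support)
```

Note how cheaply the explanation comes once the system is rule-based, and how little it would mean for, say, a segmentation program whose "premises" are thousands of pixel values with no linguistic names.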
As to whether expert systems include only rule-based systems or all knowledge-based systems, I can't say -- that is a matter of convention and has to be settled by those in the expert-system field.

                                        -- Ken Laws

------------------------------

Date: 24 Feb 87 12:57:37 GMT
From: mcvax!ukc!warwick!gordon@seismo.css.gov (Gordon Joly)
Subject: Re: What is this "INtelliGenT"?

For a working definition of A.I., how about "that which is yet to be done" or perhaps "that which is yet to be understood"?

Gordon Joly -- {seismo,ucbvax,decvax}!mcvax!ukc!warwick!gordon

------------------------------

Date: Fri, 27 Feb 87 12:46:54 GMT
From: Jerry Harper
Reply-to: jharper@euroies.UUCP (Jerry Harper)
Subject: Re: logic in ai

I think a useful distinction can be made between the use of _formalisms_ in AI and the use of logic(s). The function of the latter, with respect to a set of inference rules and a particular domain of discourse, is the characterization of truth and logical consequence. The function of the former, on my own reading of the AI literature concerned with NLP systems, seems merely to crystallize certain _intuitions_ a researcher may have about the description and solution of a given problem. In some cases these intuitions conform to a logical calculus; in other cases they merely appear to do so. This is quite reasonable in a research context such as AI, provided one accepts that computational tractability and formal rigour are different objectives served by different methodological demands. For instance, it would be impossible to build the model theory of many logics used for semantic investigations of natural language into a computational system. Yet _doing_ semantics entails the use of infinitary methods once the model theory is based on possible worlds. Reinterpreting a semantic theory computationally is not equivalent. More fundamentally, it is the usage of the word _logic_ which is at issue.
With the plethora of logical calculi, it makes little sense to claim one uses _a lot of logic_ in one's work. Indeed, if anyone has an uncontentious definition of modern logic, please forward it.

------------------------------

Date: Thu, 26 Feb 87 14:53 N
From: MFMISTAL%HMARL5.BITNET@wiscvm.wisc.edu
Subject: RE: A defence of vulgar tongue.

Seth Steinberg proposes to use less formal notations in computer science presentations. I disagree completely! His argument about clarity is wrong. Although architects do not use mathematical notations, they do use a symbolic language (drawings, or even better, the lines that constitute a drawing) to express their ideas. These drawings, together with a description in specific "jargon", are necessary for the contractor to make a proper cost estimate and to perform the necessary calculations for the strength of the construction. So even for architects it is necessary to use a formal language.

I believe a formal language is useful for communicating ideas in a certain domain in CS as well. Since the basic operations of computers are indeed logical/mathematical ones, there is no objection to using their symbolic notations. Computer programs are implementations of the stuff computer science is made of. Unfortunately, we have to read program code to find out what a program is doing; for just that reason, debugging and software maintenance are expensive. When we can better formalize the "art of programming", we may come up with better-understood and easier-to-maintain programs. Discussions about program performance might then just as well be conducted in that same formal language.

I just remembered that a language like APL is closely related to mathematics, specifically to matrix algebra. It is probably possible to formally prove (at least to some extent) the correctness of such a program. I look forward to more CS presentations using formal (mathematical and logical) notations in order to increase the understanding of what is really meant.
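The point about APL and matrix algebra can be illustrated (here in Python rather than APL) by writing a matrix product so that it mirrors its mathematical definition, C[i][j] = sum over k of A[i][k] * B[k][j]; correctness can then be argued term by term against the definition:

```python
# A matrix product written close to its mathematical definition.
# zip(*B) yields the columns of B, so each entry is the dot product
# of a row of A with a column of B, exactly as the formula states.
def matmul(A, B):
    return [[sum(a * b for a, b in zip(row, col))
             for col in zip(*B)]
            for row in A]

A = [[1, 2], [3, 4]]
B = [[5, 6], [7, 8]]
print(matmul(A, B))  # [[19, 22], [43, 50]]
```

The closer the program text is to the algebra, the shorter the gap a correctness proof has to bridge.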
Jan L. Talmon (not a computer scientist)
MFMISTAL@HMARL5.BITNET

------------------------------

Date: Sat, 28 Feb 1987 13:18 CST
From: Leff (Southern Methodist University)
Subject: AI Software Revenues

Artificial intelligence software generated $200 million in revenue in 1986. Expert system tools generated $18.6 million. To put these numbers in perspective, the total software market is $12.3 billion and CAD/CAE software is $665 million. Also sold in 1986 were $464 million worth of robot systems and $100 million worth of vision equipment.

------------------------------

Date: Sat, 28 Feb 1987 13:19 CST
From: Leff (Southern Methodist University)
Subject: Review - Spang Robinson Report 3.2

Summary of the Spang Robinson Report, February 1987, Vol. 3, No. 2

Development Tools Migrate From Micro to Mainframe

Discussion of the trend for companies selling expert system software for IBM PCs to port their systems to higher-level machines such as minis and mainframes, and vice versa. Some companies are porting major applications to microcomputer tools that ostensibly offer "less functionality." Examples are a paint-manufacturing application ported from KEE to Insight, and another port from Inference's ART to ACORN.

__________________________________________________

Connecting to the Corporate Database

KEE Connection provides an interface to the SQL query language. A database relation maps into a class, with a database attribute mapping into a slot. Data is downloaded from the database as needed to solve the problem. Projects to integrate the expert system directly into the database include Postgres at Berkeley, Probe at Computer Corporation of America, and Starburst at IBM. Prices for development versions of the system range from $18,000 to $45,000, with delivery versions ranging from $3,000 to $18,750 depending upon the size of the VAX.
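The relation-to-class mapping described above can be sketched roughly as follows. This is a hypothetical illustration in Python with SQLite, not KEE Connection's actual interface; the class and table names are invented:

```python
# Hypothetical sketch of the report's mapping: a database relation
# becomes a class, each attribute becomes a slot, and rows are
# downloaded from the database only when first needed. Not the real
# KEE Connection API; all names invented for illustration.
import sqlite3

class RelationClass:
    def __init__(self, conn, relation):
        self.conn = conn
        self.relation = relation
        self._rows = None  # fetched lazily, "as needed"

    @property
    def slots(self):
        """Slot names come from the relation's attributes."""
        cur = self.conn.execute(f"SELECT * FROM {self.relation} LIMIT 0")
        return [d[0] for d in cur.description]

    def instances(self):
        """Download rows on first use; each row becomes an instance."""
        if self._rows is None:
            cur = self.conn.execute(f"SELECT * FROM {self.relation}")
            names = [d[0] for d in cur.description]
            self._rows = [dict(zip(names, row)) for row in cur]
        return self._rows

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE paints (name TEXT, viscosity REAL)")
conn.execute("INSERT INTO paints VALUES ('gloss', 1.2), ('matte', 0.9)")
paints = RelationClass(conn, "paints")
print(paints.slots)        # ['name', 'viscosity']
print(paints.instances())
```

The lazy fetch in instances() mirrors the report's "data is downloaded from the database as needed" behavior.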
__________________________________________________

Shorts

Hitachi, IBM Japan and Carnegie Mellon are developing a multi-lingual machine translation system. They have already developed a system for analyzing natural-language utility specifications.

Fuji Electric has developed an expert system to control turbines for a thermal power generation system. Also, thermostats are selected and configured for Tohoku Oil.

Fanuc plans to build an intelligent robot integrating three-dimensional vision and touch sensors.

Matsushita is developing a LISP machine with over 50 times the power of a VAX 8600.

Expertelligence is selling an application builder for the Macintosh for non-programming users.

Applied Expert Systems (APEX) has laid off a number of employees. They are selling a system to help financial institutions expand client relationships.

Digitalk has announced a new release of Smalltalk/V. Extensions provide EGA capabilities, multiprocessing, DOS call features, and music.

Teknowledge reports revenues of $10,867,700 for the latter half of 1986.

Symbolics will be financing Whitney/Demos, a Los Angeles-based developer of computer graphics and animation technology. Symbolics will get marketing rights to various in-house programs of Whitney/Demos and will provide them with various graphics workstations.

Europeans spent $200 million on expert system development. Ovum sells a complete report on European expert system development for $495.00.

Halbrecht Associates predicts a great deal of senior- and mid-level turnover among AI professionals.

___________________________________________________________________

Review of the Sixth International Workshop on Expert Systems and Their Applications (Proceedings).

------------------------------

End of AIList Digest
********************