2-Sep-83 09:33:28-PDT,13722;000000000001 Mail-From: LAWS created at 2-Sep-83 09:29:28 Date: Thursday, September 1, 1983 2:02PM From: AIList Moderator Kenneth Laws Reply-to: AIList@SRI-AI US-Mail: SRI Int., 333 Ravenswood Ave., Menlo Park, CA 94025 Phone: (415) 859-6467 Subject: AIList Digest V1 #53 To: AIList@SRI-AI AIList Digest Friday, 2 Sep 1983 Volume 1 : Issue 53 Today's Topics: Conferences - AAAI-83 Attendance & Logic Programming, AI Publications - Artificial Intelligence Journal & Courseware, Artificial Languages - LOGLAN, Lisp Availability - PSL & T, Automatic Translation - Ada Request, NL & Scientific Method - Rebuttal, Intelligence - Definition ---------------------------------------------------------------------- Date: 31 Aug 83 0237 EDT From: Dave.Touretzky@CMU-CS-A Subject: AAAI-83 registration The actual attendance at AAAI-83 was about 2000, plus an additional 1700 people who came only for the tutorials. This gives a total of 3700. While much less than the 7000 figure, it's quite a bit larger than last year's attendance. Interest in AI seems to be growing rapidly, spurred partly by media coverage, partly by interest in expert systems and partly by the 5th generation thing. Another reason for this year's high attendance was the Washington location. We got tons of government people. Next year's AAAI conference will be hosted by the University of Texas at Austin. From a logistics standpoint, it's much easier to hold a conference in a hotel than at a university. Unfortunately, I'm told there are no hotels in Austin big enough to hold us. Such is the price of growth. -- Dave Touretzky, local arrangements committee member, AAAI-83 & 84 ------------------------------ Date: Thu 1 Sep 83 09:15:17-PDT From: PEREIRA@SRI-AI.ARPA Subject: Logic Programming Symposium This is a reminder that the September 1 deadline for submissions to the IEEE Logic Programming Symposium, to be held in Atlantic City, New Jersey, February 6-9, 1984, has now all but arrived.
If you are planning to submit a paper, you are urged to do so without further delay. Send ten double-spaced copies to the Technical Chairman: Doug DeGroot, IBM Watson Research Center, PO Box 218, Yorktown Heights, NY 10598 ------------------------------ Date: Wed, 31 Aug 83 12:10 PDT From: Bobrow.PA@PARC-MAXC.ARPA Subject: Subscriptions to the Artificial Intelligence Journal Individuals (not institutions) belonging to the AAAI, to SIGART or to AISB can receive a reduced-rate personal subscription to the Artificial Intelligence Journal. To apply for a subscription, send a copy of your membership form with a check for $50 (made out to Elsevier) to: Elsevier Science Publishers Attn: John Tagler 52 Vanderbilt Avenue New York, New York 10017 North Holland (Elsevier) will acknowledge receipt of the request for subscription, and provide information about which issues will be included in your subscription, and when they should arrive. Back issues are not available at the personal rate. Artificial Intelligence, an international journal, has been the journal of record for the field of Artificial Intelligence since 1970. Articles for submission should be sent (three copies) to Dr. Daniel G. Bobrow, Editor-in-chief, Xerox Palo Alto Research Center, 3333 Coyote Hill Road, Palo Alto, California 94304, or to Prof. Patrick J. Hayes, Associate Editor, Computer Science Department, University of Rochester, Rochester, N.Y. 14627. danny bobrow ------------------------------ Date: 31 Aug 1983 17:10:40 EDT (Wednesday) From: Marshall Abrams Subject: College-level courseware publishing I have learned that Addison-Wesley is setting up a new courseware/software operation and is looking for microcomputer software packages at the college level. I think the idea is for a student to be able to go to the bookstore and buy a disk and instruction manual for a specific course. Further details on request.
------------------------------ Date: 29 Aug 1983 2154-PDT From: VANBUER@USC-ECL Subject: Re: LOGLAN [...] The Loglan Institute is in the middle of a year-long "quiet spell." After several years of experiments with sounds, patching various small logical details (e.g. two unambiguous ways to say "pretty little girls"'s two interpretations), the Institute is busily preparing materials on the new version, preparing to "go public" again in a fairly big way. Darrel J. Van Buer ------------------------------ Date: 30 Aug 1983 0719-MDT From: Robert R. Kessler Subject: re: Lisps on 68000's Date: 24 Aug 83 19:47:17-PDT (Wed) From: pur-ee!uiucdcs!uicsl!pollack @ Ucb-Vax Subject: Re: Lisps on 68000's - (nf) Article-I.D.: uiucdcs.2626 .... I think PSL is definitely a superior lisp for the 68000, but I have no idea whether it will be available for non-HP machines... Jordan Pollack University of Illinois ...pur-ee!uiucdcs!uicsl!pollack Yes, PSL is available for other 68000's, particularly the Apollo. It is also being released for the DecSystem-20 and Vax running 4.x Unix. Send queries to Cruse@Utah-20. Bob. ------------------------------ Date: Tue, 30 Aug 1983 14:32 EDT From: MONTALVO@MIT-OZ Subject: Lisps on 68000's From: pur-ee!uiucdcs!uicsl!pollack @ Ucb-Vax Subject: Re: Lisps on 68000's - (nf) Article-I.D.: uiucdcs.2626 I played with a version of PSL on an HP 9845 for several hours one day. The environment was just like running FranzLisp under Emacs in ... A minor correction so people don't get confused: it was probably an HP 9836, not an HP 9845. I've used both machines including PSL on the 36, and doubt very much that PSL runs on a 45. ------------------------------ Date: Wed, 31 Aug 83 01:25:29 EDT From: Jonathan Rees Subject: Re: Lisps on 68000's Date: 19 Aug 83 10:52:11-PDT (Fri) From: harpo!eagle!allegra!jdd @ Ucb-Vax Subject: Lisps on 68000's Article-I.D.: allegra.1760 ...
T sounds good, but the people who are saying it's great are the same ones trying to sell it to me for several thousand dollars, so I'd like to get some more disinterested opinions first. The only person I've talked to said it was awful, but he admits he used an early version. T is distributed by Yale for $75 to universities and other non-profit organizations. Yale has not yet decided on the means by which it will distribute T to for-profit institutions, but it has been negotiating with a few companies, including Cognitive Systems, Inc. To my knowledge no final agreements have been signed, so right now, no one can sell it. "Supported" versions will be available from commercial outfits who are willing to take on the extra responsibility (and reap the profits?), but unsupported versions will presumably still be available directly from Yale. Regardless of the final outcome, no company or companies will have exclusive marketing rights. We do not want a high price tag to inhibit availability. Jonathan Rees T Project Yale Computer Science Dept. P.S. As a regular T user, I can say that it is a good system. As its principal implementor, I won't claim to be disinterested. Testimonials from satisfied users may be found in previous AILIST digests; perhaps you can obtain back issues. ------------------------------ Date: 1 Sep 1983 11:58-EDT From: Dan Hoey Subject: Translation into Ada: Request for Info It is estimated that the WWMCCS communications system will require five years to translate into Ada. Not man-years, but years; if the staffing is assumed to exceed two hundred, then we are talking about a man-millennium for this task. Has any work been done on mechanical aids for translating programs into Ada? I seek pointers to existing and past projects, or assurances that no work has been done in this area; any such information would be greatly appreciated.
To illustrate my lack of knowledge in this field, the only work I have heard of for translating from one high-level language to another is UniLogic's translator for converting BLISS to PL/1. As I understand it, their program only works on the Scribe document formatter but could be extended to cover other programs. I am interested in hearing of other translators, especially those for translating into strongly-typed languages. Dan Hoey HOEY@NRL-AIC.ARPA ------------------------------ Date: Wed 31 Aug 83 18:42:08-PDT From: PEREIRA@SRI-AI.ARPA Subject: Solutions of the natural language analysis problem Given the downhill trend of some contributions on natural language analysis in this group, this is my last comment on the topic, and is essentially an answer to Stan the leprechaun hacker (STLH for short). I didn't "admit" that grammars only reflect some aspects of language. (Using loaded verbs such as "admit" is not conducive to the best quality of discussion.) I just STATED THE OBVIOUS. The equations of motion only reflect SOME aspects of the material world, and yet no engineer goes without them. I presented this point at greater length in my earlier note, but the substantive presentation of method seems to have gone unanswered. Incidentally, I worked for several years in a civil engineering laboratory where ACTUAL dams and bridges were designed, and I never saw there the preference for alchemy over chemistry that STLH suggests is the necessary result of practical concerns. Elegance and reproducibility do not seem to be enemies of generality in other scientific or engineering disciplines. Claiming for AI an immunity from normal scientific standards (however flawed ...) is excellent support for our many detractors, who may just now be on the defensive because of media hype, but will surely come back to the fray, with that weapon plus a long list of unfulfilled promises and irreproducible "results." Lack of rigor follows from lack of method.
STLH tries to bludgeon us with "generating *all* the possible meanings" of a sentence. Does he mean ALL of the INFINITY of meanings a sentence has in general? Even leaving aside model-theoretic considerations, we are all familiar with

    he wanted me to believe P so he said P

    he wanted me to believe not P so he said P because he thought that I would think that he said P just for me to believe P and not believe it

and so on ... in spy stories. The observation that "we need something that models human cognition closely enough..." begs the question of what human cognition looks like. (Silly me, it looks like STLH's program, of course.) STLH also forgets that it is often better for a conversation partner (whether man or machine) to say "I don't understand" than to go on saying "yes, yes, yes ..." and get it all wrong, as people (and machines) that are trying to disguise their ignorance do. It is indeed not surprising that "[his] problems are really concerned with the acquisition of linguistic knowledge." Once every grammatical framework is thrown out, it is extremely difficult to see how new linguistic knowledge can be assimilated, whether automatically or even by programming it in. As to the notion that "everyone is an expert on the native language", it is similar to the claim that everyone with working ears is an expert in acoustics. As to "pernicious behavior", it would be better if STLH would first put his own house in order: he seems to believe that to work at SRI one needs to swear eternal hate to the "Schank camp" (whatever that is); and useful criticism of other people's papers requires at least a mention of the title and of the objections. A bit of that old battered scientific protocol would help... Fernando Pereira ------------------------------ Date: Tue, 30 Aug 1983 15:57 EDT From: MONTALVO@MIT-OZ Subject: intelligence is... Date: 25 Aug 1983 1448-PDT To: AIList at MIT-MC From: Jay Subject: intelligence is...
An intelligence must have at least three abilities:
    To act;
    To perceive, and classify (as one of: better, the same, worse) the results of its actions, or the environment after the action; and lastly
    To change its future actions in light of what it has perceived, in an attempt to maximize "goodness" and avoid "badness".

My views are very obviously flavored by behaviorism. Where do you suppose the evolutionary cutoff is for intelligence? By this definition a Planaria (flatworm) is intelligent. It can learn a simple Y maze. I basically like this definition of intelligence but I think the learning part lends itself to many degrees of complexity, and therefore, the definition leads to many degrees of intelligence. Maybe that's ok. I would like to see an analysis (probably NOT on AIList, although maybe some short speculation might be appropriate) of the levels of complexity that a learner could have. For example, one with a representation of the agent's action would be more complicated (therefore, more intelligent) than one without. Probably a Planaria has no representation of its actions, only of the results of its actions. ------------------------------ End of AIList Digest ******************** 9-Sep-83 12:00:49-PDT,18690;000000000001 Mail-From: LAWS created at 9-Sep-83 11:58:11 Date: Friday, September 9, 1983 9:02AM From: AIList Moderator Kenneth Laws Reply-to: AIList@SRI-AI US-Mail: SRI Int., 333 Ravenswood Ave., Menlo Park, CA 94025 Phone: (415) 859-6467 Subject: AIList Digest V1 #54 To: AIList@SRI-AI AIList Digest Friday, 9 Sep 1983 Volume 1 : Issue 54 Today's Topics: Robotics - Walking Robot, Fifth Generation - Book Review Discussion, Methodology - Rational Psychology, Lisp Availability - T, Prolog - Lisp Based Prolog, Foolog ---------------------------------------------------------------------- Date: Fri 2 Sep 83 19:24:59-PDT From: John B. Nagle Subject: Strong, agile robot [Reprinted from the SCORE BBoard.]
There is a nice article in the current Robotics Age about an outfit down in Anaheim (not Disney) that has built a six-legged robot with legs spaced radially around a circular core. Each leg has three motors, and there are enough degrees of freedom in the system to allow the robot to assume various postures such as a low, tucked one for tight spots; a tall one for looking around, and a wide one for unstable surfaces. As a demonstration, they had the robot climb into the back of a pickup truck, climb out, and then lift up the truck by the rear end and move the truck around by walking while lifting the truck. It's not a heavy AI effort; this thing is a teleoperator controlled by somebody with a joystick and some switches (although it took considerable computer power to make it possible for one joystick to control 18 motors in such a way that the robot can walk faster than most people). Still, it begins to look like walking machines are finally getting to the point where they are good for something. This thing is about human-sized and can lift 900 pounds; few people can do that. ------------------------------ Date: 3 Sep 83 12:19:49-PDT (Sat) From: harpo!eagle!mhuxt!mhuxh!mhuxr!mhuxv!akgua!emory!gatech!pwh@Ucb-Vax Subject: Re: Fifth Generation (Book Review) Article-I.D.: gatech.846 In response to Richard Treitel's comments about the Fifth Generation book review recently posted: *This* turkey, for one, has not heard of the "Alvey report." Do tell... I believe that part of your disagreement with the book reviewer stems from the fact that you seem to be addressing different audiences. He, a concerned but ignorant lay audience; you, the AI intelligentsia on the net. phil hutto CSNET pwh@gatech INTERNET pwh.gatech@udel-relay UUCP ...!{allegra, sb1, ut-ngp, duke!mcnc!msdc}!gatech!pwh p.s. - Please do elaborate on the Alvey Report. Sounds fascinating.
------------------------------ Date: Tue 6 Sep 83 14:24:28-PDT From: Richard Treitel Subject: Re: Fifth Generation (Book Review) Phil, I wish I were in a position to elaborate on the Alvey Report. Here's all I know, as relayed by a friend of mine who is working back in Britain: As a response to either (i) the challenge/promise of the Information Era or (ii) the announcement of a major Japanese effort to develop AI systems, Mrs. Thatcher's government commissioned a Commission, chaired by some guy named Alvey about whom I don't know anything (though I suspect he is an academic of some stature, else he wouldn't have been given the job). The mission of this Commission (or it may have been a Committee) was to produce recommendations for national policy, to be implemented probably by the Science and Engineering Research Council. They found that while a few British universities are doing quite good computer science, only one of them is doing AI worth mentioning, namely Edinburgh, and even there, not too much of it. (The reason for this is that an earlier Government commissioned another Report on AI, which was written by Professor Sir James Lighthill, an academic of some stature. Unfortunately he is a mathematician specialising in fluid dynamics -- said to have designed Concorde's wings, or some such -- and he concluded that the only bit of decent work that had been done in AI to date was Terry Winograd's thesis (just out) and that the field showed very little promise. As a result of the Lighthill Report, AI was virtually a dirty word in Britain for ten years. Most people still think it means artificial insemination.) Alvey's group also found, what anyone could have told the Government, that research on all sorts of advanced science and technology was disgracefully stunted. So they recommended that a few hundred million pounds of state and industrial funds be pumped into research and education in AI, CS, and supporting fields.
This happened about a year ago, and the Gov't basically bought the whole thing, with the result that certain segments of the academic job market over there went straight from famine to feast (the reverse change will occur pretty soon, I doubt not). It kind of remains to be seen what industry will do, since we don't have a MITI. I partly accept your criticism of my criticism of that review, but I also believe that a journalist has an obligation not to publish falsehoods, even if they are generally believed, and to do more than re-hash the output of his colleagues into a form consistent with the demands of the story he is "writing". - Richard ------------------------------ Date: Sat 3 Sep 83 13:28:36-PDT From: PEREIRA@SRI-AI.ARPA Subject: Rational Psychology I've just read Jon Doyle's paper "Rational Psychology" in the latest AI Magazine. It's one of those papers you wish (I wish) you had written yourself. The paper shows implicitly what is wrong with many of the arguments in discussions on intelligence and language analysis in this group. I am posting this as a starting shot in what I would like to be a rational discussion of methodology. Any takers? Fernando Pereira PS. I have been a long-time fan of Truesdell's rational mechanics and thermodynamics (being a victim of "black art" physics courses). Jon Doyle's emphasis on Truesdell's methodology is for me particularly welcome. [The article in question is rather short, more of an inspirational pep talk than a guide to the field. Could someone submit one "rational argument" or other exemplar of the approach? Since I am not familiar with the texts that Doyle cites, I am unable to discern what he and Fernando would like us to discuss or how they would have us go about it.
-- KIL] ------------------------------ Date: 2 Sep 1983 11:26-PDT From: Andy Cromarty Subject: Availability of T Yale has not yet decided on the means by which it will distribute T to for-profit institutions, but it has been negotiating with a few companies, including Cognitive Systems, Inc. To my knowledge no final agreements have been signed, so right now, no one can sell it. ...We do not want a high price tag to inhibit availability. -- Jonathan Rees, T Project (REES@YALE) 31-Aug-83 About two days before you sent this to the digest, I received a 14-page T licensing agreement from Yale University's "Office of Cooperative Research". Prices ranged from $1K for an Apollo to $5K for a VAX 11/780 for government contractors (e.g. us), with no software support or technical assistance. The agreement does not actually say that sources are provided, although that is implied in several places. A rather murky trade secret clause was included in the contract. It thus appears that T is already being marketed. These cost figures, however, are approaching Scribe territory. Considering (a) the cost of $5K per VAX CPU, (b) the wide variety of alternative LISPs available for the VAX, and (c) the relatively small base of existing T (or Scheme) software, perhaps Yale does "want a high price tag to inhibit availability" after all.... asc ------------------------------ Date: Thursday, 1 September 1983 12:14:59 EDT From: Brad.Allen@CMU-RI-ISL1 Subject: Lisp Based Prolog [Reprinted from the Prolog Digest.] I would like to voice disagreement with Fernando Pereira's implication that Lisp Based Prologs are good only for pedagogical purposes. The flipside of efficiency is usability, and until there are Prolog systems with exploratory programming environments which exhibit the same features as, say Interlisp-D or Symbolics machines, there will be a place for Lisp Based Prologs which can use such features as, E.g., bitmap graphics and calls to packages in other languages. 
Lisp Based Prologs can fill the void between now and the point when software accumulation in standard Prolog has caught up to that of Lisp ( if it ever does ). ------------------------------ Date: Sat 3 Sep 83 10:51:22-PDT From: Pereira@SRI-AI Subject: Prolog in Lisp [Reprinted from the Prolog Digest.] Relying on ( inferior ) Prologs in Lisp is the best way of not contributing to Prolog software accumulation. The large number of tools that have been built at Edinburgh show the advantages for the whole Prolog community of sites 100% committed to building everything in Prolog. By far the best debugging environment for Prolog programs in use today is the one on the DEC-10/20 system, and that is written entirely in Prolog. Its operation is very different from, and much superior for Prolog purposes to, all Prolog debuggers built on top of Lisp debuggers that I have seen to date. Furthermore, integrating things like screen management into a Prolog environment in a graceful way is a challenging problem ( think of how long it took until flavors came up as the way of building the graphics facilities on the MIT Lisp machines ), which will also advance our understanding of computer graphics ( I have written a paper on the subject, "Can drawing be liberated from the von Neumann style?" ). I am not saying that Prologs in Lisp are not to be used ( I use one myself on the Symbolics Lisp machines ), but that a large number of conceptual and language advances will be lost if we don't try to see environmental tools in the light of logic programming. -- Fernando Pereira ------------------------------ Date: Mon, 5 Sep 1983 03:39 EDT From: Ken%MIT-OZ@MIT-MC Subject: Foolog [Reprinted from the Prolog Digest.]
In Pereira's introduction to Foolog [a misunderstanding; see the next article -- KIL] and my toy interpreter he says: However, such simple interpreters ( even the Abelson and Sussman one which is far better than PiL ) are not a sufficient basis for the claim that "it is easy to extend Lisp to do what Prolog does." What Prolog "does" is not just to make certain deductions in a certain order, but also make them very fast. Unfortunately, all Prologs in Lisp I know of fail in this crucial aspect ( by factors between 30 and 1000 ). I never claimed for my little interpreter that it was more than a toy. Its primary value is pedagogic in that it makes the operational semantics of the pure part of Prolog clear. Regarding Foolog, I would defend it in that it is relatively complete -- it contains cut, bagof, call, etc., and for i/o and arithmetic his primitive called "lisp" is adequate. In the introduction he claims that it's 75% of the speed of the Dec 10/20 Prolog interpreter. If that makes it a toy, then all but 2 or 3 Prolog implementations are toys. [Comment: I agree with Fernando Pereira and Ken that there are lots and again lots of horribly slow Prologs floating around. But I do not think that it is impossible to write a fast one in Lisp, even on a standard computer. One of the latest versions of the Foolog interpreters is actually slightly faster than Dec-10 Prolog when measuring LIPS. The Foolog compiler I am working on compiled naive-reverse to half the speed of compiled Dec-10 Prolog ( including mode declarations ). The compiler opencodes unification, optimizes tail recursion and uses determinism, and the code fits in about three pages ( all of it is in Prolog, of course ). -- Martin Nilsson] I tend to agree that too many claims are made for "one day wonders". Just because I can implement most of Prolog in one day in Lisp doesn't mean that the implementation is any good. I know because I started almost two years ago with a very tiny implementation of Prolog in Lisp.
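[The "toy interpreter" point above is easy to make concrete. The following sketch is a minimal resolution interpreter for the pure subset of Prolog (no cut, no arithmetic), written in Python rather than Lisp for illustration; the term representation (tuples, with variables as strings beginning with "?") and all names are conventions of this sketch, not of Foolog or PiL. -- Ed.]

```python
# A minimal interpreter for pure Prolog: unification plus depth-first,
# left-to-right resolution.  Illustrative sketch only; not Foolog.
import itertools

def is_var(t):
    return isinstance(t, str) and t.startswith("?")

def walk(t, s):
    # Follow variable bindings in substitution s to a representative term.
    while is_var(t) and t in s:
        t = s[t]
    return t

def unify(a, b, s):
    # Return an extended substitution unifying a and b, or None on failure.
    a, b = walk(a, s), walk(b, s)
    if a == b:
        return s
    if is_var(a):
        return {**s, a: b}
    if is_var(b):
        return {**s, b: a}
    if isinstance(a, tuple) and isinstance(b, tuple) and len(a) == len(b):
        for x, y in zip(a, b):
            s = unify(x, y, s)
            if s is None:
                return None
        return s
    return None

fresh = itertools.count()

def rename(t, suffix):
    # Give a clause fresh variable names each time it is used.
    if is_var(t):
        return t + suffix
    if isinstance(t, tuple):
        return tuple(rename(x, suffix) for x in t)
    return t

def solve(goals, rules, s):
    # Depth-first search over clauses, in order: Prolog's control strategy.
    if not goals:
        yield s
        return
    first, rest = goals[0], goals[1:]
    for head, body in rules:
        suffix = "_%d" % next(fresh)
        s2 = unify(first, rename(head, suffix), s)
        if s2 is not None:
            yield from solve([rename(g, suffix) for g in body] + rest, rules, s2)

def resolve(t, s):
    # Fully instantiate a term under substitution s, for display.
    t = walk(t, s)
    if isinstance(t, tuple):
        return tuple(resolve(x, s) for x in t)
    return t

# append/3 in this representation, queried backwards: append(A, B, [a, b]).
rules = [
    (("append", "nil", "?Y", "?Y"), []),
    (("append", ("cons", "?H", "?T"), "?Y", ("cons", "?H", "?Z")),
     [("append", "?T", "?Y", "?Z")]),
]
goal = ("append", "?A", "?B", ("cons", "a", ("cons", "b", "nil")))
answers = [(resolve("?A", s), resolve("?B", s)) for s in solve([goal], rules, {})]
# Enumerates the three ways of splitting the two-element list.
```

What such a sketch makes clear is exactly what Ken says it makes clear -- the operational semantics -- and what it omits is Pereira's point: indexing, compiled unification, and storage management, which is where the factor of 30 to 1000 lives.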
As I started to use it for serious applications it grew to the point where today its up to hundreds of pages of code ( the entire source code for the system comes to 230 Tops20 pages ). The Prolog runs on Lisp Machines ( so we call it LM-Prolog ). Mats Carlsson here in Uppsala wrote a compiler for it and it is a serious implementation. It runs naive reverse of a list 30 long on a CADR in less than 80 milliseconds (about 6250 Lips). Lambdas and 3600s typically run from 2 to 5 times faster than Cadrs so you can guess how fast it'll run. Not only is LM-Prolog fast but it incorporates many important innovations. It exploits the very rich programming environment of Lisp Machines. The following is a short list of its features:

  User Extensible Interpreter
    Extensible unification for implementing e.g. parallelism and constraints
  Optimizing Compiler
    Open compilation
    Tail recursion removal and automatic detection of determinacy
    Compiled unification with microcoded runtime support
    Efficient bi-directional interface to Lisp
  Database Features
    User controlled indexing
    Multiple databases (Worlds)
  Control Features
    Efficient conditionals
    Demand-driven computation of sets and bags
  Access To Lisp Machine Features
    Full programming environment: Zwei editor, menus, windows, processes, networks, arithmetic ( arbitrary precision, floating, rational and complex numbers, strings, arrays, I/O streams )
  Language Features
    Optional occur check
    Handling of cyclic structures
    Arbitrary arity
  Compatibility Package
    Automatic translation from DEC-10 Prolog to LM-Prolog
  Performance
    Compiled code: up to 6250 LIPS on a CADR
    Interpreted code: up to 500 LIPS
  Availability
    LM-Prolog currently runs on LMI CADRs and Symbolics LM-2s. Soon to run on Lambdas. Commercially available soon.

For more information contact Kenneth M. Kahn or Mats Carlsson. Inquiries can be directed to: KEN@MIT-OZ or UPMAIL P. O.
Box 2059 S-75002 Uppsala, Sweden Phone +46-18-111925 ------------------------------ Date: Tue 6 Sep 83 15:22:25-PDT From: Pereira@SRI-AI Subject: Misunderstanding [Reprinted from the PROLOG Digest.] I'm sorry that my first note on Prologs in Lisp was construed as a comment on Foolog, which appeared in the same Digest. In fact, my note was sent to the digest BEFORE I knew Ken was submitting Foolog. Therefore, it was not a comment on Foolog. As to LM-Prolog, I have a few comments about its speed: 1. It depends essentially on the use of Lisp machine subprimitives and a microcoded unification, which are beyond Lisp the language and the Lisp environment in all but the MIT Lisp machines. If LM-Prolog can be considered as "a Prolog in Lisp," then DEC-10/20 Prolog is a Prolog in Prolog ... 2. To achieve that speed in determinate computation requires mapping Prolog procedure calls into Lisp function calls, which leaves backtracking in the lurch. The version of LM-Prolog I know of used stack group switches for backtracking, which is orders of magnitude slower than backtracking on the DEC-20 system. 3. Code compactness is sacrificed by compiling from Prolog into Lisp with open-coded unification. This is important because it makes worse the paging behavior of large programs. There are a lot of other issues in estimating the "real" efficiency of Prolog systems, such as GC requirements and exact TRO discipline. For example, using CONS space for runtime Prolog data structures is a common technique that seems adequate when testing with naive reverse of a 30 long list, but appears hopeless for programs that build structure and backtrack a lot, because CONS space is not stack allocated ( unless you use certain nonportable tricks, and even then... ), and therefore is not reclaimed on backtracking ( one might argue that Lisp programs for the same task have the same problem, but efficient backtracking is precisely one of the major advantages of good Prolog implementations ).
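[The reclamation point above can be sketched in a few lines. Stack-oriented Prolog systems keep a "trail" of bindings made since the last choice point, so failure undoes exactly those bindings and reclaims their storage at once; structures in CONS space must instead wait for the garbage collector. The sketch below is purely illustrative (Python, with invented names), not LM-Prolog or DEC-20 Prolog code. -- Ed.]

```python
# Illustrative sketch of trail-based backtracking: every binding made
# since a choice point is recorded, so failure can undo just those
# bindings immediately, with no garbage collection involved.
class BindingStore:
    def __init__(self):
        self.bindings = {}   # variable -> term
        self.trail = []      # variables bound so far, in order

    def bind(self, var, term):
        self.bindings[var] = term
        self.trail.append(var)        # remember what to undo

    def choice_point(self):
        return len(self.trail)        # mark the current trail depth

    def backtrack_to(self, mark):
        # Pop and unbind everything recorded after the choice point.
        while len(self.trail) > mark:
            del self.bindings[self.trail.pop()]

store = BindingStore()
store.bind("X", "a")
mark = store.choice_point()           # about to try one clause...
store.bind("Y", "b")
store.bind("Z", "c")
store.backtrack_to(mark)              # ...it failed: Y and Z are reclaimed
# "X" survives the failure; "Y" and "Z" are gone without any GC work.
```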
The current Lisp machines have exciting environment tools from which Prolog users would like to benefit. I think that building Prolog systems in Lisp will hit artificial performance and language barriers well before the actual limits of the hardware employed are reached. The approach I favor is to take the latest developments in Prolog implementation and use them to build Prolog systems that coexist with Lisp on those machines, but use all the hardware resources. I think this is possible with a bit of cooperation from manufacturers, and I have reasons to hope this will happen soon, and produce Prolog systems with a performance far superior to DEC-20 Prolog. Ken's approach may produce a tolerable system in the short term, but I don't think it can ever reach the performance and functionality which I think the new machines can deliver. Furthermore, there are big differences between the requirements of experimental systems, with all sorts of new goodies, and day-to-day systems that do the standard things, but just much better. Ken's approach risks producing a system that falls between these (conflicting) goals, leading to a much larger implementation effort than is needed just for experimenting with language extensions ( most of the time better done in Prolog ) or just for a practical system. -- Fernando Pereira PS: For what it is worth, the source of DEC-20 Prolog is 177 pages of Prolog and 139 of Macro-10 (at 1 instruction per line...). The system comprises a full compiler, interpreter, debugger and run time system, not using anything external besides operating system I/O calls. We estimate it incorporates between 5 and 6 man-years of effort. According to Ken, LM-Prolog is 230 pages of Lisp and Prolog ...
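[The naive-reverse benchmark that runs through this exchange is small enough to state precisely. In Prolog, nrev/2 is defined via append/3, and each call to either predicate counts as one logical inference; on a list of length n the total is (n+1)(n+2)/2, i.e. 496 inferences for the standard 30-element list, which is how figures like "less than 80 milliseconds (about 6250 LIPS)" are computed. The transliteration below (Python, an editorial sketch rather than the historical harness) counts the inferences directly. -- Ed.]

```python
# Naive reverse transliterated from its Prolog definition (nrev/2 in
# terms of append/3).  Each nrev or append call is one logical
# inference, the unit behind the LIPS figures quoted in this exchange.
# Illustrative sketch; not the historical benchmark code.
inferences = 0

def nrev(xs):
    global inferences
    inferences += 1
    if not xs:
        return []
    return append(nrev(xs[1:]), [xs[0]])

def append(xs, ys):
    global inferences
    inferences += 1
    if not xs:
        return ys
    return [xs[0]] + append(xs[1:], ys)

result = nrev(list(range(30)))
# A list of length n costs (n+1)(n+2)/2 inferences: 496 for n = 30.
# 496 inferences in about 80 ms gives 496 / 0.080 = 6200 LIPS,
# consistent with the "about 6250 LIPS" quoted for LM-Prolog on a CADR.
```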
------------------------------ End of AIList Digest ******************** 9-Sep-83 15:26:42-PDT,13759;000000000001 Mail-From: LAWS created at 9-Sep-83 12:34:46 Date: Friday, September 9, 1983 12:29PM From: AIList Moderator Kenneth Laws Reply-to: AIList@SRI-AI US-Mail: SRI Int., 333 Ravenswood Ave., Menlo Park, CA 94025 Phone: (415) 859-6467 Subject: AIList Digest V1 #55 To: AIList@SRI-AI AIList Digest Saturday, 10 Sep 1983 Volume 1 : Issue 55 Today's Topics: Intelligence - Turing Test & Definitions, AI Environments - Computing Power & Social Systems ---------------------------------------------------------------------- Date: Saturday, 3 Sep 1983 13:57-PDT From: bankes@rand-unix Subject: Turing Tests and Definitions of Intelligence As much as I dislike adding one more opinion to an overworked topic, I feel compelled to make a comment on the ongoing discussion of the Turing test. It seems to me quite clear that the Turing test serves as a tool for philosophical argument and not as a defining criterion. It serves the purpose of enlightening those who would assert the impossibility of any machine ever being intelligent. The point is, if a machine which would pass the test could be produced, then a person would have either to admit it to be intelligent or else accept that his definition of intelligence is something which cannot be perceived or tested. However, when the Turing test is used as a tool with which to think about "What is intelligence?" it leads primarily to insights into the psychology and politics of what people will accept as intelligent. (This is a consequence of the democratic definition - it's intelligent if everybody agrees it is). Hence, we get all sorts of distractions: Must an intelligent machine make mistakes, should an intelligent machine have emotions, and most recently would an intelligent machine be prejudiced?
All of this deals with a sociological viewpoint on what is intelligent, and gets us no closer to a fundamental understanding of the phenomenon. Intelligence is an old word, like virtue and honor. It may well be that the progress of our understanding will make it obsolete; the word may come to suggest the illusions of an earlier time. Certainly, it is much more complex than our language patterns allow. The Turing test suggests it to be a boolean: you've got it or you don't. We commonly use "smart" as a relational: you're smarter than me, but we're both smarter than Rover. This suggests intelligence is a scalar, hence IQ tests. But recent experience with IQ testing across cultures, together with the data from comparative psychology, would suggest that intelligence is at least multi-dimensional. Burrowing animals on the whole do better at mazes than others. Animals whose primary defense is flight respond differently to aversive conditioning than do more aggressive species. We may have seen a recapitulation of this in the last twenty years' experience with AI. We have moved from looking for the philosopher's stone, the single thing needed to make something intelligent, to knowledge-based systems. No one would reasonably discuss (I think) whether my program is smarter than yours. But we might be able to say that mine knows more about medicine than yours, or that mine has more capacity for discovering new relations of a specified type. Thus I would suggest that the word intelligence (noun that it is, suggesting a thing which might somehow be gotten ahold of) should be used with caution. And that the Turing test, as influential as it has been, may have outlived its usefulness, at least for discussions among the faithful. -Steve Bankes RAND ------------------------------ Date: Sat, 3 Sep 83 17:07:33 EDT From: "John B.
Black" Subject: Learning Complexity There was recently a query on AIList about how to characterize learning complexity (and saying that may be the crucial issue in intelligence). Actually, I have been thinking about this recently, so I thought I would comment. One way to characterize the learning complexity of procedural skills is in terms of what kind of production system is needed to perform the skill. For example, the kind of things a slug or crayfish (currently popular species in biopsychology) can do seem characterizable by production systems with minimal internal memory, conditions that are simple external states of the world, and actions that are direct physical actions (this is stimulus-response psychology in a nutshell). However, human skills (programming computers, doing geometry, etc.) need much more complex production systems with complex networks as internal memories, conditions that include variables, and actions that are mental in addition to direct physical actions. Of course, what form productions would have to take to exhibit human-level intelligence (if, indeed, they can) is an open question and a very active field of research. ------------------------------ Date: 5 Sep 83 09:42:44 PDT (Mon) From: woodson%UCBERNIE@Berkeley (Chas Woodson) Subject: AI and computing power Can you direct me to some wise comments on the following question? Is the progress of AI being held up by lack of computing power? [Reply follows. -- KIL] There was a discussion of this on Human-Nets a year ago. I am reprinting some of the discussion below. My own feeling is that we are not being held back. If we had infinite compute power tomorrow, we would not know how to use it. Others take the opposite view: that intelligence may be brute force search, massive theorem proving, or large rule bases, and that we are shying away from the true solutions because we want a quick finesse. There is also a view that some problems (e.g.
vision) may require parallel solutions, as opposed to parallel speedup of iterative solutions. The AI principal investigators seem to feel (see the Fall AI Magazine) that it would be enough if each AI investigator had a Lisp Machine or equivalent funding. I would extend that a little further. I think that the biggest bottleneck right now is the lack of support staff -- systems wizards, apprentice programmers, program librarians, software editors (i.e., people who edit other people's code), evaluators, integrators, documenters, etc. Could Lucas have made Star Wars without a team of subordinate experts? We need to free our AI gurus from the day-to-day trivia of coding and system building just as we use secretaries and office machines to free our management personnel from administrative trivia. We need to move AI from the lone inventor stage to the industrial laboratory stage. This is a matter of social systems rather than hardware. -- Ken Laws ------------------------------ Date: Tuesday, 12 October 1982 13:50-EDT From: AGRE at MIT-MC Subject: artificial intelligence and computer architecture [Reprinted from HUMAN-NETS Digest, 16 Oct 1982, Vol. 5, No. 96] A couple of observations on the theory that AI is being held back by the sorry state of computer architecture. First, there are three projects that I know of in this country that are explicitly trying to deal with the problem. They are Danny Hillis' Connection Machine project at MIT, Scott Fahlman's NETL machine at CMU, and the NON-VON project at Columbia (I can't remember who's doing that one right offhand). Second, the associative memory fad came and went very many years ago. The problem, simply put, is that human memory is a more complicated place than even the hairiest associative memory chip. The projects I have just mentioned were all first meant as much more sophisticated approaches to "memory architectures", though they have become more than that since.
Third, it is quite important to distinguish between computer architectures and computational concepts. The former will always lag ten years behind the latter. In fact, although our computer architectures are just now beginning to pull convincingly out of the von Neumann trap, the virtual machines that our computer languages run on haven't been in the von Neumann style for a long time. Think of object-oriented programming or semantic network models or constraint languages or "streams" or "actors" or "simulation" ideas as old as Simula and VDL. True, these are implemented on serial machines, but they evoke conceptions of computation closer to our ideas about how the physical world works, with notions of causal locality and data flow and asynchronous communication quite analogous to those of physics; one uses these languages properly not by thinking of serial computers but by thinking in these more general terms. These are the stuff of everyday programming, at least among the avant garde in the AI labs. None of this is to say that AI's salvation isn't in computer architecture. But it is to say that the process of freeing ourselves from the technology of the 40's is well under way. (Yes, I know, hubris.) - phiL ------------------------------ Date: 13 Oct 1982 08:34 PDT From: DMRussell at PARC-MAXC Subject: AI and alternative architectures [Reprinted from HUMAN-NETS Digest, 16 Oct 1982, Vol. 5, No. 96] There is a whole subfield of AI growing up around parallel processing models of computation. It is characterized by the use of massive compute engines (or models thereof) and a corresponding disregard for efficiency concerns. (Why not, when you've got n^n processors?) "Parallel AI" is a result of a crossing of interests from neural modelling, parallel systems theory, and straightforward AI. Currently, the most interesting work has been done in vision -- where the transformation from pixel data to more abstract representations (e.g.
edges, surfaces or 2.5-D data) via parallel processing is pretty easy. There has been rather less success in other, not-so-obviously parallel, fields. Some work that is being done:

Jerry Feldman & Dana Ballard (University of Rochester) -- neural modelling, vision
Steve Small, Gary Cottrell, Lokendra Shastri (University of Rochester) -- parallel word sense and sentence parsing
Scott Fahlman (CMU) -- knowledge rep in a parallel world
??? (CMU) -- distributed sensor net people
Geoff Hinton (UC San Diego?) -- vision
Daniel Sabbah (IBM) -- vision
Rumelhart (UC San Diego) -- motor control
Carl Hewitt, Bill Kornfeld (MIT) -- problem solving

(not a complete list -- just a hint)

The major concerns of these people have been controlling the parallel beasts they've created. Basically, each of the systems accepts data at one end, and then munges the data and various hypotheses about the data until the entire system settles down to a single interpretation. It is all very messy, and incredibly difficult to prove anything. (E.g., under what conditions will this system converge?) The obvious question is this: What does all of this alternative architecture business buy you? So far, I think it's an open question. Suggestions? -- DMR -- ------------------------------ Date: 13 Oct 1982 1120-PDT From: LAWS at SRI-AI Subject: [LAWS at SRI-AI: AI Architecture] [Reprinted from HUMAN-NETS Digest, 16 Oct 1982, Vol. 5, No. 96] In response to Glasser @LLL-MFE: I doubt that new classes of computer architecture will be the solution to building artificial intelligence. Certainly we could use more powerful CPUs, and the new generation of LISP machines makes practical approaches that were merely feasibility demonstrations before. The fact remains that if we don't have the algorithms for doing something with current hardware, we still won't be able to do it with faster or more powerful hardware. Associative memories have been built in both hardware and software.
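The software variety is easy to sketch: an associative store holds (attribute, object, value) triples and answers retrievals by content, given any pattern of bound and unbound positions, rather than by address. The fragment below is a minimal modern illustration in Python; the class and all names in it are invented for the example and are not taken from any system discussed here.

```python
class TripleStore:
    """A toy software associative memory: (attribute, object, value)
    triples, retrieved by any pattern of bound and unbound positions."""

    def __init__(self):
        self.triples = []

    def add(self, attribute, obj, value):
        self.triples.append((attribute, obj, value))

    def match(self, attribute=None, obj=None, value=None):
        """Return stored triples agreeing with every bound (non-None) position."""
        pattern = (attribute, obj, value)
        return [t for t in self.triples
                if all(p is None or p == x for p, x in zip(pattern, t))]

memory = TripleStore()
memory.add("color", "block1", "red")
memory.add("color", "block2", "blue")
memory.add("on", "block1", "table")

# "What is red?" -- retrieval by content, with no address in sight:
red_things = [o for (_, o, _) in memory.match(attribute="color", value="red")]
```

The linear scan here is what an associative memory chip would do in parallel across all cells; a hash index per position is the usual software shortcut.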
See, for example, the LEAP language that was incorporated into the SAIL language. (MAINSAIL, an impressive offspring of SAIL, has abandoned this approach in favor of subroutines for hash table maintenance.) Hardware is also being built for data flow languages, applicative languages, parallel processing, etc. To some extent these efforts change our way of thinking about problems, but for the most part they only speed up what we knew how to do already. For further speculation about what we would do with "massively parallel architectures" if we ever got them, I suggest the recent papers by Dana Ballard and Geoffrey Hinton, e.g. in the Aug. ['82] AAAI conference proceedings [...]. My own belief is that the "missing link" to AI is a lot of deep thought and hard work, followed by VLSI implementation of algorithms that have (probably) been tested using conventional software running on conventional architectures. To be more specific we would have to choose a particular domain since different areas of AI require different solutions. Much recent work has focused on the representation of knowledge in various domains: representation is a prerequisite to acquisition and manipulation. Dr. Lenat has done some very interesting work on a program that modifies its own representations as it analyzes its own behavior. There are other examples of programs that learn from experience. If we can master knowledge representation and learning, we can begin to get away from programming by full analysis of every part of every algorithm needed for every task in a domain. That would speed up our progress more than new architectures. [...] 
-- Ken Laws ------------------------------ End of AIList Digest ******************** 9-Sep-83 15:50:20-PDT,11413;000000000001 Mail-From: LAWS created at 9-Sep-83 15:45:10 Date: Friday, September 9, 1983 3:36PM From: AIList Moderator Kenneth Laws Reply-to: AIList@SRI-AI US-Mail: SRI Int., 333 Ravenswood Ave., Menlo Park, CA 94025 Phone: (415) 859-6467 Subject: AIList Digest V1 #56 To: AIList@SRI-AI AIList Digest Saturday, 10 Sep 1983 Volume 1 : Issue 56 Today's Topics: Professional Activities - JACM Referees & Inst. for Retraining in CS, Artificial Languages - Loglan, Knowledge Representation - Multiple Inheritance Query, Games - Puzzle & Go Tournament ---------------------------------------------------------------------- Date: 8 Sep 83 10:33:25 EDT From: Sri Subject: referees for JACM (AI area) Since the time I became the AI Area Editor for the JACM, I have found myself handicapped for lack of a current roster of referees. This note is to ask you to volunteer to referee papers for the journal. JACM is the major outlet for theoretical papers in computer science. In the area of AI, most of the submissions in the past have ranged over the topics of Automated Reasoning (Theorem Proving, Deduction, Induction, Default) and Automated Search (Search methods, state-space algorithms, And/Or reduction searches, analysis of efficiency and error and attendant tradeoffs). Under my editorship I would like to broaden the scope to THEORETICAL papers in all areas of AI, including Knowledge Representation, Learning, Modeling (Space, Time, Causality), Problem Formulation & Reformulation, etc. If you are willing to be on the roster of referees, please send me a note with your name, mailing address, net-address and telephone number. Please also list your areas of interest and competence. If you wish to submit a paper, please follow the procedures described in the "instructions to authors" page of the journal. Copies of mss can be sent to either me or to the Editor-in-Chief. N.S.
Sridharan [Sridharan@Rutgers] Area Editor, AI JACM ------------------------------ Date: Wed, 7 Sep 83 16:06 PDT From: Jeff Ullman Subject: Institute for Retraining in CS [Reprinted from the SU-SCORE BBoard.] A summer institute for retraining college faculty to teach computer science is being held at Clarkson College, Potsdam, NY, this summer, under the auspices of a joint ACM/MAA committee. They need lecturers in all areas of computer science to deliver 1-month courses. People at or close to the Ph.D. level are needed. If interested, contact Ed Dubinsky at 315-268-2382 (office) or 315-265-2906 (home). ------------------------------ Date: 6 Sep 83 18:15:17-PDT (Tue) From: harpo!gummo!whuxlb!pyuxll!abnjh!icu0 @ Ucb-Vax Subject: Re: Loglan Article-I.D.: abnjh.236 [Directed to Pourne@MIT-MC] 1. Rumor has it that SOMEONE at the Univ. of Washington (State of, NOT D.C.) was working on the [LOGLAN] grammar online (UN*X, as I recall). I haven't yet had the temerity to post a general inquiry regarding their locale. If they read your request and respond, please POST it...some of us out here are also interested. 2. A friend of mine at Ohio State has typed in (by hand!) the glossary from Vol. 1 (the layman's grammar), which could be useful for writing a "flashcard" program, but both of us are too busy. Art Wieners (who will only be at this addr for this week, but keep your modems open for a resurfacing at da Labs...) ------------------------------ Date: 7 Sep 83 16:43:58-PDT (Wed) From: decvax!genrad!grkermit!chris @ Ucb-Vax Subject: Re: Loglan Article-I.D.: grkermit.654 I just posted something relevant to net.nlang. (I'm not sure which is more appropriate, but I'm going to assume that "natural" language is closer than all of Artificial Intelligence.) I sent a request for information to the Loglan Institute (Route 10, Box 260, Gainesville, FL 32601 [a NEW address]), and they are just about to go splashily public again.
I posted the first page of their reply letter; see net.nlang for more details. Later postings will cover their short description of their Interactive Parser, which is among their many new or improved offerings. decvax!genrad!grkermit!chris allegra!linus!genrad!grkermit!chris harpo!eagle!mit-vax!grkermit!chris ------------------------------ Date: 2-Sep-83 19:33 PDT From: Kirk Kelley Subject: Multiple Inheritance query Can you tell me where I can find a discussion of the anatomy and value of multiple inheritance? I wonder if it is worth adding this feature to the design for a lay-person's language, called Players, for specifying adventures. -- kirk ------------------------------ Date: 24 August 1983 1536-PDT (Wednesday) From: Foonberg at AEROSPACE (Alan Foonberg) Subject: Another Puzzle [Reprinted from the Prolog Digest.] I was glancing at an old copy of Games magazine and came across the following puzzle: Can you find a ten-digit number such that its left-most digit tells how many zeroes there are in the number, its second digit tells how many ones there are, etc.? For example, 6210001000. There are 6 zeroes, 2 ones, 1 two, no threes, etc. I'd be interested to see any efficient solutions to this fairly simple problem. Can you derive all such numbers, not only ten-digit numbers? Feel free to make your own extensions to this problem.
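One brute-force line of attack, sketched here in Python for concreteness (the code and its function name are illustrative additions, not from Games or from the original message): the digits of any such number must sum to its length n, since each of the n digits is counted by exactly one position. Searching only digit strings whose running sum stays within n cuts the ten-digit case from 10^10 candidates to a few tens of thousands.

```python
def self_descriptive(n):
    """All n-digit strings whose digit i equals the count of digit i.
    Works for n <= 10, since digits only run 0..9.  Pruning fact: the
    digits of a valid number sum to exactly n, because each of the n
    digits is counted by exactly one position."""
    results = []

    def place(pos, digits, remaining):
        if pos == n:
            # full candidate: verify every position against the actual counts
            if all(int(digits[i]) == digits.count(str(i)) for i in range(n)):
                results.append("".join(digits))
            return
        for d in range(min(remaining, 9) + 1):  # keep the running digit sum <= n
            digits.append(str(d))
            place(pos + 1, digits, remaining - d)
            digits.pop()

    place(0, [], n)
    return results

print(self_descriptive(10))   # prints ['6210001000']
```

Run over the other lengths, the same search answers Alan's extension: 1210 and 2020 for four digits, 21200 for five, nothing at all for six, and exactly one solution for each length from seven through ten.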
Alan ------------------------------ Date: 5 Sep 83 20:11:04-PDT (Mon) From: harpo!psl @ Ucb-Vax Subject: Go Tournament Article-I.D.: harpo.1840

ANNOUNCING The First Ever USENIX COMPUTER

[block-letter banner: GO TOURNAMENT]

   A B C D E F G H j K L M N O P Q R S T
19 + + + + + + + + + + + + + + + + + + + 19
18 + + + + + + + + + + + + + + + + + + + 18
17 + + + O @ + + + + + + + + + + + + + + 17
16 + + + O + + + O + @ + + + + + @ + + + 16
15 + + + + + + + + + + + + + + + + + + + 15
14 + + O O + + + O + @ + + + + + + + + + 14
13 + + @ + + + + + + + + + + + + + + + + 13
12 + + + + + + + + + + + + + + + + + + + 12
11 + + + + + + + + + + + + + + + + + + + 11
10 + + + + + + + + + + + + + + + + + + + 10
 9 + + + + + + + + + + + + + + + + + + +  9
 8 + + + + + + + + + + + + + O O O O @ +  8
 7 + + O @ + + + + + + + + + O @ @ @ @ @  7
 6 + + @ O O + + + + + + + + + O O O @ +  6
 5 + + O + + + + + + + + + + + + O @ @ +  5
 4 + + + O + + + + + + + + + + + O @ + +  4
 3 + + @ @ + @ + + + + + + + + @ @ O @ +  3
 2 + + + + + + + + + + + + + + + + + + +  2
 1 + + + + + + + + + + + + + + + + + + +  1
   A B C D E F G H j K L M N O P Q R S T

To be held during the Summer 1984 Usenix conference in Salt Lake City, Utah.

Probable Rules
-------- -----

1) The board will be 19 x 19. This size was chosen rather than one of the smaller boards because there is a great deal of accumulated Go "wisdom" that would be worthless on smaller boards.

2) The board positions will be numbered as in the diagram above. The columns will be labeled 'A' through 'T' (excluding 'I') left to right. The rows will be labeled '19' through '1', top to bottom.

3) Play will continue until both programs pass in sequence.
This may be a trouble spot, but looks like the best approach available. Several alternatives were considered: (1) have the referee decide when the game is over by identifying "uncontested" versus "contested" area; (2) limit the game to a certain number of moves; all of them had one or another unreasonable effect.

4) There will be a time limit for each program. This will be in the form of a limit on accumulated "user" time (60 minutes?). If a program goes over the time limit it will be allowed some minimum amount of time for each move (15 seconds?). If no move is generated within the minimum time the game is forfeit.

5) The tournament will use a "referee" program to execute each competing pair of programs; thus the programs must understand a standard set of commands and generate output of a standard form.

   a) Input to the program. All input commands to the program will be in the form of lines of text appearing on the standard input and terminated by a newline.

      1) The placement of a stone will be expressed as letter-number (e.g. "G7"). Note that the letter "I" is not included.
      2) A pass will be expressed as "pass".
      3) The command "time" means the time limit has been exceeded and all further moves must be generated within the shorter minimum time limit.

   b) Output from the program. All output from the program will be in the form of lines of characters sent to the "standard output" (terminated by a newline) and had better be unbuffered.

      1) The placement of a stone will be expressed as letter-number, as in "G12". Note that the letter "I" is not included.
      2) A pass will be expressed as "pass".
      3) Any other output lines will be considered garbage and ignored.
      4) Any syntactically correct but semantically illegal move (e.g. spot already occupied, ko violation, etc.) will be considered a forfeit.

The referee program will maintain a display of the board, the move history, etc.
6) The general form of the tournament will depend on the number of participants, the availability of computing power, etc. If only a few programs are entered, each program will play every other program twice. If many are entered, some form of Swiss system will be used.

7) These rules are not set in concrete ... yet; this one in particular.

Comments, suggestions, contributions, etc. should be sent via uucp to harpo!psl or via U.S. Mail to Peter Langston / Lucasfilm Ltd. / P.O. Box 2009 / San Rafael, CA 94912. For the record: I am neither "at Bell Labs" nor "at Usenix", but rather "at" a company whose net address is a secret (cough, cough!). Thus notices like this must be sent through helpful intermediaries like Harpo. I am, however, organizing this tournament "for" Usenix. ------------------------------ End of AIList Digest ******************** 15-Sep-83 17:26:36-PDT,16490;000000000001 Mail-From: LAWS created at 15-Sep-83 17:21:26 Date: Thursday, September 15, 1983 4:57PM From: AIList Moderator Kenneth Laws Reply-to: AIList@SRI-AI US-Mail: SRI Int., 333 Ravenswood Ave., Menlo Park, CA 94025 Phone: (415) 859-6467 Subject: AIList Digest V1 #57 To: AIList@SRI-AI AIList Digest Friday, 16 Sep 1983 Volume 1 : Issue 57 Today's Topics: Artificial Intelligence - Public Recognition, Programming Languages - Multiple Inheritance & Micro LISPs, Query Systems - Talk by Michael Hess, AI Architectures & Prolog - Talk by Peter Borgwardt, AI Architectures - Human-Nets Reprints ---------------------------------------------------------------------- Date: 10 Sep 1983 21:44:16-PDT From: Richard Tong Subject: "some guy named Alvey" John Alvey is Senior Director, Technology, at British Telecom. The committee that he headed reported to the British Minister for Information Technology in September 1982 ("A Program for Advanced Information Technology", HMSO 1982).
The committee was formed, at the behest of the British Information Technology industry, in response to the announcement of the Japanese 5th Generation Project. The major recommendations were for increased collaboration within industry, and between industry and academia, in the areas of Software Engineering, VLSI, Man-Machine Interfaces and Intelligent Knowledge-Based Systems. The recommended funding levels were approximately $100M, $145M, $66M and $40M respectively. The British Government's response was entirely positive and resulted in the setting up of a small Directorate within the Department of Industry. This is staffed by people from industry and supported by the Government. The most obvious results so far have been the creation of several Information Technology posts in various universities. Whether the research money will appear as quickly remains to be seen. Richard. ------------------------------ Date: Mon 12 Sep 83 22:35:21-PDT From: Edward Feigenbaum Subject: The world turns; would you believe... [Reprinted from the SU-SCORE bboard.] 1. A thing called the Wall Street Computer Review, advertising a conference on computers for Wall Street professionals, with a keynote speech by Isaac Asimov entitled "Artificial Intelligence on Wall Street". 2. In the employment advertising section of last Sunday's NY Times, Bell Labs (of all places!) showing Expert Systems prominently as one of their areas of work and need, and advertising for people to do Expert Systems development using methods of Artificial Intelligence research. Now I'm looking for a big IBM ad in Scientific American... 3. In 2 September SCIENCE, an ad from New Mexico State's Computing Research Laboratory. It says: "To enhance further the technological capabilities of New Mexico, the state has funded five centers of technical excellence including Computing Research Laboratory (CRL) at New Mexico State University.
...The CRL is dedicated to interdisciplinary research on knowledge-based systems" ------------------------------ Date: 15 Sep 1983 15:28-EST From: David.Anderson@CMU-CS-G.ARPA Subject: Re: Multiple Inheritance query For a discussion of multiple inheritance see "Multiple Inheritance in Smalltalk-80" by Alan Borning and Dan Ingalls in the AAAI-82 proceedings. The Lisp Machine Lisp manual also has some justification for multiple inheritance schemes in the chapter on Flavors. --david [See also any discussion of the LOOPS language, e.g., in the Fall issue of AI Magazine. -- KIL] ------------------------------ Date: Wed 14 Sep 83 19:16:41-EDT From: Ted Markowitz Subject: Info on micro LISP dialects Has anyone evaluated versions of LISP that run on micros? I'd like to find out what's already out there and people's impressions of them. The hardware would be something in the nature of an IBM PC or a DEC Rainbow. --ted ------------------------------ Date: 12 Sep 1983 1415-PDT From: Ichiki Subject: Talk by Michael Hess [This talk will be given at the SRI AI Center. Visitors should come to E building on Ravenswood Avenue in Menlo Park and call Joani Ichiki, x4403.] Text Based Question Answering Systems ------------------------------------- Michael Hess University of Texas, Austin Friday, 16 September, 10:30, EK242 Question Answering Systems typically operate on Data Bases consisting of object-level facts and rules. This, however, limits their usefulness quite substantially. Most scientific information is represented as Natural Language texts. These texts provide relatively few basic facts but do give detailed explanations of how they can be interpreted, i.e. how the facts can be linked with the general laws which either explain them, or which can be inferred from them. This type of information, however, does not lend itself to an immediate representation on the object level.
Since there are no known proof procedures for higher-order logics, we have to find makeshift solutions for a suitable text representation with appropriate interpretation procedures. One way is to use the subset of First Order Predicate Calculus as defined by Prolog as a representation language, and a General Purpose Planner (implemented in Prolog) as an interpreter. Answering a question over a textual data base can then be reduced to proving the answer in a model of the world as described in the text, i.e. to planning a sequence of actions leading from the state of affairs given in the text to the state of affairs given in the question. The meta-level information contained in the text is used as control information during the proof, i.e. during the execution of the simulation in the model. Moreover, the format of the data as defined by the planner makes explicit some kinds of information particularly often addressed in questions. The simulation of an experiment in the Blocks World, using the kind of meta-level information important in real scientific experiments, can be used to generate data which, when generalised, could be used directly as a DB for question answering about the experiment. Simultaneously, it serves as a pattern for the representation of possible texts describing the experiment. The question of how to translate NL questions and NL texts into this kind of format, however, has yet to be solved. ------------------------------ Date: 12 Sep 1983 1730-PDT From: Ichiki Subject: Talk by Peter Borgwardt [This talk will be given at the SRI AI Center. Visitors should come to E building on Ravenswood Avenue in Menlo Park and call Joani Ichiki, x4403.] There will be a talk given by Peter Borgwardt on Monday, 9/19 at 10:30am in Conference Room EJ222.
Abstract follows: Parallel Prolog Using Stack Segments on Shared-memory Multiprocessors Peter Borgwardt Computer Science Department University of Minnesota Minneapolis, MN 55455 Abstract A method of parallel evaluation for Prolog is presented for shared-memory multiprocessors that is a natural extension of the current methods of compiling Prolog for sequential execution. In particular, the method exploits stack-based evaluation with stack segments spread across several processors to greatly reduce the need for garbage collection in the distributed computation. AND parallelism and stream parallelism are the most important sources of concurrent execution in this method; these are implemented using local process lists; idle processors may scan these and execute any process as soon as its consumed (input) variables have been defined by the goals that produce them. OR parallelism is considered less important but the method does implement it with process numbers and variable binding lists when it is requested in the source program. ------------------------------ Date: Wed, 14 Sep 83 07:31 PDT From: "Glasser Alan"@LLL-MFE.ARPA Subject: human-nets discussion on AI and architecture Ken, I see you have revived the Human-nets discussion about AI and computer architecture. I initiated that discussion and saved all the replies. I thought you might be interested. I'm sending them to you rather than AILIST so you can use your judgment about what if anything you might like to forward to AILIST. Alan [The following is the original message. The remainder of this digest consists of the collected replies. I am not sure which, if any, appeared in Human-Nets. -- KIL] --------------------------------------------------------------------- Date: 4 Oct 1982 (Monday) 0537-EDT From: GLASSER at LLL-MFE Subject: artificial intelligence and computer architecture I am a new member of the HUMAN-NETS interest group. 
I am also newly interested in Artificial Intelligence, partly as a result of reading "Goedel, Escher, Bach" and similar recent books and articles on AI. While this interest group isn't really about AI, there isn't any other group which is, and since this one covers any computer topics not covered by others, this will do as a forum. From what I've read, it seems that most or all AI work now being done involves using von Neumann computer programs to model aspects of intelligent behavior. Meanwhile, others like Backus (IEEE Spectrum, August 1982, p. 22) are challenging the dominance of von Neumann computers and exploring alternative programming styles and computer architectures. I believe there's a crucial missing link in understanding intelligent behavior. I think it's likely to involve the nature of associative memory, and I think the key to it is likely to involve novel concepts in computer architecture. Discovery of the structure of associative memory could have an effect on AI similar to that of the discovery of the structure of DNA on genetics. Does anyone out there have similar ideas? Does anyone know of any research and/or publications on this sort of thing? --------------------------------------------------------------------- Date: 15 Oct 1982 1406-PDT From: Paul Martin Subject: Re: HUMAN-NETS Digest V5 #96 Concerning the NON-VON project at Columbia: David Shaw, formerly of the Stanford A.I. Lab, is using the development of some non-von Neumann hardware designs to make an interesting class of database access operations no longer require time exponential in the size of the db. He wouldn't call his project AI, but rather an approach to "breaking the von Neumann bottleneck" as it applies to a number of well-understood but poorly solved problems in computing. --------------------------------------------------------------------- Date: 28 Oct 1982 1515-EDT From: David F.
Bacon Subject: Parallelism and AI Reply-to: Columbia at CMU-20C Parallel Architectures for Artificial Intelligence at Columbia While the NON-VON supercomputer is expected to provide significant performance improvements in other areas as well, one of the principal goals of the project is the provision of highly efficient support for large-scale artificial intelligence applications. As Dr. Martin indicated in his recent message, NON-VON is particularly well suited to the execution of relational algebraic operations. We believe, however, that such functions, or operations very much like them, are central to a wide range of artificial intelligence applications. In particular, we are currently developing a parallel version of the PROLOG language for NON-VON (in addition to parallel versions of Pascal, LISP and APL). David Shaw, who is directing the NON-VON project, wrote his Ph.D. thesis at the Stanford A.I. Lab on a subject related to large-scale parallel AI operations. Many of the ideas from his dissertation are being exploited in our current work. The NON-VON machine will be constructed using custom VLSI chips, connected according to a binary tree-structured topology. NON-VON will have a very "fine granularity" (that is, a large number of very small processors). A full-scale NON-VON machine might embody on the order of 1 million processing elements. A prototype version incorporating 1000 PE's should be running by next August. In addition to NON-VON, another machine called DADO is being developed specifically for AI applications (for example, an optimal running time algorithm for Production System programs has already been implemented on a DADO simulator). Professor Sal Stolfo is principal architect of the DADO machine, and is working in close collaboration with Professor Shaw. The DADO machine will contain a smaller number of more powerful processing elements than NON-VON, and will thus have a "coarser" granularity.
DADO is being constructed with off-the-shelf Intel 8751 chips; each processor will have 4K of EPROM and 8K of RAM. Like NON-VON, the DADO machine will be configured as a binary tree. Since it is being constructed using "off-the-shelf" components, a working DADO prototype should be operational at an earlier date than the first NON-VON machine (a sixteen node prototype should be operational in three weeks!). While DADO will be of interest in its own right, it will also be used to simulate the NON-VON machine, providing a powerful testbed for the investigation of massive parallelism. As some people have legitimately pointed out, parallelism doesn't magically solve all your problems ("we've got 2 million processors, so who cares about efficiency?"). On the other hand, a lot of AI problems simply haven't been practical on conventional machines, and parallel machines should help in this area. Existing problems are also sped up substantially [ O(N) sort, O(1) search, O(n^2) matrix multiply ]. As someone already mentioned, vision algorithms seem particularly well suited to parallelism -- this is being investigated here at Columbia. New architectures won't solve all of our problems -- it's painfully obvious on our current machines that even fast expensive hardware isn't worth a damn if you haven't got good software to run on it, but even the best of software is limited by the hardware. Parallel machines will overcome one of the major limitations of computers. David Bacon NON-VON/DADO Research Group Columbia University ------------------------------ Date: 7 Nov 82 13:43:44 EST (Sun) From: Mark Weiser Subject: Re: Parallelism and AI Just to mention another project, The CS department at the University of Maryland has a parallel computing project called Zmob. A Zmob consists of 256 Z-80 processors called moblets, each with 64k memory, connected by a 48 bit wide high speed shift register ring network (100ns/shift, 25.6us/revolution) called the "conveyer belt". 
The conveyer belt acts almost like a 256x256 cross-bar since it rotates faster than a z-80 can do significant I/O, and it also provides for broadcast messages and messages sent and received by pattern match. Each Z-80 has serial and parallel ports, and the whole thing is served by a Vax which provides cross-compiling and file access. There are four projects funded and working on Zmob (other than the basic hardware construction), sponsored by the Air Force. One is parallel numerical analysis, matrix calculations, and the like (the Z-80's have hardware floating point). The second is parallel image processing and vision. The third is distributed problem solving using Prolog. The fourth (mine) is operating systems and software, developing remote-procedure-call and a distributed version of Unix called Mobix. A two-moblet prototype was working a year and a half ago, and we hope to bring up a 128 processor version in the next few months. (The boards are all PC'ed and stuffed but timing problems on the bus are temporarily holding things back).
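[The belt timing quoted above is easy to sanity-check: 256 slots at 100 ns per shift gives 25.6 us per revolution, and a slot loaded at any moblet passes every other moblet within one revolution. A back-of-the-envelope Python sketch (the names and the direction-of-travel convention are mine, not from the Zmob project):

```python
SHIFT_NS = 100     # 100 ns per shift, per the description above
N_MOBLETS = 256    # one belt slot per moblet

def revolution_time_us(n=N_MOBLETS, shift_ns=SHIFT_NS):
    """Time for a slot to travel once around the ring, in microseconds."""
    return n * shift_ns / 1000.0

def shifts_to_reach(src, dst, n=N_MOBLETS):
    """Shifts needed for a slot loaded at moblet src to pass moblet dst,
    assuming the belt advances toward increasing moblet numbers."""
    return (dst - src) % n

print(revolution_time_us())   # 25.6, matching the figure quoted above
```

Worst-case delivery is thus 255 shifts, still well under the time a Z-80 needs for significant I/O, which is what makes the belt behave like a cross-bar. -- Ed.]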
------------------------------ End of AIList Digest ******************** 16-Sep-83 16:23:43-PDT,14866;000000000001 Mail-From: LAWS created at 16-Sep-83 16:22:42 Date: Friday, September 16, 1983 4:10PM From: AIList Moderator Kenneth Laws Reply-to: AIList@SRI-AI US-Mail: SRI Int., 333 Ravenswood Ave., Menlo Park, CA 94025 Phone: (415) 859-6467 Subject: AIList Digest V1 #58 To: AIList@SRI-AI AIList Digest Saturday, 17 Sep 1983 Volume 1 : Issue 58 Today's Topics: Automatic Translation - Ada, Games - Go Programs & Foonberg's Number Problem, Artificial Intelligence - Turing Test & Creativity ---------------------------------------------------------------------- Date: 10 Sep 83 13:50:18-PDT (Sat) From: decvax!wivax!linus!vaxine!wjh12!foxvax1!brunix!rayssd!sdl@Ucb-Vax Subject: Re: Translation into Ada: Request for Info Article-I.D.: rayssd.142 There have been a number of translators from Pascal to Ada; the first successful one I know of was developed at UC Berkeley by P. Albrecht, S. Graham et al. See the "Source-to-Source Translation" paper in the Proceedings of the Sigplan Symp. on Ada, Dec. 1980. At Univ. S. Calif. Info. Sci. Institute (USC-ISI), Steve Crocker (now at the Aerospace Corp.) developed AUTOPSY, a translator from CMS-2 to Ada. (CMS-2 is the Navy standard language for embedded software.) Steve Litvintchouk Raytheon Company Portsmouth, RI 02871 ------------------------------ Date: 10 Sep 83 13:56:17-PDT (Sat) From: decvax!wivax!linus!vaxine!wjh12!foxvax1!brunix!rayssd!sdl@Ucb-Vax Subject: Re: Go Tournament Article-I.D.: rayssd.143 Are there any available Go programs which run on VAX/UNIX which I could obtain? (Either commercially sold, or available from universities, or whatever.) I find Go fascinating and would love to have a Go program to play against.
Please reply via USENET, or to: Steve Litvintchouk Raytheon Company Submarine Signal Division Portsmouth, RI 02871 (401)847-8000 x4018 ------------------------------ Date: 14 Sep 1983 16:18-EDT From: Dan Hoey Subject: Alan Foonberg's number problem I'm surprised you posted Alan Foonberg's number problem on AIlist since Vivek Sarkar's solution has already appeared (Prolog digest V1 #28). I enclose his solution below. His solution unfortunately omits the special cases 2020 and 21200; I have sent a correction to the Prolog digest. Dan ------------------------------ Date: Wed 7 Sep 83 11:08:08-PDT From: Vivek Sarkar Subject: Solution to Alan Foonberg's Number Puzzle Here is a general solution to the puzzle posed by Alan Foonberg: My generalisation is to consider n-digit numbers in base n. The digits can therefore take on values in the range 0 .. n-1. A summary of the solution is: n = 4: 1210 n >= 7: (n-4) 2 1, followed by a run of (n-7) 0's, then 1 0 0 0 Further, these describe ALL possible solutions, i.e. radix values of 2, 3, 5 and 6 have no solutions, and the other radix values have exactly one solution each. Proof: Case 2 <= n <= 6: Consider these as singular cases. It is simple to show that there are no solutions for 2, 3, 5 and 6 and that 1210 is the only solution for 4. You can do this by writing a program to generate all solutions for a given radix. ( I did that; unfortunately it works out better in Pascal than Prolog ! ) CASE n >= 7: It is easy to see that the given number is indeed a solution. ( The rightmost 1 represents the single occurrence of (n-4) at the beginning ). For motivation, we can substitute n=10 and get 6210001000, which was the decimal solution provided by Alan. The tough part is to show that this represents the only solution, for a given radix. We do this by considering all possible values for the first digit ( call it d0 ) and showing that d0=(n-4) is the only one which can lead to a solution. SUBCASE d0 < (n-4): Let d0 = n-4-j, where j>=1.
Therefore the number has (n-4-j) 0's, which leaves (j+3) non-zero digits apart from d0. Further these (j+3) digits must add up to (j+4). ( The sum of the digits of a solution must be n, as there are n digits in the number, and the value of each digit contributes to a frequency count of digits with its positional value). The only way that (j+3) non-zero digits can add up to (j+4) is by having (j+2) 1's and one 2. If there are (j+2) 1's, then the second digit from the left, which counts the number of 1's (call it d1) must = (j+2). Since j >= 1, d1=(j+2) is neither a 1 nor a 2. Contradiction ! SUBCASE d0 > (n-4): This leads to 3 possible values for d0: (n-1), (n-2) & (n-3). It is simple to consider each value and see that it can't possibly lead to a solution, by using an analysis similar to the one above. We therefore conclude that d0=(n-4), and it is straightforward to show that the given solution is the only possible one, for this value of d0. -- Q.E.D. ------------------------------ Date: Wed 14 Sep 83 17:25:38-PDT From: Ken Laws Subject: Re: Alan Foonberg's number problem Thanks for the note and the correction. I get the Prolog digest a little delayed, so I hadn't seen the answer at the time I relayed the problem. My purpose in sending out the problem actually had nothing to do with finding the answer. The answer you forwarded is a nice mathematical proof, but the question is whether and how AI techniques could solve the problem. Would an AI program have to reason in the same manner as a mathematician? Would different AI techniques lead to different answers? How does one represent the problem and the solution in machine-readable form? Is this an interesting class of problems for cognitive science to deal with? I was expecting that someone would respond with a 10-line PROLOG program that would solve the problem. The discussion that followed might contrast that with the LISP or ALGOL infrastructure needed to solve the problem. 
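[The exhaustive check Sarkar mentions for the small radices ("writing a program to generate all solutions for a given radix") fits in a few lines, even if not the ten lines of PROLOG Ken hoped for. A hypothetical Python sketch, not from the digest; brute force is only practical for small n:

```python
from itertools import product

def self_descriptive(n):
    """All n-digit base-n numerals whose digit at position i counts the
    occurrences of digit i in the numeral (Foonberg's puzzle)."""
    solutions = []
    for digits in product(range(n), repeat=n):
        if digits[0] == 0:   # leading zero: not an n-digit numeral
            continue
        if all(digits[i] == digits.count(i) for i in range(n)):
            solutions.append(''.join(str(d) for d in digits))
    return solutions
```

Running it confirms both Sarkar's 1210 and the special cases Hoey notes: base 4 yields 1210 and 2020, base 5 yields 21200, and base 7 yields 3211000, matching the (n-4) 2 1 ... 1 0 0 0 pattern. -- Ed.]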
Now, of course, I don't expect anyone to present algorithmic solutions. -- Ken Laws ------------------------------ Date: 9 Sep 83 13:15:56-PDT (Fri) From: harpo!floyd!cmcl2!csd1!condict @ Ucb-Vax Subject: Re: in defense of Turing - (nf) Article-I.D.: csd1.116 A comment on the statement that it is easy to trip up an allegedly intelligent machine that generates responses by using the input as an index into an array of possible outputs: Yes, but this machine has no state and hence hardly qualifies as a machine at all! The simple tricks you described cannot be used if we augment it to use the entire sequence of inputs so far as the index, instead of just the most recent one, when generating its response. This allows it to take into account sequences that contain runs of identical inputs and to understand inputs that refer to previous inputs (or even Hofstadteresque self-referential inputs). My point is not that this new machine cannot be tripped up but that the one described is such a straw man that fooling it gives no information about the real difficulty of programming a computer to pass the Turing test. ------------------------------ Date: 10 Sep 83 22:20:39-PDT (Sat) From: decvax!wivax!linus!philabs!seismo!rlgvax!cvl!umcp-cs!speaker@Ucb-Vax Subject: Re: in defense of Turing Article-I.D.: umcp-cs.2538 It should be fairly obvious that the Turing test is not a precise test to determine intelligence because the very meaning of the word 'intelligence' cannot be precisely pinned down, despite what your Oxford dictionary might say. I think the idea here is that if a machine can perform such that it is indistinguishable from the behavior of a human then it can be said to display human intelligence. Note that I said, "human intelligence." It is even debatable whether certain members of the executive branch can be said to be intelligent. If we can't apply the Turing test there... then surely we're just spinning our wheels in an attempt to apply it universally.
- Speaker -- Full-Name: Speaker-To-Animals Csnet: speaker@umcp-cs Arpa: speaker.umcp-cs@UDel-Relay This must be hell...all I can see are flames... towering flames! ------------------------------ Date: Wed 14 Sep 83 12:35:11-PDT From: David Rogers Subject: intelligence and genius [This continues a discussion on Human-Nets. My original statement, printed below, was shot down by several people. Individuals certainly derive satisfaction from hobbies at which they will never excel. It would take much of the fun out of my life, however, if I could not even imagine excelling at anything because cybernetic life had surpassed humans in every way. -- KIL] From: Ken Laws Life will get even worse if AI succeeds in automating true creativity. What point would there be in learning to paint, write, etc., if your home computer could knock out more artistic creations than you could ever hope to master? I was rather surprised that this suggestion was taken so quickly as it stands. Most people in AI believe that we will someday create an "intelligent" machine, but Ken's claim seems to go beyond that; "automating true creativity" seems to be saying that we can create not just intelligent, but "genius" systems, at will. The automation of genius is a more sticky claim in my mind. For example, if we create an intelligent system, do we make it a genius system by just turning up the speed or increasing its memory? That's like saying a painter could become Rembrandt if he/she just painted 1000 times more. More likely is that the wrong (or uncreative) ideas would simply pour out faster, or be remembered longer. Turning up the speed of the early blind-search chess programs made them marginally better players, but no more creative. Or let's say we stumble onto the creation of some genius system, call it "Einstein". Do we get all of the new genius systems we need by merely duplicating "Einstein", something impossible to do with human systems? Again, we hit a dead end...
"Einstein" will only be useful in a small domain of creativity, and will never be a Bach or a Rembrandt no matter how many we clone. Even more discouraging, if we xerox off 1000 of our "Einstein" systems, do we get 1000 times the creative ideas? Probably not; we will cover the range of "Einstein's" potential creativity better, but that's it. Even a genius has only a range of creativity. What is it about genius systems that makes them so intractable? If we will someday create intelligent systems consistently and reliably, what stands in the way of creating genius systems on demand? I would suggest that statistics get in our way here; that genius systems cannot be created out of dust, but that every once in a while, an intelligent system has the proper conditioning and evolves into a genius system. In this light, the number of genius systems possible depends on the pool of intelligent systems that are available as substrate. In short, while I feel we will be able to create intelligent systems, we will not be able to directly construct superintelligent ones. While there will be advantages in duplicating, speeding up, or otherwise manipulating a genius system once created, the process of creating one will remain maddeningly elusive. David Rogers DRogers@SUMEX-AIM.ARPA [I would like to stake out a middle ground: creative systems. We will certainly have intelligent systems, and we will certainly have trouble devising genius systems. (Genius in human terms: I don't want to get into whether an AI program can be >>sui generis<< if we can produce a thousand variations of it before breakfast.) A [scientific] genius is someone who develops an idea for which there is, or at least seems to be, no precedent. Creativity, however, can exist in a lesser being. Forget Picasso, just consider an ordinary artist who sees a new style of bold, imaginative painting. 
The artist has certain inborn or learned measures of artistic merit: color harmony, representational accuracy, vividness, brush technique, etc. He evaluates the new painting and finds that it exists in a part of his artistic "parameter space" that he has never explored. He is excited, and carefully studies the painting for clues as to the techniques that were used. He hypothesizes rules for creating similar visual effects, tries them out, modifies them, iterates, adds additional constraints (yes, but can I do it with just rectangles ...), etc. This is creativity. Nothing that I have said above precludes our artist from being a machine. Another example, which I believe I heard from a recent Stanford Ph.D. (sorry, can't remember who): consider Solomon's famous decision. Everyone knows that a dispute over property can often be settled by dividing the property, providing that the value of the property is not destroyed by the act of division. Solomon's creative decision involved the realization (at least, we hope he realized it) that in a particular case, if the rule was implemented in a particular theatrical manner, the precondition could be ignored and the rule would still achieve its goal. We can then imagine Solomon to be a rule-based system with a metasystem that is constantly checking for generalizations, specializations, and heuristic shortcuts to the normal rule sequences. I think that Doug Lenat's EURISKO program has something of this flavor, as do other learning programs. In the limit, we can imagine a system with nearly infinite computing power that builds models of its environment in its memory. It carries out experiments on this model, and verifies the experiments by carrying them out in the real world when it can. It can solve ordinary problems through various applicable rule invocations, unifications, planning, etc.
Problems requiring creativity can often be solved by applying inappropriate rules and techniques (i.e., violating their preconditions) just to see what will happen -- sometimes it will turn out that the preconditions were unnecessarily strict. [The system I have just described is a fair approximation to a human -- or even to a monkey, dog, or elephant.] True genius in such a system would require that it construct new paradigms of thought and problem solving. This will be much more difficult, but I don't doubt that we and our cybernetic offspring will even be able to construct such progeny someday. -- Ken Laws ] ------------------------------ End of AIList Digest ******************** 19-Sep-83 16:30:08-PDT,16325;000000000001 Mail-From: LAWS created at 19-Sep-83 16:26:59 Date: Monday, September 19, 1983 4:16PM From: AIList Moderator Kenneth Laws Reply-to: AIList@SRI-AI US-Mail: SRI Int., 333 Ravenswood Ave., Menlo Park, CA 94025 Phone: (415) 859-6467 Subject: AIList Digest V1 #59 To: AIList@SRI-AI AIList Digest Tuesday, 20 Sep 1983 Volume 1 : Issue 59 Today's Topics: Programming Languages - Micro LISP Reviews, Machine Translation - Ada & Dictionary Request & Grammar Translation, AI Journals - Addendum, Bibliography - SNePS Research Group ---------------------------------------------------------------------- Date: Mon, 19 Sep 1983 11:41 EDT From: WELD%MIT-OZ@MIT-MC Subject: Micro LISPs For a survey of micro LISPs see the August and Sept issues of Microsystems magazine. The Aug issue reviews muLISP, Supersoft LISP and The Stiff Upper Lisp. I believe that the Sept issue will continue the survey with some more reviews. Dan ------------------------------ Date: 14 Sep 83 1:44:58-PDT (Wed) From: decvax!genrad!mit-eddie!barmar @ Ucb-Vax Subject: Re: Translation into Ada: Request for Info Article-I.D.: mit-eddi.713 I think the reference to the WWMCS conversion effort is a bad example when talking about automatic programming language translation.
I would be very surprised if WWMCS is written in a high-level language. It runs on Honeywell GCOS machines, I believe, and I think that GCOS system programming is traditionally done in GMAP (GCOS Macro Assembler Program), especially at the time that WWMCS was written. Only a masochist would even think of writing an automatic "anticompiler" (I have heard of uncompilers, but those are usually restricted to figuring out the code produced by a known compiler, not arbitrary human coding); researchers have found it hard enough to teach computers to "understand" programs in HLLs, and it is often pretty difficult for humans to understand others' assembler code. -- Barry Margolin ARPA: barmar@MIT-Multics UUCP: ..!genrad!mit-eddie!barmar ------------------------------ Date: Mon 19 Sep 83 14:56:49-CDT From: Werner Uhrig Subject: Request for m/c-readable foreign language dictionary info I am looking for foreign-language dictionaries in machine-readable form. Of particular interest would be a subset containing EDP-terminology. This would be used to help automate translation of computer-related technical materials. Of major interest are German, Spanish, French, but others might be useful also. Any pointers appreciated. Werner (UUCP: ut-ngp!werner or ut-ngp!utastro!werner via: { decvax!eagle , ucbvax!nbires , gatech!allegra!eagle , ihnp4 } ARPA: werner@utexas-20 or werner@utexas-11 ) ------------------------------ Date: 19 Sep 1983 0858-PDT From: PAZZANI at USC-ECL Subject: Parsifal I have a question about PARSIFAL (Marcus's deterministic parser) that I hope someone can answer: Is it easy (or possible) to convert grammar rules to the kind of rules that Parsifal uses? Is there an algorithm to do so? (i.e., by grammar rule, I mean things like: S -> NP VP VP -> VP2 NP PP VP -> V3 INF INF -> to VP etc. where by grammar rule Marcus means things like {RULE MAJOR-DECL-S in SS-START [=np][=verb]--> Label c decl,major. Deactivate ss-start.
Activate parse-subj.} {RULE UNMARKED-ORDER IN PARSE-SUBJ [=np][=verb]--> Attach 1st to c as np. Deactivate Parse-subj. Activate parse-aux.} Thanks in advance, Mike Pazzani Pazzani@usc-ecl ------------------------------ Date: 16 Sep 83 16:58:30-PDT (Fri) From: ihnp4!cbosgd!cbscc!cbscd5!lvc @ Ucb-Vax Subject: addendum to AI journal list Article-I.D.: cbscd5.589 The following are journals that readers have sent me since the time I posted the list of AI journals. As has been pointed out, individuals can get subscriptions at a reduced rate. Most of the prices I quoted were the institutional price. The American Journal of Computational Linguistics -- will now be called -> Computational Linguistics Subscription $15 Don Walker, ACL SRI International Menlo Park, CA 94025. ------------------------------ Cognition and Brain Theory Lawrence Erlbaum Associates, Inc. 365 Broadway, Hillsdale, New Jersey 07642 $18 Individual $50 Institutional Quarterly Basic cognition, proposed models and discussion of consciousness and mental process, epistemology - from frames to neurons, as related to human cognitive processes. A "fringe" publication for AI topics, and a good forum for issues in cognitive science/psychology. ------------------------------ New Generation Computing Springer-Verlag New York Inc. Journal Fulfillment Dept. 44 Hartz Way Secaucus, NJ 07094 A quarterly English-language journal devoted to international research on the fifth generation computer. [It seems to be very strong on hardware and logic programming.] 1983 - 2 issues - $52. (Sample copy free.) 1984 - 4 issues - $104. 
Larry Cipriani cbosgd!cbscd5!lvc ------------------------------ Date: 16 Sep 1983 10:38:57-PDT From: shapiro%buffalo-cs@UDel-Relay Subject: Your request for bibliographies Bibliography SNeRG: The SNePS Research Group Department of Computer Science State University of New York at Buffalo Amherst, New York 14226 Copies of Departmental Technical Reports (marked with an "*") should be requested from The Library Committee, Dept. of Computer Science, SUNY/Buffalo, 4226 Ridge Lea Road, Amherst, NY 14226. Businesses are asked to enclose $3.00 per report requested with their requests. Others are asked to enclose $1.00 per report. Copies of papers other than Departmental Technical Reports may be requested directly from Prof. Stuart C. Shapiro at the above address. 1. Shapiro, S. C. [1971] A net structure for semantic information storage, deduction and retrieval. Proc. Second International Joint Conference on Artificial Intelligence, William Kaufman, Los Altos, CA, 212-223. 2. Shapiro, S. C. [1972] Generation as parsing from a network into a linear string. American Journal of Computational Linguistics, Microfiche 33, 42-62. 3. Shapiro, S. C. [1976] An introduction to SNePS (Semantic Net Processing System). Technical Report No. 31, Computer Science Department, Indiana University, Bloomington, IN, 21pp. 4. Shapiro, S. C. and Wand, M. [1976] The Relevance of Relevance. Technical Report No. 46, Computer Science Department, Indiana University, Bloomington, IN, 21pp. 5. Bechtel, R. and Shapiro, S. C. [1976] A logic for semantic networks. Technical Report No. 47, Computer Science Department, Indiana University, Bloomington, IN, 29pp. 6. Shapiro, S. C. [1977] Representing and locating deduction rules in a semantic network. Proc. Workshop on Pattern-Directed Inference Systems. SIGART Newsletter 63, 14-18. 7. Shapiro, S. C. [1977] Representing numbers in semantic networks: prolegomena. Proc.
5th International Joint Conference on Artificial Intelligence, William Kaufman, Los Altos, CA, 284. 8. Shapiro, S. C. [1977] Compiling deduction rules from a semantic network into a set of processes. Abstracts of Workshop on Automatic Deduction, MIT, Cambridge, MA. (Abstract only), 7pp. 9. Shapiro, S. C. [1978] Path-based and node-based inference in semantic networks. In D. Waltz, ed. TINLAP-2: Theoretical Issues in Natural Language Processing. ACM, New York, 219-222. 10. Shapiro, S. C. [1979] The SNePS semantic network processing system. In N. V. Findler, ed. Associative Networks: The Representation and Use of Knowledge by Computers. Academic Press, New York, 179-203. 11. Shapiro, S. C. [1979] Generalized augmented transition network grammars for generation from semantic networks. Proc. 17th Annual Meeting of the Association for Computational Linguistics. University of California at San Diego, 22-29. 12. Shapiro, S. C. [1979] Numerical quantifiers and their use in reasoning with negative information. Proc. Sixth International Joint Conference on Artificial Intelligence, William Kaufman, Los Altos, CA, 791-796. 13. Shapiro, S. C. [1979] Using non-standard connectives and quantifiers for representing deduction rules in a semantic network. Invited paper presented at Current Aspects of AI Research, a seminar held at the Electrotechnical Laboratory, Tokyo, 22pp. 14. * McKay, D. P. and Shapiro, S. C. [1980] MULTI: A LISP Based Multiprocessing System. Technical Report No. 164, Department of Computer Science, SUNY at Buffalo, Amherst, NY, 20pp. (Contains appendices not in LISP conference version) 15. McKay, D. P. and Shapiro, S. C. [1980] MULTI - A LISP based multiprocessing system. Proc. 1980 LISP Conference, Stanford University, Stanford, CA, 29-37. 16. Shapiro, S. C. and McKay, D. P. [1980] Inference with recursive rules. Proc. First Annual National Conference on Artificial Intelligence, William Kaufman, Los Altos, CA, 121-123. 17. Shapiro, S. C.
[1980] Review of Fahlman, Scott. NETL: A System for Representing and Using Real-World Knowledge. MIT Press, Cambridge, MA, 1979. American Journal of Computational Linguistics 6, 3, 183-186. 18. McKay, D. P. [1980] Recursive Rules - An Outside Challenge. SNeRG Technical Note No. 1, Department of Computer Science, SUNY at Buffalo, Amherst, NY, 11pp. 19. * Maida, A. S. and Shapiro, S. C. [1981] Intensional concepts in propositional semantic networks. Technical Report No. 171, Department of Computer Science, SUNY at Buffalo, Amherst, NY, 69pp. 20. * Shapiro, S. C. [1981] COCCI: a deductive semantic network program for solving microbiology unknowns. Technical Report No. 173, Department of Computer Science, SUNY at Buffalo, Amherst, NY, 24pp. 21. * Martins, J.; McKay, D. P.; and Shapiro, S. C. [1981] Bi-directional Inference. Technical Report No. 174, Department of Computer Science, SUNY at Buffalo, Amherst, NY, 32pp. 22. * Martins, J., and Shapiro, S. C. [1981] A Belief Revision System Based on Relevance Logic and Heterarchical Contexts. Technical Report No. 172, Department of Computer Science, SUNY at Buffalo, Amherst, NY, 42pp. 23. Shapiro, S. C. [1981] Summary of Scientific Progress. SNeRG Technical Note No. 3, Department of Computer Science, SUNY at Buffalo, Amherst, NY, 2pp. 24. McKay, D. P. and Martins, J. SNePSLOG User's Manual. SNeRG Technical Note No. 4, Department of Computer Science, SUNY at Buffalo, Amherst, NY, 8pp. 25. McKay, D. P.; Shubin, H.; and Martins, J. [1981] RIPOFF: Another Text Formatting Program. SNeRG Technical Note No. 2, Department of Computer Science, SUNY at Buffalo, Amherst, NY, 18pp. 26. * Neal, J. [1981] A Knowledge Engineering Approach to Natural Language Understanding. Technical Report No. 179, Computer Science Department, SUNY at Buffalo, Amherst, NY, 67pp. 27. * Srihari, R. [1981] Combining Path-based and Node-based Reasoning in SNePS. Technical Report No. 183, Department of Computer Science, SUNY at Buffalo, Amherst, NY, 22pp. 28.
McKay, D. P.; Martins, J.; Morgado, E.; Almeida, M.; and Shapiro, S. C. [1981] An Assessment of SNePS for the Navy Domain. SNeRG Technical Note No. 6, Department of Computer Science, SUNY at Buffalo, Amherst, NY, 48pp. 29. Shapiro, S. C. [1981] What do Semantic Network Nodes Represent? SNeRG Technical Note No. 7, Department of Computer Science, SUNY at Buffalo, Amherst, NY, 12pp. Presented at the workshop on Foundational Threads in Natural Language Processing, SUNY at Stony Brook. 30. McKay, D. P., and Shapiro, S. C. [1981] Using active connection graphs for reasoning with recursive rules. Proceedings of the Seventh International Joint Conference on Artificial Intelligence, William Kaufman, Los Altos, CA, 368-374. 31. Shapiro, S. C. and The SNePS Implementation Group [1981] SNePS User's Manual. Department of Computer Science, SUNY at Buffalo, Amherst, NY, 44pp. 32. Shapiro, S. C.; McKay, D. P.; Martins, J.; and Morgado, E. [1981] SNePSLOG: A "Higher Order" Logic Programming Language. SNeRG Technical Note No. 8, Department of Computer Science, SUNY at Buffalo, Amherst, NY, 16pp. Presented at the Workshop on Logic Programming for Intelligent Systems, R.M.S. Queen Mary, Long Beach, CA. 33. * Shubin, H. [1981] Inference and Control in Multiprocessing Environments. Technical Report No. 186, Department of Computer Science, SUNY at Buffalo, Amherst, NY, 26pp. 34. Shapiro, S. C. [1982] Generalized Augmented Transition Network Grammars for Generation from Semantic Networks. The American Journal of Computational Linguistics 8, 1 (January - March), 12-22. 35. Almeida, M.J. [1982] NETP2 - A Parser for a Subset of English. SNeRG Technical Note No. 9, Department of Computer Science, SUNY at Buffalo, Amherst, NY, 32pp. 36. * Tranchell, L.M. [1982] A SNePS Implementation of KL-ONE, Technical Report No. 198, Department of Computer Science, SUNY at Buffalo, Amherst, NY, 21pp. 37. Shapiro, S.C. and Neal, J.G. [1982] A Knowledge Engineering Approach to Natural Language Understanding.
Proceedings of the 20th Annual Meeting of the Association for Computational Linguistics, ACL, Menlo Park, CA, 136-144. 38. Donlon, G. [1982] Using Resource Limited Inference in SNePS. SNeRG Technical Note No. 10, Department of Computer Science, SUNY at Buffalo, Amherst, NY, 10pp. 39. Nutter, J. T. [1982] Defaults revisited or "Tell me if you're guessing". Proceedings of the Fourth Annual Conference of the Cognitive Science Society, Ann Arbor, MI, 67-69. 40. Shapiro, S. C.; Martins, J.; and McKay, D. [1982] Bi-directional inference. Proceedings of the Fourth Annual Meeting of the Cognitive Science Society, Ann Arbor, MI, 90-93. 41. Maida, A. S. and Shapiro, S. C. [1982] Intensional concepts in propositional semantic networks. Cognitive Science 6, 4 (October-December), 291-330. 42. Martins, J. P. [1983] Belief revision in MBR. Proceedings of the 1983 Conference on Artificial Intelligence, Rochester, MI. 43. Nutter, J. T. [1983] What else is wrong with non-monotonic logics?: representational and informational shortcomings. Proceedings of the Fifth Annual Meeting of the Cognitive Science Society, Rochester, NY. 44. Almeida, M. J. and Shapiro, S. C. [1983] Reasoning about the temporal structure of narrative texts. Proceedings of the Fifth Annual Meeting of the Cognitive Science Society, Rochester, NY. 45. * Martins, J. P. [1983] Reasoning in Multiple Belief Spaces. Ph.D. Dissertation, Technical Report No. 203, Computer Science Department, SUNY at Buffalo, Amherst, NY, 381pp. 46. Martins, J. P. and Shapiro, S. C. [1983] Reasoning in multiple belief spaces. Proceedings of the Eighth International Joint Conference on Artificial Intelligence, William Kaufman, Los Altos, CA, 370-373. 47. Nutter, J. T. [1983] Default reasoning using monotonic logic: a modest proposal. Proceedings of The National Conference on Artificial Intelligence, William Kaufman, Los Altos, CA, 297-300.
------------------------------ End of AIList Digest ******************** 20-Sep-83 09:53:36-PDT,13854;000000000001 Mail-From: LAWS created at 20-Sep-83 09:50:05 Date: Tuesday, September 20, 1983 9:41AM From: AIList Moderator Kenneth Laws Reply-to: AIList@SRI-AI US-Mail: SRI Int., 333 Ravenswood Ave., Menlo Park, CA 94025 Phone: (415) 859-6467 Subject: AIList Digest V1 #60 To: AIList@SRI-AI AIList Digest Tuesday, 20 Sep 1983 Volume 1 : Issue 60 Today's Topics: AI Journals - AI Journal Changes, Applications - Cloud Data & AI and Music, Games - Go Tournament, Intelligence - Turing test & Definitions ---------------------------------------------------------------------- Date: Mon, 19 Sep 83 18:51 PDT From: Bobrow.PA@PARC-MAXC.ARPA Subject: News about the Artificial Intelligence Journal Changes in the Artificial Intelligence Journal Daniel G. Bobrow (Editor-in-chief) There have been a number of changes in the Artificial Intelligence Journal which are of interest to the AI community. 1) The size of the journal is increasing. In 1982, the journal was published in two volumes of three issues each (about 650 printed pages per year). In 1983, we increased the size to two volumes of four issues each (about 900 printed pages per year). In order to accommodate the increasing number of high quality papers that are being submitted to the journal, in 1984 the journal will be published in three volumes of three issues each (about 1000 printed pages per year). 2) Despite the journal size increase, North Holland will maintain the current price of $50 per year for personal subscriptions for individual (non-institutional) members of major AI organizations (e.g. AAAI, SIGART).
To obtain such a subscription, members of such organizations should send a copy of their membership acknowledgement, and their check for $50 (made out to Artificial Intelligence) to: Elsevier Science Publishers Attn: John Tagler 52 Vanderbilt Avenue New York, New York 10017 North Holland (Elsevier) will acknowledge receipt of the request for subscription, provide information about which issues will be included in your subscription, and when they should arrive. Back issues are not available at the personal rate. 3) The AIJ editorial board has recognized the need for good review articles in subfields of AI. To encourage the writing of such articles, an honorarium of $1000 will be awarded to the authors of any review accepted by the journal. Although review papers will go through the usual review process, when accepted they will be given priority in the publication queue. Potential authors are reminded that review articles are among the most cited articles in any field. 4) The publication process takes time. To keep an even flow of papers in the journal, we must maintain a queue of articles of about six months. To allow people to know about important research results before articles have been published, we will publish, in earlier issues of the journal, lists of papers accepted for publication, and make such lists available to other magazines (e.g. AAAI magazine, SIGART news). 5) New book review editor: Mark Stefik has taken the job of book review editor for the Artificial Intelligence Journal. The following note from Mark describes his plans to make the book review section much more active than it has been in the past. ------------------ The Book Review Section of the Artificial Intelligence Journal Mark Stefik - Book Review Editor I am delighted for this opportunity to start an active review column for AI, and invite your suggestions and participation. This is an especially good time to review work in artificial intelligence.
Not only is there a surge of interest in AI, but there are also many new results and publications in computer science, in the cognitive sciences and in other related sciences. Many new projects are just beginning and finding new directions (e.g., machine learning, computational linguistics), new areas of work are opening up (e.g., new architectures), and others are reporting on long term projects that are maturing (computer vision). Some readers will want to track progress in specialized areas; others will find inspiration and direction from work breaking outside the field. There is enough new and good but unreviewed work that I would like to include two or three book reviews in every issue of Artificial Intelligence. I would like this column of book reviews to become essential reading for the scientific audience of this journal. My goal is to cover both scientific works and textbooks. Reviews of scientific work will not only provide an abstract of the material, but also show how it fits into the body of existing work. Reviews of textbooks will discuss not only clarity and scope, but also how well the textbook serves for teaching. For controversial work of major interest I will seek more than one reviewer. To get things started, I am seeking two things from the community now. First, suggestions of books for review. Books written in the past five years or so will be considered. The scope of the fields considered will be broad. The main criterion will be scientific interest to the readership. For example, books from as far afield as cultural anthropology or sociobiology will be considered if they are sufficiently relevant, and readable by an AI audience. Occasionally, important books intended for a popular audience will also be considered. My second request is for reviewers. I will be asking colleagues for reviews of particular books, but will also be open both to volunteers and suggestions.
Although I will tend to solicit reviews from researchers of breadth and maturity, I recognize that graduate students preparing theses are some of the best-read people in specialized areas. For them, reviews in Artificial Intelligence will be a good way to share the fruits of intensive reading in thesis preparation, and also to achieve some visibility. Reviewers will receive a personal copy of the book reviewed. Suggestions will reach me at the following address. Publishers should send two copies of works to be reviewed. Mark Stefik Knowledge Systems Area Xerox Palo Alto Research Center 3333 Coyote Hill Road Palo Alto, California 94304 ARPANET Address: STEFIK@PARC ------------------------------ Date: Mon, 19 Sep 83 17:09:09 PDT From: Alex Pang Subject: help on satellite image processing I'm planning to do some work on cloud formation prediction based either purely on previous cloud formations or together with some other information - e.g. pressure, humidity, wind, etc. Does anyone out there know of any existing system doing any related stuff on this, and if so, how and where can I get more information on it? Also, do any of you know where I can get satellite data with 3D cloud information? Thank you very much. alex pang ------------------------------ Date: 16 Sep 83 22:26:21 EDT (Fri) From: Randy Trigg Subject: AI and music Speaking of creativity and such, I've had an interest in AI and music for some time. What I'd like is any pointers to companies and/or universities doing work in such areas as cognitive aspects of appreciating and creating music, automated music analysis and synthesis, and "smart" aids for composers and students. Assuming a reasonable response, I'll post results to the AIList. Thanks in advance.
Randy Trigg ...!seismo!umcp-cs!randy (Usenet) randy.umcp-cs@udel-relay (Arpanet) ------------------------------ Date: 17 Sep 83 23:51:40-PDT (Sat) From: harpo!utah-cs!utah-gr!thomas @ Ucb-Vax Subject: Re: Go Tournament Article-I.D.: utah-gr.908 I'm sure we could find some time on one of our Vaxen for a Go tournament. If you're writing it on some other machine, make sure it is portable. =Spencer ------------------------------ Date: Fri 16 Sep 83 20:07:31-PDT From: Richard Treitel Subject: Turing test It was once playfully proposed to permute the actors in the classical definition of the Turing test, and thus define an intelligent entity as one that can tell the difference between a human and a (deceptively programmed) computer. It may have been prompted by the well-known incident involving Eliza. The result is that, as our AI systems get better, the standard for intelligence will increase. This definition may even enable some latter-day Goedel to prove mathematically that computers can never be intelligent! - Richard :-) ------------------------------ Date: Fri, 16 Sep 83 19:36:53 PDT From: harry at lbl-nmm Subject: Psychology and Artificial Intelligence. Members of this list might find it interesting to read an article ``In Search of Unicorns'' by M. A. Boden (author of ``Artificial Intelligence and Natural Man'') in The Sciences (published by the New York Academy of Sciences). It discusses the `computational style' in theoretical psychology. It is not a technical article. Harry Weeks ------------------------------ Date: 15 Sep 83 17:10:04-PDT (Thu) From: ihnp4!arizona!robert @ Ucb-Vax Subject: Another Definition of Intelligence Article-I.D.: arizona.4675 A problem that bothers me about the Turing test is having to provoke the machine with such specific questioning.
So jumping ahead a couple of steps, I would accept a machine as an adequate intelligence if it could listen to a conversation between other intelligences, and be able to interject at appropriate points such that these others would not be able to infer the mechanical aspect of this new source. Our experiences with human intelligence would make us very suspicious of anyone or anything that sits quietly without new, original, or synthetic comments while being within an environment of discussion. And then to fully qualify, upon overhearing these discussions over the net, I'd expect it to start conjecturing on the question of intelligence, produce its own definition, and then start sending out feelers to ascertain if there is anything out there qualifying under its definition. ------------------------------ Date: 16 Sep 83 23:11:08-PDT (Fri) From: decvax!linus!philabs!seismo!rlgvax!cvl!umcp-cs!speaker @ Ucb-Vax Subject: Re: Another Definition of Intelligence Article-I.D.: umcp-cs.2608 Finally, someone has come up with a fresh point of view in an otherwise stale discussion! Arizona!robert suggests that a machine could be classified as intelligent if it can discern intelligence within its environment, as opposed to being prodded into displaying intelligence. But how can we tell if the machine really has a discerning mind? Does it get involved in an interesting conversation and respond with its own ideas? Perhaps it just sits back and says nothing, considering the conversation too trivial to participate in. And therein lies the problem with this idea. What if the machine doesn't feel compelled to interact with its environment? Is this a sign of inability, or disinterest? Possibly disinterest. A machine mind might not be interested in its environment, but in its own thoughts. Its own thoughts ARE its environment. Perhaps it's a sign of some mental aberration.
I'm sure that sufficiently intelligent machines will be able to develop all sorts of wonderfully neurotic patterns of behavior. I know. Let's build a machine with only a console for an output device and wait for it to say, "Hey, anybody intelligent out there?" "You got any VAXEN out there?" - Speaker -- Full-Name: Speaker-To-Animals Csnet: speaker@umcp-cs Arpa: speaker.umcp-cs@UDel-Relay ------------------------------ Date: 17 Sep 83 19:17:21-PDT (Sat) From: hplabs!hao!seismo!rlgvax!cvl!umcp-cs!speaker @ Ucb-Vax Subject: Life, don't talk to me about life.... Article-I.D.: umcp-cs.2628 From: jpj@mss Subject: Re: Another Definition of Intelligence To: citcsv!seismo!rlgvax!cvl!umcp-cs!speaker I find your notion of an artificial intelligence sitting back, taking in all that goes on around it, but not being motivated to comment (perhaps due to boredom) an amusing idea. Have you read "The Restaurant at the End of the Universe?" In that story is a most entertaining AI - a chronically depressed robot (whose name escapes me at the moment - I don't have my copy at hand) who thinks so much faster than all the mortals around it that it is always bored and *feels* unappreciated. (Sounds like some of my students!) Ah yes, Marvin the paranoid android. "Here I am, brain the size of a planet and all they want me to do is pick up a piece of paper." This is really interesting. You might think that a robot with such a huge intellect would also develop an oversized ego... but just the reverse could be true. He thinks so fast and so well that he becomes bored and disgusted with everything around himself... so he withdraws and wishes his boredom and misery would end. I doubt Adams had this in mind when he wrote the book, but it fits together nicely anyway.
-- - Speaker speaker@umcp-cs speaker.umcp-cs@UDel-Relay ------------------------------ End of AIList Digest ******************** 22-Sep-83 17:31:13-PDT,17207;000000000001 Mail-From: LAWS created at 22-Sep-83 17:30:17 Date: Thursday, September 22, 1983 5:15PM From: AIList Moderator Kenneth Laws Reply-to: AIList@SRI-AI US-Mail: SRI Int., 333 Ravenswood Ave., Menlo Park, CA 94025 Phone: (415) 859-6467 Subject: AIList Digest V1 #61 To: AIList@SRI-AI AIList Digest Friday, 23 Sep 1983 Volume 1 : Issue 61 Today's Topics: AI Applications - Music, AI at Edinburgh - Request, Games - Prolog Puzzle Solution, Seminars - Talkware & Hofstadter, Architectures - Parallelism, Technical Reports - Rutgers ---------------------------------------------------------------------- Date: 20 Sep 1983 2120-PDT From: FC01@USC-ECL Subject: Re: Music in AI Music in AI - find Art Wink formerly of U. of Pgh. Dept of info sci. He had a real nice program to imitate Debussy (experts could not tell its compositions from originals). ------------------------------ Date: 18 Sep 83 12:01:27-PDT (Sun) From: decvax!dartvax!lorien @ Ucb-Vax Subject: U of Edinburgh, Scotland Inquiry Article-I.D.: dartvax.224 Who knows anything about the current status of the Artificial Intelligence school at the University of Edinburgh? I've heard they've been through hard times in recent years, what with the Lighthill report and British funding shakeups, but what has been going on within the past year or so? I'd appreciate any gossip/rumors/facts and if anyone knows that they're on the net, their address. --decvax!dartvax!dartlib!lorien Lorien Y. Pratt ------------------------------ Date: Mon 19 Sep 83 02:25:41-PDT From: Motoi Suwa Subject: Puzzle Solution [Reprinted from the Prolog Digest.] Date: 14 Sep. 1983 From: K.Handa ETL Japan Subject: Another Puzzle Solution This is the solution of Alan's puzzle introduced on 24 Aug. ?-go(10). will display the ten-digit number as follows: -->6210001000 and ?-go(4).
will:
-->1210
-->2020
I found the following numbers: 6210001000 521001000 42101000 3211000 21200 1210 2020
The following is the total program (DEC10 Prolog Ver.3):
/*** initial assertion ***/
init(D):- ass_xn(D),assert(rest(D)),!.
ass_xn(0):- !.
ass_xn(D):- D1 is D-1,asserta(x(D1,_)),asserta(n(D1)),ass_xn(D1).
/*** main program ***/
go(D):- init(D),guess(D,0).
go(_):- abolish(x,2),abolish(n,1),abolish(rest,1).
/* guess 'N'th digit */
guess(D,D):- result,!,fail.
guess(D,N):- x(N,X),var(X),!,n(Y),N=),n(N),x(N,M),print(M),fail.
result:- nl.
------------------------------ Date: 21 Sep 83 1539 PDT From: David Wilkins Subject: Talkware Seminars [Reprinted from the SU-SCORE bboard.] 1127 TW Talkware seminar Weds. 2:15 I will be organizing a weekly seminar this fall on a new area I am currently developing as a research topic: the theory of "talkware". This area deals with the design and analysis of languages that are used in computing, but are not programming languages. These include specification languages, representation languages, command languages, protocols, hardware description languages, data base query languages, etc. There is currently a lot of ad hoc but sophisticated practice for which a more coherent and general framework needs to be developed. The situation is analogous to the development of principles of programming languages from the diversity of "coding" languages and methods that existed in the early fifties. The seminar will include outside speakers and student presentations of relevant literature, emphasizing how the technical issues dealt with in current projects fit into the developing theory of talkware. It will meet at 2:15 every Wednesday in Jacks 301. The first meeting will be Wed. Sept. 28. For a more extensive description, see {SCORE}TALKWARE or {SAIL}TALKWA[1,TW]. ------------------------------ Date: Thu 22 Sep 00:23 From: Jeff Shrager Subject: Hofstadter seminar at MIT [Reprinted from the CMU-AI bboard.]
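The self-describing-number puzzle solved in DEC-10 Prolog above asks for D-digit numbers whose digit in position i counts the occurrences of the digit i. A sketch of the same search in modern Python follows; this re-implementation, including the function name and the pruning rule, is mine and is not taken from the digest:

```python
# Backtracking search for self-describing numbers: digit i of a
# D-digit string must equal the number of times i occurs in it.
# Prune on the fact that the digits of any solution sum to D
# (every position is counted by exactly one digit).

def self_describing(d):
    """Return all d-digit self-describing numbers, as strings."""
    results = []

    def search(pos, digits, remaining):
        if pos == d:
            # exact check: digit i must equal the count of value i
            if all(digits.count(i) == digits[i] for i in range(d)):
                results.append(''.join(map(str, digits)))
            return
        # a digit can never claim more occurrences than remain available
        for v in range(min(remaining, 9) + 1):
            digits.append(v)
            search(pos + 1, digits, remaining - v)
            digits.pop()

    search(0, [], d)
    return results

print(self_describing(10))  # the digest reports -->6210001000
print(self_describing(4))   # the digest reports -->1210 and -->2020
```

For lengths 1, 2, 3 and 6 the search comes back empty, which matches the gap in the list of solutions quoted in the digest (no six-digit number appears between 3211000 and 21200).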
Douglas Hofstadter is giving a course this semester at MIT. I thought that the abstract would interest some of you. The first session takes place today. ------ "Perception, Semanticity, and Statistically Emergent Mentality" A seminar to be given fall semester by Douglas Hofstadter In this seminar, I will present my viewpoint about the nature of mind and the goals of AI. I will try to explain (and thereby develop) my vision of how we perceive the essence of things, filtering out the details and getting at their conceptual core. I call this "deep perception", or "recognition". We will review some earlier projects that attacked some related problems, but primarily we will be focussing on my own research projects, specifically: Seek-Whence (perception of sequential patterns), Letter Spirit (perception of the style of letters), Jumbo (reshuffling of parts to make "well-chunked" wholes), and Deep Sea (analogical perception). These tightly related projects share a central philosophy: that cognition (mentality) cannot be programmed explicitly but must emerge "epiphenomenally", i.e., as a consequence of the nondeterministic interaction of many independent "subcognitive" pieces. Thus the overall "mentality" of such a system is not directly programmed; rather, it EMERGES as an observable (but unprogrammed) phenomenon -- a statistical consequence of many tiny semi-cooperating (and of course programmed) pieces.
My projects all involve certain notions under development, such as:
-- "activation level": a measure of the estimated relevance of a given Platonic concept at a given time;
-- "happiness": a measure of how easy it is to accommodate a structure and its currently accepted Platonic class to each other;
-- "nondeterministic terraced scan": a method of homing in on the best category to which to assign something;
-- "semanticity": the measure of how abstractly rooted (intensional) a perception is;
-- "slippability": the ease of mutability of intensional representational structures into "semantically close" structures;
-- "system temperature": a number measuring how chaotically active the whole system is.
This strategy for AI is permeated by probabilistic or statistical ideas. The main idea is that things need not happen in any fixed order; in fact, that chaos is often the best path to follow in building up order. One puts faith in the reliability of statistics: a sensible, coherent total behavior will emerge when there are enough small independent events being influenced by high-level parameters such as temperature, activation levels, happinesses. A challenge is to develop ways such a system can watch its own activities and use those observations to evaluate its own progress, to detect and pull itself out of ruts it chances to fall into, and to guide itself toward a satisfying outcome. ... Prerequisites: an ability to program well, preferably in Lisp, and an interest in philosophy of mind and artificial intelligence. ------------------------------ Date: 18 Sep 83 22:48:56-PDT (Sun) From: decvax!dartvax!lorien @ Ucb-Vax Subject: Parallelism et. al. Article-I.D.: dartvax.229 The Parallelism and AI projects at the University of Maryland sound very interesting. I agree with an article posted a few days back that parallel hardware won't necessarily produce any significantly new methods of computing, as we've been running parallel virtual machines all along.
Parallel hardware is another milestone along the road to "thinking in parallel", however, getting away from the purely Von Neumann thinking that's done in the DP world these days. It's always seemed silly to me that our computers are so serial when our brains, the primary analogy we have for "thinking machines", are so obviously parallel mechanisms. Finally we have the technology (software AND hardware) to follow, in our machine architecture, cognitive concepts that evolution has already found most powerful. I feel that the sector of the Artificial Intelligence community that pays close attention to psychology and the workings of the human brain deserves more attention these days, as we move from writing AI programs that "work" (and don't get me wrong, they work very well!) to those that have a generalizable theoretical basis. One of these years, and better sooner than later, we'll make a quantum leap in AI research and articulate some of the fundamental structures and methods that are used for thinking. These may or may not be isomorphic to human thinking, but in either case we'll do well to look to the human brain for inspiration. I'd like to hear more about the work at the University of Maryland; in particular the prolog and the parallel-vision projects. What do you think of the debate between what I'll call the Hofstadter viewpoint: that we should think long term about the future of artificial intelligence, and the Feigenbaum credo: that we should stop philosophizing and build something that works? (Apologies to you both if I've misquoted) --Lorien Y.
Pratt decvax!dartvax!lorien (Dartmouth College) ------------------------------ Date: 18 Sep 83 23:30:54-PDT (Sun) From: pur-ee!uiucdcs!uiuccsb!cytron @ Ucb-Vax Subject: AI and architectures - (nf) Article-I.D.: uiucdcs.2883 Forward at the request of speaker: /***** uiuccsb:net.arch / umcp-cs!speaker / 12:20 am Sep 17, 1983 */ The fact remains that if we don't have the algorithms for doing something with current hardware, we still won't be able to do it with faster or more powerful hardware. The fact remains that if we don't have any algorithms to start with then we shouldn't even be talking implementation. This sounds like a software engineer's solution anyway, "design the software and then find a CPU to run it on." New architectures, while not providing a direct solution to a lot of AI problems, provide the test-bed necessary for advanced AI research. That's why everyone wants to build these "amazingly massive" parallel architectures. Without them, AI research could grind to a standstill. To some extent these efforts change our way of thinking about problems, but for the most part they only speed up what we knew how to do already. Parallel computation is more than just "speeding things up." Some problems are better solved concurrently. My own belief is that the "missing link" to AI is a lot of deep thought and hard work, followed by VLSI implementation of algorithms that have (probably) been tested using conventional software running on conventional architectures. Gad...that's really provincial: "deep thought, hard work, followed by VLSI implementation." Are you willing to wait a millennium or two while your VAX grinds through the development and testing of a truly high-velocity AI system? If we can master knowledge representation and learning, we can begin to get away from programming by full analysis of every part of every algorithm needed for every task in a domain. That would speed up our progress more than new architectures. I agree.
I also agree with you that hardware is not in itself a solution and that we need more thought put to the problems of building intelligent systems. What I am trying to point out, however, is that we need integrated hardware/software solutions. Highly parallel computer systems will become a necessity, not only for research but for implementation. - Speaker -- Full-Name: Speaker-To-Animals Csnet: speaker@umcp-cs Arpa: speaker.umcp-cs@UDel-Relay This must be hell...all I can see are flames... towering flames! ------------------------------ Date: 19 Sep 83 9:36:35-PDT (Mon) From: decvax!duke!unc!mcnc!ncsu!fostel @ Ucb-Vax Subject: RE: AI and Architecture Article-I.D.: ncsu.2338 Sheesh. Everyone seems so excited about whether a parallel machine is, or will lead to, anything fundamentally new. I agree with someone's comment that time-sharing and multi-programming have been conceptually quite parallel "virtual" machines for some time. Just more and cheaper of the same. Perhaps the added availability will lead someone to have a good idea or two about how to do something better -- in that sense it seems certain that something good will come of proliferation and popularization of parallelism. But for my money, there is nothing really, fundamentally different. Unless it is non-determinism. Parallel systems tend to be less deterministic than their simplex brethren, though vast efforts are usually expended to stamp out this property. Take me for example: I am VERY non-deterministic (just ask my wife) and yet I am also smarter than a lot of AI programs. The breakthrough in AI/Arch will, in my non-determined opinion, come when people stop trying to squeeze parallel systems into the more restricted modes of simplex systems, and develop new paradigms for how to let such a system spread its wings in a dimension OTHER THAN performance.
From a pragmatic view, I think this will not happen until people take error recovery and exception processing more seriously, since there is a fine line between an error and a new thought .... ----GaryFostel---- ------------------------------ Date: 20 Sep 83 18:12:15 PDT (Tuesday) From: Bruce Hamilton Reply-to: Hamilton.ES@PARC-MAXC.ARPA Subject: Rutgers technical reports This is probably of general interest. --Bruce From: PETTY@RUTGERS.ARPA Subject: 1983 abstract mailing Below is a list of our newest technical reports. The abstracts for these are available for access via FTP with user account with any password. The file name is: tecrpts-online.doc If you wish to order copies of any of these reports please send mail via the ARPANET to LOUNGO@RUTGERS or PETTY@RUTGERS. Thank you!!
CBM-TR-128 EVOLUTION OF A PLAN GENERATION SYSTEM, N.S. Sridharan, J.L. Bresina and C.F. Schmidt.
CBM-TR-133 KNOWLEDGE STRUCTURES FOR A MODULAR PLANNING SYSTEM, N.S. Sridharan and J.L. Bresina.
CBM-TR-134 A MECHANISM FOR THE MANAGEMENT OF PARTIAL AND INDEFINITE DESCRIPTIONS, N.S. Sridharan and J.L. Bresina.
DCS-TR-126 HEURISTICS FOR FINDING A MAXIMUM NUMBER OF DISJOINT BOUNDED PATHS, D. Ronen and Y. Perl.
DCS-TR-127 THE BALANCED SORTING NETWORK, M. Dowd, Y. Perl, L. Rudolph and M. Saks.
DCS-TR-128 SOLVING THE GENERAL CONSISTENT LABELING (OR CONSTRAINT SATISFACTION) PROBLEM: TWO ALGORITHMS AND THEIR EXPECTED COMPLEXITIES, B. Nudel.
DCS-TR-129 FOURIER METHODS IN COMPUTATIONAL FLUID AND FIELD DYNAMICS, R. Vichnevetsky.
DCS-TR-130 DESIGN AND ANALYSIS OF PROTECTION SCHEMES BASED ON THE SEND-RECEIVE TRANSPORT MECHANISM, (Thesis) R.S. Sandhu. (If you wish to order this thesis, a pre-payment of $15.00 is required.)
DCS-TR-131 INCREMENTAL DATA FLOW ANALYSIS ALGORITHMS, M.C. Paull and B.G. Ryder.
DCS-TR-132 HIGH ORDER NUMERICAL SOMMERFELD BOUNDARY CONDITIONS: THEORY AND EXPERIMENTS, R. Vichnevetsky and E.C. Pariser.
LCSR-TR-43 NUMERICAL METHODS FOR BASIC SOLUTIONS OF GENERALIZED FLOW NETWORKS, M.
Grigoriadis and T. Hsu. LCSR-TR-44 LEARNING BY RE-EXPRESSING CONCEPTS FOR EFFICIENT RECOGNITION, R. Keller. LCSR-TR-45 LEARNING AND PROBLEM SOLVING, T.M. Mitchell. LRP-TR-15 CONCEPT LEARNING BY BUILDING AND APPLYING TRANSFORMATIONS BETWEEN OBJECT DESCRIPTIONS, D. Nagel. ------------------------------ End of AIList Digest ******************** 25-Sep-83 16:54:52-PDT,17154;000000000001 Mail-From: LAWS created at 25-Sep-83 16:48:43 Date: Sunday, September 25, 1983 4:27PM From: AIList Moderator Kenneth Laws Reply-to: AIList@SRI-AI US-Mail: SRI Int., 333 Ravenswood Ave., Menlo Park, CA 94025 Phone: (415) 859-6467 Subject: AIList Digest V1 #62 To: AIList@SRI-AI AIList Digest Sunday, 25 Sep 1983 Volume 1 : Issue 62 Today's Topics: Language Understanding & Scientific Method, Conferences - COLING 84 ---------------------------------------------------------------------- Date: 19 Sep 83 17:50:32-PDT (Mon) From: harpo!utah-cs!shebs @ Ucb-Vax Subject: Re: Natural Language Understanding Article-I.D.: utah-cs.1914 Lest usenet readers think things had gotten silent all at once, here's an article by Fernando Pereira that (apparently and inexplicably) was *not* sent to usenet, and my reply (fortunately, I now have read-only access to Arpanet, so I was able to find out about this) _____________________ Date: Wed 31 Aug 83 18:42:08-PDT From: PEREIRA@SRI-AI.ARPA Subject: Solutions of the natural language analysis problem [I will abbreviate the following since it was distributed in V1 #53 on Sep. 1. -- KIL] Given the downhill trend of some contributions on natural language analysis in this group, this is my last comment on the topic, and is essentially an answer to Stan the leprechaun hacker (STLH for short). [...] Lack of rigor follows from lack of method. STLH tries to bludgeon us with "generating *all* the possible meanings" of a sentence. Does he mean ALL of the INFINITY of meanings a sentence has in general? 
Even leaving aside model-theoretic considerations, we are all familiar with
    he wanted me to believe P so he said P
    he wanted me to believe not P so he said P because he thought that I would think that he said P just for me to believe P and not believe it
and so on ... in spy stories. [...] Fernando Pereira ___________________ The level of discussion *has* degenerated somewhat, so let me try to bring it back up again. I was originally hoping to stimulate some debate about certain assumptions involved in NLP, but instead I seem to see a lot of dogma, which is *very* dismaying. Young idealistic me thought that AI would be the field where the most original thought was taking place, but instead everyone seems to be divided into warring factions, each of whom refuses to accept the validity of anybody else's approach. Hardly seems scientific to me, and certainly other sciences don't evidence this problem (perhaps there's some fundamental truth here - that the nature of epistemology and other AI activities are such that it's very difficult to prevent one's thought from being trapped into certain patterns - I know I've been caught a couple times, and it was hard to break out of the habit - more on that later). As a colleague of mine put it, we seem to be suffering from a "difference in context". So let me describe the assumptions underpinning my theory (yes I do have one): 1. Language is a very fuzzy thing. More precisely, the set of sound strings meaningful to a human is almost (if not exactly) the set of all possible sound strings. Now, before you flame, consider: Humans can get at least *some* understanding out of a nonsense sequence, especially if they have any expectations about what they're hearing (this has been demonstrated experimentally) although it will likely be wrong.
Also, they can understand mispronounced or misspelled words, sentences with missing words, sentences with repeated words, sentences with scrambled word order, sentences with mixed languages (I used to have fun by speaking English using German syntax, and you can sometimes see signs using English syntax with "German" words), and so forth. Language is also used creatively (especially netters!). Words are continually invented, metaphors are created and mixed in novel ways. I claim that there is no rule of grammar that cannot be violated. Note that I have said *nothing* about changes of meaning, nor have I claimed that one could get much of anything out of a random sequence of words strung together. I have only claimed that the set of linguistically valid utterances is actually a large fuzzy set (in the technical sense of "fuzzy"). If you accept this, the implications for grammar are far-reaching - in fact, it may be that classical grammar is a curious but basically irrelevant description of language (however, I'm not completely convinced of that). 2. Meaning and interpretation are distinct. Perhaps I should follow convention and say "s-meaning" and "s-interpretation", to avoid terminology trouble. I think it's noncontroversial that the "true meaning" of an utterance can be defined as the totality of response to that utterance. In that case, s-meaning is the individual-independent portion of meaning (I know, that's pretty vague. But would saying that 51% of all humans must agree on a meaning make it any more precise? Or that there must be a predicate to represent that meaning? Who decides which predicate is appropriate?). Then s-interpretation is the component that depends primarily on the individual and his knowledge, etc. Let's consider an example - "John kicked the bucket." For most people, this has two s-meanings - the usual one derived directly from the words and an idiomatic way of saying "John died". 
Of course, someone may not know the idiom, so they can assign only one s-meaning. But as Mr. Pereira correctly points out, there is an infinitude of s-interpretations, which will completely vary from individual to individual. Most can be derived from the s-meaning, for instance the convoluted inferences about belief and intention that Mr. Pereira gave. On the other hand, I don't normally make those s-interpretations, and a "naive" person might *never* do so. Other parts of the s-interpretation could be (if the second s-meaning above was intended) that the speaker tends to be rather blunt; that is certainly part of the response to the utterance, but less clearly part of a "meaning". Even s-meanings are pretty volatile though - to use another spy story example, the sentence might actually be a code phrase with a completely arbitrary meaning! 3. Cognitive science is relevant to NLP. Let me be the first to say that all of its results are at best suspect. However, the apparent inclination of many AI people to regard the study of human cognition as "unscientific" is inexplicable. I won't claim that my program defines human cognition, since that degree of hubris requires at least a PhD :-) . But cognitive science does have useful results, like the aforementioned result about making sense out of nonsense. Also, a lot of common-sense results can be more accurately described by doing experiments. "Don't think of a zebra for the next ten minutes" - my informal experimentation indicates that *nobody* is capable - that seems to say a lot about how humans operate. Perhaps cognitive science gets a bad review because much of it is Gedanken experiments; I don't need tests on a thousand subjects to know that most kinds of ungrammaticality (such as number agreement) are noticeable, but rarely affect my understanding of a sentence.
That's why I say that humans are experts at their own languages - we all (at least intuitively) understand the different parts of speech and how sentences are put together, even though we have difficulty expressing that knowledge (sounds like the knowledge engineer's problems in dealing with experts!). BTW, we *have* had a non-expert (a CS undergrad) add knowledge to our NLP system, and the folks at Berkeley have reported similar results [Wilensky81]. 4. Theories should reflect reality. This is especially important because the reverse is quite pernicious - one ignores or discounts information not conforming to one's theories. The equations of motion are fine for slow-speed behavior, but fail as one approaches c (the language or the velocity? :-) ). Does this mean that Lorentz contractions are experimental anomalies? The grammar theory of language is fine for very restricted subsets of language, but is less satisfactory for explaining the phenomena mentioned in 1., nor does it suggest how organisms *learn* language. Mr. Pereira's suggestion that I do not have any kind of theoretical basis makes me wonder if he knows what Phrase Analysis *is*, let alone its justification. Wilensky and Arens of UCB have IJCAI-81 papers (and tech reports) that justify the method much better than I possibly could. My own improvement was to make it follow multiple lines of parsing (have to be contrite on this; I read Winograd's new book recently and what I have is really a sort of active chart parser; also noticed that he gives nary a mention to Phrase Analysis, which is inexcusable - that's the sort of thing I mean by "warring factions"). 4a. Reflecting reality means "all of it" or (less preferably) "as much as possible". Most of the "soft sciences" get their bad reputation by disregarding this principle, and AI seems to have a problem with that also.
What good is a language theory that cannot account for language learning, creative use of language, and the incredible robustness of language understanding? The definition of language by grammar cannot properly explain these - the first because of results (again mentioned by Winograd) that children receive almost no negative examples, and that a grammar cannot be learned from positive examples alone, the third because the grammar must be extended and extended until it recognizes all strings as valid. So perhaps the classical notion of grammar is like classical mechanics - useful for simple things, but not so good for photon drives or complete NLP systems. The basic notions in NLP have been thoroughly investigated; IT'S TIME TO DEVELOP THEORIES THAT CAN EXPLAIN *ALL* ASPECTS OF LANGUAGE BEHAVIOR! 5. The existence of "infinite garden-pathing". To steal an example from [Wilensky80], John gave Mary a piece of his.........................mind. Only the last word disambiguates the sentence. So now, what did *you* fill in, before you read that last word? There are even more interesting situations. Part of my secret research agenda (don't tell Boeing!) has been the understanding of jokes, particularly word plays. Many jokes are multi-sentence versions of garden-pathing, where only the punch line disambiguates. A surprising number of crummy sitcoms can get a whole half-hour because an ambiguous sentence is interpreted differently by two people (a random thought - where *did* this notion of sentence as fundamental structure come from? Why don't speeches and discourses have a "grammar" precisely defining *their* structure?). In general, language is LR(lazy eight). Miscellaneous comments: This has gotten pretty long (a lot of accusations to respond to!), so I'll save the discussion of AI dogma, fads, etc for another article.
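[A toy sketch from the moderator's keyboard, not from the poster's parser: the "piece of his ... mind" example can be phrased as a set of candidate readings that all stay live until the final, disambiguating word arrives. The reading names and completion lists below are invented purely for illustration.]

```python
# Garden pathing in miniature: several readings of
# "John gave Mary a piece of his ..." remain live until the last word.

# Each candidate reading lists the final words compatible with it.
READINGS = {
    "literal gift": {"cake", "pie", "land"},
    "idiom (rebuke)": {"mind"},
}

def still_live(final_word):
    """Return the readings consistent with the sentence after the last word."""
    return {r for r, endings in READINGS.items() if final_word in endings}

# Until the last word is heard, a parser must carry both readings along -
# one motivation for following multiple lines of parsing.
```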
When I said that "problems are really concerned with the acquisition of linguistic knowledge", that was actually an awkward way to say that, having solved the parsing problem, my research interests switched to the implementation of full-scale error correction and language learning (notice that Mr. Pereira did not say "this is ambiguous - what did you mean?", he just assumed one of the meanings and went on from there. Typical human language behavior, and inadequately explained by most existing theories...). In fact, I have a detailed plan for implementation, but grad school has interrupted that and it may be a while before it gets done. So far as I can tell, the implementation of learning will not be unusually difficult. It will involve inductive learning, manipulation of analogical representations to acquire meanings ("an mtrans is like a ptrans, but with abstract objects"....), and other good things. The nonrestrictive nature of Phrase Analysis seems to be particularly well-suited to language knowledge acquisition. Thanks to Winograd (really quite a good book, but biased) I now know what DCG's are (the paper I referred to before was [Pereira80]). One of the first paragraphs in that paper was revealing. It said that language was *defined* by a grammar, then proceeded from there. (Different assumptions....) Since DCG's were compared only to ATN's, it was of course easy to show that they were better (almost any formalism is better than one from ten years before, so that wasn't quite fair). However, I fail to see any important distinction between a DCG and a production rule system with backtracking. In that case, a DCG is really a special case of a Phrase Analysis parser (I did at one time tinker with the notion of compiling phrase rules into OPS5 rules, but OPS5 couldn't manage it very well - no capacity for the parallelism that my parser needed). I am of course interested in being contradicted on any of this. Mr. 
Pereira says he doesn't know what the "Schank camp" is. If that's so then he's the only one in NLP who doesn't. I have heard some highly uncomplimentary comments about Schank and his students. But then that's the price for going against conventional wisdom... Sorry for the length, but it *was* time for some light rather than heat! I have refrained from saying much of anything about my theories of language understanding, but will post details if accusations warrant :-) Theoretically yours*, Stan (the leprechaun hacker) Shebs utah-cs!shebs * love those double meanings! [Pereira80] Pereira, F.C.N., and Warren, D.H.D. "Definite Clause Grammars for Language Analysis - A Survey of the Formalism and a Comparison with Augmented Transition Networks", Artificial Intelligence 13 (1980), pp 231-278. [Wilensky80] Wilensky, R. and Arens, Y. PHRAN: A Knowledge-based Approach to Natural Language Analysis (Memorandum No. UCB/ERL M80/34). University of California, Berkeley, 1980. [Wilensky81] Wilensky, R. and Morgan, M. One Analyzer for Three Languages (Memorandum No. UCB/ERL M81/67). University of California, Berkeley, 1981. [Winograd83] Winograd, T. Language as a Cognitive Process, vol. 1: Syntax. Addison-Wesley, 1983. ------------------------------ Date: Fri 23 Sep 83 14:34:44-CDT From: Lauri Karttunen Subject: COLING 84 -- Call for papers [Reprinted from the UTexas-20 bboard.] CALL FOR PAPERS COLING 84, TENTH INTERNATIONAL CONFERENCE ON COMPUTATIONAL LINGUISTICS COLING 84 is scheduled for 2-6 July 1984 at Stanford University, Stanford, California. It will also constitute the 22nd Annual Meeting of the Association for Computational Linguistics, which will host the conference. Papers for the meeting are solicited on linguistically and computationally significant topics, including but not limited to the following: o Machine translation and machine-aided translation. o Computational applications in syntax, semantics, anaphora, and discourse. o Knowledge representation. 
o Speech analysis, synthesis, recognition, and understanding. o Phonological and morpho-syntactic analysis. o Algorithms. o Computational models of linguistic theories. o Parsing and generation. o Lexicology and lexicography. Authors wishing to present a paper should submit five copies of a summary not more than eight double-spaced pages long, by 9 January 1984 to: Prof. Yorick Wilks, Languages and Linguistics, University of Essex, Colchester, Essex, CO4 3SQ, ENGLAND [phone: 44-(206)862 286; telex 98440 (UNILIB G)]. It is important that the summary contain sufficient information, including references to relevant literature, to convey the new ideas and allow the program committee to determine the scope of the work. Authors should clearly indicate to what extent the work is complete and, if relevant, to what extent it has been implemented. A summary exceeding eight double-spaced pages in length may not receive the attention it deserves. Authors will be notified of the acceptance of their papers by 2 April 1984. Full length versions of accepted papers should be sent by 14 May 1984 to Dr. Donald Walker, COLING 84, SRI International, Menlo Park, California, 94025, USA [phone: 1-(415)859-3071; arpanet: walker@sri-ai]. Other requests for information should be addressed to Dr. Martin Kay, Xerox PARC, 3333 Coyote Hill Road, Palo Alto, California 94304, USA [phone: 1-(415)494-4428; arpanet: kay@parc]. 
------------------------------ End of AIList Digest ******************** 25-Sep-83 20:10:43-PDT,14746;000000000001 Mail-From: LAWS created at 25-Sep-83 20:07:09 Date: Sunday, September 25, 1983 7:47PM From: AIList Moderator Kenneth Laws Reply-to: AIList@SRI-AI US-Mail: SRI Int., 333 Ravenswood Ave., Menlo Park, CA 94025 Phone: (415) 859-6467 Subject: AIList Digest V1 #63 To: AIList@SRI-AI AIList Digest Monday, 26 Sep 1983 Volume 1 : Issue 63 Today's Topics: Robotics - Physical Strength, Parallelism & Physiology, Intelligence - Turing Test, Learning & Knowledge Representation, Rational Psychology ---------------------------------------------------------------------- Date: 21 Sep 83 11:50:31-PDT (Wed) From: ihnp4!mtplx1!washu!eric @ Ucb-Vax Subject: Re: Strong, agile robot Article-I.D.: washu.132 I just glanced at that article for a moment, noting the leg mechanism detail drawing. It did not seem to me that the beastie could move very fast. Very strong IS nice, tho... Anyway, the local supplier of that mag sold them all. Anyone remember if it said how fast it could move, and with what payload? eric ..!ihnp4!washu!eric ------------------------------ Date: 23 Sep 1983 0043-PDT From: FC01@USC-ECL Subject: Parallelism I thought I might point out that virtually no machine built in the last 20 years is actually lacking in parallelism. In reality, just as the brain has many neurons firing at any given time, computers have many transistors switching at any given time. Just as the cerebellum is able to maintain balance without the higher brain functions in the cerebrum explicitly controlling the IO, most current computers have IO controllers capable of handling IO while the CPU does other things. Just as people have faster short term memory than long term memory but less of it, computers have faster short term memory than long term memory and use less of it. These are all results of cost/benefit tradeoffs for each implementation, just as I presume our brains and bodies are. 
Don't be so fast to think that real computer designers are ignorant of physiology. The trend towards parallelism now is more like the human social system of having a company work on a problem. Many brains, each talking to each other when they have questions or results, each working on different aspects of a problem. Some people have breakdowns, but the organization keeps going. Eventually it comes up with a product; although it may not really solve the problem posed at the beginning, it may have solved a related problem or found a better problem to solve. Another copyrighted excerpt from my not yet finished book on computer engineering modified for the network bboards, I am ever yours, Fred ------------------------------ Date: 14 Sep 83 22:46:10-PDT (Wed) From: pur-ee!uiucdcs!uicsl!dinitz @ Ucb-Vax Subject: Re: in defense of Turing - (nf) Article-I.D.: uiucdcs.2822 Two points where Martin Taylor's response reveals that I was not emphatic enough [you see, it is possible to underflame, and thus be misunderstood!] in my comments on the Turing test. 1. One of Dennett's main points (which I did not mention, since David Rogers had already posted it in the original note of this string) is that the unrestricted Turing-like test of which he spoke is a SUFFICIENT, but not a NECESSARY test for intelligence comparable to that possessed and displayed by most humans in good working order. [I myself would add that it tests as much for mastery of human communication skills (which are indeed highly dependent on particular cultures) as it does for intelligence.] That is to say, if a program passes such a rigorous test, then the practitioners of AI may congratulate themselves for having built such a clever beast. However, a program which fails such a test need not be considered unintelligent.
Indeed, a human who fails such a test need not be considered unintelligent -- although one would probably consider him/her to be of substandard intelligence, or of impaired intelligence, or dyslexic, or incoherent, or unconscious, or amnesic, or aphasic, or drunk (i.e. disabled in some fashion). 2. I did not post "a set of criteria which an AI system should pass to be accepted as human-like at a variety of levels." I posted a set of tests by which to gauge progress in the field of AI. I don't imagine that these tests have anything to do with human-ness. I also don't imagine that many people who discuss and discourse upon "intelligence" have any coherent definition for what it might be. Other comments that seem relevant (but might not be) ----- -------- ---- ---- -------- ---- ----- --- --- Neither Dennett's test nor my tests are intended to discern whether or not the entity in question possesses a human brain. In addition to flagrant use of hindsight, my tests also reveal my bias that science is an endeavor which requires intelligence on the part of its human practitioners. I don't mean to imply that it is the only such domain. Other domains which require that the people who live in them have "smarts" are puzzle solving, language using, language learning (both first and second), etc. Other tasks not large enough to qualify as domains that require intelligence (of a degree) from people who do them include: figuring out how to use a paper clip or a stapler (without being told or shown), figuring out that someone was showing you how to use a stapler (without being told that such instruction was being given), improvising a new tool or method for a routine task that one is accustomed to doing with an old tool or method, realizing that an old method needs improvement, etc. The interdependence of intelligence and culture is much more important than we usually give it credit for. Margaret Mead must have been quite a curiosity to the peoples she studied.
Imagine that a person of such a different and strange (to us) culture could be made to understand enough about machines and the Turing test so that he/she could be convinced to serve as an interlocutor... On second thought, that opens up such a can of worms that I'd rather deny having proposed it in the first place. ------------------------------ Date: 19 Sep 83 17:43:53-PDT (Mon) From: harpo!utah-cs!shebs @ Ucb-Vax Subject: Re: Rational Psychology Article-I.D.: utah-cs.1913 I just read Jon Doyle's article about Rational Psychology in the latest AI Magazine (Fall '83), and am also very interested in the ideas therein. The notion of trying to find out what is *possible* for intelligences is very intriguing, not to mention the idea of developing some really sound theories for a change. Perhaps I could mention something I worked on a while back that appears to be related. Empirical work in machine learning suggests that there are different levels of learning - learning by being programmed, learning by being told, learning by example, and so forth, with the levels being ordered by their "power" or "complexity", whatever that means. My question: is there something fundamental about this classification? Are there other levels? Is there a "most powerful" form of learning, and if so, what is it? I took the approach of defining "learning" as "behavior modification", even though that includes forgetting (!), since I wasn't really concerned with whether the learning resulted in an "improvement" in behavior or not. The model of behavior was somewhat interesting. It's kind of a dualistic thing, consisting of two entities: the organism and the environment. The environment is everything outside, including the organism's own physical body, while the organism is more or less equivalent to a mind. Each of these has a state, and behavior can be defined as functions mapping the set of all states to itself.
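[A sketch in modern notation, mine rather than the poster's formal development: behavior is a function from states to states, and 1st-order learning is a function that takes a behavior and returns a modified behavior. Integer states and the particular modifications are toy assumptions.]

```python
# Behavior as a state mapping; learning as a higher-order function on it.

def behavior(state):
    """Ordinary (0th-order) behavior: a map from states to states."""
    return state + 1

def learn(b):
    """1st-order learning: modifies a behavior, yielding a new behavior
    (here, the organism 'learns' to apply b twice)."""
    return lambda state: b(b(state))

def learn2(l):
    """2nd-order learning ('learning how to learn'): modifies a learner."""
    return lambda b: l(l(b))

modified = learn(behavior)        # a 1st-order-modified behavior
doubly = learn2(learn)(behavior)  # behavior modified by a modified learner
```

Nothing stops one from iterating the construction: a 3rd-order learner modifies 2nd-order learners, and so on up the hierarchy the poster describes.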
Both the environment and the organism have behaviors that can be treated in the same way (that is, they are like mirror images of each other). The whole development is too elaborate for an ASCII terminal, but it boiled down to this: since learning is a part of behavior, but it also *modifies* behavior, there is a part of the behavior function that is self-modifying. One can then define "1st order learning" as that which modifies ordinary behavior. 2nd order learning would be "learning how to learn", 3rd order would be "learning how to learn how to learn" (whatever *that* means!). The definition of these is more precise than my Anglicization here, and seems to indicate a whole infinite hierarchy of learning types, each supposedly more powerful than the last. It doesn't do much for my original questions, because the usual types of learning are all 1st order - although they don't have to be. Lenat's work on learning heuristics might be considered 2nd order, and if you look at it in the right way, it may be that EURISKO actually implements all orders of learning at the same time, so the above discussion is garbage (sigh). Another question that has concerned me greatly (particularly since building my parser) is the relation of the Halting Problem to AI. My program was basically a production system, and had an annoying tendency to get caught in infinite loops of various sorts. More misfeatures than bugs, though, since the theory did not expressly forbid such loops! To take a more general example, why don't circular definitions cause humans to go catatonic? What is the mechanism that seems to cut off looping? Do humans really beat the Halting Problem? One possible mechanism is that repetition is boring, and so all loops are cut off at some point or else pushed so far down on the agenda of activities that they are effectively terminated. What kind of theory could explain this? Yet another (last one folks!)
question is one that I raised a while back, about all representations reducing down to attribute-value pairs. Yes, they used to be fashionable but are now out of style, but I'm talking about a very deep underlying representation, in the same way that the syntax of s-expressions underlies Lisp. Counterexamples to my conjecture about AV-pairs being universal were algebraic expressions (which can be turned into s-expressions, which can be turned into AV-pairs) and continuous values, but they must have *some* closed form representation, which can then be reduced to AV-pairs. So I remained unconvinced that the notion of objects with AV-pairs attached is *not* universal (of course, for some things, the representation is so primitive as to be as bad as Fortran, but then this is an issue of possibility, not of goodness or efficiency). Looking forward to comments on all of these questions... stan the l.h. utah-cs!shebs ------------------------------ Date: 22 Sep 83 11:26:47-PDT (Thu) From: ihnp4!drux3!drufl!samir @ Ucb-Vax Subject: Re: Rational Psychology Article-I.D.: drufl.663 To me personally, Rational Psychology is a misnomer. "Rational" negates what "Psychology" wants to understand. Flames to /dev/null. Interesting discussions welcome. Samir Shah drufl!samir AT&T Information Systems, Denver. ------------------------------ Date: 22 Sep 83 17:12:11-PDT (Thu) From: ihnp4!houxm!hogpc!houti!ariel!norm @ Ucb-Vax Subject: Re: Rational Psychology Article-I.D.: ariel.456 Samir's view: "To me personally, Rational Psychology is a misnomer. "Rational" negates what "Psychology" wants to understand." How so? Can you support your claim? What does psychology want to understand that Rationality negates? Psychology is the Logos of the Psyche or the logic of the psyche. How does one understand without logic? How does one understand without rationality? What is understand? 
Isn't language itself dependent upon the rational faculty, or more specifically, upon the ability to form concepts, as opposed to percepts? Can you understand without language? To be totally without rationality (lacking the functional capacity for rationality - the CONCEPTUAL faculty) would leave you without language, and therefore without understanding. In what TERMS is something said to be understood? How can terms have meaning without rationality? Or perhaps you might claim that because men are not always rational that man does not possess a rational faculty, or that it is defective, or inadequate? How about telling us WHY you think Rational negates Psychology? These issues are important to AI, psychology and philosophy students... The day may not be far off when AI research yields methods of feature abstraction and integration that approximate percept-formation in humans. The next step, concept formation, will be much harder. How does an epistemology come about? What are the sequential steps necessary to form an epistemology of any kind? By what method does the mind (what's that?) integrate percepts into concepts, make identifications on a conceptual level ("It is an X"), justify its identifications ("and I know it is an X because..."), and then decide (what's that?) what to do about it ("...so therefore I should do Y")? Do you seriously think that understanding these things won't take Rationality? Norm Andrews, AT&T Information Systems, Holmdel, N.J. ariel!norm ------------------------------ Date: 22 Sep 83 12:02:28-PDT (Thu) From: decvax!genrad!mit-eddie!mit-vax!eagle!mhuxi!mhuxj!mhuxl!achilles!ulysses!princeton!leei@Ucb-Vax Subject: Re: Rational Psychology Article-I.D.: princeto.77 I really think that the ability that we humans have that allows us to avoid looping is the simple ability to recognize a loop in our logic when it happens. This comes as a direct result of our tendency for constant self-inspection and self-evaluation.
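[The loop-recognition ability just described can be approximated mechanically, as this moderator's sketch shows: iterate a deterministic process while remembering every state, and stop as soon as a state recurs. Note the hedge in the code - this works only for processes with finitely many reachable states, and does not beat the halting problem in general.]

```python
# Detect looping by remembering previously seen states.
# Only sound for deterministic, finite-state processes.

def run_until_loop(step, state, limit=10_000):
    """Apply `step` repeatedly; return (first_index, period) when a state
    recurs, or None if no repeat is seen within `limit` steps."""
    seen = {}  # state -> step index at which it first occurred
    for i in range(limit):
        if state in seen:
            return seen[state], i - seen[state]
        seen[state] = i
        state = step(state)
    return None

# Example: doubling mod 10 starting from 1 visits 1, 2, 4, 8, 6, then
# revisits 2 - the loop starts at index 1 and has period 4.
```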
A machine with this ability, and the ability to inspect its own self-inspections . . ., would probably also be able to "solve" the halting problem. Of course, if the loop is too subtle or deep, then even we cannot see it. This may explain the continued presence of various belief systems that rely on inherently circular logic to get past their fundamental problems. -Lee Iverson ..!princeton!leei ------------------------------ End of AIList Digest ******************** 26-Sep-83 21:56:39-PDT,11871;000000000001 Mail-From: LAWS created at 26-Sep-83 21:53:40 Date: Monday, September 26, 1983 9:28PM From: AIList Moderator Kenneth Laws Reply-to: AIList@SRI-AI US-Mail: SRI Int., 333 Ravenswood Ave., Menlo Park, CA 94025 Phone: (415) 859-6467 Subject: AIList Digest V1 #64 To: AIList@SRI-AI AIList Digest Tuesday, 27 Sep 1983 Volume 1 : Issue 64 Today's Topics: Database Systems - DBMS Software Available, Symbolic Algebra - Request for PRESS, Humor - New Expert Systems, AI at Edinburgh - Michie & Turing Institute, Rational Psychology - Definition, Halting Problem & Learning, Knowledge Representation - Course Announcement ---------------------------------------------------------------------- Date: 21 Sep 83 16:17:08-PDT (Wed) From: decvax!wivax!linus!philabs!seismo!hao!csu-cs!denelcor!pocha@Ucb-Vax Subject: DBMS Software Available Article-I.D.: denelcor.150 Here are 48 vendors of the most popular DBMS packages, which will be presented at the National Database & 4th Generation Language Symposium, Boston, Dec. 5-8, 1983, Radisson-Ferncroft Hotel, 50 Ferncroft Rd., Danvers, MA. For information write: Software Institute of America, 339 Salem St., Wakefield, Mass. 01880, (617)246-4280.
______________________________________________________________________________
Applied Data Research    DATACOM, IDEAL
Battelle                 BASIS
Britton-Lee              IDM
Cincom Systems           TIS, TOTAL, MANTIS
Computer Associates      CA-UNIVERSE
Computer Co. of America  MODEL 204
Computer Techniques      QUEO-IV
Contel                   RTFILE
Cullinet Software        IDMS, ADS
Database Design, Inc.    DATA DESIGNER
Data General             DG/DBMS, PRESENT
Digital Equipment Co.    VAX INFO. ARCH
Exact Systems & Prog.    DNA-4
Henco Inc.               INFO
Hewlett Packard          IMAGE
IBM Corp.                SQL/DS, DB2
Infodata Systems         INQUIRE
Information Builders     FOCUS
Intel Systems Corp.      SYSTEM 2000
Manager Software Prod.   DATAMANAGER, DESIGNMANAGER, SOURCEMANAGER
Mathematica Products     RAMIS II
National CSS, Inc.       NOMAD2
Oracle Corp.             ORACLE
Perkin-Elmer             RELIANCE PRODUCT LINE
Prime Computer           PRIME DBMS, INFORMATION
Quasar Systems           POWERHOUSE, POWERPLAN
Relational Tech. Inc.    INGRES
Rexcom Corp.             REXCOM
Scientific Information   SIR/DBMS
Seed Software            SEED
Sensor Based System      METAFILE
Software AG of N.A.      ADABAS
Software House           SYSTEM 1022
Sydney Development Co.   CONQUER
Tandem Computers         ENCOMPASS
Tech. Info. Products     IP/3
Tominy, Inc.             DATA BASE-PLUS
______________________________________________________________________________
John Pocha Denelcor, Inc. 17000 E. Ohio Place Aurora, Colorado 80017 work (303)337-7900 x379 home (303)794-5190 {csu-cs|nbires|brl-bmd}!denelcor!pocha ------------------------------ Date: 23 Sep 83 19:04:12-PDT (Fri) From: decvax!tektronix!tekchips!wm @ Ucb-Vax Subject: Request for PRESS Article-I.D.: tekchips.317 Does anyone know where I can get the PRESS algebra system, by Alan Bundy, written in Prolog? Wm Leler tektronix!tekchips!wm wm.Tektronix@Rand-relay ------------------------------ Date: 23 Sep 83 1910 EDT (Friday) From: Jeff.Shrager@CMU-CS-A Subject: New expert systems announced:
Dentrol: A dental expert system based upon tooth maintenance principles.
Faust: A black magic advisor with mixed initiative goal generation.
Doug: A system which will convert any given domain into set theory.
Cray: An expert arithmetic advisor.
Heuristics exist for any sort of real number computation involving arithmetic functions (+, -, and several others) within a finite (but large) range around 0.0. The heuristics are shown to be correct for typical cases.
Meta: An expert at thinking up new domains in which there should be expert systems.
Flamer: An expert at seeming to be an expert in any domain in which it is not an expert.
IT: (The Illogic Theorist) An expert at fitting any theory to any quantity of protocol data. Theories must be specified in "ITLisp" but IT can construct the protocols if need be.
------------------------------ Date: 22 Sep 83 23:25:15-PDT (Thu) From: pur-ee!uiucdcs!marcel @ Ucb-Vax Subject: Re: U of Edinburgh, Scotland Inquiry - (nf) Article-I.D.: uiucdcs.2935 I can't tell you about the Dept of AI at Edinburgh, but I do know about the Machine Intelligence Research Unit chaired by Prof. Donald Michie. The MIRU will fold in the future, because Prof Michie intends to set up a new research institute in the UK. He's been planning this and fighting for it for quite a while now. It will be called the "Turing Institute", and is intended to become one of the prime centers of AI research in the UK. In fact, it will be one of the very few centers at which research is the top priority, rather than teaching. Michie has recently been approached by the University of Strathclyde near Glasgow, which is interested in functioning as the associated teaching institution (cf. SRI and Stanford). If that works out, the Turing Institute may be operational by September 1984. ------------------------------ Date: 23 Sep 83 5:04:46-PDT (Fri) From: decvax!microsoft!uw-beaver!ssc-vax!sts @ Ucb-Vax Subject: Re: Rational Psychology Article-I.D.: ssc-vax.538 (should be posting from utah, but I saw it here first and just couldn't resist...) I think we've got a terminology problem here. The word "rational" is so heavily loaded that it can hardly move! (as net.philosophy readers well know).
The term "rational psychology" does seem to exclude non-rational behavior (whatever that is) from consideration, which is not true at all. Rather, the idea is to explore the entire universe of possibilities for intelligent behavior, rather than restricting oneself to observing the average college sophomore or the AI programs small enough to fit on present-day machines. Let me propose the term "universal psychology" as a substitute, analogous to the mathematical study of universal algebras. Fewer connotations, and it better suggests the real thrust of this field - the study of *possible* intelligent behavior. stan the r.h. (of lightness) ssc-vax!sts (but mail to harpo!utah-cs!shebs) ------------------------------ Date: 26 Sep 1983 0012-PDT From: Jay Subject: re: the halting problem, orders of learning Certain representations of calculations lead to easy detection of looping. Consider the function f(x) = x. This could lead to f(f(x)) = x, or to f(f(f(f( ... )))) = x. But why bother! Or for another example, consider the Life blinker:

    + + +   becomes   +   becomes   + + +   becomes (etc.)
                      +
                      +

Why bother calculating all the generations for this arrangement? The same information lies in, for any integer i:

    Blinker(2i) = + + +    and    Blinker(2i+1) = +
                                                  +
                                                  +

There really is no halting problem, or infinite looping. The information for the blinker need not be fully decoded; it can be just the above "formulas". So humans could choose a representation of circular or "infinite looping" ideas, so that the circularity is expressed in a finite number of bits. As for the orders of learning: Learning(1) is a behavior. That is, modifying behavior is a behavior. It can be observed in schools, concentration camps, or even in the laboratory. So learning(2) is modifying a certain behavior, and thus nothing more (in one view) than learning(1). Indeed it is just learning(1) applied to itself!
So learning(i) is just (the way an organism modifies)^i its behavior. But since behavior is just the way an organism modifies the environment,

    Learning(i) = (the way an organism modifies)^(i+1) the environment,

and learning(0) is just behavior. So depending on your view, there are either an infinite number of ways to learn, or there are an infinite number of organisms (most of whose environments are just other organisms). j' ------------------------------ Date: Mon 26 Sep 83 11:48:33-MDT From: Jed Krohnfeldt Subject: Re: learning levels, etc. Some thoughts about Stan Shebs' questions: I think that your continuum of 1st order learning, 2nd order learning, etc. can really be collapsed to just two levels - the basic learning level, and what has been popularly called the "meta level". Learning about learning about learning is really no different than learning about learning, is it? It is simply a capability to introspect (and possibly intervene) into basic learning processes. This also proposes an answer to your second question - why don't humans go catatonic when presented with circular definitions - the answer may be that we do have heuristics, or meta-level knowledge, that prevents us from endlessly looping on circular concepts. Jed Krohnfeldt utah-cs!jed krohnfeldt@utah-20 ------------------------------ Date: Mon 26 Sep 83 10:44:34-PDT From: Bob Moore Subject: course announcement COURSE ANNOUNCEMENT COMPUTER SCIENCE 400 REPRESENTATION, MEANING, AND INFERENCE Instructor: Robert Moore Artificial Intelligence Center SRI International Time: MW @ 11:00-12:15 (first meeting Wed. 9/28) Place: Margaret Jacks Hall, Rm. 301 The problem of the formal representation of knowledge in intelligent systems is subject to two important constraints. First, a general knowledge-representation formalism must be sufficiently expressive to represent a wide variety of information about the world.
A long-term goal here is the ability to represent anything that can be expressed in natural language. Second, the system must be able to draw inferences from the knowledge represented. In this course we will examine the knowledge-representation problem from the perspective of these constraints. We will survey techniques for automatically drawing inferences from formalizations of commonsense knowledge; we will look at some of the aspects of the meaning of natural-language expressions that seem difficult to formalize (e.g., tense and aspect, collective reference, propositional attitudes); and we will consider some ways of bridging the gap between formalisms for which the inference problem is fairly well understood (first-order predicate logic) and the richer formalisms that have been proposed as meaning representations for natural language (higher-order logics, intensional and modal logics).

------------------------------

End of AIList Digest
********************

29-Sep-83 10:03:55-PDT,14706;000000000001
Mail-From: LAWS created at 29-Sep-83 10:00:14
Date: Thursday, September 29, 1983 9:46AM
From: AIList Moderator Kenneth Laws
Reply-to: AIList@SRI-AI
US-Mail: SRI Int., 333 Ravenswood Ave., Menlo Park, CA 94025
Phone: (415) 859-6467
Subject: AIList Digest V1 #65
To: AIList@SRI-AI

AIList Digest            Thursday, 29 Sep 1983     Volume 1 : Issue 65

Today's Topics:
  Automatic Translation - French-to-English Request,
  Music and AI - Request,
  Publications - CSLI Newsletter & Apollo User's Mailing List,
  Seminar - Parallel Algorithms: Cook at UTexas Oct. 6,
  Lab Reports - UM Expansion,
  Software Distributions - Maryland Franz Lisp Code,
  Conferences - Intelligent Sys. and Machines, CSCSI
----------------------------------------------------------------------

Date: Wed 28 Sep 83 11:37:27-PDT
From: David E.T. Foulser
Subject: Re: Automatic Translation

I'm looking for a program to perform automatic translation from French to English. The output doesn't have to be perfect (I hardly expect it).
I'll appreciate any leads you can give me.

Dave Foulser

------------------------------

Date: Wed 28 Sep 83 18:46:09-EDT
From: Ted Markowitz
Subject: Music & AI, pointers wanted

I'd like to hear from anyone doing work that relates AI and music. In particular, are folks using AI programs and techniques in composition (perhaps as a composer's assistant)? Any responses will be passed on to those interested in the results.

--ted

------------------------------

Date: Mon 26 Sep 83 12:08:44-CDT
From: Lauri Karttunen
Subject: CSLI newsletter

[Reprinted from the UTexas-20 bboard.]

A copy of the first newsletter from the Center for the Study of Language and Information (CSLI) at Stanford is in PS:CSLI.NEWS. The section on "Remote Affiliates" is of some interest to many people here.

------------------------------

Date: Thu, 22 Sep 83 14:29:56 EDT
From: Nathaniel Mishkin
Subject: Apollo Users Mailing List

This message is to announce the creation of a new mailing list:

    Apollo@YALE

in which I would like to include all users of Apollo computers who are interested in sharing their experiences about Apollos. I think all people could benefit from finding out what other people are doing on their Apollos. Mail to the list will be archived in some public place that I will announce at a later date. At least initially, the list will not be moderated or digested. If the volume is too great, this may change. If you are interested in getting on this mailing list, send mail to:

    Apollo-Request@YALE

If several people at your site are interested in being members and your mail system supports local redistribution, please tell me so I can add a single entry (e.g. "Apollo-Podunk@PODUNK") instead of one for each person.

------------------------------

Date: Mon 26 Sep 83 16:44:31-CDT
From: CS.GLORIA@UTEXAS-20.ARPA
Subject: Cook Colloquium, Oct 6

[Reprinted from the UTexas-20 bboard.]

Stephen A.
Cook, University of Toronto, will present a talk entitled "Which Problems are Subject to Exponential Speed-up by Parallel Computers?" on Thursday, Oct. 6 at 3:30 p.m. in Painter Hall 4.42.

Abstract: In the future we expect large parallel computers to exist with thousands or millions of processors able to work together on a single problem. There is already a significant literature of published algorithms for such machines in which the number of processors available is treated as a resource (generally polynomial in the input size) and the computation time is extremely fast (polynomial in the logarithm of the input size). We shall give many examples of problems for which such algorithms exist and classify them according to the kind of algorithm which can be used. On the other hand, we will give examples of problems with feasible sequential algorithms which appear not to be amenable to such fast parallel algorithms.

------------------------------

Date: 21 Sep 83 16:33:08 EDT (Wed)
From: Mark Weiser
Subject: UM Expansion

[Due to a complaint that even academic job ads constitute an "egregious violation" of Arpanet standards, and following failure of anyone to reply to my subsequent queries, I have decided to publish general notices of lab expansions but not specific positions. The following solicitation has been edited accordingly. -- KIL]

The University of Maryland was recently awarded 4.2 million dollars by the National Science Foundation to develop the hardware and software for a parallel processing laboratory. More than half of the award amount is going directly for hardware acquisition, and this money is also being leveraged through substantial vendor discounts and joint research programs now being negotiated. We will be buying things like lots of Vaxes, Suns, Lisp Machines, etc., to augment our current system of two 780's, an ethernet, etc. Several new permanent positions are being created in the Computer Science Department for this laboratory. [...]
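Cook's abstract earlier in this issue describes algorithms that use polynomially many processors to cut running time to a polynomial in the logarithm of the input size. Prefix summation is a textbook member of that class: the "doubling" scan sketched below needs only ceil(log2 n) synchronous rounds, versus n - 1 sequential additions. This sketch is my own illustration, not from the digest, and it merely simulates the parallel rounds one after another in ordinary Python; the point is the round count, not the wall-clock speed.

```python
def parallel_prefix_sum(xs):
    """Hillis-Steele style scan: prefix sums in O(log n) synchronous rounds.

    In round k, every position i >= 2**k adds the value held at i - 2**k.
    With one processor per element, each round is constant time, so the
    whole scan takes ceil(log2(n)) rounds instead of n - 1 sequential steps.
    """
    a = list(xs)
    n = len(a)
    rounds = 0
    step = 1
    while step < n:
        # All positions update "simultaneously": read the old array, write new.
        a = [a[i] + (a[i - step] if i >= step else 0) for i in range(n)]
        step *= 2
        rounds += 1
    return a, rounds

sums, rounds = parallel_prefix_sum(range(1, 9))  # the values 1..8
# Three rounds (log2 of 8) suffice where the sequential loop needs seven adds.
```

Problems admitting this kind of polylogarithmic-round schedule are exactly the ones Cook's talk counts as "subject to exponential speed-up"; the problems he contrasts them with appear to force an inherently sequential chain of steps.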
Anyone interested should make initial inquiries, send resumes, etc. to Mark Weiser at one of the addresses below:

    Mark Weiser
    Computer Science Department
    University of Maryland
    College Park, MD 20742
    (301) 454-6790/4251/6291 (in that order).
    UUCP: {seismo,allegra,brl-bmd}!umcp-cs!mark
    CSNet: mark@umcp-cs
    ARPA: mark.umcp-cs@UDel-Relay

------------------------------

Date: 26 Sep 83 17:32:04-PDT (Mon)
From: decvax!mcvax!philabs!seismo!rlgvax!cvl!umcp-cs!liz @ Ucb-Vax
Subject: Maryland software distribution
Article-I.D.: umcp-cs.2755

This is to announce the availability of the Univ of Maryland software distribution. This includes source code for the following:

1. The flavors package written in Franz Lisp. This package has been used successfully in a number of large systems at Maryland, and while it does not implement all the features of Lisp Machine Flavors, the features present are as close to the Lisp Machine version as possible within the constraints of Franz Lisp. (Note that Maryland flavors code *can* be compiled.)

2. Other Maryland Franz hacks, including the INTERLISP-like top level, the lispbreak error-handling package, the for macro, and the new loader package.

3. The YAPS production system written in Franz Lisp. This is similar to OPS5 but more flexible in the kinds of lisp expressions that may appear as facts and patterns (sublists are allowed and flavor objects are treated atomically), the variety of tests that may appear in the left-hand sides of rules, and the kinds of actions that may appear in the right-hand sides of rules. In addition, YAPS allows multiple data bases, which are flavor objects and may be sent messages such as "fact" and "goal".

4. The windows package in the form of a C loadable library. This flexible package allows convenient management of multiple contexts on the screen and runs on ordinary character display terminals as well as bit-mapped displays.
Included is a Franz lisp interface to the window library, a window shell for executing shell processes in windows, and a menu package (also a C loadable library). You should be aware of the fact that the lisp software is based on Franz Opus 38.26 and that we will be switching to the newer version of lisp that comes with Berkeley 4.2 whenever that comes out. --------------------------------------------------------------------- To obtain the Univ of Maryland distribution tape: 1. Fill in the form below, make a hard copy of it and sign it. 2. Make out a check to University of Maryland Foundation for $100, mail it and the form to: Liz Allen Univ of Maryland Dept of Computer Science College Park MD 20742 3. If you need an invoice, send me mail, and I will get one to you. Don't forget to include your US Mail address. Upon receipt of the money, we will mail you a tape containing our software and the technical reports describing the software. We will also keep you informed of bug fixes via electronic mail. --------------------------------------------------------------------- The form to mail to us is: In exchange for the Maryland software tape, I certify to the following: a. I will not use any of the Maryland software distribution in a commercial product without obtaining permission from Maryland first. b. I will keep the Maryland copyright notices in the source code, and acknowledge the source of the software in any use I make of it. c. I will not redistribute this software to anyone without permission from Maryland first. d. I will keep Maryland informed of any bug fixes. e. I am the appropriate person at my site who can make guarantees a-d. Your signature, name, position, phone number, U.S. and electronic mail addresses. --------------------------------------------------------------------- If you have any questions, etc, send mail to me. 
--
-Liz Allen, U of Maryland, College Park MD
Usenet: ...!seismo!umcp-cs!liz
Arpanet: liz%umcp-cs@Udel-Relay

------------------------------

Date: Tue, 27 Sep 83 14:57:00 EDT
From: Morton A. Hirschberg
Subject: Conference Announcement

**************** CONFERENCE ****************
"Intelligent Systems and Machines"
Oakland University, Rochester, Michigan
April 24-25, 1984
*********************************************

A formal call for papers should also appear through SIGART soon.

Conference Chairmen:
  Dr. Donald Falkenburg (313-377-2218)
  Dr. Nan Loh (313-377-2222)
  Center for Robotics and Advanced Automation
  School of Engineering
  Oakland University
  Rochester, MI 48063

***************************************************

AUTHORS PLEASE NOTE: A Public Release/Sensitivity Approval is necessary. Authors from DOD, DOD contractors, and individuals whose work is government funded must have their papers reviewed for public release and, more importantly, sensitivity (i.e. an operations security review for sensitive unclassified material) by the security office of their sponsoring agency.

In addition, I will try to answer questions for those on the net. Queries can be sent to mort@brl.

Mort

------------------------------

Date: Mon 26 Sep 83 11:08:58-PDT
From: Ray Perrault
Subject: CSCSI call for papers

CALL FOR PAPERS

C S C S I - 8 4
Canadian Society for Computational Studies of Intelligence
University of Western Ontario
London, Ontario
May 18-20, 1984

The Fifth National Conference of the CSCSI will be held at the University of Western Ontario in London, Canada. Papers are requested in all areas of AI research, particularly those listed below. The Program Committee members responsible for these areas are included.
Knowledge Representation: Ron Brachman (Fairchild R & D), John Mylopoulos (U of Toronto)
Learning: Tom Mitchell (Rutgers U), Jaime Carbonell (CMU)
Natural Language: Bonnie Weber (U of Pennsylvania), Ray Perrault (SRI)
Computer Vision: Bob Woodham (U of British Columbia), Allen Hanson (U Mass)
Robotics: Takeo Kanade (CMU), John Hollerbach (MIT)
Expert Systems and Applications: Harry Pople (U of Pittsburgh), Victor Lesser (U Mass)
Logic Programming: Randy Goebel (U of Waterloo), Veronica Dahl (Simon Fraser U)
Cognitive Modelling: Zenon Pylyshyn, Ed Stabler (U of Western Ontario)
Problem Solving and Planning: Stan Rosenschein (SRI), Drew McDermott (Yale)

Authors are requested to prepare Full papers, of no more than 4000 words in length, or Short papers of no more than 2000 words in length. A full page of clear diagrams counts as 1000 words. When submitting, authors must supply the word count as well as the area in which they wish their paper reviewed. (Combinations of the above areas are acceptable.) The Full paper classification is intended for well-developed ideas, with significant demonstration of validity, while the Short paper classification is intended for descriptions of research in progress. Authors must ensure that their papers describe original contributions to or novel applications of Artificial Intelligence, regardless of length classification, and that the research is properly compared and contrasted with relevant literature. Three copies of each submitted paper must be in the hands of the Program Chairman by December 7, 1983. Papers arriving after that date will be returned unopened, and papers lacking word count and classifications will also be returned. Papers will be fully reviewed by appropriate members of the program committee. Notice of acceptance will be sent on February 28, 1984, and final camera-ready versions are due on March 31, 1984. All accepted papers will appear in the conference proceedings.
Correspondence should be addressed to either the General Chairman or the Program Chairman, as appropriate.

  General Chairman:                    Program Chairman:
  Ted Elcock                           John K. Tsotsos
  Dept. of Computer Science,           Dept. of Computer Science,
  Engineering and Mathematical         10 King's College Rd.,
  Sciences Bldg.,                      University of Toronto,
  University of Western Ontario,       Toronto, Ontario, Canada
  London, Ontario, Canada              M5S 1A4
  N6A 5B9                              (416)-978-3619
  (519)-679-3567

------------------------------

End of AIList Digest
********************

29-Sep-83 12:57:23-PDT,15769;000000000001
Mail-From: LAWS created at 29-Sep-83 12:54:41
Date: Thursday, September 29, 1983 12:50PM
From: AIList Moderator Kenneth Laws
Reply-to: AIList@SRI-AI
US-Mail: SRI Int., 333 Ravenswood Ave., Menlo Park, CA 94025
Phone: (415) 859-6467
Subject: AIList Digest V1 #66
To: AIList@SRI-AI

AIList Digest            Friday, 30 Sep 1983       Volume 1 : Issue 66

Today's Topics:
  Rational Psychology - Definition,
  Halting Problem,
  Natural Language Understanding
----------------------------------------------------------------------

Date: Tue 27 Sep 83 22:39:35-PDT
From: PEREIRA@SRI-AI.ARPA
Subject: Rational X

Oh dear! "Rational psychology" is no more about rational people than "rational mechanics" is about rational rocks or "rational thermodynamics" about rational hot air. "Rational X" is the traditional name for the mathematical, axiomatic study of systems inspired by, and intuitively related to, the systems studied by the empirical science "X." Got it?

Fernando Pereira

------------------------------

Date: 27 Sep 83 11:57:24-PDT (Tue)
From: ihnp4!houxm!hogpc!houti!ariel!norm @ Ucb-Vax
Subject: Re: Rational Psychology
Article-I.D.: ariel.463

Actually, the word "rational" in "rational psychology" is merely redundant. One would hope that psychology would be, as other sciences are, rational. This would in no way detract from its ability to investigate the causes of human irrationality.
No science really should have to be prefaced with the word "rational", since we should be able to assume that science is not "irrational". Anyone for "Rational Chemistry"? Please note that the scientist's "flash of insight", "intuition", "creative leap" is heavily dependent upon the rational faculty, the faculty of CONCEPT-FORMATION. We also rely upon the rational faculty for verifying and for evaluating such insights and leaps.

--Norm Andrews, AT&T Information Systems, Holmdel, New Jersey

------------------------------

Date: 26 Sep 83 13:01:56-PDT (Mon)
From: ihnp4!drux3!drufl!samir @ Ucb-Vax
Subject: Rational Psychology
Article-I.D.: drufl.670

Norm, let me elaborate. Psychology, or the logic of mind, involves BOTH rational and emotional processes. To consider one exclusively defeats the purpose of understanding. I have not read the article we are talking about, so I cannot comment on it, but an example of what I consider a "Rational Psychology" theory is "Personal Construct Theory" by Kelly. It is an attractive theory but, in my opinion, it falls far short of describing the "logic of mind" as it fails to integrate emotional aspects. I consider learning, concept formation, and creativity to have BOTH rational and emotional attributes, hence it would be better if we studied them as such. I may be creating a dichotomy where there is none (Rational vs. Emotional). I want to point you to an interesting book, "Metaphors We Live By" (I forget the names of the authors), which in addition to discussing many other ai-related concepts (without mentioning ai) discusses the question of Objective vs. Subjective, which is similar to what we are talking about here, Rational vs. Emotional. Thanks.

Samir Shah
AT&T Information Systems, Denver.
drufl!samir

------------------------------

Date: Tue, 27 Sep 1983 13:30 EDT
From: MINSKY@MIT-OZ
Subject: Re: Halting Problem

About learning: There is a lot about how to get out of loops in my paper "Jokes and the Cognitive Unconscious".
I can send it to whoever wants it, either over this net or by U.S. Snail. -- minsky

------------------------------

Date: 26 Sep 83 10:31:31-PDT (Mon)
From: ihnp4!clyde!floyd!whuxlb!pyuxll!eisx!pd @ Ucb-Vax
Subject: the Halting problem.
Article-I.D.: eisx.607

There are two AI problems that I know about: the computing-power problem (combinatorial explosions, etc.) and the "nature of thought" problem (knowledge representation, reasoning processes, etc.). This article concerns the latter. AI's method (call it "m") seems to be to model human information-processing mechanisms, say legal reasoning methods, and, once a mechanism is understood clearly and a calculus exists for it, to program it. This idea can be transferred to various problem domains, and voila, we have programs for "thinking" about various little cubbyholes of knowledge. The next thing to tackle is: how do we model AI's method "m" that was used to create all these cubbyhole programs? How did whoever thought of predicate calculus, semantic networks, and block-world theories ad nauseam come up with them? Let's understand that ("m"), formalize it, and program it. This process (let's call it "m'") gives us a program that creates cubbyhole programs. Yeah, it runs on a zillion acres of CMOS, but who cares. Since a human can do more than just "m", or "m'", we try to make "m''", "m'''" et al. When does this stop? Evidently it cannot. The problem is, the thought process that yields a model or simulation of a thought process is necessarily distinct from the latter. (This is true of all scientific investigation of any kind of phenomenon, not just thought processes.) This distinction is one of the primary paradigms of western Science. Rather naively, thinking "about" the mind is also done "with" the mind. This identity of subject and object that ensues in the scientific (dualistic) pursuit of more intelligent machine behavior - do you folks see it too?
Since scientific thought relies on the clear separation of a theory/model and reality, is a mathematical/scientific/engineering discipline inadequate for said pursuit? Is there a system of thought that is self-describing? Is there a non-dualistic calculus? What we are talking about here is the ability to separate oneself from the object/concept/process under study, understand it, model it, program it... it being anything, including the ability itself. The ability to recognize that a model is a representation within one's mind of a reality outside of one's mind. Trying to model this ability leads one to infinite regress. What is this ability? Let's call it consciousness. What we seem to be coming up with here is the INABILITY of math/sci etc. to deal with this phenomenon, to codify it, and to boldly program a computer that has consciousness. Does this mean that the statement "CONSCIOUSNESS CAN, MUST, AND WILL ONLY COME TO EXISTENCE OF ITS OWN ACCORD" is true? "Consciousness" was used for lack of a better word. Replace it by X, and you still have a significant statement. Consciousness already has come to existence; and according to the line of reasoning above, cannot be brought into existence by methods available. If so, how can we "help" machines to achieve consciousness, as benevolent if rather impotent observers? Should we just mechanistically build larger and larger neural-network simulators until one says "ouch" when we shut a portion of it off, and, better, tries to deliberately modify(sic) its environment so that that doesn't happen again? And maybe even can split infinitives? As a parting shot, it's clear that such neural networks must have tremendous power to come close to a fraction of our level of abstraction ability. Baffled, but still thinking... References, suggestions, discussions, pointers avidly sought.

Prem Devanbu
ATTIS Labs, South Plainfield.
------------------------------

Date: 27 Sep 83 05:20:08 EDT (Tue)
From: rlgvax!cal-unix!wise@SEISMO
Subject: Natural Language Analysis and looping

A sidelight to the discussions of the halting problem is "what then?" What do we do when a loop is detected? Ignore the information? Arbitrarily select some level as the *true* meaning? In some cases, meaning is drawn from outside the language. As an example, consider a person who tells you, "I don't know a secret". The person may really know a secret but doesn't want you to know, or may not know a secret and reason that you'll assume that nobody with a secret would say something so suspicious... A reasonable assumption would be that if the person said nothing, you'd have no reason to think he knows a secret, so if that was the assumption he wanted you to make, he would just have kept quiet; so you may conclude that the person knows no secret. This rather simplistic example demonstrates one response to the loop, i.e., when confronted with circular logic, we disregard it. Another possibility is that we may use external information to attempt to help disambiguate by selecting a level of the loop (e.g., this is a three-year-old, who is sufficiently unsophisticated that he may say the above when he does, in fact, know a secret). This may support the study of cognition as an underpinning for NLP. Certainly we can never expect a machine to react as we (who is 'we'?) do unless we know how we react.

------------------------------

Date: 28 Sep 1983 1723-PDT
From: Jay
Subject: NLP, Learning, and knowledge rep.

As an undergraduate student here at USC, I am required to pass a Freshman Writing class. I have noticed in this class that one part of the NL Problem is UNSOLVED even in humans. I am speaking of the generation of prose. In AI terms the problems are...
The selection of a small area of the knowledge base which is small enough to be written about in a few pages, and large enough that a paper can be generated at all. One of the solutions to this problem is called 'clustering.' In the middle of a page one draws a circle about the topic. Then a directed graph is built by connecting associated ideas to nodes in the graph. Just free association does not seem to work very well, so it is suggested to ask a number of questions about the main idea, or any other node. Some of the questions are What, Where, When, Why (and the rest of the "journalistic" questions), can you RELATE an incident about it, can you name its PARTS, can you describe a process to MAKE or do it. Finally this smaller data base is reduced to a few interesting areas. This solution is thus a process of Q and A on the data base to construct a smaller data base.

Once a small data base has been selected, it needs to be given a linear representation. That is, it must be organized into a new data base that is suitable to prose. There are no solutions offered for this step.

Finally the data base is coded into English prose. There are no solutions offered for this step.

This prose is read back in, and compared to the original data base. Ambiguities need to be removed, some areas elaborated on, and others rewritten in a clearer style. There are no solutions offered for this step, but there are some rules - things to do, and things not to do.

j'

------------------------------

Date: Tuesday, 27 September 1983 15:25:35 EDT
From: Robert.Frederking@CMU-CS-CAD
Subject: Re: NL argument between STLH and Pereira

Several comments in the last message in this exchange seemed worthy of comment. I think my basic sympathies lie with STLH, although he overstates his case a bit.
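Jay's "clustering" stage, described a little earlier in this issue, amounts to building a small directed association graph around a topic node by repeated Q and A, then pruning it to the interesting areas. The toy sketch below renders that process; the association table, the question list, and all node names are invented for illustration and are not part of Jay's proposal:

```python
def cluster(topic, associate, questions, keep):
    """Build a directed association graph around `topic`, then prune it.

    `associate(node, question)` returns the ideas the question evokes at
    that node; `keep(node)` decides which nodes survive the final reduction
    to a few interesting areas.  Returns the graph as {node: [successors]}.
    """
    graph = {topic: []}
    frontier = [topic]
    while frontier:
        node = frontier.pop()
        for q in questions:
            for idea in associate(node, q):
                if idea not in graph:
                    graph[idea] = []
                    frontier.append(idea)
                graph[node].append(idea)
    # Reduce to the interesting areas: keep the topic plus selected nodes.
    kept = {topic} | {n for n in graph if keep(n)}
    return {n: [s for s in graph[n] if s in kept] for n in graph if n in kept}

# Toy association "data base" keyed by (node, question); purely illustrative.
ideas = {
    ("robots", "What"):  ["machines that act"],
    ("robots", "Parts"): ["sensors", "effectors"],
}
graph = cluster("robots",
                lambda n, q: ideas.get((n, q), []),
                ["What", "Parts"],
                keep=lambda n: n != "machines that act")
```

This covers only the first of Jay's four steps; as his message notes, the later steps (linearization, coding into prose, and revision) have no offered solutions, and nothing here pretends otherwise.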
While language is indeed a "fuzzy thing", there are different shades of correctness, with some sentences being completely right, some with one obvious *error*, which is noticed by the hearer and corrected, while others are just a mess, with the hearer guessing the right answer. This is similar in some ways to error-correcting codes, where after enough errors, you can't be sure anymore which interpretation is correct. This doesn't say much about whether the underlying ideal is best expressed by a grammar. I don't think it is, for NL, but the reason has more to do with the fact that the categories people use in language seem to include semantics in a rather pervasive way, so that making a major distinction between grammatical (language-specific, arbitrary) and other knowledge (semantics) might not be the best approach. I could go on at length about this (in fact I'm currently working on a Tech Report discussing this idea), but I won't, unless pressed. As for ignoring human cognition, some AI people do ignore it, but others (especially here at C-MU) take it very seriously. This seems to be a major division in the field -- between those who think the best search path is to go for what the machine seems best suited for, and those who want to use the human set-up as a guide. It seems to me that the best solution is to let both groups do their thing -- eventually we'll find out which path (or maybe both) was right. I read with interest your description of your system -- I am currently working on a semantic chart parser that sounds fairly similar to your brief description, except that it is written in OPS5. Thus I was surprised at the statement that OPS5 has "no capacity for the parallelism" needed. OPS5 users suffer from the fact that there are some fairly non-obvious but simple ways to build powerful data structures in it, and these have not been documented. Fortunately, a production system primer is currently being written by a group headed by Elaine Kant. 
Anyway, I have an as-yet-unaccepted paper describing my OPS5 parser available, if anyone is interested. As for scientific "camps" in AI, part of the reason for this seems to be the fact that AI is a very new science, and often none of the warring factions have proved their points. The same thing happens in other sciences, when a new theory comes out, until it is proven or disproven. In AI, *all* the theories are unproven, and everyone gets quite excited. We could probably use a little more of the "both schools of thought are probably partially correct" way of thinking, but AI is not alone in this. We just don't have a solid base of proven theory to anchor us (yet). In regard to the call for a theory which explains all aspects of language behavior, one could answer "any Turing-equivalent computer". The real question is, how *specifically* do you get it to work? Any claim like "my parser can easily be extended to do X" is more or less moot, unless you've actually done it. My OPS5 parser is embedded in a Turing-equivalent production system language. I can therefore guarantee that if any computer can do language learning, so can my program. The question is, how? The way linguists have often wanted to answer "how" is to define grammars that are less than Turing-equivalent which can do the job, which I suspect is futile when you want to include semantics. In any event, un-implemented extensions of current programs are probably always much harder than they appear to be. (As an aside about sentences as fundamental structures, there is a two-prong answer: (1) Sentences exist in all human languages. They appear to be the basic "frame" [I can hear nerves jarring all over the place] or unit for human communication of packets of information. (2) Some folks have actually tried to define grammars for dialogue structures. I'll withhold comment.) 
In short, I think warring factions aren't that bad, as long as they all admit that no one has proven anything yet (which is definitely not always the case), semantic chart parsing is the way to go for NL, theories that explain all of cognitive science will be a long time in coming, and no one should accept a claim about AI that hasn't been implemented.

------------------------------

End of AIList Digest
********************

29-Sep-83 13:07:22-PDT,14862;000000000001
Mail-From: LAWS created at 29-Sep-83 13:04:10
Date: Thursday, September 29, 1983 12:56PM
From: AIList Moderator Kenneth Laws
Reply-to: AIList@SRI-AI
US-Mail: SRI Int., 333 Ravenswood Ave., Menlo Park, CA 94025
Phone: (415) 859-6467
Subject: AIList Digest V1 #67
To: AIList@SRI-AI

AIList Digest            Friday, 30 Sep 1983       Volume 1 : Issue 67

Today's Topics:
  Alvey Report & Fifth Generation,
  AI at Edinburgh - Reply,
  Machine Organisms - Desirability,
  Humor - Famous Flamer's School
----------------------------------------------------------------------

Date: 23 Sep 83 13:17:41-PDT (Fri)
From: decvax!genrad!security!linus!utzoo!watmath!watdaisy!rggoebel@Ucb-Vax
Subject: Re: Alvey Report and Fifth Generation
Article-I.D.: watdaisy.298

The ``Alvey Report'' is the popular name for the following booklet:

    A Programme for Advanced Information Technology
    The Report of the Alvey Committee

published by the British Department of Industry, and available from Her Majesty's Stationery Office. One London address is

    49 High Holborn
    London WC1V 6HB

The report is indeed interesting because it is a kind of response to the Japanese Fifth Generation Project, but it is also interesting in that it is not nearly so much the genesis of a new project as the organization of existing potential for research and development.
The quickest way to explain the point is that of the proposed 352 million pounds that the report suggests be spent, only 42 million is for AI. (Actually it's not for AI, but for IKBS - Intelligent Knowledge Based Systems; seniors will understand the reluctance to use the word AI after the Lighthill report.) The areas of proposed development include 1) Software Engineering, 2) Man/Machine Interfaces, 3) IKBS, and 4) VLSI. I have heard that the most recent national budget in Britain has not committed the funds expected for the project, but this is only rumor. I would appreciate further information (Can you help D.H.D.W.?). On another related topic, I think it displays a bit of AI chauvinism to believe that anyone, including the Japanese and the British, is so naive as to put all the eggs in one basket. Incidentally, I believe Feigenbaum and McCorduck's book revealed at least two things: a disguised plea for more funding, and a not so disguised expose of American engineering chauvinism. Much of the American reaction to the Japanese project sounds like the old cliches of male chauvinism, like ``...how could a woman ever do the work of a real man?'' It just may be that American Lispers end up ``eating quiche.'' 8-)

Randy Goebel
Logic Programming Group
University of Waterloo
UUCP: watmath!rggoebel

------------------------------

Date: Tue 27 Sep 83 22:31:28-PDT
From: PEREIRA@SRI-AI.ARPA
Subject: Re: U of Edinburgh, Scotland Inquiry

Since the Lighthill Report, a lot has changed for AI in Britain. The Alvey Report (British Department of Industry) and the Science and Engineering Research Council (SERC) initiative on Intelligent Knowledge-Based Systems (IKBS) have released a lot of money for Information Technology in general, and AI in particular. (It remains to be seen whether that huge amount of money -- 100s of millions -- is going to be spent wisely.) The Edinburgh Department of AI has managed to get a substantial slice of that money.
They have been actively looking for people both at lecturer and research associate/fellow level [a good opportunity for young AIers from the US to get to know Scotland, her great people and unforgettable Highlands]. The AI Dept. has recently added 3 (4?) new people to its teaching staff, and has more machines, research staff, and students than ever. The main areas they work on are: Natural Language (Henry Thompson, Mark Steedman, Graeme Ritchie), controlled deduction and problem solving (Alan Bundy and his research assistant and students), Robotics (Robin Popplestone, Pat Ambler and a number of others), LOGO-style stuff (Jim Howe [head of department] and Peter Ross) and AI languages (Robert Rae, Dave Bowen and others). There are probably others I don't remember. The AI Dept. is both on UUCP and on a network connected to ARPANET: %edxa%ucl-cs@isid (ARPANET) ...!vax135!edcaad!edee!edai! (UUCP) I have partial lists of user names for both connections which I will mail directly to interested persons. Fernando Pereira SRI AI Center [an old Edinburgh hand] pereira@sri-ai (ARPA) ...!ucbvax!pereira@sri-ai (UUCP) ------------------------------ Date: 24 Sep 83 3:54:20-PDT (Sat) From: hplabs!hp-pcd!orstcs!hakanson @ Ucb-Vax Subject: Machine Organisms? - (nf) Article-I.D.: hp-pcd.1920 I was reading a novel recently, and ran across the following passage relating to "intelligent" machines, robots, etc. In case anyone is interested, the book is Satan's World, by Poul Anderson, Doubleday 1969 (p. 132). (I hope this article doesn't seem more appropriate to sf-lovers than to ai.) ... They had electronic speed and precision, yes, but not full decision-making capacity. ... This is not for lack of mystic vital forces. Rather, the biological creature has available to him so much more physical organization.
Besides sensor-computer-effector systems comparable to those of the machine, he has feed-in from glands, fluids, chemistry reaching down to the molecular level -- the integrated ultracomplexity, the entire battery of *instincts* -- that a billion-odd years of ruthlessly selective evolution have brought forth. He perceives and thinks with a wholeness transcending any possible symbolism; his purposes arise from within, and therefore are infinitely flexible. The robot can only do what it was designed to do. Self-programming has [can] extended these limits, to the point where actual consciousness may occur if desired. But they remain narrower than the limits of those who made the machines. Later in the book, the author describes a view that if a robot "were so highly developed as to be equivalent to a biological organism, there would be no point in building it." This is explained as being true because "nature has already provided us means for making new biological organisms, a lot cheaper and more fun than producing robots." I won't go on with the discussion in the book, as it degenerates into the usual debate about the theoretical, fully motivated computer that is superior in every way..., and how such a computer would rule the world, etc. My point in posting the above passage was to ask the experts of netland to give their opinions of the aforementioned views. More specifically, how do we feel about the possibilities of building machines that are "equivalent" to intelligent biological organisms? Or even non-intelligent ones? Is it possible? And if so, why bother? It's probably obvious that we don't need to disagree with the views given by the author in order to want to continue with our studies in Artificial Intelligence. But how many of us do agree? Disagree?
Marion Hakanson {hp-pcd,teklabs}!orstcs!hakanson (Usenet) hakanson.oregon-state@rand-relay (CSnet) hakanson@{oregon-state,orstcs} (also CSnet) ------------------------------ Date: Wed 28 Sep 83 17:18:53-PDT From: Peter Karp Subject: Amusement from CMU's opinion bboard [Reprinted from the CMU opinion board via the SU-SCORE bboard.] [The originator of this piece is Jeff.Shrager@CMU-CS-A.] Ever dreamed of flaming with the Big Boys? ... Had that desire to write an immense diatribe, berating de facto all your peers who hold contrary opinions? ... Felt the urge to have your fingers moving without being connected to your brain? Well, by simply sending in the form on the back of this bboard post, you could begin climbing into your pulpit alongside greats from all walks of life such as Chomsky, Weizenbaum, Reagan, Von Daniken, Ellison, Abzug, Arafat and many, many more. You don't even have to leave the comfort of your armchair! Here's how it works: Each week we send you a new lesson. You read the notes and then simply write one essay each week on the assigned topic. Your essays will be read by our expert pool of professional flamers and graded on Sparsity, Style, Overtness, Incoherence, and a host of other important aspects. You will receive a long letter from your specially selected advisor indicating in great detail why you obviously have the intellectual depth of a soap dish. This apprenticeship is all there is to it. Here are some examples of the courses offered by The School: Classical Flames: You will study the flamers who started it all. For example, Descartes' much-quoted demonstration that reality isn't. Special attention is paid, in this course, to the Old and New Testaments and how western flaming was influenced by their structure. (The Bible plays a particularly important role in our program and most courses will spend at least some time tracing biblical origins or associations of their special topic.
See, particularly, the special seminar on Space Cadetism, which concentrates on ESP and UFO phenomena.) Contemporary Flame Technique: Attention is paid to the detail of flame form in this course. The student will practice the subtle and overt ad hominem argument; fact avoidance maneuvers; "at length" writing style; overgeneralization; and other important factors which make the modern flame inaccessible to the general populace. Readings from Russell ("Now I will admit that some unusually stupid children of ten may find this material a bit difficult to fathom..."), Skinner (primarily concentrating on his Verbal Learning), Sagan (on abstract overestimation) and many others. This course is most concerned with politicians (sometimes, redundantly, referred to as "political flamers") since their speech writers are particularly adept at the technique that we wish to foster. Appearing Brilliant (thanks to the Harvard Lampoon): Nobel laureates lecture on topics of world import but which are very much outside their field of expertise. There is a large representation of Nobels in physics: the discoverer of the UnCharmed Pi Mesa Beta Quark explains how the population explosion can be averted through proper reculterization of mothers; and professor Nikervator, first person to properly develop the theory of faster-than-sound "Whizon" docking choreography, tells us how mind is the sole theological entity. Special seminar in terminology: The name that you give something is clearly more important than its semantics. Experts in nomenclature demonstrate their skills. Pulitzer Prize winner Douglas Hofstadter makes up 15,000 new words whose definitions, when read sideways, prove the existence of themselves and constitute fifteen months of columns in Scientific American. A special round table of drug company and computer corporation representatives discusses how to construct catchy names for new products and never give the slightest hint to the public about what they mean.
Writing the Scientific Journal Flame: Our graduates will be able to compete in the modern world of academic and industrial research flaming, where the call for trained pontificators is high. The student reads short sections from several fields and then may select a field of concentration for detailed study. Here is an example description of a detailed scientific flaming seminar: Computer Science: This very new field deals directly with the very metal of the flamer's tools: information and communication. The student selecting computer science will study several areas including, but not exclusively: Artificial Intelligence: Roger Schank explains the design of his flame understanding and generation engine (RUSHIN) and will explain how the techniques that it employs constitute a complete model of mind, brain, intelligence, and quantum electrodynamics. For contrast, Marvin Minsky does the same. Weizenbaum tells us, with absolutely no data or alternative model, why AI is logically impossible, and moreover, immoral. Programming Languages: A round table is held between Wirth, Hoare, Dijkstra, Iverson, Perlis, and Jean Sammet, in order to keep them from killing each other. Machines and systems: Fred Brooks and Gordon Bell lead a field of experts over the visual cliff of hardware considerations. The list of authoritative lectures goes on and on. In addition, an inspiring introduction by Feigenbaum explains how important it is that flame superiority be maintained by the United States in the face of the recent challenges from Namibia and the Panama Canal zone. But there's more. Not only will you read famous flamers in abundance, but you will actually have the opportunity to "run with the pack".
The Famous Flamer's School has arranged to provide access, for all computer science track students, to the famous ARPANet where students will be able to actually participate in discussions of earthshaking current importance, along with the other brilliant young flamers using this nationwide resource. You'll read and write about whether keyboards should have a space bar across the whole bottom or split under the thumbs; whether or not Emacs is God, and which deity is the one true editor; whether the brain actually cools the body or not; whether the earth revolves around the sun or vice versa -- and much more. Your contributions will be whisked across the nation, faster than throwing a 2400-foot magtape across the room, into the minds of thousands of other electrolusers whose brain cells will merge with yours for the moment that they read your personal opinion of matters of true science! What importance! We believe that the program we've constructed is very special and will provide, for the motivated student, an atmosphere almost completely content-free in which his or her ideas can flow in vacuity. So, take the moment to indicate your name, address, age, and hat size by filling out the rear of this post and mailing it to: FAMOUS FLAMER'S SCHOOL c/o Locker number 6E Grand Central Station North New York, NY. Act now or forever hold your peace.
------------------------------ End of AIList Digest ******************** 3-Oct-83 09:53:18-PDT,15403;000000000001 Mail-From: LAWS created at 3-Oct-83 09:47:43 Date: Monday, October 3, 1983 9:33AM From: AIList Moderator Kenneth Laws Reply-to: AIList@SRI-AI US-Mail: SRI Int., 333 Ravenswood Ave., Menlo Park, CA 94025 Phone: (415) 859-6467 Subject: AIList Digest V1 #68 To: AIList@SRI-AI AIList Digest Monday, 3 Oct 1983 Volume 1 : Issue 68 Today's Topics: Humor - Famous Flamer's School Credit, Technology Transfer & Research Ownership, AI Reports - IRD & NASA, TV Coverage - Computer Chronicles, Seminars - Ullman, Karp, Wirth, Mason, Conferences - UTexas Symposium & IFIP Workshop ---------------------------------------------------------------------- Date: Mon 3 Oct 83 09:29:16-PDT From: Ken Laws Subject: Famous Flamer's School -- Credit The Famous Flamer's School was created by Jeff.Shrager@CMU-CS-A; my apologies for not crediting him in the original article. If you saved or distributed a copy, please add a note crediting Jeff. -- Ken Laws ------------------------------ Date: Thu 29 Sep 83 17:58:29-PDT From: David Rogers Subject: Alas, I must flame... [ I hate to flame, but here's an issue that really got to me...] From the call for papers for the "Artificial Intelligence and Machines": AUTHORS PLEASE NOTE: A Public Release/Sensitivity Approval is necessary. Authors from DOD, DOD contractors, and individuals whose work is government funded must have their papers reviewed for public release and more importantly sensitivity (i.e. an operations security review for sensitive unclassified material) by the security office of their sponsoring agency. How much AI work does *NOT* fall under one of the categories "Authors from DOD, DOD contractors, and individuals whose work is government funded" ? I read this to mean that essentially any government involvement with research now leaves one open to government "protection".
At issue here is not the government's duty to safeguard classified materials; it is the intent of the government to limit distribution of non-military basic research (alias "sensitive unclassified material"). This "we paid for it, it's OURS (and the Russians can't have it)" mentality seems the rule now. But isn't science supposed to be for the benefit of all mankind, and not just another economic bargaining chip? I cannot help but be chilled by this divorce of science from a higher moral outlook. Does it sound old-fashioned to believe that scientific thought is part of a common heritage, to be used to improve the lives of all? As far as I can see, if all countries in the world follow the lead of the US and USSR toward scientific protectionism, we scientists will have allowed science to abandon its primary role of learning about ourselves and become a mere intellectual commodity. David Rogers DRogers@SUMEX-AIM.ARPA ------------------------------ Date: Fri 30 Sep 83 10:09:08-PDT From: Ken Laws Subject: IRD Report [Reprinted from IEEE Computer, Sep. 1983, p. 116.] Rapid Growth Predicted for AI-Based System Expert systems are now moving out of the research laboratory and into the commercial marketplace, according to "Artificial Intelligence," a 167-page research report from International Resource Development. Revenue from all AI hardware, software, and services will amount to only $70 million this year but is expected to reach $8 billion in the next 10 years. Biomedical applications promise to be among the fastest growing uses of AI, reducing the time and cost of diagnosing illnesses and adding to the accuracy of diagnoses. AI-based systems can range from "electronic encyclopedias," which physicians can use as reference sources, to full-fledged "electronic consultants" capable of taking a patient through an extensive series of diagnostic tests and determining the patient's ailments with great precision.
"Two immediate results of better diagnostic procedures may be a reduction in the number of unnecessary surgical procedures performed on patients and a decrease in the average number of expensive tests performed on patients," predicts Dave Ledecky of the IRD research staff. He also notes that the AI technology may leave hospitals half-empty, since some operations turn out to be unnecessary. However, he expects no such dramatic result anytime soon, since widespread medical application of AI technology isn't expected for about five years. The IRD report also describes the activities of several new companies that are applying AI technology to medical systems. Helena Laboratories in Beaumont, Texas, is shipping a densitometer/analyzer, which includes a serum protein diagnostic program developed by Rutgers University using AI technology. Still in the development stage are the AI-based products of IntelliGenetics in Palo Alto, California, which are based on work conducted at Stanford University over the last 15 years. Some larger, more established companies are also investing in AI research and development. IBM is reported to have more than five separate programs underway, while Schlumberger, Ltd., is spending more than $5 million per year on AI research, much of which is centered on the use of AI in oil exploration. AI software may dominate the future computer industry, according to the report, with an increasing percentage of applications programming being performed in Lisp or other AI-based "natural" languages. Further details on the $1650 report are available from IRD, 30 High Street, Norwalk, CT 06851; (800) 243-5008, Telex: 64 3452. ------------------------------ Date: Fri 30 Sep 83 10:16:43-PDT From: Ken Laws Subject: NASA Report [Reprinted from IEEE Spectrum, Oct. 1983, p. 78] Overview Explains AI A technical memorandum from the National Aeronautics and Space Administration offers an overview of the core ingredients of artificial intelligence. 
The volume is the first in a series that is intended to cover both artificial intelligence and robotics for interested engineers and managers. The initial volume gives definitions and a short history entitled "The rise, fall, and rebirth of AI" and then lists applications, principal participants in current AI work, examples of the state of the art, and future directions. Future volumes in AI will cover application areas in more depth and will also cover basic topics such as search-oriented problem-solving and planning, knowledge representation, and computational logic. The report is available from the National Technical Information Service, Springfield, Va. 22161. Please ask for NASA Technical Memorandum Number 85836. ------------------------------ Date: Thu 29 Sep 83 20:13:09-PDT From: Ellie Engelmore Subject: TV documentary [Reprinted from the SU-SCORE bboard.] KCSM-TV Channel 60 is producing a series entitled "The Computer Chronicles". This is a series of 30-minute programs intended to be a serious look at the world of computers, a potential college-level teaching device, and a genuine historical document. The first episode in the series (with Don Parker discussing computer security) will be broadcast this evening...Thursday, September 29...9pm. The second portion of the series, to be broadcast 9 pm Thursday, October 6, will be on the subject of Artificial Intelligence (with Ed Feigenbaum). ------------------------------ Date: Thu 29 Sep 83 19:03:27-PDT From: Andrei Broder Subject: AFLB [Reprinted from the SU-SCORE bboard.] The "Algorithms for Lunch Bunch" (AFLB) is a weekly seminar in analysis of algorithms held by the Stanford Computer Science Department, every Thursday, at 12:30 p.m., in Margaret Jacks Hall, rm. 352. At the first meeting this year, (Thursday, October 6) Prof. Jeffrey D. Ullman, from Stanford, will talk on "A time-communication tradeoff" Abstract follows. Further information about the AFLB schedule is in the file [SCORE]aflb.bboard . 
If you want to get abstracts of the future talks, please send me a message to put you on the AFLB mailing list. If you just want to know the title of the next talk and the name of the speaker, look at the weekly Stanford CSD schedule that is (or should be) sent to every bboard. ------------------------ 10/6/83 - Prof. Jeffrey D. Ullman (Stanford): "A time-communication tradeoff" We examine how multiple processors could share the computation of a collection of values whose dependencies are in the form of a grid, e.g., the estimation of nth derivatives. Two figures of merit are the time t the shared computation takes and the amount of communication c, i.e., the number of values that are either inputs or are computed by one processor and used by another. We prove that no matter how we share the responsibility for computing an n by n grid, the law ct = OMEGA(n^3) must hold. ******** Time and place: Oct. 6, 12:30 pm in MJ352 (Bldg. 460) ******* ------------------------------ Date: Thu 29 Sep 83 09:33:24-CDT From: CS.GLORIA@UTEXAS-20.ARPA Subject: Karp Colloquium, Oct. 13, 1983 [Reprinted from the UTexas-20 bboard.] Richard M. Karp, University of California at Berkeley, will present a talk entitled, "A Fast Parallel Algorithm for the Maximal Independent Set Problem" on Thursday, October 13, 1983 at 3:30 p.m. in Painter Hall 4.42. Coffee at 3 p.m. in PAI 3.24. Abstract: One approach to understanding the limits of parallel computation is to search for problems for which the best parallel algorithm is not much faster than the best sequential algorithm. We survey what is known about this phenomenon and show that--contrary to a popular conjecture--the problem of finding a maximal independent set of vertices in a graph is highly amenable to speed-up through parallel computation. We close by suggesting some new candidates for non-parallelizable problems. ------------------------------ Date: Fri 30 Sep 83 21:39:45-PDT From: Doug Lenat Subject: N.
Wirth, Colloquium 10/4/83 [Reprinted from the SU-SCORE bboard.] CS COLLOQUIUM: Niklaus Wirth will be giving the opening colloquium of this quarter on Tuesday (Oct. 4), at 4:15 in Terman Auditorium. His talk is titled "Reminiscences and Reflections". Although there is no official abstract, in discussing this talk with him I learned that Reminiscences refer to his days here at Stanford one generation ago, and Reflections are on the current state of both software and hardware, including his views on what's particularly good and bad in the current research in each area. I am looking forward to this talk, and invite all members of our department, and all interested colleagues, to attend. Professor Wirth's talk will be preceded by refreshments served in the 3rd floor lounge (in Margaret Jacks Hall) at 3:45. Those wishing to schedule an appointment with Professor Wirth should contact ELYSE@SCORE. ------------------------------ Date: 30 Sep 83 1049 PDT From: Carolyn Talcott Subject: SEMINAR IN LOGIC AND FOUNDATIONS [Reprinted from the SU-SCORE bboard.] Organizational and First Meeting Time: Wednesday, Oct. 5, 4:15-5:30 PM Place: Mathematics Dept. Faculty Lounge, 383N Stanford Speaker: Ian Mason Title: Undecidability of the metatheory of the propositional calculus. Before the talk there will be a discussion of plans for the seminar this fall. S. Feferman [PS - If you read this notice on a bboard and would like to be on the distribution list send me a message. - CLT@SU-AI] ------------------------------ Date: Thu 29 Sep 83 14:24:36-CDT From: Clive Dawson Subject: Schedule for C.S. Dept. Centennial Symposium [Reprinted from the UTexas-20 bboard.] COMPUTING AND THE INFORMATION AGE October 20 & 21, 1983 Joe C. Thompson Conference Center Thursday, Oct. 20 ----------------- 8:30 Welcoming address - A. G. Dale (UT Austin) G. J. Fonken, VP for Acad. Affairs and Research 9:00 Justin Rattner (Intel) "Directions in VLSI Architecture and Technology" 10:00 J. C. 
Browne (UT Austin) 10:15 Coffee Break 10:45 Mischa Schwartz (Columbia) "Computer Communications Networks: Past, Present and Future" 11:45 Simon S. Lam (UT Austin) 12:00 Lunch 2:00 Herb Schwetman (Purdue) "Computer Performance: Evaluation, Improvement, and Prediction" 3:00 K. Mani Chandy (UT Austin) 3:15 Coffee Break 3:45 William Wulf (Tartan Labs) "The Evolution of Programming Languages" 4:45 Don Good (UT Austin) Friday, October 21 ------------------ 8:30 Raj Reddy (CMU) "Supercomputers for AI" 9:30 Woody Bledsoe (UT Austin) 9:45 Coffee Break 10:15 John McCarthy (Stanford) "Some Expert Systems Require Common Sense" 11:15 Robert S. Boyer and J Strother Moore (UT Austin) 11:30 Lunch 1:30 Jeff Ullman (Stanford) "A Brief History of Achievements in Theoretical Computer Science" 2:30 James Bitner (UT Austin) 2:45 Coffee Break 3:15 Cleve Moler (U. of New Mexico) "Mathematical Software -- The First of the Computer Sciences" 4:15 Alan Cline (UT Austin) 4:30 Summary - K. Mani Chandy, Chairman, Dept. of Computer Sciences ------------------------------ Date: Sunday, 2 October 1983 17:49:13 EDT From: Mario.Barbacci@CMU-CS-SPICE Subject: Call For Participation -- IFIP Workshop CALL FOR PARTICIPATION IFIP Workshop on Hardware Supported Implementation of Concurrent Languages in Distributed Systems March 26-28, 1984, Bristol, U.K. TOPICS: - the impact of distributed computing languages and compilers on the architecture of distributed systems. - operating systems; centralized/decentralized control, process communications and synchronization, security - hardware design and interconnections - hardware/software interrelation and trade offs - modelling, measurements, and performance Participation is by INVITATION ONLY, if you are interested in attending this workshop write to the workshop chairman and include an abstract (1000 words approx.) of your proposed contribution. Deadline for Abstracts: November 15, 1983 Workshop Chairman: Professor G.L. 
Reijns Chairman, IFIP Working Group 10.3 Delft University of Technology P.O. Box 5031 2600 GA Delft The Netherlands ------------------------------ End of AIList Digest ******************** 3-Oct-83 09:57:46-PDT,18430;000000000001 Mail-From: LAWS created at 3-Oct-83 09:54:52 Date: Monday, October 3, 1983 9:50AM From: AIList Moderator Kenneth Laws Reply-to: AIList@SRI-AI US-Mail: SRI Int., 333 Ravenswood Ave., Menlo Park, CA 94025 Phone: (415) 859-6467 Subject: AIList Digest V1 #69 To: AIList@SRI-AI AIList Digest Monday, 3 Oct 1983 Volume 1 : Issue 69 Today's Topics: Rational Psychology - Examples, Organization - Reflexive Reasoning & Consciousness & Learning & Parallelism ---------------------------------------------------------------------- Date: Thu, 29 Sep 83 18:29:39 EDT From: "John B. Black" Subject: "Rational Psychology" Recently on this list, Pereira held up, as a model for us all, Doyle's "Rational Psychology" article in AI Magazine. Actually, I think what Pereira is really requesting is a reduction of overblown claims and assertions with no justification (e.g., "solutions" to the natural language problem). However, since he raised the "rational psychology" issue, I thought I would comment on it. I too read Doyle's article with interest (although it seemed essentially the same as Don Norman's numerous calls for a theoretical psychology in the early 1970s), but (like the editor of this list) I was wondering what the referents were of the vague descriptions of "rational psychology." However, Doyle does give some examples of what he means: mathematical logic and decision theory, mathematical linguistics, and mathematical theories of perception. Unfortunately, this list is rather disappointing because -- with the exception of the mathematical theories of perception -- they have all proved to be misleading when actually applied to people's behavior.
Having a theoretical (or "rational" -- terrible name with all the wrong connotations) psychology is certainly desirable, but it does have to make some contact with the field it is a theory of. One of the problems here is that the "calculus" of psychology has yet to be invented, so we don't have the tools we need for the "Newtonian mechanics" of psychology. The latest mathematical candidate was catastrophe theory, but it turned out to be a catastrophe when applied to human behavior. Perhaps Pereira and Doyle have a "calculus" to offer. Lacking such an appropriate mathematics, however, does not stop a theoretical psychology from existing. In fact, I offer three recent examples of what a theoretical psychology ought to be doing at this time: Tversky, A. Features of similarity. PSYCHOLOGICAL REVIEW, 1977, 327-352. Schank, R.C. DYNAMIC MEMORY. Cambridge University Press, 1982. Anderson, J.R. THE ARCHITECTURE OF COGNITION. Harvard University Press, 1983. ------------------------------ Date: Thu 29 Sep 83 19:03:40-PDT From: PEREIRA@SRI-AI.ARPA Subject: Self-description, multiple levels, etc. For a brilliant if tentative attack on the questions noted by Prem Devanbu, see Brian Smith's thesis "Reflection and Semantics in a Procedural Language," MIT/LCS/TR-272. Fernando Pereira ------------------------------ Date: 27 Sep 83 22:25:33-PDT (Tue) From: pur-ee!uiucdcs!marcel @ Ucb-Vax Subject: reflexive reasoning ? - (nf) Article-I.D.: uiucdcs.3004 I believe the pursuit of "consciousness" to be complicated by the difficulty of defining what we mean by it (to state the obvious). I prefer to think in less "spiritual" terms, say starting with the ability of the human memory to retain impressions for varying periods of time. For example, students cramming for an exam can remember long lists of things for a couple of hours -- just long enough -- and forget them by the end of the same day. Some thoughts are almost instantaneously lost, others last a lifetime.
Here's my suggestion: let's start thinking in terms of self-observation, i.e. the construction of models to explain the traces that are left behind by things we have already thought (and felt?). These models will be models of what goes on in the thought processes, can be incorrect and incomplete (like any other model), and even reflexive (the thoughts dedicated to this analysis leave their own traces, and are therefore subject to modelling, creating notions of self-awareness). To give a concrete (if standard) example: it's quite reasonable for someone to say to us, "I didn't know that." Or again, "Oh, I just said it, what was his name again ... How can I be so forgetful!" This leads us into an interesting "problem": the fading of human memory with time. I would not be surprised if this was actually desirable, and had to be emulated by computer. After all, if you're going to retain all those traces of where a thought process has gone; traces of the analysis of those traces, etc; then memory would fill up very quickly. I have been thinking in this direction for some time now, and am working on a programming language which operates on several of the principles stated above. At present the language is capable of responding dynamically to any changes in problem state produced by other parts of the program, and rules can even respond to changes induced by themselves. Well, that's the start; the process of model construction seems to me to be by far the harder part of the task. It becomes especially interesting when you think about modelling what look like "levels" of self-awareness, but could actually be manifestations of just one mechanism: traces of some work, which are analyzed, thus leaving traces of self-analysis; which are analyzed ... How are we to decide that the traces being analyzed are somehow different from the traces of the analysis? Even "self-awareness" (as opposed to full-blown "consciousness") will be difficult to understand.
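[Editorial sketch: the trace mechanism described above can be illustrated as a tiny forward-chaining production system in which every rule firing deposits a "trace" fact that other rules can in turn match on -- including rules whose whole job is to analyze traces, and whose firings leave traces of their own. The names and representation below are purely illustrative assumptions; this is not METALOG's actual syntax.]

```python
# Illustrative sketch only: a minimal forward-chaining rule system in
# which each firing deposits a ("trace", rule-name) fact, so that other
# rules can match on -- and thus analyze -- past firings.
# Names and representation are hypothetical, not METALOG's syntax.

def run(facts, rules, max_steps=20):
    facts = set(facts)
    for _ in range(max_steps):
        fired = False
        for name, condition, action in rules:
            if condition(facts):
                new = action(facts) - facts
                if new:
                    facts |= new
                    facts.add(("trace", name))  # the firing leaves a trace
                    fired = True
        if not fired:  # quiescence: no rule produced anything new
            break
    return facts

# A "work" rule, and an "observe" rule that reacts to work's trace:
rules = [
    ("work",    lambda f: "start" in f,
                lambda f: {"done"}),
    ("observe", lambda f: ("trace", "work") in f,
                lambda f: {("note", "work happened")}),
]
result = run({"start"}, rules)
# result contains "done", ("trace", "work"), ("note", "work happened"),
# and ("trace", "observe") -- a trace of the trace-analysis itself.
```

Note that the regress is not infinite: once a rule's action adds nothing new, the system quiesces, which is one concrete sense in which the "levels" of self-analysis can bottom out in a single mechanism.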
However, at this point I am convinced that we are not dealing with a potential for infinite regress, but with a fairly simple mechanism whose results are hard to interpret. If I am right, we may have some thinking to do about subject-object distinctions. In case you're interested in my programming language, look for some papers due to appear shortly: Logic-Programming Production Systems with METALOG. Software Practice and Experience, to appear shortly. METALOG: a Language for Knowledge Representation and Manipulation. Conf on AI (April '83). Of course, I don't say that I'm thinking about "self-awareness" as a long-term goal (my co-author isn't)! If/when such a goal becomes acceptable to the AI community it will probably be called something else. Doesn't "reflexive reasoning" sound more scientific? Marcel Schoppers, Dept of Comp Sci, U of Illinois @ Urbana-Champaign uiucdcs!marcel ------------------------------ Date: 27 Sep 83 19:24:19-PDT (Tue) From: decvax!genrad!security!linus!philabs!cmcl2!floyd!vax135!ariel!hou5f!hou5e!hou5d!mat@Ucb-Vax Subject: Re: the Halting problem. Article-I.D.: hou5d.674 I may be naive, but it seems to me that any attempt to produce a system that will exhibit consciousness-like behaviour will require emotions and the underlying base that they need and supply. Reasoning did not evolve independently of emotions; human reason does not, in my opinion, exist independently of them. Any comments? I don't recall seeing this topic discussed. Has it been? If not, is it about time to kick it around? Mark Terribile hou5d!mat ------------------------------ Date: 28 Sep 83 12:44:39-PDT (Wed) From: ihnp4!drux3!drufl!samir @ Ucb-Vax Subject: Re: the Halting problem. Article-I.D.: drufl.674 I agree with Mark. An interesting book to read regarding consciousness is "The Origin of Consciousness in the Breakdown of the Bicameral Mind" by Julian Jaynes.
Although I may not agree fully with his thesis, it did get me thinking and questioning about the usual ideas regarding consciousness. An analogy regarding consciousness: "emotions are like the roots of a plant, while consciousness is the fruit". Samir Shah AT&T Information Systems, Denver. drufl!samir ------------------------------ Date: 30 Sep 83 13:42:32 EDT From: BIESEL@RUTGERS.ARPA Subject: Recursion of representations. Some of the more recent messages have questioned the possibility of producing programs which can "understand" and "create" human discourse, because this kind of "understanding" seems to be based upon an infinite kind of recursion. Stated very simply, the question is "how can the human mind understand itself, given that it is finite in capacity?", which implies that humans cannot create a machine equivalent of a human mind, since (one assumes) understanding is required before construction becomes possible. There are two rather simple objections to this notion: 1) Humans create minds every day, without understanding anything about it. Just some automatic biochemical machinery, some time, and exposure to other minds does the trick for human infants. 2) John von Neumann, and more recently E. F. Codd, demonstrated in a very general way the existence of universal constructors in cellular automata. These are configurations in cellular space which are able to construct any configuration, including copies of themselves, in finite time (for finite configurations). No infinite recursion is involved in either case, nor is "full" understanding required. I suspect that at some point in the game we will have learned enough about what works (in a primarily empirical sense) to produce machine intelligence. In the process we will no doubt learn a lot about mind in general, and our own minds in particular, but we will still not have a complete understanding of either. 
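Von Neumann's and Codd's constructions are far too large to reproduce here, but the machinery they rest on, a local rule applied synchronously everywhere, is simple. As a purely illustrative sketch (using Conway's Life, a later and better-known cellular automaton, rather than Codd's 8-state space), here is a complete update rule in a few lines of Python:

```python
# One synchronous update of Conway's Life: a cellular automaton in
# which each cell obeys the same local rule.  This is illustrative of
# the machinery only; it is NOT Codd's automaton, and a universal
# constructor would be an enormous configuration in such a space.
from collections import Counter

def step(live):
    """Apply the Life rule to a set of live (x, y) cells."""
    counts = Counter(
        (x + dx, y + dy)
        for (x, y) in live
        for dx in (-1, 0, 1)
        for dy in (-1, 0, 1)
        if (dx, dy) != (0, 0))
    # birth on 3 neighbours, survival on 2 or 3
    return {cell for cell, n in counts.items()
            if n == 3 or (n == 2 and cell in live)}

blinker = {(0, 1), (1, 1), (2, 1)}    # a horizontal bar of three cells
print(sorted(step(blinker)))           # [(1, 0), (1, 1), (1, 2)]
```

The blinker flips between a horizontal and a vertical bar forever: finite rules, finite configurations, no infinite regress, which is exactly the point of objection (2) above.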
People will continue to produce AI programs; they will gradually get better at various tasks; others will combine various approaches and/or programs to create systems that play chess and can talk about the geography of South America; occasionally someone will come up with an insight and a better way to solve a sub-problem ("subjunctive reference shift in frame-demon instantiation shown to be optimal for linearization of semantic analysis of noun phrases" IJCAI 1993); lay persons will come to take machine intelligence for granted; AI people will keep searching for a better definition of intelligence; nobody will really believe that machines have that indefinable something (call it soul, or whatever) that is essential for a "real" mind. Pete Biesel@Rutgers.arpa ------------------------------ Date: 29 Sep 83 14:14:29 EDT From: SOO@RUTGERS.ARPA Subject: Top-Down? Bottom-Up? [Reprinted from the Rutgers bboard.] I happened to read a paper by Michael A. Arbib about brain theory. Its first section, "Brain Theory: 'Bottom-up' and 'Top-down'", I think will shed some light on our issue of the top-down and bottom-up approaches in the machine learning seminar. I would like to quote several remarks from the brain theorist's viewpoint to share with those interested: " I want to suggest that brain theory should confront the 'bottom-up' analyses of neural modelling not only with biological control theory but also with the 'top-down' analyses of artificial intelligence and cognitive psychology. In bottom-up analyses, we take components of known function, and explore ways of putting them together to synthesize more and more complex systems. In top-down analyses, we start from some complex functional behavior that interests us, and try to determine what are natural subsystems into which we can decompose a system that performs in the specified way. I would argue that progress in brain theory will depend on the cyclic interaction of these two methodologies. ..." 
" The top-down approach complement bottom-up studies, for one cannot simply wait until one knows all the neurons are and how they are connected to then simulate the complete system. ..." I believe that the similar philosophy applies to the machine learning study too. For those interested, the paper can be found in COINS techical report 81-31 by M. A. Arbib "A View of Brain Theory" Von-Wun, ------------------------------ Date: Fri, 30 Sep 83 14:45:55 PDT From: Rik Verstraete Subject: Parallelism and Physiology I would like to comment on your message that was printed in AIList Digest V1#63, and I hope you don't mind if I send a copy to the discussion list "self-organization" as well. Date: 23 Sep 1983 0043-PDT From: FC01@USC-ECL Subject: Parallelism I thought I might point out that virtually no machine built in the last 20 years is actually lacking in parallelism. In reality, just as the brain has many neurons firing at any given time, computers have many transistors switching at any given time. Just as the cerebellum is able to maintain balance without the higher brain functions in the cerebrum explicitly controlling the IO, most current computers have IO controllers capable of handling IO while the CPU does other things. The issue here is granularity, as discussed in general terms by E. Harth ("On the Spontaneous Emergence of Neuronal Schemata," pp. 286-294 in "Competition and Cooperation in Neural Nets," S. Amari and M.A. Arbib (eds), Springer-Verlag, 1982, Lecture Notes in Biomathematics # 45). I certainly recommend his paper. I quote: One distinguishing characteristic of the nervous system is thus the virtually continuous range of scales of tightly intermeshed mechanisms reaching from the macroscopic to the molecular level and beyond. There are no meaningless gaps of just matter. I think Harth has a point, and applying his ideas to the issue of parallel versus sequential clarifies some aspects. The human brain seems to be parallel at ALL levels. 
Not only is a large number of neurons firing at the same time, but also groups of neurons, groups of groups of neurons, etc. are active in parallel at any time. The whole neural network is a totally parallel structure, at all levels. You pointed out (correctly) that in modern electronic computers a large number of gates are "working" in parallel on a tiny piece of the problem, and that also I/O and CPU run in parallel (some systems even have more than one CPU). However, the CPU itself is a finite state machine, meaning it operates as a time-sequence of small steps. This level is inherently sequential. It therefore looks like there's a discontinuity between the gate level and the CPU/IO level. I would even extend this idea to machine learning, although I'm largely speculating now. I have the impression that brains not only WORK in parallel at all levels of granularity, but also LEARN in that way. Some computers have implemented a form of learning, but it is almost exclusively at a very high level (most current AI work on learning is at this level), or only at a very low level (cf. the Perceptron). A spectrum of adaptation is needed. Maybe the distinction between the words learning and self-organization is only a matter of granularity too. (??) Just as people have faster short term memory than long term memory but less of it, computers have faster short term memory than long term memory and use less of it. These are all results of cost/benefit tradeoffs for each implementation, just as I presume our brains and bodies are. I'm sure most people will agree that brains do not have separate memory neurons and processing neurons or modules (or even groups of neurons). Memory and processing are completely integrated in a human brain. Certainly, there are not physically two types of memories, LTM and STM. The concept of LTM/STM is only a paradigm (no doubt a very useful one), but when it comes to implementing the concept, there is a large discrepancy between brains and machines. 
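The point that a CPU is a finite state machine can be made concrete with a fetch-execute loop. The three-instruction machine below is invented purely for illustration (it is not any real instruction set): however parallel the gates implementing each step may be, the visible behaviour is exactly one state transition per step.

```python
# A toy fetch-execute loop: the machine's state is (pc, acc, memory),
# and each pass of the loop is one state transition.  This is why the
# CPU level is inherently sequential, whatever the gates beneath do.
# The instruction set (LOAD/ADD/STORE) is a hypothetical illustration.

def run(program, memory):
    pc = 0       # program counter
    acc = 0      # accumulator
    while pc < len(program):
        op, arg = program[pc]
        if op == "LOAD":
            acc = memory[arg]
        elif op == "ADD":
            acc += memory[arg]
        elif op == "STORE":
            memory[arg] = acc
        pc += 1  # exactly one instruction completes per transition
    return memory

mem = {"a": 2, "b": 3, "out": 0}
run([("LOAD", "a"), ("ADD", "b"), ("STORE", "out")], mem)
print(mem["out"])   # 5
```

No amount of gate-level parallelism inside "ADD" changes the fact that "STORE" cannot begin until "ADD" has finished; that is the discontinuity between the gate level and the CPU level noted above.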
Don't be so fast to think that real computer designers are ignorant of physiology. Indeed, a lot of people I know in Computer Science do have some idea of physiology. (I am a CS major with some background in neurophysiology.) Furthermore, much of early CS emerged from neurophysiology, and was an explicit attempt to build artificial brains (at a hardware/gate level). However, although "real computer designers" may not be ignorant of physiology, it doesn't mean that they actually manage to implement all the concepts they know. We still have a long way to go before we have artificial brains... The trend towards parallelism now is more like the human social system of having a company work on a problem. Many brains, each talking to each other when they have questions or results, each working on different aspects of a problem. Some people have breakdowns, but the organization keeps going. Eventually it comes up with a product; it may not really solve the problem posed at the beginning, but it may have solved a related problem or found a better problem to solve. Again, working in parallel at this level doesn't mean everything is parallel. Another copyrighted excerpt from my not-yet-finished book on computer engineering, modified for the network bboards. I am ever yours, Fred. All comments welcome. Rik Verstraete PS: It may sound like I am convinced that parallelism is the only way to go. Parallelism is indeed very important, but still, I believe sequential processing plays an important role too, even in brains. But that's a different issue... 
------------------------------ End of AIList Digest ******************** 3-Oct-83 17:44:23-PDT Mail-From: LAWS created at 3-Oct-83 17:43:28 Date: Monday, October 3, 1983 5:38PM From: AIList Moderator Kenneth Laws Reply-to: AIList@SRI-AI US-Mail: SRI Int., 333 Ravenswood Ave., Menlo Park, CA 94025 Phone: (415) 859-6467 Subject: AIList Digest V1 #70 To: AIList@SRI-AI AIList Digest Tuesday, 4 Oct 1983 Volume 1 : Issue 70 Today's Topics: Technology Transfer & Research Ownership - Clarification, AI at Edinburgh - Description ---------------------------------------------------------------------- Date: Mon 3 Oct 83 11:55:41-PDT From: David Rogers Subject: recent flame I would like to clarify my recent comments on the disclaimer published with the conference announcement for the "Intelligent Systems and Machines" conference to be given at Oakland University. I did not mean to suggest that the organizers of this particular conference are the targets of my criticism; indeed, I congratulate them for informing potential attendees of their obligations under the law. I sincerely apologize for not making this obvious in my original note. I also realize that most conferences will have to deal with this issue in the future, and meant my message not as a "call to action", but rather as a "call to discussion" of the proper role of government in AI and science in general. I believe that we should follow these rules, but should also participate in informed discussion of their long-range effect and direction. Apologies and regards, David Rogers DRogers@SUMEX-AIM.ARPA ------------------------------ Date: Friday, 30-Sep-83 14:17:58-BST From: BUNDY HPS (on ERCC DEC-10) Reply-to: bundy@rutgers.arpa Subject: Does Edinburgh AI exist? A while back someone in your digest asked whether the AI dept at Edinburgh still exists. The short answer is yes, it flourishes. The long answer is contained in the departmental description that follows. 
Alan Bundy ------------------------------ Date: Friday, 30-Sep-83 14:20:00-BST From: BUNDY HPS (on ERCC DEC-10) Reply-to: bundy@rutgers.arpa Subject: Edinburgh AI Dept - A Description THE DEPARTMENT OF ARTIFICIAL INTELLIGENCE AT EDINBURGH UNIVERSITY Artificial Intelligence was recognised as a separate discipline by Edinburgh University in 1966. The Department in its present form was created in 1974. During its existence it has steadily built up a programme of undergraduate and post-graduate teaching and engaged in a vigorous research programme. As the only Department of Artificial Intelligence in any university, and as an organisation which has made a major contribution to the development of the subject, it is poised to play a unique role in the advance of Information Technology which is seen to be a national necessity. The Department collaborates closely with other departments within the University in two distinct groupings. Departments concerned with Cognitive Science, namely A.I., Linguistics, Philosophy and Psychology all participate in the School of Epistemics, which dates from the early 70's. A new development is an active involvement with Computer Science and Electrical Engineering. The 3 departments form the basis of the School of Information Technology. A joint MSc in Information Technology began in 1983. A.I. are involved in collaborative activities with other institutions which are significant in that they involve the transfer of people, ideas and software. In particular this involves MIT (robotics), Stanford (natural language), Carnegie-Mellon (the PERQ machine) and Grenoble (robotics). Relationships with industry are progressing. As well as a number of development contracts, A.I. have recently had a teaching post funded by the software house Systems Designers Ltd. 
There is, however, a natural limit to the extent to which a University Department can provide a service to industry: consequently a proposal to create an Artificial Intelligence Applications Institute has been put forward and is at an advanced stage of planning. This will operate as a revenue earning laboratory, performing a technology transfer function on the model of organisations like the Stanford Research Institute or Bolt Beranek and Newman. Research in A.I. A.I. is a new subject so that there is a very close relationship between teaching at all levels, and research. Artificial Intelligence is about making machines behave in ways which exhibit some of the characteristics of intelligence, and about how to integrate such capabilities into larger coherent systems. The vehicle for such studies has been the digital computer, chosen for its flexibility. A.I. Languages and Systems. The development of high level programming languages has been crucial to all aspects of computing because of the consequent easing of the task of communicating with these machines. Artificial Intelligence has given birth to a distinctive series of languages which satisfy different design constraints from those developed by Computer Scientists, whose primary concern has been to develop languages in which to write reliable and efficient programming systems to perform standard computing tasks. Languages developed in the Artificial Intelligence field have been intended to allow people readily to try out ideas about how a particular cognitive process can be mechanised. Consequently they have provided symbolic computation as well as numeric, and have allowed program code and data to be equally manipulable. They are also highly interactive, and often integrated with a sophisticated text editor, so that the iteration time for trying out a new idea can be rapid. Edinburgh has made a substantial contribution to A.I. 
programming languages (with significant cross fertilisation to the Computer Science world) and will continue to do so. POP-2 was designed and developed in the A.I. Department by Popplestone and Burstall. The development of Prolog has been more complex. Kowalski first formulated the crucial idea of predicate logic as a programming language during his period in the A.I. Department. Prolog itself was designed and first implemented in Marseille, as a result of Kowalski's interaction with a research group there. This was followed by a re-implementation at Edinburgh, which demonstrated its potential as a practical tool. To date the A.I. Department have supplied implementations of A.I. languages to over 200 laboratories around the world, and are involved in an active programme of Prolog systems development. The current development in languages is being undertaken by a group supported by the SERC, led by Robert Rae, and supervised by Dr Howe. The concern of the group is to provide language support for A.I. research nationwide, and to develop A.I. software for a single user machine, the ICL PERQ. The major goal of this project is to provide the superior symbolic programming capability of Prolog, in a user environment of the quality to be found in modern personal computers with improved interactive capabilities. Mathematical Reasoning. If Artificial Intelligence is about mechanising reasoning, it has a close relationship with logic which is about formalising mathematical reasoning, and with the work of those philosophers who are concerned with formalising every-day reasoning. The development of Mathematical Logic during the 20th century has provided a part of the theoretical basis for A.I. Logic provides a rigorous specification of what may in principle be deduced - it says little about what may usefully be deduced. And while it may superficially appear straightforward to render ordinary language into logic, on closer examination it can be seen to be anything but easy. 
Nevertheless, logic has played a central role in the development of A.I. in Edinburgh and elsewhere. An early attempt to provide some control over the direction of deduction was the resolution principle, which introduced a sort of matching procedure called unification between parts of the axioms and parts of a theorem to be proved. While this principle was inadequate as a means of guiding a machine in the proof of significant theorems, it survives in Prolog, whose equivalent of procedure call is a restricted form of resolution. A.I. practitioners still regard the automation of mathematical reasoning to be a crucial area in A.I., but have moved from earlier attempts to find uniform procedures for an efficient search of the space of possible deductions to the creation of systems which embody expert knowledge about specific domains. For example, if such a system is trying to solve a (non linear) equation, it may adopt a strategy of using the axioms of algebra to bring two instances of the unknown closer together with the "intention" of getting them to coalesce. Work in mathematical reasoning is under the direction of Dr Bundy. Robotics. The Department has always had a lively interest in robotics, in particular in the use of robots for assembly. This includes the use of vision and force sensing, and the design of languages for programming assembly robots. Because of the potential usefulness of fast moving robots, the Department has undertaken a study of their dynamic behaviour, design and control. The work of the robot group is directed by Mr Popplestone. A robot command language RAPT is under development: this is intended to make it easy for non-computer experts to program an assembly robot. The idea is that the assembly task should be programmed in terms of the job that is to be done and how the objects are to be fitted together, rather than in terms of how the manipulator should be moved. 
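The unification matching mentioned above, in the discussion of resolution, can be sketched in a few lines. This is a textbook-style toy, not the Edinburgh Prolog implementation: it represents variables as strings beginning with '?', compound terms as tuples, and it omits the occurs-check and full substitution chasing that a real Prolog needs.

```python
# A sketch of unification: finding a substitution for variables that
# makes two terms identical, or reporting that none exists.  Variables
# are '?'-prefixed strings; compound terms are tuples.  Simplified for
# illustration (no occurs-check, single-level dereferencing).

def unify(a, b, subst=None):
    if subst is None:
        subst = {}
    # look up already-bound variables
    a = subst.get(a, a) if isinstance(a, str) else a
    b = subst.get(b, b) if isinstance(b, str) else b
    if a == b:
        return subst
    if isinstance(a, str) and a.startswith("?"):
        return {**subst, a: b}
    if isinstance(b, str) and b.startswith("?"):
        return {**subst, b: a}
    if isinstance(a, tuple) and isinstance(b, tuple) and len(a) == len(b):
        for x, y in zip(a, b):
            subst = unify(x, y, subst)
            if subst is None:
                return None
        return subst
    return None   # clash: no unifier exists

# father(john, ?X) unifies with father(?Y, mary) via ?Y=john, ?X=mary
print(unify(("father", "john", "?X"), ("father", "?Y", "mary")))
```

In Prolog, this matching of a goal against a clause head is precisely the "restricted form of resolution" that serves as procedure call.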
This SERC funded work is steered by a Robot Language Working Party which consists of industrialists and academics; the recently formed Tripartite Study Group on Robot Languages extends the interest to France and Germany. An intelligent robot needs to have an internal representation of its world which is sufficiently accurate to allow it to predict the results of planned actions. This means that, among other things, it needs a good representation of the shapes of bodies. While conventional shape modelling techniques permit a hypothetical world to be represented in a computer they are not ideal for robot applications, and the aim at Edinburgh is to combine techniques of shape modelling with techniques used in A.I. so that the advantages of both may be used. This will include the ability to deal effectively with uncertainty. Recently, in collaboration with GEC, the robotics group have begun to consider how the techniques of spatial inference which have been developed can be extended into the area of mechanical design, based on the observation that the essence of any design is the relationship between part features, rather than the specific quantitative details. A proposal is being pursued for a demonstrator project to produce a small scale, but highly integrated "Design and Make" system on these lines. Work on robot dynamics, also funded by the SERC, has resulted in the development of highly efficient algorithms for simulating standard serial robots, and in a novel representation of spatial quantities, which greatly simplifies the mathematics. Vision and Remote Sensing. The interpretation of data derived from sensors depends on expectations about the structure of the world which may be of a general nature, for example that continuous surfaces occupy much of the scene, or specific. In manufacture the prior expectations will be highly specific: one will know what objects are likely to be present and how they are likely to be related to each other. 
One vision project in the A.I. Department is taking advantage of this in integrating vision with the RAPT development in robotics - the prior expectations are expressed by defining body geometry in RAPT, and by defining the expected inter-body relationships in the same medium. A robot operating in a natural environment will have much less specific expectations, and the A.I. Department collaborate with Heriot-Watt University to study the sonar based control of a submersible. This involves building a world representation by integrating stable echo patterns, which are interpreted as objects. Natural Language. A group working in the Department of A.I. and related departments in the School of Epistemics is studying the development of computational models of language production, the process whereby communicative intent is transformed into speech. The most difficult problems to be faced when pursuing this goal cover the fundamental issues of computation: structure and process. In the domain of linguistic modelling, these are the questions of representation of linguistic and real world knowledge, and the understanding of the planning process which underlies speaking. Many sorts of knowledge are employed in speaking - linguistic knowledge of how words sound, of how to order the parts of a sentence to communicate who did what to whom, of the meaning of words and phrases, and common sense knowledge of the world. Representing all of these is a prerequisite to using them in a model of language production. On the other hand, planning provides the basis for approaching the issue of organizing and controlling the production process, for the mind seems to produce utterances as the synthetic, simultaneous resolution of numerous partially conflicting goals - communicative goals, social goals, purely linguistic goals - all variously determined and related. 
The potential for dramatic change in the study of human language which is made possible by this injection of dynamic concerns into what has heretofore been an essentially static enterprise is vast, and the A.I. Department sees its work as attempting to realise some of that potential. The study of natural language processing in the department is under the direction of Dr Thompson. Planning Systems. General purpose planning systems for automatically producing plans of action for execution by robots have been a long standing theme of A.I. research. The A.I. Department at Edinburgh had a very active programme of planning research in the mid 1970s and was one of the leading international centres in this area. The Edinburgh planners were applied to the generation of project plans for large industrial activities (such as electricity turbine overhaul procedures). These planners have continued to provide an important source of ideas for later research and development in the field. A prototype planner in use at NASA's Jet Propulsion Laboratory which can schedule the activities of a Voyager-type planetary probe is based on Edinburgh work. New work on planning has recently begun in the Department and is mainly concerned with the interrelationships between planning, plan execution and monitoring. The commercial exploitation of the techniques is also being discussed. The Department's planning work is under the direction of Dr Tate. Knowledge Based and Expert Systems. Much of the A.I. Department's work uses techniques often referred to as Intelligent Knowledge Based Systems (IKBS) - this includes robotics, natural language, planning and other activities. However, researchers in the Department of A.I. are also directly concerned with the creation of Expert Systems in Ecological Modelling, User Aids for Operating Systems, Sonar Data Interpretation, etc. Computers in Education. 
The Department has pioneered in this country an approach to the use of computers in schools in which children can engage in an active and creative interaction with the computer without needing to acquire abstract concepts and manipulative skills for which they are not yet ready. The vehicle for this work has been the LOGO language, which has a simple syntax making few demands on the typing skills of children. While LOGO is in fact equivalent to a substantial subset of LISP, a child can get moving with a very small subset of the language, and one which makes the actions of the computer immediately concrete in the form of the movements of a "turtle" which can either be steered around a VDU or in the form of a small mobile robot. This approach has a significant value in Special Education. For example, in one study an autistic boy found he was able to communicate with a "turtle", which apparently acted as a metaphor for communicating with people, resulting in his being able to use language spontaneously for the first time. In another study involving mildly mentally and physically handicapped youngsters a touch screen device invoked procedures for manipulating pictorial materials designed to teach word attack skills to non-readers. More recent projects include a diagnostic spelling program for dyslexic children, and a suite of programs which deaf children can use to manipulate text to improve their ability to use language expressively. Much of the Department's Computers in Education work is under the direction of Dr Howe. Teaching in the Department of A.I. The Department is involved in an active teaching programme at undergraduate and postgraduate level. At undergraduate level, there are A.I. first, second and third year courses. There is a joint honours degree with the Department of Linguistics. A large number of students are registered with the Department for postgraduate degrees. 
An MSc/PhD in Cognitive Science is provided in collaboration with the departments of Linguistics, Philosophy and Psychology under the aegis of the School of Epistemics. The Department contributes two modules on this: Symbolic Computation and Computational Linguistics. This course has been accepted as a SERC supported conversion course. In October 1983 a new MSc programme in IT started. This is a joint activity with the Departments of Computer Science and Electrical Engineering. It has a large IKBS content which is supported by SERC. Computing Facilities in the Department of A.I. Computing requirements of researchers are being met largely through the SERC DEC-10 situated at the Edinburgh Regional Computing Centre or residually through use of UGC facilities. Undergraduate computing for A.I. courses is supported by the EMAS facilities at ERCC. Postgraduate computing on courses is mainly provided through a VAX 11/750 Berkeley 4.1BSD UNIX system within the Department. Several groups in the Department use the ICL PERQ single user machine. A growth in the use of this and other single user machines is envisaged over the next few years. The provision of shared resources to these systems in a way which allows for this growth in an orderly fashion is a problem the Department wishes to solve. It is anticipated that several further multi-user computers will soon be installed - one at each site of the Department - to act as the hub of future computing provision for the research pursued in Artificial Intelligence. 
------------------------------ End of AIList Digest ******************** 6-Oct-83 10:23:19-PDT Mail-From: LAWS created at 6-Oct-83 10:00:51 Date: Thursday, October 6, 1983 9:55AM From: AIList Moderator Kenneth Laws Reply-to: AIList@SRI-AI US-Mail: SRI Int., 333 Ravenswood Ave., Menlo Park, CA 94025 Phone: (415) 859-6467 Subject: AIList Digest V1 #71 To: AIList@SRI-AI AIList Digest Thursday, 6 Oct 1983 Volume 1 : Issue 71 Today's Topics: Humor - The Lightbulb Issue in AI, Reports - Edinburgh AI Memos, Rational Psychology, Halting Problem, Artificial Organisms, Technology Transfer, Seminar - NL Database Updates ---------------------------------------------------------------------- Date: 6 Oct 83 0053 EDT (Thursday) From: Jeff.Shrager@CMU-CS-A Subject: The lightbulb issue in AI. How many AI people does it take to change a lightbulb? At least 55: The problem space group (5): One to define the goal state. One to define the operators. One to describe the universal problem solver. One to hack the production system. One to indicate how it is a model of human lightbulb changing behavior. The logical formalism group (16): One to figure out how to describe lightbulb changing in first order logic. One to figure out how to describe lightbulb changing in second order logic. One to show the adequacy of FOL. One to show the inadequacy of FOL. One to show that lightbulb logic is non-monotonic. One to show that it isn't non-monotonic. One to show how non-monotonic logic is incorporated in FOL. One to determine the bindings for the variables. One to show the completeness of the solution. One to show the consistency of the solution. One to show that the two just above are incoherent. One to hack a theorem prover for lightbulb resolution. One to suggest a parallel theory of lightbulb logic theorem proving. One to show that the parallel theory isn't complete. ...ad infinitum (or absurdum as you will)... 
One to indicate how it is a description of human lightbulb changing behavior. One to call the electrician. The robotics group (10): One to build a vision system to recognize the dead bulb. One to build a vision system to locate a new bulb. One to figure out how to grasp the lightbulb without breaking it. One to figure out how to make a universal joint that will permit the hand to rotate 360+ degrees. One to figure out how to make the universal joint go the other way. One to figure out the arm solutions that will get the arm to the socket. One to organize the construction teams. One to hack the planning system. One to get Westinghouse to sponsor the research. One to indicate how the robot mimics human motor behavior in lightbulb changing. The knowledge engineering group (6): One to study electricians' changing lightbulbs. One to arrange for the purchase of the lisp machines. One to assure the customer that this is a hard problem and that great accomplishments in theory will come from his support of this effort. (The same one can arrange for the fleecing.) One to study related research. One to indicate how it is a description of human lightbulb changing behavior. One to call the lisp hackers. The Lisp hackers (13): One to bring up the chaos net. One to adjust the microcode to properly reflect the group's political beliefs. One to fix the compiler. One to make incompatible changes to the primitives. One to provide the Coke. One to rehack the Lisp editor/debugger. One to rehack the window package. Another to fix the compiler. One to convert code to the non-upward compatible Lisp dialect. Another to rehack the window package properly. One to flame on BUG-LISPM. Another to fix the microcode. One to write the fifteen lines of code required to change the lightbulb. The Psychological group (5): One to build an apparatus which will time lightbulb changing performance. One to gather and run subjects. One to mathematically model the behavior. 
  One to call the expert systems group.
  One to adjust the resulting system so that it drops the right number of bulbs.

[My apologies to groups I may have neglected. Pages to code before I sleep.]

------------------------------

Date: Saturday, 1-Oct-83 15:13:42-BST
From: BUNDY HPS (on ERCC DEC-10)
Reply-to: bundy@rutgers.arpa
Subject: Edinburgh AI Memos

If you want to receive a regular abstracts list and order form for Edinburgh AI technical reports then write (steam mail I'm afraid) to Margaret Pithie, Department of Artificial Intelligence, Forrest Hill, Edinburgh, Scotland. Give your name and address and ask to be put on the mailing list for abstracts.

Alan Bundy

------------------------------

Date: 29 Sep 83 22:49:18-PDT (Thu)
From: pur-ee!uiucdcs!uicsl!dinitz @ Ucb-Vax
Subject: Re: Rational Psychology - (nf)
Article-I.D.: uiucdcs.3046

The book mentioned, Metaphors We Live By, was written by George Lakoff and Mark Johnson. It contains some excellent ideas and is written in a style that makes for fast, enjoyable reading.

--Rick Dinitz
uicsl!dinitz

------------------------------

Date: 28 Sep 83 10:32:35-PDT (Wed)
From: decvax!duke!unc!mcnc!ncsu!fostel @ Ucb-Vax
Subject: RE: Rational Psychology [and Reply]

I must say it's been exciting listening to the analysis of what "Rational Psychology" might mean or should not mean. Should I go read the actual article that started it all? Perish the thought. Is psychology rational? Someone said that all sciences are rational, a moot point, but not that relevant unless one wishes to consider Psychology a science. I do not. This does not mean that psychologists are in any way inferior to chemists or to REAL scientists like those who study physics. But I do think there is a difference IN KIND between these fields and psychology. Very few of us have any close intimate relationships with carbon compounds or interstellar gas clouds. (At least not since the waning of the LSD era.)
But with psychology, anyone NOT in this category has no business in the field. (I presume we are talking Human psychology.) The way this difference might exert itself is quite hard to predict, tho in my brief foray into psychology it was not so hard to spot. The great danger is a highly amplified form of anthropomorphism which leads one to form technical opinions quite possibly unrelated to technical or theoretical analysis. In physics, there is a superficially similar process in which the scientist develops a theory which seems to be a "pet theory" and then sets about trying to show it true or false. The difference is that the physicist developed his pet theory from technical origins rather than from personal experience. There is no other origin for his ideas unless you speculate that people have an inborn understanding of psi-mesons or spin orbitals. Such theories MUST have developed from these ideas. In psychology, the theory may well have been developed from a big scary dog when the psychologist was two. THAT is a difference in kind, and I think that is why I will always be suspicious of psychologists.

----GaryFostel----

[I think that is precisely the point of the call for rational psychology. It is an attempt to provide a solid theoretical underpinning based on the nature of mind, intelligence, emotions, etc., without regard to carbon-based implementations or the necessity of explaining human psychoses. As such, rational psychology is clearly an appropriate subject for AIList and net.ai. Traditional psychology, and subjective attacks or defenses of it, are less appropriate for this forum. -- KIL]

------------------------------

Date: 2 Oct 83 1:42:26-PDT (Sun)
From: ihnp4!ihuxv!portegys @ Ucb-Vax
Subject: Re: the Halting problem
Article-I.D.: ihuxv.565

I think that the answer to the halting problem in intelligent entities is that there must exist a mechanism for telling it whether its efforts are getting it anywhere, i.e.
something that senses its internal state and says if things are getting better, worse, or whatever. Normally for humans, if a "loop" were to begin, it should soon be broken by concerns like "I'm hungry now, let's eat". No amount of cogitation makes that feeling go away. I would rather call this mechanism need than emotion, since I think that some emotions are learned. So then, needs supply two uses to intelligence: (1) they supply a direction for the learning which is a necessary part of intelligence, and (2) they keep the intelligence from getting bogged down in fruitless cogitation.

Tom Portegys
Bell Labs, IH
ihuxv!portegys

------------------------------

Date: 3 Oct 83 20:22:47 EDT (Mon)
From: Speaker-To-Animals
Subject: Re: Artificial Organisms

Why would we want to create machines equivalent to people when organisms already have a means to reproduce themselves? Because then we might be able to make them SMARTER than humans of course! We might also learn something about ourselves along the way too.

- Speaker

------------------------------

Date: 30 Sep 83 1:16:31-PDT (Fri)
From: decvax!genrad!mit-eddie!barmar @ Ucb-Vax
Subject: November F&SF
Article-I.D.: mit-eddi.774

Some of you may be interested in reading Isaac Asimov's article in the latest (November, I think) Magazine of Fantasy and Science Fiction. The article is entitled "More Thinking about Thinking", and is the Good Doctor's views on artificial intelligence. He makes a very good case for the idea that non-human thinking (i.e. in computers and dolphins) is likely to be very different from, and perhaps superior to, human thinking. He uses an effective analogy to locomotion: artificial locomotion, namely the wheel, is completely unlike anything found in nature.

-- Barry Margolin
ARPA: barmar@MIT-Multics
UUCP: ..!genrad!mit-eddie!barmar

------------------------------

Date: Mon, 3 Oct 83 23:17:18 EDT
From: Brint Cooper (CTAB)
Subject: Re: Alas, I must flame...
I don't believe, as you assert, that the motive for clearing papers produced under DOD sponsorship is 'economic' but, alas, military. You then may justly argue the merits of non-export of things militarily important vs the benefits which accrue to all of us by a free and open exchange. I'm not taking sides--yet, but am trying to see the issue clearly defined.

Brint

------------------------------

Date: Tue, 4 Oct 83 8:16:20 EDT
From: Earl Weaver (VLD/VMB)
Subject: Flame on DoD

No matter what David Rogers @ sumex-aim thinks, the DoD "review" of all papers before publishing is not to keep information private, but to make sure no classified stuff gets out where it shouldn't be and to identify any areas of personal opinion or thinking that could be construed to be official DoD policy or position. I think it will have very little effect on actually restricting information. As with most research organizations, the DoD researchers are not immune to the powers of the bean counters and must publish.

------------------------------

Date: Mon 3 Oct 83 16:44:24-PDT
From: Sharon Bergman
Subject: Ph.D. oral

[Reprinted from the SU-SCORE bboard.]

Computer Science Department Ph.D. Oral, Jim Davidson
October 18, 1983 at 2:30 p.m.
Rm. 303, Building 200

Interpreting Natural Language Database Updates

Although the problems of querying databases in natural language are well understood, the performance of database updates via natural language introduces additional difficulties. This talk discusses the problems encountered in interpreting natural language updates, and describes an implemented system that performs simple updates. The difficulties associated with natural language updates result from the fact that the user will naturally phrase requests with respect to his conception of the domain, which may be a considerable simplification of the actual underlying database structure.
Updates that are meaningful and unambiguous from the user's standpoint may not translate into reasonable changes to the underlying database. The PIQUE system (Program for Interpretation of Queries and Updates in English) operates by maintaining a simple model of the user, and interpreting update requests with respect to that model. For a given request, a limited set of "candidate updates"--alternative ways of fulfilling the request--are considered, and ranked according to a set of domain-independent heuristics that reflect general properties of "reasonable" updates. The leading candidate may be performed, or the highest ranking alternatives presented to the user for selection. The resultant action may also include a warning to the user about unanticipated side effects, or an explanation for the failure to fulfill a request. This talk describes the PIQUE system in detail, presents examples of its operation, and discusses the effectiveness of the system with respect to coverage, accuracy, efficiency, and portability. The range of behaviors required for natural language update systems in general is discussed, and implications of updates on the design of data models are briefly considered. ------------------------------ End of AIList Digest ******************** 10-Oct-83 10:29:48-PDT,13363;000000000001 Mail-From: LAWS created at 10-Oct-83 10:26:15 Date: Monday, October 10, 1983 10:16AM From: AIList Moderator Kenneth Laws Reply-to: AIList@SRI-AI US-Mail: SRI Int., 333 Ravenswood Ave., Menlo Park, CA 94025 Phone: (415) 859-6467 Subject: AIList Digest V1 #72 To: AIList@SRI-AI AIList Digest Monday, 10 Oct 1983 Volume 1 : Issue 72 Today's Topics: Administrivia - AIList Archives, Music & AI - Request, NL - Semantic Chart Parsing & Simple English Grammar, AI Journals - Address of "Artificial Intelligence", Alert - IEEE Computer Issue, Seminars - Stanfill at Univ. 
of Maryland, Zadeh at Stanford, Commonsense Reasoning
----------------------------------------------------------------------

Date: Sun 9 Oct 83 18:03:24-PDT
From: Ken Laws
Reply-to: AIList-Request@SRI-AI
Subject: AIList Archives

The archives have grown to the point that I can no longer keep them available online. I will keep the last three months' issues available in archive.txt on SRI-AI. Preceding issues will be backed up on tape, and will require about a day's notice to recover. The tape archive will consist of quarterly composites (or smaller groupings, if digest activity gets any higher than it has been). The file names will be of the form AIL1N1.TXT, AIL1N19.TXT, etc. All archives will be in the MMAILR mailer format. The online archive may be obtained via FTP using anonymous login. Since a quarterly archive can be very large (up to 300 disk pages) it will usually be better to ask me for particular issues than to FTP the whole file.

-- Ken Laws

------------------------------

Date: Thu, 25 Aug 83 00:07:53 PDT
From: uw-beaver!utcsrgv!nixon@LBL-CSAM
Subject: AIList Archive- Univ. of Toronto

[I previously put out a request for online archives that could be obtained by anonymous FTP. There were very few responses. Perhaps this one will be of use. -- KIL]

Dear Ken,

Copies of the AIList Digest are kept in directory /u5/nixon/AIList with file names V1.5, V1.40, etc. Our uucp site name is "utcsrgv". This is subject to change in the very near future as the AI group at the University of Toronto will be moving to a new computer.

Brian Nixon.

------------------------------

Date: 4 Oct 83 9:23:38-PDT (Tue)
From: hplabs!hao!cires!nbires!ut-sally!riddle @ Ucb-Vax
Subject: Re: Music & AI, pointers wanted
Article-I.D.: ut-sally.86

How about posting the results of the music/ai poll to the net? There have been at least two similar queries in recent memory, indicating at least a bit of general interest. [...]
-- Prentiss Riddle {ihnp4,kpno,ctvax}!ut-sally!riddle riddle@ut-sally.UUCP ------------------------------ Date: 5 Oct 83 19:54:32-PDT (Wed) From: pur-ee!uiucdcs!uicsl!dinitz @ Ucb-Vax Subject: Re: Re: NL argument between STLH and Per - (nf) Article-I.D.: uiucdcs.3132 I've heard of "syntactic chart parsing," but what is "semantic chart parsing?" It sounds interesting, and I'd like to hear about it. I'm also interested in seeing your paper. Please make arrangements with me via net mail. Rick Dinitz U. of Illinois ...!uicsl!dinitz ------------------------------ Date: 3 Oct 83 18:39:00-PDT (Mon) From: pur-ee!ecn-ec.davy @ Ucb-Vax Subject: WANTED: Simple English Grammar - (nf) Article-I.D.: ecn-ec.1173 Hello, I am looking for a SIMPLE set of grammar rules for English. To be specific, I'm looking for something of the form: SENT = NP + VP ... NP = DET + ADJ + N ... VP = ADV + V + DOBJ ... etc. I would prefer a short set of rules, something on the order of one or two hundred lines. I realize that this isn't enough to cover the whole English language, I don't want it to. I just want something which could handle "simple" sentences, such as "The cat chased the mouse", etc. I would like to have rules for questions included, so that something like "What does a hen weigh?" can be covered. I've scoured our libraries here, and have only found one book with a grammar for English in it, and it's much more complex than what I want. Any pointers to books/magazines or grammars themselves would be greatly appreciated. Thanks in advance (as the saying goes) --Dave Curry decvax!pur-ee!davy eevax.davy@purdue ------------------------------ Date: 6 Oct 83 17:21:29-PDT (Thu) From: ihnp4!cbosgd!cbscd5!lvc @ Ucb-Vax Subject: Address of "Artificial Intelligence" Article-I.D.: cbscd5.739 Here is the address of "Artificial Intelligence" if anyone is interested: Artificial Intelligence (bi-monthly $136 -- Ouch !) North-Holland Publishing Co., Box 211, 1000 AE Amsterdam, Netherlands. Editors D.G. 
Bobrow, P.J. Hayes
Advertising, book reviews, circulation 1,100
Also avail. in microform from Microforms International Marketing Co., Maxwell House, Fairview Park, Elmsford NY 10523
Indexed: Curr. Cont.

Larry Cipriani
cbosgd!cbscd5!lvc

[There is a reduced rate for members of AAAI. -- KIL]

------------------------------

Date: Sun 9 Oct 83 17:45:52-PDT
From: Ken Laws
Subject: IEEE Computer Issue

Don't miss the October 1983 issue of IEEE Computer. It is a special issue on knowledge representation, and includes articles on learning, logic, and other related topics. There is also a short list of 30 expert systems on p. 141.

------------------------------

Date: 8 Oct 83 04:18:04 EDT (Sat)
From: Bruce Israel
Subject: University of Maryland AI talk

[Reprinted from the University of Maryland BBoard]

The University of Maryland Computer Science Dept. is starting an informal AI seminar, meeting every other Thursday in Room 2330, Computer Science Bldg, at 5pm. The first meeting will be held Thursday, October 13. All are welcome to attend. The abstract for the talk follows.

MAL: My AI Language
Craig Stanfill
Department of Computer Science
University of Maryland
College Park, MD 20742

In the course of writing my thesis, I implemented an AI language, called MAL, for manipulating symbolic expressions. MAL runs in the University of Maryland Franz Lisp Environment on a VAX 11/780 under Berkeley Unix (tm) 4.1. MAL is of potential benefit in knowledge representation research, where it allows the development and testing of knowledge representations without building an inference engine from scratch, and in AI education, where it should allow students to experiment with a simple AI programming language. MAL provides for:

1. The representation of objects and queries as symbolic expressions. Objects are recursively constructed from sets, lists, and bags of atoms (as in QLISP). A powerful and efficient pattern matcher is provided.

2. The rule-directed simplification of expressions.
Limited facilities for depth first search are provided. 3. Access to a database. Rules can assert and fetch simplifications of expressions. The database also employs a truth maintenance system. 4. The construction of large AI systems by the combination of simpler modules called domains. For each domain, there is a database, a set of rules, and a set of links to other domains. 5. A set of domains which are generally useful, especially for spatial rea- soning. This includes domains for solid and linear geometry, and for algebra. 6. Facilities which allow the user to customize MAL (to a degree). Calls to arbitrary LISP functions are supported, allowing the language to be easily extended. ------------------------------ Date: Thu 6 Oct 83 20:18:09-PDT From: Doug Lenat Subject: Colloquium Oct 11: ZADEH [Reprinted from the SU-SCORE bboard.] Professor Lotfi Zadeh, of UCB, will be giving the CS colloquium this Tuesday (10/11). As usual, it will be in Terman Auditorium, at 4:15 (preceded at 3:45 by refreshments in the 3rd floor lounge of Margaret Jacks Hall). The title and abstract for the colloquium are as follows: Reasoning With Commonsense Knowledge Commonsense knowledge is exemplified by "Glass is brittle," "Cold is infectious," "The rich are conservative," "If a car is old, it is unlikely to be in good shape," etc. Such knowledge forms the basis for most of human reasoning in everyday situations. Given the pervasiveness of commonsense reasoning, a question which begs for answer is: Why is commonsense reasoning a neglected area in classical logic? Because, almost by definition, commonsense knowledge is that knowledge which is not representable as a collection of well-formed formulae in predicate logic or other logical systems which have the same basic conceptual structure as predicate logic. 
The approach to commonsense reasoning which is described in the talk is based on the use of fuzzy logic -- a logic which allows the use of fuzzy predicates, fuzzy quantifiers and fuzzy truth-values. In this logic, commonsense knowledge is defined to be a collection of dispositions, that is propositions with suppressed fuzzy quantifiers. To infer from such knowledge, three basic syllogisms are developed: (1) the intersection/product syllogism; (2) the consequent conjunction syllogism; and (3) the antecedent conjunction syllogism. The use of these syllogisms in commonsense reasoning and their application to the combination of evidence in expert systems is discussed and illustrated by examples. ------------------------------ Date: Fri 7 Oct 83 09:42:30-PDT From: Christopher Schmidt Subject: "rich" = "conservative" ? [Reprinted from the SU-SCORE bboard.] Subject: Colloquium Oct 11: ZADEH The title and abstract for the colloquium are as follows: Reasoning With Commonsense Knowledge I don't think I've seen flames in response to abstracts before, but I get so sick of hearing "rich," "conservative," and "evil" used as synonyms. Commonsense knowledge is exemplified by [...] "The rich are conservative," [...]. In fact, in the U.S., 81% of people with incomes over $50,000 are registered Democrats. Only 47% with incomes under $50,000 are. (The remaining 53% are made up of "independents," &c..) The Democratic Party gets the majority of its funding from contributions of over $1000 apiece. The Republican Party is mostly funded by contributions of $10 and under. (Note: I'd be the last to equate Conservatism and the Republican Party. I am a Tory and a Democrat. However, more "commonsense knowledge" suggests that I can use the word "Republican" in place of "conservative" for the purpose of refuting the equation of "rich" and "conservative." Such knowledge forms the basis for most of human reasoning in everyday situations. 
This statement is so true that it is the reason I gave up political writing.

  Given the pervasiveness of commonsense reasoning, a question which
  begs for answer is: Why is commonsense reasoning a neglected area in
  classical logic? [...]

Perhaps because false premises tend to give rise to false conclusions? Just what we need--"ignorant systems." (:-)

--Christopher

------------------------------

Date: Fri 7 Oct 83 10:22:37-PDT
From: Richard Treitel
Subject: Re: "rich" = "conservative" ?

[Reprinted from the SU-SCORE bboard.]

Why is logic a neglected area in commonsense reasoning? (to say nothing of political writing)? More seriously, or at least more historically, a survey was once taken of ecological and other pressure groups in England, asking them which had been the most and least effective methods they had used to convince governmental bodies. Right at the bottom of the list of "least effective" was Reasoned Argument.

- Richard

------------------------------

Date: Fri, 7 Oct 83 10:36 PDT
From: Vaughan Pratt
Subject: Reasoned Argument

[Reprinted from the SU-SCORE bboard.]

[...] I think if "Breathing" had been on the list along with "Reasoned Argument" then the latter would only have come in second last. It is not that reasoned argument is ineffective but that it is on a par with breathing, namely something we do subconsciously. Consciously performed reasoning is only marginally reliable in mathematical circles, and quite unreliable in most other areas. It makes most people dizzy, much as consciously performed breathing does.
-v

------------------------------

End of AIList Digest
********************

10-Oct-83 16:26:48-PDT,17750;000000000001
Mail-From: LAWS created at 10-Oct-83 16:22:48
Date: Monday, October 10, 1983 4:17PM
From: AIList Moderator Kenneth Laws
Reply-to: AIList@SRI-AI
US-Mail: SRI Int., 333 Ravenswood Ave., Menlo Park, CA 94025
Phone: (415) 859-6467
Subject: AIList Digest V1 #73
To: AIList@SRI-AI

AIList Digest            Tuesday, 11 Oct 1983      Volume 1 : Issue 73

Today's Topics:
  Halting Problem,
  Consciousness,
  Rational Psychology
----------------------------------------------------------------------

Date: Thu 6 Oct 83 18:57:04-PDT
From: PEREIRA@SRI-AI.ARPA
Subject: Halting problem discussion

This discussion assumes that "human minds" are at least equivalent to Universal Turing Machines. If they are restricted to computing smaller classes of recursive functions, the question dissolves. Sequential computers are idealized as having infinite memory because that makes it easier to study asymptotic behavior mathematically. Of course, we all know that a more accurate idealization of sequential computers is the finite automaton (for which there is no halting problem, of course!). The discussion on this issue seemed to presuppose that "minds" are the same kind of object as existing (finite!) computing devices. Accepting this presupposition for a moment (I am agnostic on the matter), the above argument applies and the discussion is shown to be vacuous. Thus fall undecidability arguments in psychology and linguistics...

Fernando Pereira

PS. Any silliness about unlimited amounts of external memory will be profitably avoided.

------------------------------

Date: 7 Oct 83 1317 EDT (Friday)
From: Robert.Frederking@CMU-CS-A (C410RF60)
Subject: AI halting problem

Actually, this isn't a problem, as far as I can see. The Halting Problem's problem is: there cannot be a program for a Turing-equivalent machine that can tell whether *any* arbitrary program for that machine will halt.
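[The diagonal argument behind this claim can be rendered as a runnable sketch. Python and all names below are purely illustrative; `halts_claims` is a stub standing in for the impossible decider, and the point demonstrated is that no answer it could give about the self-referential program is consistent with that program's actual behavior.]

```python
# Sketch of the diagonal argument against a halting decider.
# The self-referential program is:
#     diag():  if halts(diag): loop forever  else: halt
# Below we compute what diag actually does for each answer the
# supposed decider could give, and check that the answer is always
# contradicted. (Illustrative only; not a real library API.)

def diag_behavior(claimed_halts: bool) -> bool:
    """Return True iff diag() would halt, given the oracle's claim."""
    if claimed_halts:
        return False   # oracle said "halts", so diag loops forever
    else:
        return True    # oracle said "loops", so diag halts at once

# Whatever the decider answers, diag does the opposite:
for answer in (True, False):
    assert diag_behavior(answer) != answer
```

Since neither possible answer about diag is ever correct, the assumed decider cannot exist.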
The easiest proof that a Halts(x) procedure can't exist is the following program: (due to Jon Bentley, I believe) if halts(x) then while true do print("rats") What happens when you start this program up, with itself as x? If halts(x) returns true, it won't halt, and if halts(x) returns false, it will halt. This is a contradiction, so halts(x) can't exist. My question is, what does this have to do with AI? Answer, not much. There are lots of programs which always halt. You just can't have a program which can tell you *for* *any* *program* whether it will halt. Furthermore, human beings don't want to halt, i.e., die (this isn't really a problem, since the question is whether their mental subroutines halt). So as long as the mind constructs only programs which will definitely halt, it's safe. Beings which aren't careful about this fail to breed, and are weeded out by evolution. (Serves them right.) All of this seems to assume that people are Turing-equivalent (without pencil and paper), which probably isn't true, and certainly hasn't been proved. At least I can't simulate a PDP-10 in my head, can you? So let's get back to real discussions. ------------------------------ Date: Fri, 7 Oct 83 13:05:16 CDT From: Paul.Milazzo Subject: Looping in humans Anyone who believes the human mind incapable of looping has probably never watched anyone play Rogue :-). The success of Rogomatic (the automatic Rogue-playing program by Mauldin, et. al.) demonstrates that the game can be played by deriving one's next move from a simple *fixed* set of operations on the current game state. Even in the light of this demonstration, Rogue addicts sit hour after hour mechanically striking keys, all thoughts of work, food, and sleep forgotten, until forcibly removed by a girl- or boy-friend or system crash. I claim that such behavior constitutes looping. :-) :-) :-) :-) :-) :-) :-) :-) :-) :-) :-) :-) :-) :-) :-) :-) :-) :-) Paul Milazzo Dept. 
of Mathematical Sciences
Rice University, Houston, TX

P.S. A note to Rogue fans: I have played a few games myself, and understand the appeal. One of the Rogomatic developers is a former roommate of mine interested in part in overcoming the addiction of rogue players everywhere. He, also, has played a few games...

------------------------------

Date: 5 Oct 83 9:55:56-PDT (Wed)
From: hplabs!hao!seismo!philabs!cmcl2!floyd!clyde!akgua!emory!gatech!owens @ Ucb-Vax
Subject: Re: a definition of consciousness?
Article-I.D.: gatech.1379

I was doing required reading for a linguistics class when I came across an interesting view of consciousness in "Foundations of the Theory of Signs", by Charles Morris, section VI, subsection 12, about the 6th paragraph (it's also in the International Encyclopedia of Unified Science, Otto Neurath, ed.).

To say that Y experiences X is to define a relation E of which Y is the domain and X is the range. Thus, yEx says that it is true that y experiences x. E does not follow normal relational rules (it is not transitive or symmetric: I can experience joe, and joe can experience fred, but it's not necessarily so that I thus experience fred). Morris goes on to state that yEx is a "conscious experience" if yE(yEx) ALSO holds, otherwise it's an "unconscious experience". Interesting. Note that there is no infinite regress of yE(yE(yE....)) that is usually postulated as being a consequence of computer consciousness. However the function that defines E is defined, it only needs to have the POTENTIAL of being able to fit yEx as an x in another yEx, where y is itself. Could the fact that the postulated computer has the option of NOT doing the insertion be some basis for free will??? Would the required infinite regress of yE(yE(yE.... manifest some sort of compulsiveness that rules out free will?? (Not to say that an addict of some sort has no free will, although it's worth thinking about.)
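[Morris's definition is concrete enough to mechanize. A toy sketch follows; the encoding and all names are illustrative, not from Morris: E is represented as a set of (experiencer, experienced) pairs, so that an experience term yEx can itself appear as the x of another pair.]

```python
# Toy model of Morris's experience relation E as a set of
# (experiencer, experienced) pairs. An experience yEx is itself a
# term (a tuple), so it can sit on the right-hand side of another
# pair -- no infinite regress is required, only the potential.

E = set()

def experiences(y, x):
    """Record that y experiences x; return the experience term yEx."""
    E.add((y, x))
    return (y, x)

def is_conscious_experience(y, x):
    """yEx is conscious iff y also experiences the experience itself."""
    return (y, x) in E and (y, (y, x)) in E

e = experiences("me", "joe")      # yEx holds, but so far unconsciously
assert not is_conscious_experience("me", "joe")

experiences("me", e)              # yE(yEx): now it is conscious
assert is_conscious_experience("me", "joe")
```

Note that nothing forces the second call: the model is free to leave an experience unreflected-upon, which is exactly the "option of NOT doing the insertion" raised above.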
Question: Am I trivializing the problem by reducing the question of whether consciousness exists to the ability to define the relation E? Are there OTHER questions that I haven't considered that would strengthen or weaken that supposition? No flames, please, since this ain't a flame.

G. Owens at gatech CSNET.

------------------------------

Date: 6 Oct 83 9:38:19-PDT (Thu)
From: ihnp4!ihuxr!lew @ Ucb-Vax
Subject: towards a calculus of the subjective
Article-I.D.: ihuxr.685

I posted some articles to net.philosophy a while back on this topic but I didn't get much of a rise out of anybody. Maybe this is a better forum. (Then again, ...) I'm induced to try here by G. Owens' article, "Re: definition of consciousness". Instead of trying to formulate a general characteristic of conscious experience, what about trying to characterize different types of subjective experience in terms of their physical correlates? In particular, what's the difference between seeing a color (say) and hearing a sound? Even more particularly, what's the difference between seeing red and seeing blue? I think the last question provides a potential experimental test of dualism. If it could be shown that the subjective experience of a red image was constituted by an internal set of "red" image cells, and similarly for a blue image, I would regard this as a proof of dualism. This is assuming the "red" and "blue" cells to be physically equivalent. The choice between which were "red" and which were "blue" would have no physical basis. On the other hand, suppose there were some qualitative difference in the firing patterns associated with seeing red versus seeing blue. We would have a physical difference to hang our hat on, but we would still be left with the problem of forming a calculus of the subjective. That is, we would have to figure out a way to deduce the type of subjective experience from its physical correlates. A successful effort might show how to experience completely new colors, for example.
Maybe our restriction to a 3-d color space is due to the restricted stimulation of subjective color space by three inputs. Any acid heads care to comment? These thoughts were inspired by Thomas Nagel's "What is it like to be a bat?" in "The Minds I". I think the whole subjective-objective problem is given short shrift by radical AI advocates. Hofstadter's critique of Nagel's article was interesting, but I don't think it addressed Nagel's main point. Lew Mammel, Jr. ihuxr!lew ------------------------------ Date: 6 Oct 83 10:06:54-PDT (Thu) From: ihnp4!zehntel!tektronix!tekecs!orca!brucec @ Ucb-Vax Subject: Re: Parallelism and Physiology Article-I.D.: orca.179 ------- Re the article posted by Rik Verstraete : In general, I agree with your statements, and I like the direction of your thinking. If we conclude that each level of organization in a system (e.g. a conscious mind) is based in some way on the next lower level, it seems reasonable to suppose that there is in some sense a measure of detail, a density of organization if you will, which has a lower limit for a given level before it can support the next level. Thus there would be, in the same sense, a median density for the levels of the system (mind), and a standard deviation, which I conjecture would be bounded in any successful system (only the top level is likely to be wildly different in density, and that lower than the median). Maybe the distinction between the words learning and self-organization is only a matter of granularity too. (??) I agree. I think that learning is simply a sophisticated form of optimization of a self-organizing system in a *very* large state space. Maybe I shouldn't have said "simply." Learning at the level of human beings is hardly trivial. Certainly, there are not physically two types of memories, LTM and STM. 
The concept of LTM/STM is only a paradigm (no doubt a very useful one), but when it comes to implementing the concept, there is a large discrepancy between brains and machines. Don't rush to decide that there aren't two mechanisms. The concepts of LTM and STM were developed as a result of observation, not from theory. There are fundamental functional differences between the two. They *may* be manifestations of the same physical mechanism, but I don't believe there is strong evidence to support that claim. I must admit that my connection to neurophysiology is some years in the past so I may be unaware of recent research. Does anyone out there have references that would help in this discussion? ------------------------------ Date: 7 Oct 83 15:38:14-PDT (Fri) From: harpo!floyd!vax135!ariel!norm @ Ucb-Vax Subject: Re: life is but a dream Article-I.D.: ariel.482 re Michael Massimilla's idea (not original, of course) that consciousness and self-awareness are ILLUSIONS. Where did he get the concept of ILLUSION? The stolen concept fallacy strikes again! This fallacy is that of using a concept while denying its genetic roots... See back issues of the Objectivist for a discussion of this fallacy.... --Norm on ariel, Holmdel, N.J. ------------------------------ Date: 7 Oct 83 11:17:36-PDT (Fri) From: ihnp4!ihuxr!lew @ Ucb-Vax Subject: life is but a dream Article-I.D.: ihuxr.690 Michael Massimilla informs us that consciousness and self-awareness are ILLUSIONS. This is like saying "It's all in your mind." As Nietzsche said, "One sometimes remains faithful to a cause simply because its opponents do not cease to be insipid." Lew Mammel, Jr. 
ihuxr!lew ------------------------------ Date: 5 Oct 83 1:07:31-PDT (Wed) From: decvax!duke!unc!mcnc!ncsu!fostel @ Ucb-Vax Subject: RE: Rational Psychology Article-I.D.: ncsu.2357 Someone's recent attempt to make the meaning of "Rational Psychology" seem trivial misses the point a number of people have made in commenting on the odd nature of the name. The reasoning was something like this: 1) rational "X" means the same thing in spite of what "X" is. 2) => rational psychology is a clear and simple thing 3) wake up guys, you're being dumb. Well, I think this line misses at least one point. The argument above is probably sound provided one accepts the initial premise, which I do not necessarily accept. Another example of the logic may help. 1) Brute Force elaboration solves problems of set membership. E.g. just look at the item and compare it with every member of the set. This is a true statement for a wide range of possible sets. 2) Real Numbers are a kind of set. 3) Wake up Cantor, you're wasting (or have wasted) your time. It seems quite clear that in the latter example, the premise is naive and simply fails to apply to sets of infinite proportions. (Or more properly one must go to some effort to justify such use.) The same issue applies to the notion of Rational Psychology. Does it make sense to attempt to apply techniques which may be completely inadequate? Rational analysis may fail completely to explain the workings of the mind, esp when we are looking at the "non-analytic" capabilities that are implied by psychology. We are on the edge of a philosophical debate, with terms like "dual-ism" and "physical-ism" etc marking out party lines. It may be just as ridiculous to some people to propose a rational study of psychology as it seems to most of us that one use finite analysis to deal with trans-finite cardinalities [or] as it seems to some people to propose to explain the mind via physics alone.
Clearly, the people who expect rational analytic method to be fruitful in the field of psychology are welcome to coin a new name for themselves. But if they, or anyone else, has really "Got it now", please write a dissertation on the subject and enter history alongside Kant, St Thomas Aquinas, Kierkegaard .... ----GaryFostel---- ------------------------------ Date: 4 Oct 83 8:54:09-PDT (Tue) From: decvax!linus!philabs!seismo!rlgvax!cvl!umcp-cs!velu @ Ucb-Vax Subject: Rational Psychology - Gary Fostel's message Article-I.D.: umcp-cs.2953 Unfortunately, however, many pet theories in Physics have come about as inspirations, and not from the "technical origins" as you have stated! (What is a "technical origin", anyway????) As I see it, in any science a pet theory is a combination of insight, inspiration, and a knowledge of the laws governing that field. If we just went by known facts, and did not dream on, we would not have gotten anywhere! - Velu ----- Velu Sinha, U of MD, College Park UUCP: {seismo,allegra,brl-bmd}!umcp-cs!velu CSNet: velu@umcp-cs ARPA: velu.umcp-cs@UDel-Relay ------------------------------ Date: 6 Oct 83 12:00:15-PDT (Thu) From: decvax!duke!unc!mcnc!ncsu!fostel @ Ucb-Vax Subject: RE: Intuition in Physics Article-I.D.: ncsu.2360 Some few days ago I suggested that there was something "different" about psychology and tried to draw a distinction between the flash of insight or the pet theory in physics as compared to psychology. Well, someone else commented on the original, in a way that suggested I missed the mark in my original effort to make it clear. One more time: I presume that at birth, one's mind is not predisposed to one or another of several possible theories of heavy molecule collision (for example). Further, I think it unlikely that personal or emotional interaction in one "pre-analytic" stage (see anything about developmental psych.) is likely to bear upon one's opinions about those molecules.
In fact I find it hard to believe that anything BUT technical learning is likely to bear on one's intuition about the molecules. One might want to argue that one's personality might force one to lean towards "aggressive" or overly complex theories, but I doubt that such effects will lead to the creation of a theory. Only a rather mild predisposition at best. In psychology it is entirely different. A person who is aggressive has lots of reasons to assume everyone else is as well. Or paranoid, or that rote learning is esp good or bad, or that large dogs are dangerous, or a number of other things that bear directly on one's theories of the mind. And these biases are acquired from the process of living and are quite unavoidable. This is not technical learning. The effect is that even in the face of considerable technical learning, one's intuition or "pet theories" in psychology might be heavily influenced, in creation of the theory as well as selection, by one's life experiences, possibly to the exclusion of one's technical opinions. (Who knows what goes on in the sub-conscious.) While one does not encounter heavy molecules often in one's everyday life or one's childhood, one DOES encounter other people and more significantly one's own mind. It seems clear that intuition in physics is based upon a different sort of knowledge than intuition about psychology. The latter is a combination of technical AND everyday intuition while the former is not.
----GaryFostel---- ------------------------------ End of AIList Digest ******************** 11-Oct-83 16:05:41-PDT Mail-From: LAWS created at 11-Oct-83 11:37:17 Date: Tuesday, October 11, 1983 11:25AM From: AIList Moderator Kenneth Laws Reply-to: AIList@SRI-AI US-Mail: SRI Int., 333 Ravenswood Ave., Menlo Park, CA 94025 Phone: (415) 859-6467 Subject: AIList Digest V1 #74 To: AIList@SRI-AI AIList Digest Wednesday, 12 Oct 1983 Volume 1 : Issue 74 Today's Topics: Journals - AI Journal, Query - Miller's "Living Systems", Technology Transfer - DoD Reviews, Consciousness ---------------------------------------------------------------------- Date: Tue, 11 Oct 83 07:54 PDT From: Bobrow.PA@PARC-MAXC.ARPA Subject: AI Journal The information provided by Larry Cipriani about the AI Journal in the last issue of AINET is WRONG in a number of important particulars. Institutional subscriptions to the Artificial Intelligence Journal are $176 this year (not $136). Personal subscriptions are available for $50 per year for members of the AAAI, SIGART and AISB. The circulation is about 2,000 (not 1,100). Finally, the AI journal consists of eight issues this year, and nine issues next year (not bimonthly). Thanks Dan Bobrow (Editor-in-Chief) Bobrow@PARC ------------------------------ Date: Mon, 10 Oct 83 15:41 EDT From: David Axler Subject: Bibliographic Query Just wondering if anybody out there has read the book 'Living Systems' by James G. Miller (McGraw-Hill, 1977), and, if so, whether they feel that Miller's theories have any relevance to present-day AI research. I won't even attempt to summarize the book's content here, as it's over 1K pages in length, but some of the reviews of it that I've run across seem to imply that it might well be useful in some AI work. Any comments?
Dave Axler (Axler.Upenn-1100@UPenn@Udel-Relay) ------------------------------ Date: 7 Oct 1983 08:11-EDT From: TAYLOR@RADC-TOPS20 Subject: DoD "reviews" I must agree with Earl Weaver's comments on the DoD review of DoD-sponsored publications with one additional comment...since I have "lived and worked" in that environment for more than six years. DoD has learned (through experience) that given enough unclassified material, much classified information can be deduced. I have seen documents whose individual paragraphs were unclassified, but which, when grouped together as a single document, provided too much sensitive information to leave unclassified. Roz (RTaylor@RADC-MULTICS) ------------------------------ Date: 4 Oct 83 19:25:13-PDT (Tue) From: ihnp4!zehntel!tektronix!tekcad!ricks @ Ucb-Vax Subject: Re: Conference Announcement - (nf) Article-I.D.: tekcad.66 > **************** CONFERENCE **************** > > "Intelligent Systems and Machines" > > Oakland University, Rochester Michigan > > April 24-25, 1984 > > ********************************************* > >AUTHORS PLEASE NOTE: A Public Release/Sensitivity Approval is necessary. >Authors from DOD, DOD contractors, and individuals whose work is government >funded must have their papers reviewed for public release and more >importantly sensitivity (i.e. an operations security review for sensitive >unclassified material) by the security office of their sponsoring agency. Another example of so-called "scientists" bowing to governmental pressure to let them decide if the paper you want to publish is OK to publish. I think that this type of activity is reprehensible and as concerned scientists we should do everything in our power to stop this censorship of research. I urge everyone to boycott this conference and any others like it which REQUIRE a Public Release/Sensitivity Approval (funny how the government tries to make censorship palatable with different words, isn't it).
If we don't stop this now, we may be passing every bit of research we do under the nose of bureaucrats who don't know an expert system from an accounting package and who have the power to stop publication of anything they consider dangerous. I'm mad as hell and I'm not going to take it anymore!!!! Frank Adrian (teklabs!tekcad!franka) ------------------------------ Date: 6 Oct 83 6:13:46-PDT (Thu) From: hplabs!hao!seismo!rlgvax!cvl!umcp-cs!aplvax!eric @ Ucb-Vax Subject: Re: Alas, I must flame... Article-I.D.: aplvax.358 The "sensitivity" issue is not limited to government - most companies also limit the distribution of information that they consider "company private". I find very little wrong with the idea of "we paid for it, we should benefit from it". The simple truth is that they did underwrite the cost of the research. No one is forced to work under these conditions, but if you want to take the bucks, you have to realize that there are conditions attached to them. On the whole, DoD has been amazingly open with the disclosure of its CS research - one big example is ARPANET. True, they are now wanting to split it up, but they are still leaving half of it to research facilities who did not foot the bill for its development. Perhaps it can be carried to extremes (I have never seen that happen, but let's assume that it can happen), but they contracted for the work to be done, and it is theirs to do with as they wish. -- eric ...!seismo!umcp-cs!aplvax!eric
YOUR self-awareness may be an illusion that is fooling me, and you may think that MY self-awareness is an illusion, but one thing that you cannot deny (the very, only thing that you know for sure) is that you, yourself, in there looking out at the world through your eyeballs, are aware of yourself doing that. At least you cannot deny it if it is true. The point is, I know that I have self-awareness -- by the very act of experiencing it. You cannot take this away from me by telling me that my experience is an illusion. That is a patently ludicrous statement, sillier even than when your mother (no offense -- okay, my mother, then) used to tell you that the pain was all in your head. Of course it is! That is exactly what the problem is! Let me try to say this another way, since I have never been able to get this across to someone who doesn't already believe it. There are some statements that are true by definition, for instance, the statement, "I pronounce you man and wife". The pronouncement happens by the very saying of it and cannot be denied by anyone who has heard it, although the legitimacy of the marriage can be questioned, of course. The self-awareness thing is completely internal, so you may sensibly question the statement "I have self-awareness" when it comes from someone else. What you cannot rationally say is "Gee, I wonder if I really am aware of being in this body and looking down at my hands with these two eyes and making my fingers wiggle at will?" To ask this question seriously of yourself is an indication that you need immediate psychiatric help. Go directly to Bellevue and commit yourself. It is as lunatic a question as asking yourself "Gee, am I really feeling this pain or is it only an illusion that I hurt so bad that I would happily throw myself in the trash masher to extinguish it?" For those of you who misunderstand what I mean by self-awareness, here is the best I can do at an explanation. There is an obvious sense in which my body is not me.
You can cut off any piece of it that leaves the rest functioning (alive and able to think) and the piece that is cut off will not take part in any of my experiences, while the rest of the body will still contain (be the center for?) my self-awareness. You may think that this is just because my brain is in the big piece. No, there is something more to it than that. With a little imagination you can picture an android being constructed someday that has an AI brain that can be programmed with all the memories you have now and all the same mental faculties. Now picture yourself observing the android and noting that it is an exact copy of you. You can then imagine actually BEING that android, seeing what it sees, feeling what it feels. What is the difference between observing the android and being the android? It is just this -- in the latter case your self-awareness is centered in the android, while in the former it is not. That is what self-awareness, also called a soul, is. It is the one true meaning of the word "I", which does not refer to any particular collection of atoms, but rather to the "you" that is occupying the body. This is not a religious issue either, so back off, all you atheist and Christian fanatics. I'm just calling it a soul because it is the real "me", and I can imagine it residing in various different bodies and machines, although I would, of course, prefer some to others. This, then, is the reason I would never step into one of those teleporters that functions by ripping apart your atoms, then reconstructing an exact copy at a distant site. My self-awareness, while it doesn't need a biological body to exist, needs something! What guarantee do I have that "I", the "me" that sees and hears the door of the transporter chamber clang shut, will actually be able to find the new copy of my body when it is reconstructed three million parsecs away?
Some of you are laughing at my lack of modernism here, but I can have the last laugh if you're stupid enough to get into the teleporter with me at the controls. Suppose it functions like this (from a real sci-fi story that I read): It scans your body, transmits the copying information, then when it is certain that the copy got through it zaps the old copy, to avoid the inconvenience of there being two of you (a real mess at tax time!). Now this doesn't bother you a bit since it all happens in micro-seconds and your self-awareness, being an illusion, is not to be consulted in the matter. But suppose I put your beliefs to the test by setting the controls so that the copy is made but the original is not destroyed. You get out of the teleporter at both ends, with the original you thinking that something went wrong. I greet you with: "Hi there! Don't worry, you got transported okay. Here, you can talk to your copy on the telephone to make sure. The reason that I didn't destroy this copy of you is because I thought you would enjoy doing it yourself. Not many people get to commit suicide and still be around to talk about it at cocktail parties, eh? Now, would you like the hara-kiri knife, the laser death ray, or the nice little red pills?" You, of course, would see no problem whatsoever with doing yourself in on the spot, and would thank me for adding a little excitement to your otherwise mundane trip. Right? What, you have a problem with this scenario? Oh, it doesn't bother you if only one copy of you exists at a time, but if there are ever two, by some error, your spouse is stuck with both of you? What does the timing have to do with your belief in self-awareness? Relativity theory says that the order of the two events is indeterminate anyway. People who won't admit the reality of their own self-awareness have always bothered me.
I'm not sure I want to go out for a beer with, much less date or marry, someone who doesn't at least claim to have self-awareness (even if they're only faking). I get this image of me riding in a car with this non-self-aware person, when suddenly, as we reach a curve with a huge semi coming in the other direction, they fail to move the wheel to stay in the right lane, not seeing any particular reason to attempt to extend their own unimportant existence. After all, if their awareness is just an illusion, the implication is that they are really just a biological automaton and it don't make no never mind what happens to it (or the one in the next seat, for that matter, emitting the strange sounds and clutching the dashboard). The Big Unanswered Question then (which belongs in net.philosophy, where I will expect to see the answer) is this: "Why do I have self-awareness?" By this I do not mean, why does my body emit sounds that your body interprets to be statements that my body is making about itself. I mean why am *I* here, and not just my body and brain? You can't tell me that I'm not, because I have a better vantage point than you do, being me and not you. I am the only one qualified to rule on the issue, and I'll thank you to keep your opinion to yourself. This doesn't alter the fact that I find my existence (that is, the existence of my awareness, not my physical support system) to be rather arbitrary. I feel that my body/brain combination could get along just fine without it, and would not waste so much time reading and writing windy news articles. Enough of this, already, but I want to close by describing what happened when I had this conversation with two good friends. They were refusing to agree to any of it, and I was starting to get a little suspicious. Only half in jest, I tried explaining things this way. I said: "Look, I know I'm in here, I can see myself seeing and hear myself hearing, but I'm willing to admit that maybe you two aren't really self-aware.
Maybe, in fact, you're robots, everybody is robots except me. There really is no Cornell University, or U.S.A. for that matter. It's all an elaborate production by some insidious showman who constructs fake buildings and offices wherever I go and rips them down behind me when I leave." Whereupon a strange, unreadable look came over Dean's face, and he called to someone I couldn't see, "Okay, jig's up! Cut! He figured it out." (Hands motioning, now) "Get those props out of here, tear down those building fronts, ... " Scared the pants off me. Michael Condict ...!cmcl2!csd1!condict New York U. ------------------------------ End of AIList Digest ******************** 12-Oct-83 13:51:27-PDT Mail-From: LAWS created at 12-Oct-83 13:47:23 Date: Wednesday, October 12, 1983 10:41AM From: AIList Moderator Kenneth Laws Reply-to: AIList@SRI-AI US-Mail: SRI Int., 333 Ravenswood Ave., Menlo Park, CA 94025 Phone: (415) 859-6467 Subject: AIList Digest V1 #75 To: AIList@SRI-AI AIList Digest Thursday, 13 Oct 1983 Volume 1 : Issue 75 Today's Topics: Music & AI - Poll Results, Alert - September CACM, Fuzzy Logic - Zadeh Syllogism, Administrivia - Usenet Submissions & Seminar Notices, Seminars - HP 10/13/83 & Rutgers Colloquium ---------------------------------------------------------------------- Date: 11 Oct 83 16:16:12 EDT (Tue) From: Randy Trigg Subject: music poll results Here are the results of my request for info on AI and music. (I apologize for losing the header to the first mail below.) - Randy ______________________________ Music in AI - find Art Wink formerly of U. of Pgh. Dept of info sci. He had a real nice program to imitate Debussy (experts could not tell its compositions from originals). ------------------------------ Date: 22 Sep 83 01:55-EST (Thu) From: Michael Aramini Subject: RE: AI and music At the AAAI conference, I was talking to someone from Atari (from Atari Cambridge Labs, I think) who was doing work with AI and music.
I can't remember his name, however. He was working (with others) on automating transforming music of one genre into another. This involved trying to quasi-formally define what the characteristics of each genre of music are. It sounded like they were doing a lot of work on defining ragtime and converting ragtime to other genres. He said there were other people at Atari that are working on modeling the emotional state various characteristics of music evoke in the listener. I am sorry that I don't have more info as to the names of these people or how to get in touch with them. All that I know is that this work is being done at Atari Labs either in Cambridge, MA or Palo Alto, CA. ------------------------------ Date: Thu 22 Sep 83 11:04:22-EDT From: Ted Markowitz Subject: Music and AI Cc: TJM@COLUMBIA-20 Having an undergrad degree in music and working toward a graduate degree in CS, I'm very interested in any results you get from your posting. I've been toying with the idea of working on a music-AI interface, but haven't pinned down anything specific yet. What is your research concerned with? --ted ------------------------------ Date: 24 Sep 1983 20:27:57-PDT From: Andy Cromarty Subject: Music analysis/generation & AI There are 3 places that immediately come to mind: 1. There is a huge and well-developed (indeed, venerable) computer music group at Stanford. They currently occupy what used to be the old AI Lab. I'm sure someone else will mention them, but if not call Stanford (or send me another note and I'll find a net address you can send mail to for details.) 2. Atari Research is doing a lot of this sort of work -- generation, analysis, etc., both in Cambridge (Mass) and Sunnyvale (Calif.), I believe. 3. Some very good work has come out of MIT in the past few years. 
David Levitt is working on his PhD in this area there, having completed his master's in AI approaches to Jazz improvisation, if my memory serves, and I think William Paseman also wrote his master's on a related topic there. Send mail to LEVITT@MIT-MC for info -- I'm sure he'd be happy to tell you more about his work. asc ------------------------------ Date: Wed 12 Oct 83 09:40:48-PDT From: Ken Laws Subject: Alert - September CACM The September CACM contains the following interesting items: A clever cover graphically illustrating the U.S. and Japanese approaches to the Fifth Generation. A Harper and Row ad (without prices) including Touretzky's LISP: A Gentle Introduction to Symbolic Computation and Eisenstadt and O'Shea's Artificial Intelligence: Tools, Techniques and Applications. [AIList would welcome reviews.] An editorial by Peter J. Denning on the manifest destiny of AI to succeed because the concept is easily grasped, credible, expected to succeed, and seen as an improvement. An introduction and three articles about the Fifth Generation, Japanese management, the Japanese effort, and MCC. A report on BELLE's slim victory in the 13th N.A. Computer Chess Championship. A note on the sublanguages (i.e., natural restricted languages) conference at NYU next January. A note on DOD's wholesale adoption of Ada. -- Ken Laws ------------------------------ Date: Wed 12 Oct 83 09:24:34-PDT From: Ken Laws Subject: Zadeh Syllogism Lotfi Zadeh used a syllogism yesterday that was new to me. To paraphrase slightly: Cheap apartments are rare and highly sought. Rare and highly sought objects are expensive. --------------------------------------------- Cheap apartments are expensive. I suppose any reasonable system will conclude that cheap apartments cannot exist, which may in fact be the case.
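[The trap in the syllogism can be caricatured in a few lines of code. The sketch below is not Zadeh's actual fuzzy calculus; it is a deliberately naive min-based chaining rule, with invented membership values, showing how a reasoner that chains two individually plausible fuzzy rules ends up endorsing "cheap apartments are expensive":]

```python
# Deliberately naive fuzzy rule chaining, NOT Zadeh's actual calculus.
# All membership values below are invented for illustration.

def chain(premise: float, *rules: float) -> float:
    """Min-based chaining: the conclusion can be no truer than the
    weakest link in the chain of premise and rules."""
    return min(premise, *rules)

cheap = 0.9                # "this apartment is cheap"
cheap_implies_rare = 0.8   # "cheap apartments are rare and highly sought"
rare_implies_costly = 0.8  # "rare, highly sought things are expensive"

expensive = chain(cheap, cheap_implies_rare, rare_implies_costly)
print(expensive)           # 0.8 -- the "cheap" apartment comes out "expensive"
```

[A system that also carries the constraint that "cheap" and "expensive" exclude each other would then, as suggested above, conclude that cheap apartments cannot exist.]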
-- Ken Laws ------------------------------ Date: Wed 12 Oct 83 10:20:57-PDT From: Ken Laws Reply-to: AIList-Request@SRI-AI Subject: Usenet Submissions It has come to my attention that I may be failing to distribute some Usenet-originated submissions back to Usenet readers. If this is true, I apologize. I have not been simply ignoring submissions; if you haven't heard from me, the item was distributed to the Arpanet. The problem involves the Article-I.D. field in Usenet-originated messages. The gateway software (maintained by Knutsen@SRI-UNIX) ignores digest items containing this keyword so that messages originating from net.ai will not be posted back to net.ai. Unfortunately, messages sent directly to AIList instead of to net.ai also contain this keyword. I have not been stripping it out, and so the submissions have not been making it back to Usenet. I will try to be more careful in the future. Direct AIList contributors who want to be sure I don't slip should begin their submissions with a "strip ID field" comment. Even a "Dear Moderator," might trigger my editing instincts. I hope to handle direct submissions correctly even without prompting, but the visible distinction between the two message types is slight. -- Ken Laws ------------------------------ Date: Wed 12 Oct 83 10:04:03-PDT From: Ken Laws Reply-to: AIList-Request@SRI-AI Subject: Seminar Notices There have been a couple of net.ai requests lately that seminar notices be dropped, plus a strong request that they be continued. I would like to make a clear policy statement on this matter. Anyone who wishes to discuss it further may write to AIList-Request@SRI-AI; I will attempt to compile opinions or moderate the discussion in a reasonable manner. Strictly speaking, AIList seldom prints "seminar notices". Rather, it prints abstracts of AI-related talks.
The abstract is the primary item; the fact that the speaker is graduating or out "selling" is secondary; and the possibility that AIList readers might attend is tertiary. I try to distribute the notices in a timely fashion, but responses to my original query were two-to-one in favor of the abstracts even when the talk had already been given. The abstracts have been heavily weighted in favor of the Bay Area; some readers have taken this to be provincialism. Instead, it is simply the case that Stanford, Hewlett-Packard, and occasionally SRI are the only sources available to me that provide abstracts. Other sources would be welcome. In the event that too many abstracts become available, I will institute rigorous screening criteria. I do not feel the need to do so at this time. I have passed up database, math, and CS abstracts because they are outside the general AI and data analysis domain of AIList; others might disagree. I have included some borderline seminars because they were the first of a series; I felt that the series itself was worth publicizing. I can't please all of the people all of the time, but your feedback is welcome to help me keep on course. At present, I regard the abstracts as one of AIList's strengths. -- Ken Laws ------------------------------ Date: 11 Oct 83 16:30:27 PDT (Tuesday) From: Kluger.PA@PARC-MAXC.ARPA Reply-to: Kluger.PA@PARC-MAXC.ARPA Subject: HP Computer Colloquium 10/13/83 Piero P. Bonissone Corporate Research and Development General Electric Corporation DELTA: An Expert System for Troubleshooting Diesel Electric Locomotives The a priori information available to the repair crew is a list of "symptoms" reported by the engine crew. More information can be gathered in the "running repair" shop, by taking measurements and performing tests provided that the two-hour time limit is not exceeded.
A rule-based expert system, DELTA (Diesel Electric Locomotive Troubleshooting Aid), has been developed at the General Electric Corporate Research and Development Laboratories to guide in the repair of partially disabled electric locomotives. The system enforces a disciplined troubleshooting procedure which minimizes the cost and time of the corrective maintenance, allowing detection and repair of malfunctions in the two-hour window allotted to the service personnel in charge of those tasks. A prototype system has been implemented in FORTH, running on a Digital Equipment VAX 11/780 under VMS, on a PDP 11/70 under RSX-11M, and on a PDP 11/23 under RSX-11M. This system contains approximately 550 rules, partially representing the knowledge of a Senior Field Service Engineer. The system is provided with graphical/video capabilities which can help the user in locating and identifying locomotive components, as well as illustrating repair procedures. Although the system only contains a limited number of rules (550), it covers, in a shallow manner, a wide breadth of the problem space. The number of rules will soon be raised to approximately 1200 to cover, with increased depth, a larger portion of the problem space. Thursday, October 13, 1983 4:00 PM Hewlett Packard Stanford Division Labs 5M Conference room 1501 Page Mill Rd Palo Alto, CA 9430 ** Be sure to arrive at the building's lobby ON TIME, so that you may be escorted to the meeting room.
Natesa Sridharan Title: TAXMAN Area(s): Artificial intelligence (knowledge representation), legal reasoning 2:45-3:00 Prof. Natesa Sridharan Title: Artificial Intelligence and Parallelism Area(s): Artificial intelligence, parallelism 3:00-3:15 Prof. Saul Amarel Title: Problem Reformulations and Expertise Acquisition; Theory Formation Area(s): Artificial intelligence 3:15-3:30 Prof. Michael Grigoriadis Title: Large Scale Mathematical Programming; Network Optimization; Design of Computer Networks Area(s): Computer networks 3:30-3:45 Prof. Robert Vichnevetsky Title: Numerical Solutions of Hyperbolic Equations Area(s): Numerical analysis 3:45-4:00 Prof. Martin Dowd Title: P~=NP Area(s): Computational complexity 4:00-4:15 Prof. Ann Yasuhara Title: Notions of Complexity for Trees, DAGs, and subsets of {0,1}* Area(s): Computational complexity COFFEE AND DONUTS AT 1:30 ------- Mail-From: LAWS created at 12-Oct-83 09:11:56 Mail-From: LOUNGO created at 11-Oct-83 13:48:35 Date: 11 Oct 83 13:48:35 EDT From: LOUNGO@RUTGERS.ARPA Subject: colloquium To: BBOARD@RUTGERS.ARPA cc: pettY@RUTGERS.ARPA, lounGO@RUTGERS.ARPA ReSent-date: Wed 12 Oct 83 09:11:56-PDT ReSent-from: Ken Laws ReSent-to: ailist@SRI-AI.ARPA Computer Science Faculty Research Colloquia Date: Friday, October 14, 1983 Time: 2:00-4:15 Place: Room 705, Hill Center, Busch Campus Schedule: 2:00-2:15 Prof. Tom Mitchell Title: Machine Learning and Artificial Intelligence Area(s): Artificial intelligence 2:15-2:30 Prof. Louis Steinberg Title: An Artificial Intelligence Approach to Computer-Aided Design for VLSI Area(s): Artificial intelligence, computer-aided design, VLSI 2:30-2:45 Prof. Donald Smith Title: Debugging VLSI Designs Area(s): Artificial intelligence, computer-aided design, VLSI 2:45-3:00 Prof. Apostolos Gerasoulis Title: Numerical Solutions to Integral Equations Area(s): Numerical analysis 3:00-3:15 Prof.
Alexander Borgida Title: Applications of AI to Information Systems Development Area(s): Artificial intelligence, databases, software engineering 3:15-3:30 Prof. Naftaly Minsky Title: Programming Environments for Evolving Systems Area(s): Software engineering, databases, artificial intelligence 3:30-3:45 Prof. William Steiger Title: Random Algorithms Area(s): Analysis of algorithms, numerical methods, non-numerical methods 3:45-4:00 4:00-4:15 Computer Science Faculty Research Colloquia Date: Thursday, October 20, 1983 Time: 2:00-4:15 Place: Room 705, Hill Center, Busch Campus Schedule: 2:00-2:15 Prof. Thomaz Imielinski Title: Relational Databases and AI; Logic Programming Area(s): Databases, artificial intelligence 2:15-2:30 Prof. David Rozenshtein Title: Nice Relational Databases Area(s): Databases, data models 2:30-2:45 Prof. Chitoor Srinivasan Title: Expert Systems that Reason About Action with Time Area(s): Artificial intelligence, knowledge-based systems 2:45-3:00 Prof. Gerald Richter Title: Numerical Solutions to Partial Differential Equations Area(s): Numerical analysis 3:00-3:15 Prof. Irving Rabinowitz Title: - To be announced - Area(s): Programming languages 3:15-3:30 Prof. Saul Levy Title: Distributed Computing Area(s): Computing, computer architecture 3:30-3:45 Prof. Yehoshua Perl Title: Sorting Networks, Probabilistic Parallel Algorithms, String Matching Area(s): Design and analysis of algorithms 3:45-4:00 Prof. Marvin Paull Title: Algorithm Design Area(s): Design and analysis of algorithms 4:00-4:15 Prof.
Barbara Ryder
           Title: Incremental Data Flow Analysis
           Area(s): Design and analysis of algorithms, compiler optimization

COFFEE AND DONUTS AT 1:30

------------------------------

End of AIList Digest
********************
13-Oct-83 10:29:40-PDT,13849;000000000001
Mail-From: LAWS created at 13-Oct-83 10:24:59
Date: Thursday, October 13, 1983 10:13AM
From: AIList Moderator Kenneth Laws
Reply-to: AIList@SRI-AI
US-Mail: SRI Int., 333 Ravenswood Ave., Menlo Park, CA 94025
Phone: (415) 859-6467
Subject: AIList Digest V1 #76
To: AIList@SRI-AI

AIList Digest            Thursday, 13 Oct 1983      Volume 1 : Issue 76

Today's Topics:
  Intelligent Front Ends - Request,
  Finance - IntelliGenetics,
  Fuzzy Logic - Zadeh's Paradox,
  Publication - Government Reviews
----------------------------------------------------------------------

Date: Thursday, 13-Oct-83 12:04:24-BST
From: BUNDY HPS (on ERCC DEC-10)
Reply-to: bundy@rutgers.arpa
Subject: Request for Information on Intelligent Front Ends

The UK government has set up the Alvey Programme as the UK answer to the Japanese 5th Generation Programme.  One part of that Programme has been to identify and promote research in a number of 'themes'.  I am the manager of one such theme - on 'Intelligent Front Ends' (IFE).  An IFE is defined as follows:

"A front end to an existing software package, for example a finite element package, a mathematical modelling system, which provides a user-friendly interface (a "human window") to packages which, without it, are too complex and/or technically incomprehensible to be accessible to many potential users.  An intelligent front end builds a model of the user's problem through user-oriented dialogue mechanisms based on menus or quasi-natural language, which is then used to generate suitably coded instructions for the package."

One of the theme activities is to gather information about IFEs, for instance: useful references and short descriptions of available tools.
If you can supply such information then please send it to BUNDY@RUTGERS.  Thanks in advance.

        Alan Bundy

------------------------------

Date: 12 Oct 83 0313 PDT
From: Arthur Keller
Subject: IntelliGenetics

[Reprinted from the SU-SCORE bboard.]

From Tuesday's SF Chronicle (page 56):

"IntelliGenetics Inc., Palo Alto, has filed with the Securities and Exchange Commission to sell 1.6 million common shares in late November.  The issue, co-managed by Ladenburg, Thalmann & Co. Inc. of New York and Freehling & Co. of Chicago, will be priced between $6 and $7 a share.  IntelliGenetics provides artificial intelligence based software for use in genetic engineering and other fields."

------------------------------

Date: Thursday, 13-Oct-83 16:00:01-BST
From: RICHARD HPS (on ERCC DEC-10)
Reply-to: okeefe.r.a.
Subject: Zadeh's apartment paradox

The resolution of the paradox lies in realising that "cheap apartments are expensive" is not contradictory.  "cheap" refers to the cost of maintaining (rent, bus fares, repairs) the apartment and "expensive" refers to the cost of procuring it.  The fully stated theorem is

    \/x apartment(x) & low(upkeep(x)) => difficult_to_procure(x)
    \/x difficult_to_procure(x) => high(cost_of_procuring(x))
hence
    \/x apartment(x) & low(upkeep(x)) => high(cost_of_procuring(x))

where "low" and "high" can be as fuzzy as you please.  A reasoning system should not conclude that cheap flats don't exist, but rather that the axioms it has been given are inconsistent with the assumption that they do.  Sooner or later you are going to tell it "Jones has a cheap flat", and then it will spot the flawed axioms.

[I can see your point that one might pay a high price to procure an apartment with a low rental.  There is an alternate interpretation which I had in mind, however.  The paradox could have been stated in terms of any bargain, specifically one in which upkeep is not a factor.  One could conclude, for instance, that a cheap meal is expensive.
My own resolution is that the term "rare" (or "rare and highly sought") must be split into subconcepts corresponding to the cause of rarity.  When discussing economics, one must always reason separately about economic rarities such as rare bargains.  The second assertion in the syllogism then becomes "rare and highly sought objects other than rare bargains are (Zadeh might add 'usually') expensive", or "rare and highly sought objects are either expensive or are bargains".  -- Ken Laws ]

------------------------------

Date: Thu 13 Oct 83 03:38:21-CDT
From: Werner Uhrig
Subject: Re: Zadeh Syllogism

    Expensive apartments are not highly sought.
    Items not in demand are cheap.
    -> Expensive apartments are cheap.

or

    The higher the price, the lower the demand.
    The lower the demand, the lower the price.
    -> The higher the price, the lower the price.

ergo ??  garbage in, garbage out!  Why am I thinking of Reaganomics right now ????

Werner  (UUCP: { ut-sally , ut-ngp } !utastro!werner  ARPA: werner@utexas-20)

PS: at this time of the day, one gets the urge to voice "weird" stuff ...
-------

[The first form is as persuasive as the original syllogism.  The second seems to be no more than a statement of negative feedback.  Whether the system is stable depends on the nature of the implied driving forces.  It seems we are now dealing with a temporal logic.  An example of an unstable system is:

    The fewer items sold, the higher the unit price must be.
    The higher the price, the fewer the items sold.
    --------------------------------------------------------
    Bankruptcy.

-- KIL]

------------------------------

Date: Wed, 12 Oct 83 13:16 PDT
From: GMEREDITH.ES@PARC-MAXC.ARPA
Subject: Sensitivity Issue and Self-Awareness

I can understand the concern of researchers about censorship.
However, having worked with an agency which spent time extracting information of a classified nature from unclassified or semi-secure sources, I have to say that people not trained in such pursuits are usually very poor judges of the difference between necessary efforts to curb the flow of classified information and "censorship".  I can also guarantee that this country's government is not alone in knowing how to misuse the results of research carried out with the most noble of intents.

Next, to the subject of self-awareness.  The tendency of an individual to see his/her corporal self as distinct from the *I* experience, or to see others as robots or a kind of illusion, is sufficient to win a tag of 'schizophrenic' from any psychiatrist and various other negative reactions from those involved in other schools of the psychological community.  Beyond that, the above tendencies make relating to 'real' world phenomena very difficult.  That semi coming around the curve will continue to follow through on the illusion of having smashed those just recently discontinued illusions in the on-coming car.

Guy

------------------------------

Date: Wed 12 Oct 83 00:07:15-PDT
From: David Rogers
Subject: Government Reviews of Basic Research

I must disagree with Frank Adrian, who commented in a previous digest that "I urge everyone to boycott this conference" and other conferences with this requirement.  The progress of science should not be halted due to some government ruling, especially since an attempted boycott would have little positive and (probably) much negative effect.  Assuming that all of the 'upstanding' scientists participated, is there any reason to think that the government couldn't find less discerning researchers more than happy to accept grant money?  Eric (sorry, no last name) is preoccupied with the fact that government 'paid' for the research; aren't "we" the people the real owners, in that case?
Or can there be real owners of basic knowledge?  As I recall, the patent office has ruled that algorithms are unpatentable and thus inherently public domain.  The control of ideas has been an elusive goal for many governments, but even so, it is rare for a government to try to claim ownership of an idea as a justification for restriction; outside of the military domain, this seems to be a new one...

As a scientist, I believe that the world and humanity will gain wisdom and insight through research, and that this will eventually enable us to end war, hunger, ignorance, whatever.  Other forces in the world have different, more short-term goals for our work; this is fine, as long as the long-term reasons for scientific research are not sacrificed.  Sure, they 'paid' for the results of our short-term goals, but we should never allow that to blind us to the real reason for working in AI, and *NO-ONE* can own that.  So I'll take government money (if they offer me any after this diatribe!) and work on various systems and schemes, but I'll fight any attempt to nullify the long-term goals I'm really working for.  I feel these new restrictions are detrimental to the long-term goals of scientific research, but currently, I'm going with things here... we're the best in the world (sigh) and I plan on fighting to keep it that way.

David Rogers
DRogers@SUMEX-AIM.ARPA

------------------------------

Date: Wed, 12 Oct 83 10:26:28 EDT
From: Morton A. Hirschberg
Subject: Flaming Mad

I have refrained from reflaming since I sent the initial conference announcement on "Intelligent Systems and Machines."  First, the conference is not being sponsored by the US Government.  Second, many papers may be submitted by those affected by the security release, and it seemed necessary to include this as part of the announcement.  Third, I attended the conference at Oakland earlier this year and it was a super conference.
Fourth, you may bite your nose to spite your face if you as an individual do not want to submit a paper or attend, but you are not doing much service to those sponsoring the conference, who are true scientists, by urging boycotts.  Finally, below is a little of my own philosophy.

I have rarely seen science or the application of science (engineering) benefit anyone anywhere without an associated cost (often called an investment).  The costs are usually borne by the investors, and if the end product is a success then costs are passed on to consumers.  I can find few examples where discoveries in science or in the name of science have not benefited the discoverer and/or his heirs, or the investors.  Many of our early discoveries were made by men of considerable wealth who could dally with theory and experimentation (and the arts) and science using their own resources.  We may have gained a heritage, but they gained a profit.  What seems to constitute a common heritage is either something that has been around for so long that it is in the public domain, or a romanticized fiction (e.g. Paul Muni playing Pasteur).  Simultaneous discovery has been responsible for many theories being in the public domain, as well as leading to products which were hotly contested in lawsuits (e.g. did Bell really invent the telephone, or Edison the movie camera?).  Watson in his book "The Double Helix" gives a clear picture of what a typical scientist may really be, and it is not Arrowsmith.  I did not see Watson refuse his Nobel because the radiologist did not get a prize.

Government, and here for historical reasons we must also include state and church, has always had a role in the sciences.  That role is one that governments cannot always be proud of (Galileo, Rachel Carson, Sakharov).  The manner in which the United States Government conducts business gives great latitude to scientists and to investors.
When the US Government buys something, it should be theirs, just as when you as an individual buy something.  As such, it is then the purview of the US Government as to what to do with the product.  Note the US Government often buys with limited rights of ownership and distribution.

It has been my observation, having worked in private industry, for a university, and now for the government, that relations among the three have not been optimal and in many cases not mutually rewarding.  This is a great concern of mine and many of my colleagues.  I would like a role in changing relations among the three and do work toward that as a personal goal.  This includes not referring to academicians as eggheads or charlatans; industrialists as grubby profiteers; and government employees as empty-headed bureaucrats.  I recommend that young flamers try to maintain a little naivete as they mature, but not so much that they are ignorant of reality.  Every institution has its structure, and by and large one works within the structure to earn a living, or is free to move on, or can work to change that structure.  One possible change is for the US Government to conduct business the way the Japanese do (at least in certain cases).  Maybe AI is the place to start.

I also notice that mail on the net comes across much harsher than it is intended to be.  This can be overcome by being as polite as possible and being more verbose.  In addition, one can read one's mail more than once before flaming.
Mort

------------------------------

End of AIList Digest
********************
14-Oct-83 09:59:06-PDT,16789;000000000001
Mail-From: LAWS created at 14-Oct-83 09:44:01
Date: Friday, October 14, 1983 9:36AM
From: AIList Moderator Kenneth Laws
Reply-to: AIList@SRI-AI
US-Mail: SRI Int., 333 Ravenswood Ave., Menlo Park, CA 94025
Phone: (415) 859-6467
Subject: AIList Digest V1 #77
To: AIList@SRI-AI

AIList Digest             Friday, 14 Oct 1983       Volume 1 : Issue 77

Today's Topics:
  Natural Language - Semantic Chart Parsing & Macaroni & Grammars,
  Games - Rog-O-Matic,
  Seminar - Nau at UMaryland, Diagnostic Problem Solving
----------------------------------------------------------------------

Date: Wednesday, 12 October 1983 14:01:50 EDT
From: Robert.Frederking@CMU-CS-CAD
Subject: "Semantic chart parsing"

I should have made it clear in my previous note on the subject that the phrase "semantic chart parsing" is a name I've coined to describe a parser which uses the technique of syntactic chart parsing, but includes semantic information right from the start.  In a way, it's an attempt to reconcile Schank-style immediate semantic interpretation with syntactically oriented parsing, since both sources of information seem worthwhile.

------------------------------

Date: Wednesday, 12-Oct-83 17:52:33-BST
From: RICHARD HPS (on ERCC DEC-10)
Reply-to: okeefe.r.a.
Subject: Natural Language

There was rather more inflammation than information in the exchanges between Dr Pereira and Whats-His-Name-Who-Butchers-Leprechauns.  Possibly it's because I've only read one or two [well, to be perfectly honest, three] papers on PHRAN and the others in that PHamily, but I still can't see why it is that their data structures aren't a grammar.  Admittedly they don't look much like rules in an XG, but then rules in an XG don't look much like an ATN either, and no-one has qualms about calling ATNs grammars.
Can someone please explain, in words suitable for a 16-year-old child, what makes phrasal analysis so different from

    XGs (Extraposition grammars; include DCGs in this)
    ATNs
    Marcus-style parsers
    template-matching

that it is hailed as "solving" the parsing problem?  I have written grammars for tiny fragments of English in DCG, ATN, and PIDGIN styles [the adverbs get me every time].  I am not a linguist, and the coverage of these grammars was ludicrously small.  So my claim that I found it vastly easier to extend and debug the DCG version [DCGs are very like EAGs] will probably be dismissed with the contempt it deserves.  Dr Pereira has published his parser, and in other papers has published an XG interpreter.  I believe a micro-PHRAN has been published, and I would be grateful for a pointer to it.  Has anyone published a phrasal-analysis grimoire (if the term "grammar" doesn't suit) with say >100 "things" (I forget the right name for the data structures), and how can I get a copy?

People certainly can accept ill-formed sentences.  But they DO have quite definite notions of what is a well-formed sentence and what is not.  I was recently in a London Underground station, and saw a Telecom poster.  It was perfectly obvious that it was written by an Englishman trying to write in American.  It finally dawned on me that he was using American vocabulary and English syntax.  At first sight the poster read easily enough, and the meaning came through.  But it was sufficiently strange to retain my attention until I saw what was odd about it.  Our judgements of grammaticality are as sensitive as that.  [I repeat, I am no linguist.  I once came away from a talk by Gazdar saying to one of my fellow students, who was writing a parser: "This extraposition, I don't believe people do that."]  I suggest that people DO learn grammars, and what is more, they learn them in a form that is not wholly unlike [note the caution] DCGs or ATNs.
We know that DCGs are learnable, given positive and negative instances.  [Oh yes, before someone jumps up and down and says that children don't get negative instances, that is utter rubbish.  When a child says something and is corrected by an adult, is that not a negative instance?  Of course it is!]  However, when people APPLY grammars for parsing, I suggest that they use repair methods to match what they hear against what they expect.  [This is probably frames again.]  These repair methods range all the way from subconscious signal cleaning [coping with say a lisp] to fully conscious attempts to handle "Colourless Green ideas sleep furiously".  [Maybe parentheses like this are handled by a repair mechanism?]  If this is granted, some of the complexity required to handle say ellipsis would move out of the grammar and into the repair mechanisms.  But if there is anything we know about human psychology, it is that people DO have repair mechanisms.  There is a lot of work on how children learn mathematics [not just Brown & co], and it turns out that children will go to extraordinary lengths to patch a buggy hack rather than admit they don't know.  So the fact that people can cope with ungrammatical sentences is not evidence against grammars.

As evidence FOR grammars, I would like to offer Macaroni.  Not the comestible, the verse form.  Strictly speaking, Macaroni is a mixture of the vernacular and Latin, but since it is no longer popular we can allow any mixture of languages.  The odd thing about Macaroni is that people can judge it grammatical or ungrammatical, and what is more, can agree about their judgements as well as they can agree about the vernacular or Latin taken separately.  My Latin is so rusty there is no iron left, so here is something else.

    [Prolog is]   [ho protos logos]   [en programmation logiciel]
      English           Greek                   French

This of course is (NP copula NP) PP, which is admissible in all three languages, and the individual chunks are well-formed in their several languages.
The main thing about Macaroni is that when two languages have a very similar syntactic class, such as NP, a sentence which starts off in one language may rewrite that category in the other language, and someone who speaks both languages will judge it acceptable.  Other ways of dividing up the sentence are not judged acceptable, e.g.

    Prolog estin ho protos mot en logic programmation

is just silly.  S is very similar in most languages, which would account for the acceptability of complete sentences in another language.  N is pretty similar too, and we feel no real difficulty with single isolated words from other languages like "chutzpa" or "pyjama" or "mana".  When the syntactic classes are not such a good match, we feel rather more uneasy about the mixture.  For example, "[ka ora] [teenei tangata]" and "[these men] [are well]" both say much the same thing, but because the Maaori nominal phrase and the English noun phrase aren't all that similar, "[teenei tangata] [are well]" seems strained.

The fact that bilingual people have little or no difficulty with Macaroni is just as much a fact as the fact that people in general have little difficulty with mildly malformed sentences.  Maybe they're the same fact.  But I think the former deserves as much attention as the latter.  Does anyone have a parser with a grammar for English and a grammar for [UK -> French or German; Canada -> French; USA -> Spanish] which use the same categories as far as possible?  Have a go at putting the two together, and try it on some Macaroni.  I suspect that if you have some genuinely bilingual speakers to assist you, you will find it easier to develop/correct the grammars together than separately.  [This does not hold for non-related languages.  I would not expect English and Japanese to mix well, but then I don't know any Japanese.  Maybe it's worth trying.]
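The Macaroni constraint described above - language may switch only at the boundaries of syntactic categories the languages share - can be sketched mechanically.  The following is a toy illustration only, not PHRAN, a DCG, or any published grammar; all vocabulary, phrase rules, and sentence patterns are invented for the example:

```python
# Toy "Macaroni" recognizer.  A pre-bracketed sentence is judged
# acceptable when (1) each bracketed chunk is well-formed in a SINGLE
# language, and (2) the sequence of chunk categories fits a sentence
# pattern shared by all the languages.  Lexicons and rules are
# illustrative assumptions, not real grammars.

LEX = {
    "en": {"prolog": "N", "is": "Cop", "the": "Det", "first": "Adj", "word": "N"},
    "el": {"ho": "Det", "protos": "Adj", "logos": "N", "estin": "Cop"},
    "fr": {"en": "P", "programmation": "N", "logiciel": "N", "mot": "N"},
}

PHRASE_RULES = {                       # word-category string -> phrase category
    ("N",): "NP", ("Det", "N"): "NP", ("Det", "Adj", "N"): "NP",
    ("Cop",): "Cop", ("P", "N", "N"): "PP",
}

SENTENCE_PATTERNS = {("NP", "Cop", "NP", "PP"), ("NP", "Cop", "NP")}

def chunk_category(chunk):
    """Phrase category of a monolingual chunk, or None if mixed/unknown."""
    for lex in LEX.values():
        if all(word in lex for word in chunk):          # wholly in one language?
            return PHRASE_RULES.get(tuple(lex[w] for w in chunk))
    return None

def macaroni_ok(chunks):
    cats = tuple(chunk_category(c) for c in chunks)
    return None not in cats and cats in SENTENCE_PATTERNS

# The acceptable example from the message: switches at constituent boundaries.
good = [["prolog"], ["is"], ["ho", "protos", "logos"],
        ["en", "programmation", "logiciel"]]

# The "just silly" version: languages mixed inside the chunks.
bad = [["prolog"], ["estin"], ["ho", "protos", "mot"],
       ["en", "logic", "programmation"]]

print(macaroni_ok(good), macaroni_ok(bad))   # True False
```

The mixed NP "ho protos mot" fails because no single lexicon contains all three words, which is exactly the intuition above: the smallest constituent containing a language switch must itself be a shared category, realized whole in one language.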
------------------------------

Date: Thu 13 Oct 83 11:07:26-PDT
From: WYLAND@SRI-KL.ARPA
Subject: Dave Curry's request for a Simple English Grammar

I think the book "Natural Language Information Processing" by Naomi Sager (Addison-Wesley, 1981) may be useful.  This book represents the results of the Linguistic String project at New York University, and Dr. Sager is its director.  The book contains a BNF grammar of 400 or so rules for parsing English sentences.  It has been applied to medical text, such as radiology reports and narrative documents in patient records.

Dave Wyland
WYLAND@SRI

------------------------------

Date: 11 Oct 83 19:41:39-PDT (Tue)
From: harpo!utah-cs!shebs @ Ucb-Vax
Subject: Re: WANTED: Simple English Grammar - (nf)
Article-I.D.: utah-cs.1994

(Oh no, here he goes again!  and with his water-cooled keyboard too!)

Yes, analysis of syntax alone cannot possibly work - as near as I can tell, syntax-based parsers need an enormous amount of semantic processing, which seems to be dismissed as "just pragmatics" or whatever.  I'm not an "in" member of the NLP community, so I haven't been able to find out the facts, but I have a bad feeling that some of the well-known NLP systems are gigantic hacks, whose syntactic analyzer is just a bag hanging off the side, but about which all the papers are written.  Mind you, this is just a suspicion, and I welcome any disproof...

stan the l.h.
utah-cs!shebs

------------------------------

Date: 7 Oct 83 9:54:21-PDT (Fri)
From: decvax!linus!vaxine!wjh12!foxvax1!brunix!rayssd!asa @ Ucb-Vax
Subject: Re: WANTED: Simple English Grammar - (nf)
Article-I.D.: rayssd.187

date: 10/7/83

Yesterday I sent a suggestion that you look at Winograd's new book on syntax.  Upon reflection, I realized that there are several aspects of syntax not clearly stated therein.  In particular, there is one aspect which you might wish to think about, if you are interested in building models and using the 'expectations' approach.
This aspect has to do with the synergism of syntax and semantics.  The particular case which occurred to me is an example of the specific ways that Latin grammar terminology is inappropriate for English.  In English, there is no 'present' tense in the intuitive sense of that word.  The stem of the verb (which Winograd calls the 'infinitive' form, in contrast to the traditional use of this term to signify the 'to+stem' form) actually encodes the semantic concept of 'indefinite habitual'.  Thus, to say only 'I eat.' sounds peculiar.  When the stem is used alone, we expect a qualifier, as in 'I eat regularly', or 'I eat very little', or 'I eat every day'.  In this framework, there is a connection with the present, in the sense that the process described is continuous, has existed in the past, and is expected to continue in the future.  Thus, what we call the 'present' is really a 'modal' form, and might better be described as the 'present state of a continuing habitual process'.  If we wish to describe something related to our actual state at this time, we use what I think of as the 'actual present', which is 'I am eating'.

Winograd hints at this, especially in Appendix B, in discussing verb forms.  However, he does not go into it in detail, so it might help you understand better what's happening if you keep in mind the fact that there exist specific underlying semantic functions being implemented, which are in turn based on the type of information to be conveyed and the subtlety of the distinctions desired.  Knowing this at the outset may help you decide the elements you wish to model in a simplified program.  It will certainly help if you want to try the expectations technique.  This is an ideal situation in which to try a 'blackboard' type of expert system, where the sensing, semantics, and parsing/generation engines operate in parallel.  Good luck!
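The blackboard organization suggested above can be reduced to a few lines.  In this sketch, three hypothetical knowledge sources (sensing, parsing, semantics) communicate only through a shared blackboard and fire whenever the entries they need have appeared; the two rules (bare stem = indefinite habitual, be+ing = actual present) are the distinction argued for in the message, shrunk to a toy:

```python
# Minimal blackboard sketch: independent knowledge sources share no state
# except the blackboard dictionary.  Each fires only when its inputs are
# present and its output is absent.  All rules here are toy assumptions.

def sensing(bb):
    if "words" not in bb and "input" in bb:
        bb["words"] = bb["input"].lower().rstrip(".").split()

def parsing(bb):
    if "form" not in bb and "words" in bb:
        w = bb["words"]
        if len(w) >= 3 and w[1] == "am" and w[2].endswith("ing"):
            bb["form"] = "be+ing"        # progressive construction
        else:
            bb["form"] = "stem"          # bare verb stem

def semantics(bb):
    if "aspect" not in bb and "form" in bb:
        # the distinction argued for above: bare stem encodes an
        # indefinite habitual, be+ing the "actual present"
        bb["aspect"] = {"be+ing": "actual present",
                        "stem": "indefinite habitual"}[bb["form"]]

def run(blackboard, sources=(sensing, parsing, semantics)):
    """Fire knowledge sources until the blackboard stops changing."""
    changed = True
    while changed:
        before = dict(blackboard)
        for knowledge_source in sources:
            knowledge_source(blackboard)
        changed = blackboard != before
    return blackboard

print(run({"input": "I am eating."})["aspect"])      # actual present
print(run({"input": "I eat regularly."})["aspect"])  # indefinite habitual
```

The point of the architecture is that the control loop knows nothing about the domain: any engine that reads and writes blackboard entries can be added without touching the others, which is what lets sensing, semantics, and parsing run "in parallel" conceptually.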
A final note: if you would like to explore further a view of grammar which totally dispenses with the terms and concepts of Latin grammar, you might read "The Languages of Africa" (I think that's the title), by William Welmer.  By the way!  Does anyone out there know if Welmer ever published his fascinating work on the memory of colors as a function of time?  Did it at least get stored in the archives at Berkeley?

Asa Simmons
rayssd!asa

------------------------------

Date: Thursday, 13 October 1983 22:24:18 EDT
From: Michael.Mauldin@CMU-CS-CAD
Subject: Total Winner

[ASCII-art banner: "Total Winner"]

Well, thanks to the modern miracles of parallel processing (i.e. using the UUCPNet as one giant distributed processor) Rog-O-Matic became an honest member of the Fighter's guild on October 10, 1983.  This is the fourth total victory for our Heuristic Hero, but the first time he has done so without using a "Magic Arrow".  This comes only a year and two weeks after his first total victory.  He will be two years old on October 19.  Happy Birthday!  Damon Permezel of Waterloo was the lucky user.  Here is his announcement:

- - - - - - - -

Date: Mon, 10 Oct 83 20:35:22 PDT
From: allegra!watmath!dapermezel@Berkeley
Subject: total winner
To: mauldin@cmu-cs-a

It won!  The lucky SOB started out with armour class of 1 and a (-1,0) two handed sword (found right next to it on level 1).  Numerous 'enchant armour' scrolls were found, as well as a +2 ring of dexterity, +1 add strength, and slow digestion, not to mention +1 protection.  Luck had an important part to play, as initial confrontations with 'U's got him confused and almost killed, but for the timely stumbling onto the stairs (while still confused).  A scroll of teleportation was seen to be used to advantage once, while it was pinned between 2 'X's in a corridor.
- - - - - - - -

Date: Thu, 13 Oct 83 10:58:26 PDT
From: allegra!watmath!dapermezel@Berkeley
To: mlm@cmu-cs-cad.ARPA
Subject: log

Unfortunately, I was not logging it.  I did make sure that there were several witnesses to the game, who could verify that it (It?) was a total winner.

- - - - - - - -

The paper is still available; for a copy of "Rog-O-Matic: A Belligerent Expert System", please send your physical address to "Mauldin@CMU-CS-A" and include the phrase "paper request" in the subject line.

Michael Mauldin (Fuzzy)
Department of Computer Science
Carnegie-Mellon University
Pittsburgh, PA 15213
(412) 578-3065, mauldin@cmu-cs-a.

------------------------------

Date: 13 Oct 83 21:35:12 EDT (Thu)
From: Dana S. Nau
Subject: University of Maryland Colloquium

                    University of Maryland
              Department of Computer Science Colloquium

                 Monday, October 24 -- 4:00 PM
             Room 2324 - Computer Science Building

           A Formal Model of Diagnostic Problem Solving

                         Dana S. Nau
                   Computer Science Dept.
                   University of Maryland
                     College Park, Md.

Most expert computer systems are based on production rules, and to some readers the terms "expert computer system" and "production rule system" may seem almost synonymous.  However, there are problem domains for which the usual production rule techniques appear to be inadequate.  This talk presents a useful alternative to rule-based problem solving: a formal model of diagnostic problem solving based on a generalization of the set covering problem, and formalized algorithms for diagnostic problem solving based on this model.  The model and the resulting algorithms have the following features: (1) they capture several intuitively plausible features of human diagnostic inference; (2) they directly address the issue of multiple simultaneous causative disorders; (3) they can serve as a basis for expert systems for diagnostic problem solving; and (4) they provide a conceptual framework within which to view recent work on diagnostic problem solving in general.
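The abstract's central idea - diagnosis as a generalized set-covering problem - can be illustrated with a toy version.  The disorder/manifestation table below is invented, and minimum cardinality stands in here for the model's more general parsimony criteria:

```python
from itertools import combinations

# Toy diagnosis as set covering: each disorder explains a set of
# manifestations, and a diagnosis is a smallest set of disorders that
# jointly covers everything observed.  Note feature (2) of the abstract:
# a diagnosis may contain SEVERAL simultaneous disorders.
# The causal table is illustrative only.

CAUSES = {
    "d1": {"fever", "rash"},
    "d2": {"fever", "cough"},
    "d3": {"rash"},
    "d4": {"cough", "fatigue"},
}

def diagnoses(observed):
    """All minimum-cardinality sets of disorders covering `observed`."""
    disorders = list(CAUSES)
    for size in range(1, len(disorders) + 1):
        covers = [set(combo) for combo in combinations(disorders, size)
                  if observed <= set().union(*(CAUSES[d] for d in combo))]
        if covers:
            return covers      # smallest covers found; stop searching
    return []

print(diagnoses({"fever", "rash", "cough"}))
```

For the observed set {fever, rash, cough} no single disorder suffices, and the call returns the three two-disorder covers {d1, d2}, {d1, d4}, and {d2, d3} - competing explanations that a fuller system would rank with further evidence.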
Coffee and refreshments - Rm. 3316 - 3:30

------------------------------

End of AIList Digest
********************
14-Oct-83 14:40:23-PDT,16594;000000000001
Mail-From: LAWS created at 14-Oct-83 14:39:37
Date: Friday, October 14, 1983 2:25PM
From: AIList Moderator Kenneth Laws
Reply-to: AIList@SRI-AI
US-Mail: SRI Int., 333 Ravenswood Ave., Menlo Park, CA 94025
Phone: (415) 859-6467
Subject: AIList Digest V1 #78
To: AIList@SRI-AI

AIList Digest            Saturday, 15 Oct 1983      Volume 1 : Issue 78

Today's Topics:
  Philosophy - Dedekind & Introspection,
  Rational Psychology - Connectionist Models,
  Creativity - Intuition in Physics,
  Conference - Forth,
  Seminar - IUS Presentation
----------------------------------------------------------------------

Date: 10 Oct 83 11:54:07-PDT (Mon)
From: decvax!duke!unc!mcnc!ncsu!uvacs!mac @ Ucb-Vax
Subject: consciousness, loops, halting problem
Article-I.D.: uvacs.983

With regard to loops and consciousness, consider Theorem 66 of Dedekind's book on the foundations of mathematics, "Essays on the Theory of Numbers", translated 1901.  This is the book where the Dedekind Cut is invented to characterize irrational numbers.

  64. Definition.  A system S is said to be infinite when it is similar to a proper part of itself; in the contrary case S is said to be a finite system.

  66. Theorem.  There exist infinite systems.  Proof.  My own realm of thoughts, i.e. the totality S of all things, which can be objects of my thought, is infinite.  For if s signifies an element of S, then is the thought s', that s can be object of my thought, itself an element of S.  If we regard this as transform phi(s) of the element s, then has the transformation phi of S, thus determined, the property that the transform S' is part of S; and S' is certainly proper part of S, because there are elements of S (e.g. my own ego) which are different from such thought s' and therefore are not contained in S'.
Finally it is clear that if a, b are different elements of S, their transforms a', b' are also different, and therefore the transformation phi is a distinct (similar) transformation.  Hence S is infinite, which was to be proved.

For that matter, net.math seems to be in a loop.  They were discussing the Banach-Tarski paradox about a year ago.

Alex Colvin
ARPA: mac.uvacs@UDel-Relay
CS: mac@virginia
USE: ...uvacs!mac

------------------------------

Date: 8 Oct 83 13:53:38-PDT (Sat)
From: hplabs!hao!seismo!rochester!blenko @ Ucb-Vax
Subject: Re: life is but a dream
Article-I.D.: rocheste.3318

The statement that consciousness is an illusion does not mean it does not or cannot have a concrete realization.  I took the remarks to mean simply that the entire mental machinery is not available for introspection, and in its place some top-level "picture" of the process is made available.  The picture need not reflect the details of internal processing, in the same way that most people's view of a car does not bear much resemblance to its actual mechanistic internals.  For those who may not already be aware, the proposal is not a new one.  I find it rather attractive, admitting my own favorable predisposition towards the proposition that mental processing is computational.

I still think this newsgroup would be more worthwhile if readers adopted a more tolerant attitude.  It seems to be the case that there is nearly always a silly interpretation of someone's contribution; discovering that interpretation doesn't seem to be a very challenging task.
Tom Blenko
blenko@rochester
decvax!seismo!rochester!blenko
allegra!rochester!blenko

------------------------------

Date: 11 Oct 83 9:37:52-PDT (Tue)
From: hplabs!hao!seismo!rochester!gary @ Ucb-Vax
Subject: Re: "Rational Psychology"
Article-I.D.: rocheste.3352

This is in response to John Black's comments, to wit:

> Having a theoretical (or "rational" -- terrible name with all the wrong
> connotations) psychology is certainly desirable, but it does have to make
> some contact with the field it is a theory of.  One of the problems here is
> that the "calculus" of psychology has yet to be invented, so we don't have
> the tools we need for the "Newtonian mechanics" of psychology.  The latest
> mathematical candidate was catastrophe theory, but it turned out to be a
> catastrophe when applied to human behavior.  Perhaps Pereira and Doyle have
> a "calculus" to offer.

This is an issue I (and I think many AI'ers) am particularly interested in; that is, the correspondence between our programs and the actual workings of the mind.  I believe that an *explanatory* theory of behavior will not be at the functional level of correspondence with human behavior.  Theories which are at the functional level are important for pinpointing *what* it is that people do, but they don't get a handle on *how* they do it.  And, I think there are side-effects of the architecture of the brain on behavior that do not show up in functional-level models.  This is why I favor (my favorite model!) connectionist models as being a possible "calculus of Psychology".  Connectionist models, for those unfamiliar with the term, are a version of neural network models developed here at Rochester (with related models at UCSD and CMU) that attempts to bring the basic model unit into line with our current understanding of the information processing capabilities of neurons.  The units themselves are relatively stupid and slow, but have state, and can compute simple functions (not restricted to linear).
The simplicity of the functions is limited only by "gentleman's agreement", as we still really have no idea of the upper limit of neuronal capabilities, and we are guided by what we seem to need in order to accomplish whatever task we set them to. The payoff is that they are highly connected to one another, and can compute in parallel. They are not allowed to pass symbol structures around, and have their output restricted to values in the range 1..10. Thus we feel that they are most likely to match the brain in power. The problem is how to compute with the things! We regard the outcome of a computation to be a "stable coalition", a set of units which mutually reinforce one another. We use units themselves to represent values of parameters of interest, so that mutually compatible values reinforce one another, and mutually exclusive values inhibit one another. These could be the senses of the words in a sentence, the color of a patch in the visual field, or the direction of intended eye movement. The result is something that looks a lot like constraint relaxation. Anyway, I don't want to go on forever. If this sparks discussion or interest, references are available from the U. of R. CS Dept., Rochester, NY 14627. (The biblio. is a TR called "The Rochester Connectionist Papers".) gary cottrell (allegra or seismo)!rochester!gary or gary@rochester ------------------------------ Date: 10 Oct 83 8:00:59-PDT (Mon) From: harpo!eagle!mhuxi!mhuxj!mhuxl!mhuxm!pyuxi!pyuxn!rlr @ Ucb-Vax Subject: Re: RE: Intuition in Physics Article-I.D.: pyuxn.289 > I presume that at birth, one's mind is not predisposed to one or another > of several possible theories of heavy molecule collision (for example.) > Further, I think it unlikely that personal or emotional interaction in > one's "pre-analytic" stage (see anything about developmental psych.) is > likely to bear upon one's opinions about those molecules.
In fact I > find it hard to believe that anything BUT technical learning is likely > to bear on one's intuition about the molecules. One might want to argue > that one's personality might force you to lean towards "aggressive" or > overly complex theories, but I doubt that such effects will lead to > the creation of a theory. Only a rather mild predisposition at best. > In psychology it is entirely different. A person who is aggressive has > lots of reasons to assume everyone else is as well. Or paranoid, or > that rote learning is especially good or bad, or that large dogs are dangerous > or a number of other things that bear directly on one's theories of the > mind. And these biases are acquired from the process of living and are > quite unavoidable. The author believes that, though behavior patterns and experiences in a person's life may affect their viewpoint in psychological studies, this does not apply in "technical sciences" (not the author's phrasing, and not mine either---I just can't think of another term) like physics. It would seem that flashes of "insight" obtained by anyone in a field involving discovery have to be based on both the technical knowledge that the person already has AND the entire life experience up to that point. To oversimplify, if one has never seen a specific living entity (a flower, a specific animal) or witnessed a physical event, or participated in a particular human interaction, one cannot base a proposed scientific model on these things, and these flashes are often based on such analogies to reality. ------------------------------ Date: 9 Oct 83 14:38:45-PDT (Sun) From: decvax!genrad!security!linus!utzoo!utcsrgv!utcsstat!laura @ Ucb-Vax Subject: Re: RE: Intuition in Physics Article-I.D.: utcsstat.1251 Gary, I don't know about why you think about physics, but I know something about why *I* think about physics. You see, I have this deep fondness for "continuous creation" as opposed to "the big bang".
This is too bad for me, since "big bang" appears to be correct, or at any rate, "continuous creation" appears to be *wrong*. Perhaps what is more correct is "bang! sproiinngg.... bang!" or a series of bangs, but this is not the issue. These days, if you ask me to explain the origins of the universe, from a physical point of view I am going to discuss "big bang". I can do this. It just does not have the same emotional satisfaction to me as "c c" but that is too bad for me, I do not go around spreading antiquated theories to people who ask me in good faith for information. But what if the evidence were not all in yet? What if there were an equal number of reasons to believe one or the other? What would I be doing? Talking about continuous creation. I might add a footnote that there was "this other theory ... the big bang theory" but I would not discuss it much. I have that strong an emotional attachment to "continuous creation". You can also read that other great issues in physics and astronomy had their great believers -- there were the great "wave versus particle" theories of light, and the Tycho Brahe cosmology versus the Kepler cosmology, and these days you get similar arguments ... In 50 years, we may all look back and say, well, how silly, everyone should have seen that X, since X is now patently obvious. This will explain why people believe X now, but not why people believed X then, or why people DIDN'T believe X then. Why didn't Tycho Brahe come up with Kepler's theories? It wasn't that Kepler was a better experimenter, for Kepler himself admits that he was a lousy experimenter and Brahe was renowned for having the best instruments in the world, and being the most painstaking in measurements. It wasn't that they did not know each other, for Kepler worked with Brahe, and replaced him as Royal Astronomer, and was familiar with his work before he ever met Brahe...
It wasn't that Brahe was religious and Kepler was not, for it was Kepler that was almost made a minister and studied very hard in Church schools (which literally brought him out of peasantry into the middle class) while Brahe, the rich nobleman, could get away with acts that the church frowned upon (to put it mildly). Yet Kepler was able to think in heliocentric terms, while Brahe, who came so... so close, balked at the idea and put the sun circling the earth while all the other planets circled the sun. Absolutely astonishing! I do not know where these differences came from. However, I have a pretty good idea why continuous creation is more emotionally satisfying for me than "big bang" (though these days I am getting to like "bang! sproing! bang!" as well.) As a child, I ran across the "c c" theory at the same time as I ran across all sorts of the things that interest me to this day. In particular, I recall reading it at the same time that I was doing a long study of myths, of creation myths in particular. Certain myths appealed to me, and certain ones did not. In particular, the myths that centred around the Judaeo-Christian tradition (the one god created the world -- boom!) had almost no appeal to me in those days, since I had utter and extreme loathing for the god in question. (This in turn was based on the discovery that this same wonderful god was the one that tortured and burned millions in his name for the great sin of heresy.) And thus, "big bang" which smacked of "poof! god created" was much less favoured by me at age 8 than continuous creation (no creator necessary). Now that I am older, I have a lot more tolerance for Yahweh, and I do not find it intolerable to believe in the Big Bang. However, it is not as satisfying. Thus I know that some of my beliefs, which in another time could have been essential to my scientific theories and inspirations, are based on an 8-year-old me reading about the witchcraft trials.
It seems likely that somebody out there is furthering science by discovering new theories based on ideas which are equally unscientific. Laura Creighton utzoo!utcsstat!laura ------------------------------ Date: Fri 14 Oct 83 10:50:52-PDT From: WYLAND@SRI-KL.ARPA Subject: FORTH CONVENTION ANNOUNCEMENT

5TH ANNUAL FORTH NATIONAL CONVENTION
October 14-15, 1983
Hyatt Palo Alto, 4920 El Camino Real, Palo Alto, CA 94306

Friday 10/14: 12:00-5:00 Conference and Exhibits
Saturday 10/15: 9:00-5:00 Conference and Exhibits
               7:00 Banquet and Speakers

This FORTH convention includes sessions on:
  Relational Data Base Software - an implementation
  FORTH Based Instruments - implementations
  FORTH Based Expert Systems - GE DELTA system
  FORTH Based CAD system - an implementation
  FORTH Machines - hardware implementations of FORTH
  Pattern Recognition Based Programming System - implementation
  Robotics Uses - Androbot

There are also introductory sessions and sessions on various standards. Entry fee is $5.00 for the sessions and exhibits. The banquet features Tom Frisina, president of Androbot, as the speaker (fee is $25.00). ------------------------------ Date: 13 Oct 1983 1441:02-EDT From: Sylvia Brahm Subject: IUS Presentation [Reprinted from the CMU-C bboard.] George Sperling from NYU and Bell Laboratories will give a talk on Monday, October 17, 3:30 to 5:00 in Wean Hall 5409. Title will be Image Processing and the Logic of Perception. This talk is not a unification but merely the temporal juxtaposition of two lines of research. The logic of perception involves using unreliable, ambiguous information to arrive at a categorical decision. Critical phenomena are multiple stable states (in response to the same external stimulus) and path dependence (hysteresis): the description is potential theory. Neural models with local inhibitory interaction are the antecedents of contemporary relaxation methods.
New (and old) examples are provided from binocular vision and depth perception, including a polemical demonstration of how the perceptual decision of 3D structure in a 2D display can be dominated by an irrelevant brightness cue. Image processing will deal with the practical problem of squeezing American Sign Language (ASL) through the telephone network. Historically, an image (e.g., TV at 4 MHz) has been valued at more than 10^3 speech tokens (e.g., telephone at 3 kHz). With image-processed ASL, the ratio is shown to be approaching unity. Movies to illustrate both themes will be shown. Appointments to speak with Dr. Sperling can be made by calling x3802. ------------------------------ End of AIList Digest ******************** 16-Oct-83 22:25:29-PDT,13078;000000000001 Mail-From: LAWS created at 16-Oct-83 22:23:07 Date: Sunday, October 16, 1983 10:13PM From: AIList Moderator Kenneth Laws Reply-to: AIList@SRI-AI US-Mail: SRI Int., 333 Ravenswood Ave., Menlo Park, CA 94025 Phone: (415) 859-6467 Subject: AIList Digest V1 #79 To: AIList@SRI-AI AIList Digest Monday, 17 Oct 1983 Volume 1 : Issue 79 Today's Topics: AI Societies - Bledsoe Election, AI Education - Videotapes & Rutgers Mini-Talks, Psychology - Intuition & Consciousness ---------------------------------------------------------------------- Date: Fri 14 Oct 83 08:41:39-CDT From: Robert L. Causey Subject: Congratulations Woody! [Reprinted from the UTexas-20 bboard.] Woody Bledsoe has been named president-elect of the American Association for Artificial Intelligence. He will become president in August, 1984. According to the U.T. press release Woody said, "You can't replace the human, but you can greatly augment his abilities." Woody has greatly augmented the computer's abilities. Congratulations!
------------------------------ Date: 12 Oct 83 12:59:24-PDT (Wed) From: ihnp4!hlexa!pcl @ Ucb-Vax Subject: AI (and other) videotapes to be produced by AT&T Bell Laboratories Article-I.D.: hlexa.287 [I'm posting this for someone who does not have access to netnews. Send comments to the address below; electronic mail to me will be forwarded. - PCL] AT&T Bell Laboratories is planning to produce a videotape on artificial intelligence that concentrates on "knowledge representation" and "search strategies" in expert systems. The program will feature a Bell Labs prototype expert system called ACE. Interviews of Bell Labs developers will provide the content. Technical explanations will be made graphic with computer-generated animation. The tape will be sold to colleges and industry by Hayden Book Company as part of a software series. Other tapes will cover Software Quality, Software Project Management and Software Design Methodologies. Your comments are welcome. Write to W. L. Gaddis, Senior Producer, Bell Laboratories, 150 John F. Kennedy Parkway, Room 3L-528, Short Hills, NJ 07078 ------------------------------ Date: 16 Oct 83 22:42:42 EDT From: Sri Subject: Mini-talks Recently two notices were copied from the Rutgers bboard to AIList. They listed a number of "talks" by various faculty back to back. Those who wondered how a talk could be given in 10 minutes and those who wondered why a talk would be given in 10 minutes may be glad to know the purpose of the series. This is the innovative method that has been designed by the CS graduate students' society for introducing to new graduate students and new faculty members the research interests of the CS faculty. Each talk typically outlined the area of CS and AI of interest to the faculty member, discussed research opportunities and the background (readings, courses) necessary for doing research in that area. I have participated in this mini-talk series for several years and have found it valuable as a speaker.
Being given about 10 minutes to say what I am interested in forces me to distill my thoughts and to say them simply. The feedback from students is also positive. Perhaps you will hear from some of the students too. ------------------------------ Date: 11 Oct 83 2:44:12-PDT (Tue) From: harpo!utah-cs!shebs @ Ucb-Vax Subject: Re: the Halting problem. Article-I.D.: utah-cs.1985 I share your notion (that human ability is limited, and that machines might actually go beyond man in "consciousness"), but not your confidence. How do you intend to prove your ideas? You can't just wait for a fantastic AI program to come along - you'll end up right back in the Turing Test muddle. What *is* consciousness? How can it be characterized abstractly? Think in terms of universal psychology - given a being X, is there an effective procedure (used in the technical sense) to determine whether that being is conscious? If so, what is that procedure? AI is applied philosophy, stan the l.h. utah-cs!shebs ps Re rational or universal psychology: a professor here observed that it might end up with the status of category theory - mildly interesting and all true, but basically worthless in practice... Any comments? ------------------------------ Date: 12 Oct 83 11:43:39-PDT (Wed) From: decvax!cca!milla @ Ucb-Vax Subject: Re: the Halting problem. Article-I.D.: cca.5880 Of course self-awareness is real. The point is that self-awareness comes about BECAUSE of the illusion of consciousness. If you were capable of only very primitive thought, you would be less self-aware. The greater your capacity for complex thought, the more you perceive that your actions are the result of an active, thinking entity. Man, because of his capacity to form a model of the world in his mind, is able to form a model of himself. This all makes sense from a purely physical viewpoint; there is no need for a supernatural "soul" to complement the brain.
Animals appear to have some self-awareness; the degree depends on their intelligence. Conceivably, a very advanced computer system could have a high degree of self-awareness. As with consciousness, it is lack of information -- how the brain works, random factors, etc. -- which makes self-awareness seem to be a very special quality. In fact, it is a very simple, unremarkable characteristic. M. Massimilla ------------------------------ Date: 12 Oct 83 7:16:26-PDT (Wed) From: harpo!eagle!mhuxi!mhuxl!ulysses!unc!mcnc!ncsu!fostel @ Ucb-Vax Subject: RE: Physics and Intuition Article-I.D.: ncsu.2367 I intend this to be my final word on the matter. I intend it to be brief: as someone said, a bit more tolerance on this group would help. From Laura we have a wonderful story of the intermeshing of physics and religion. Well, I picked molecular physics for its avoidance of any normal life experiences. Cosmology and creation are not in that category quite so strongly because religion is an everyday thing and will lead to biases in cosmological theories. Clearly there is a continuum from things which are divorced from everyday experience to those that are very tightly connected to it. My point is that most "hard" sciences are at one end of the continuum while psychology is clearly way over at the other end, by definition. It is my position that the rather big difference between the way one can think about the two ends of the spectrum suggests that what works well at one end may well be quite inappropriate at the other. Or it may work fine. But there is a burden of proof that I hand off to the rational psychologists before I will take them more seriously than I take most psychologists. I have the same attitude towards cosmology. I find it patently ludicrous that so many people push our limited theories so far outside the range of applicability and expect the extrapolation to be accurate.
Such extrapolation is an interesting way to understand the failing of the theories, but to believe that DOES require faith without substantiation. I dislike being personal, but Laura is trying to make it seem black and white. The big bang has hardly been proved. But she seems to be saying it has. It is of course not so simple. Current theories and data seem to be tipping the scales, but the scales move quite slowly and will no doubt be straightened out by "new" work 30 years hence. The same is true of my point about technical reasoning. Clearly no thought can be entirely divorced from life experiences without 10 years on a mountain-top. It's not that simple. That doesn't mean that there are not definable differences between different ways of thinking and that some may be more suitable to some fields. Most psychologists are quite aware of this problem (I didn't make it up) and as a result purely experimental psychology has always been "trusted" more than theorizing without data. Hard numbers give one some hope that it is the world, not your relationship with a pet turtle, speaking in your work. If anyone has any more to say to me about this, send me mail, please. I suspect this is getting tiresome for most readers. (It's getting tiresome for me...) If you quote me or use my name, I will always respond. This network with its delays is a bad debate forum. Stick to ideas in abstraction from the proponent of the idea. And please look for what someone is trying to say before assuming that they are blathering. ----GaryFostel----
In spite of the fact that consciousness (I agree with the growing chorus) is NOT an illusion, I see nothing wrong with using such a teleporter. Let's take the case as presented in the sci-fi story (before Michael Condict rigs the controls). A person disappears from (say) Earth and a person appears at (say) Tau Ceti IV. The one appearing at Tau Ceti is exactly like the one who left Earth as far as anyone can tell: she looks the same, acts the same, says the same sort of things, displays the same sort of emotions. Note that I did NOT say she is the SAME person -- although I would warn you not to conclude too hastily whether she is or not. In my opinion, *it doesn't matter* whether she is or not. To get to the point: although I agree that consciousness needs something to exist, there *IS* something there for it -- the person at Tau Ceti. On what grounds can anyone believe that the person at Tau Ceti lacks a consciousness? That is absurd -- consciousness is a necessary concomitant of a normal human brain. Now there IS a question as to whether the conscious person at Tau Ceti is *you*, and thus as to whether his mind is *your* mind. There is a considerable philosophical literature on this and very similar issues -- see *A Dialogue on Personal Identity and Immortality* by John Perry, and "Splitting Self-Concern" by Michael B. Green in *Pacific Philosophical Quarterly*, vol. 62 (1981). But in my opinion, there is a real question whether you can say whether the person at Tau Ceti is you or not. Nor, in my opinion, is that question really important. Take the modified case in which Michael Condict rigs the controls so that you are transported, yet remain also at Earth. Michael Condict calls the one at Earth the "original", and the one at Tau Ceti the "copy". But how do you know it isn't the other way around -- how do you know you (your consciousness) weren't teleported to Tau Ceti, while a copy (someone else, with his own consciousness) was produced at Earth?
"Easy -- when I walk out of the transporter room at Earth, I know I'm still me; I can remember everything I've done and can see that I'm still the same person." WRONGO -- the person at Tau Ceti has the same memories, etc. I could just as easily say "I'll know I was transported when I walk out of the transporter room at Tau Ceti and realize that I'm still the same person." So in fairness, we can't say "You walk out of the transporter room at both ends, with the original you realizing that something went wrong." We have to say "You walk out of the transporter at both ends, with *the one at Earth* realizing something is wrong." But wait -- they can't BOTH be you -- or can they? Maybe neither is you! Maybe there's a continuous flow of "souls" through a person's body, with each one (like the "copy" at Tau Ceti (or is it at Earth)) *seeming* to remember doing the things that that body did before ... If you acknowledge that consciousness is rooted in the physical human brain, rather than some mysterious metaphysical "soul" that can't be seen or touched or detected in any way at all, you don't have to worry about whether there's a continuous flow of consciousnesses through your body. You don't have to be a dualist to recognize the reality of consciousness; in fact, physicalism has the advantage that it *supports* the commonsense belief that you are the same person (consciousness) you were yesterday. 
--Paul Torek, U of MD, College Park ..umcp-cs!flink ------------------------------ End of AIList Digest ******************** 20-Oct-83 09:32:48-PDT,19742;000000000001 Mail-From: LAWS created at 20-Oct-83 09:30:52 Date: Thursday, October 20, 1983 9:23AM From: AIList Moderator Kenneth Laws Reply-to: AIList@SRI-AI US-Mail: SRI Int., 333 Ravenswood Ave., Menlo Park, CA 94025 Phone: (415) 859-6467 Subject: AIList Digest V1 #80 To: AIList@SRI-AI AIList Digest Thursday, 20 Oct 1983 Volume 1 : Issue 80 Today's Topics: Administrivia - Complaints & Seminar Abstracts, Implementations - Parallel Production System, Natural Language - Phrasal Analysis & Macaroni, Psychology - Awareness, Programming Languages - Elegance and Purity, Conferences - Reviewers needed for 1984 NCC, Fellowships - Texas ---------------------------------------------------------------------- Date: Tue 18 Oct 83 20:33:15-PDT From: Ken Laws Reply-to: AIList-Request@SRI-AI Subject: Complaints I have received copies of two complaints sent to the author of a course announcement that I published. The complaints alleged that the announcement should not have been put out on the net. I have three comments: First, such complaints should come to me, not to the original authors. The author is responsible for the content, but it is my decision whether or not to distribute the material. In this case, I felt that the abstract of a new and unique AI course was of interest to the academic half of the AIList readership. Second, there is a possibility that the complainants received the article in undigested form, and did not know that it was part of an AIList digest. If anyone is currently distributing AIList in this manner, I want to know about it. Undigested material is being posted to net.ai and to some bboards, but it should not be showing up in personal mailboxes. Third, this course announcement was never formally submitted to AIList. 
I picked the item up from a limited distribution, and failed to add a "reprinted from" or disclaimer line to note that fact. I apologize to Dr. Moore for not getting in touch with him before sending the item out. -- Ken Laws ------------------------------ Date: Tue 18 Oct 83 09:01:29-PDT From: Ken Laws Reply-to: AIList-Request@SRI-AI Subject: Seminar Abstracts It has been suggested to me that seminar abstracts would be more useful if they contained the home address (or net address, phone number, etc.) of the speaker. I have little control over the content of these messages, but I encourage those who compose them to include such information. Your notices will then be of greater use to the scientific community beyond just those who can attend the seminars. -- Ken Laws ------------------------------ Date: Mon 17 Oct 83 15:44:52-EDT From: Mark D. Lerner Subject: Parallel production systems. The parallel production system interpreter is running on the 15 node DADO prototype. We can presently run up to 32 productions, with 12 clauses in each production. The prototype has been operational since April 1983. ------------------------------ Date: 18 Oct 1983 0711-PDT From: MEYERS.UCI-20A@Rand-Relay Subject: phrasal analysis Recently someone asked why PHRAN was not based on a grammar. It just so happens .... I have written a parser which uses many of the ideas of PHRAN but which organizes the phrasal patterns into several interlocking grammars, some 'semantic' and some syntactic. The program is called VOX (Vocabulary Extension System) and attempts a 'complete' analysis of English text. I am submitting a paper about the concepts underlying the system to COLING, the conference on Computational Linguistics. Whether or not it is accepted, I will make a UCI Technical Report out of it. To obtain a copy of the paper, write: Amnon Meyers AI Project Dept. 
of Computer Science University of California, Irvine, CA 92717 ------------------------------ Date: Wednesday, 19 October 1983 10:48:46 EDT From: Robert.Frederking@CMU-CS-CAD Subject: Grammars; Greek; invective One comment and two meta-comments: Re: the validity of grammars: almost no one claims that grammatical phenomena don't exist (even Schank doesn't go that far). What the argument generally is about is whether one should, as the first step in understanding an input, build a grammatical tree, without any (or much) information from either semantics or the current conversational context. One side wants to do grammar first, by itself, and then the other stuff, whereas the other side wants to try to use all available knowledge right from the start. Of course, there are folks taking extreme positions on both sides, and people sometimes get a bit carried away in the heat of an argument. Re: Greek: As a general rule, it would be helpful if people who send in messages containing non-English phrases included translations. I cannot judge the validity of the Macaroni argument, since I don't completely understand either example. One might argue that I should learn Greek, but I think expecting me to know Maori grammatical classes is stretching things a bit. Re: invective: Even if the reference to Yahweh was meant as a childhood opinion which has mellowed with age, I object to statements of the form "this same wonderful god... tortured and burned..." etc. Perhaps it was a typo. As we all know, people have tortured and burnt other people for all sorts of reasons (including what sort of political/economic systems small Asian countries should have), and I found the statement offensive. ------------------------------ Date: Wednesday, 19 October 1983 13:23:59 EDT From: Robert.Frederking@CMU-CS-CAD Subject: Awareness As Paul Torek correctly points out, this is a metaphysical question. 
The only differences I have with his note are over the use of some difficult terms, and the fact that he clearly prefers the "physicalist" notion. Let me start by saying that one shouldn't try to prove one side or the other, since proofs clearly cannot work: awareness isn't subject to proof. The evidence consists entirely of internal experiences, without any external evidence. (Let me warn everyone that I have not been formally trained in philosophy, so some of my terms may be non-standard.) The fact that this issue isn't subject to proof does not make it trivial, or prevent it from being a serious question. One's position on this issue determines, I think, to a large extent one's view on many other issues, such as whether robots will eventually have the same legal status as humans, and whether human life should have a special value, beyond its information handling abilities, for instance for euthanasia and abortion questions. (I certainly don't want to argue about abortion; personally, I think it should be legal, but not treated as a trivial issue.) At this point, my version of several definitions is in order. This is because several terms have been confused, due probably to the metaphysical nature of the problem. What I call "awareness" is *not* "self-reference": the ability of some information processing systems (including people) to discuss and otherwise deal with representations of themselves. It is also *not* what has been called here "consciousness": the property of being able to process information in a sophisticated fashion (note that chemical and physical reactions process information as well). "Awareness" is the internal experience which Michael Condict was talking about, and which a large number of people believe is a real thing. I have been told that this definition is "epiphenomenal", in that awareness is not the information processing itself, but is outside the phenomena observed.
Also, I believe that I understand both points of view; I can argue either side of the issue. However, for me to argue that the experience of "awareness" consists solely of a combination of information processing capabilities misses the "dualist" point entirely, and would require me to deny that I "feel" the experience I do. Many people in science deny that this experience has any reality separate from the external evidence of information processing capabilities. I suspect that one motivation for this is that, as Paul Torek seems to be saying, this greatly simplifies one's metaphysics. Without trying to prove the "dualist" point of view, let me give an example of why this view seems, to me, more plausible than the "physicalist" view. It is a variation of something Joseph Weizenbaum suggested. People are clearly aware, at least they claim to be. Rocks are clearly not aware (in the standard Western view). The problem with saying that computers will ever be aware in the same way that people are is that they are merely re-arranged rocks. A rock sitting in the sun is warm, but is not aware of its warmth, even though that information is being communicated to, for instance, the rock it is sitting on. A robot next to the rock is also warm, and, due to a skillful re-arrangement of materials, not only carries that information in its kinetic energy, but even has a temperature "sensor", and a data structure representing its body temperature. But it is no more aware (in the experiential sense) of what is going on than the rock is, since we, by merely using a different level of abstraction in thinking about it, can see that the data structure is just a set of states in some semiconductors inside it. The human being sitting next to the robot not only senses the temperature and records it somehow (in the same sense as the robot does), but experiences it internally, and enjoys it (I would anyway). 
This experiencing is totally undetectable to physical investigation, even when we (eventually) are able to analyze the data structures in the brain. An interesting side-note to this is that in some cultures, rocks, trees, etc., are believed to experience their existence. This is, to me, an entirely acceptable alternate theory, in which the rock and robot would both feel the warmth (and other physical properties) they possess. As a final point, when I consider what I am aware of at any given moment, it seems to include a visual display, an auditory sensation, and various bits of data from parts of my body (taste, smell, touch, pain, etc.). There are many things inside my brain that I am *not* aware of, including the preprocessing of my vision, and any stored memories not recalled at the moment. There is a sharp boundary between those things I am aware of and those things I am not. Why should this be? It isn't just that the high level processes, whatever they are, have access to only some structures. They *feel* different from other structures in the brain, whose information I also have access to, but in which I have no feeling of awareness. It would appear that there is some set of processing elements to which my awareness has access. This is the old mind-body problem that has plagued philosophers for centuries. To deny this qualitative difference would be, for me, silly, as silly as denying that the physical world really exists. In any event, whatever stand you take on this issue is based on personal preferences in metaphysics, and not on physical proof. ------------------------------ Date: 14 Oct 83 1237 PDT From: Dick Gabriel Subject: Elegance and Logical Purity [Reprinted from the Prolog Digest.] In the Lisp world, as you know, there are two Lisps that serve as examples for this discussion: T and Common Lisp. T is based on Scheme and, as such, it is relatively close to a `pure' Lisp or even a lambda-calculus-style Lisp.
Common Lisp is a large, `user-convenient' Lisp. What are the relative successes of these two Lisps? T appeals to the few, me included, while Common Lisp appeals to the many. The larger, user-convenient Lisps provide programmers with tools that help solve problems, but they don't dictate the style of the solutions. Think of it this way: When you go to an auto mechanic and you see he has a large tool chest with many tools, are you more or less confident in him than if you see he has a small tool box with maybe 5 tools? Either way our confidence should be based on the skill of the mechanic, but we expect a skillful mechanic with the right tools to be more efficient and possibly more accurate than the mechanic who has few tools, or who merely has tools and raw materials for making further tools. One could take RPLACA as an analog to a user-convenience in this situation. We do not need RPLACA: it messes up the semantics, and we can get around it with other, elegant and pure devices. However, RPLACA serves user convenience by providing an efficient means of accomplishing an end. In supplying RPLACA, I, the implementer, have thought through what the user is trying to do. No user would appreciate it if I suggested that I knew better than he what he is doing, proposed that he replace all list structure that he might wish to modify by side effect with closures, and then hoped for a smarter compiler someday. I think it shows more contempt for a user's abilities to dictate a solution to him in the name of `elegance and logical purity' than for me to think through what he wants for him. I am also hesitant to foist on people systems or languages that are so elegant and pure that I have trouble explaining them to users because I am subject to being ``muddled about them myself.'' Maybe it is stupid to continue down the Lisp path, but Lisp is the second oldest language (after FORTRAN), and people clamor to use it. Recall what Joel Moses said when comparing APL with Lisp.
APL is perfect; it is like a diamond. But like a diamond you cannot add anything to it to make it more perfect, nor can you add anything to it and have it remain a diamond. Lisp, on the other hand, is like a ball of mud. You can add more mud to it, and it is still a ball of mud. I think user convenience is like mud. -rpg- ------------------------------ Date: Tuesday, 18 October 1983 09:32:25 EDT From: Joseph.Ginder at CMU-CS-SPICE Subject: Common Lisp Motivation [Reprinted from the Prolog Digest.] Being part of the Common Lisp effort, I would like to express an opinion about the reasons for the inclusion of so many "impurities" in Common Lisp that differs from that expressed by Fernando Pereira in the last Prolog Digest. I believe the reason for including much of what is now Common Lisp in the Common Lisp specification was an effort to provide common solutions to common problems; this is as opposed to making concessions to language limitations or people's (in)ability to write smart compilers. In particular, the reference to optimizing "inefficient copying into efficient replacement" does not seem a legitimate compiler optimization (in the general sense) -- this clearly changes program semantics. (In the absence of side effects, this would not be a problem, but note that some side effect is required to do I/O.) For a good statement of the goals of the Common Lisp effort, see Guy Steele's paper in the 1982 Lisp and Functional Programming Conference Proceedings. Let me hasten to add that I agree with Pereira's concern that expediency not be promoted to principle. It is for this very reason that language features such as flavors and the loop construct were not included in the Common Lisp specification -- we determined not to standardize until consensus could be reached that a feature was both widely accepted and believed to be a fairly good solution to a common problem.
The goal is not to stifle experimentation, but to promote good solutions that have been found through previous experience. In no sense do I believe anyone regards the current Common Lisp language as the Final Word on Lisp. Also, I have never interpreted Moses' diamond vs. mud analogy to have anything to do with authoritarianism, only aesthetics. Do others? -- Joe Ginder ------------------------------ Date: 17 Oct 1983 07:38:44-PST From: jmiller.ct@Rand-Relay Subject: Reviewers needed for 1984 NCC The Program Committee for the 1984 National Computer Conference, which will be held in Las Vegas next July 9-12, is about to begin reviewing submitted papers, and we are in need of qualified people who would be willing to serve as reviewers. The papers would be sent to you in the next couple of weeks; the reviews would have to be returned by the end of December. Since NCC is sponsored by non-profit computer societies and is run largely by volunteers, it is not possible to compensate reviewers for the time and effort they contribute. However, to provide some acknowledgement of your efforts, your name will appear in the conference proceedings and, if you wish to attend NCC, we can provide you with advance registration forms and information on hotels close to the convention center. We are also trying to arrange simplified conference registration for reviewers. As the chair of the artificial intelligence track, I am primarily concerned with finding people who would be willing to review papers on AI and/or human-computer interaction. However, I will forward names of volunteers in other areas to the appropriate chairs. If you would like to volunteer, please send me your: - name, - mailing address, - telephone number, - arpanet or csnet address (if any), and - subjects that you are qualified to review (it would be ideal if you could use the ACM categorization scheme) Either arpanet/csnet mail or US mail to my address below would be fine. Thanks for your help.
James Miller Computer * Thought Corporation 1721 West Plano Parkway Plano, Texas 75075 JMILLER.CT @ RAND-RELAY ------------------------------ Date: Tue 11 Oct 83 10:44:08-CDT From: Gordon Novak Jr. Subject: $1K/mo Fellowships at Texas The Department of Computer Sciences at the University of Texas at Austin is initiating a Doctoral Fellows program, with fellowships available in Spring 1984 and thereafter. Recipients must be admitted to the Ph.D. program; November 1 is the application deadline for Spring 1984. Applicants must have a B.A. or B.S. in Computer Science, or equivalent, a total GRE (combined verbal and quantitative) of at least 1400, and a GPA of at least 3.5. Doctoral Fellows will serve as Teaching Assistants for two semesters, then will be given a fellowship (with no TA duties) for one additional year. The stipend will be $1000/month. Twenty fellowships per year will be available. The Computer Sciences Department at the University of Texas is ranked in the top ten departments by the Jones-Lindzey report. Austin is blessed with an excellent climate and unexcelled cultural and recreational opportunities. For details, contact Dr. Jim Bitner (CS.BITNER@UTEXAS-20), phone (512) 471-4353, or write to Computer Science Department, University of Texas at Austin, Austin, TX 78712.
------------------------------ End of AIList Digest ******************** 24-Oct-83 09:06:12-PDT,12200;000000000001 Mail-From: LAWS created at 24-Oct-83 09:04:27 Date: Monday, October 24, 1983 8:58AM From: AIList Moderator Kenneth Laws Reply-to: AIList@SRI-AI US-Mail: SRI Int., 333 Ravenswood Ave., Menlo Park, CA 94025 Phone: (415) 859-6467 Subject: AIList Digest V1 #81 To: AIList@SRI-AI AIList Digest Monday, 24 Oct 1983 Volume 1 : Issue 81 Today's Topics: Lisp Machines & Fuzzy Logic - Request, Rational Psychology, Reports - AI and Robotics Overviews & Report Sources, Bibliography - Parallelism and Consciousness, Learning - Machine Learning Course ---------------------------------------------------------------------- Date: Sun, 23 Oct 83 16:00:07 EDT From: Ferd Brundick (LTTB) Subject: info on Lisp Machines We are about to embark on an ambitious AI project in which we hope to develop an Expert System. The system will be written in Lisp (or possibly Prolog) and will employ fuzzy logic and production rules. In my role as equipment procurer and novice Lisp programmer, I would like any information regarding Lisp machines, e.g., what is available, how do the various machines compare, etc. If this topic has been discussed before I would appreciate pointers to the info. On the software side, any discussions regarding fuzzy systems would be welcomed. Thanks. dsw, fferd ------------------------------ Date: 26 Sep 83 10:01:56-PDT (Mon) From: ihnp4!drux3!drufl!samir @ Ucb-Vax Subject: Rational Psychology Article-I.D.: drufl.670 Norm, Let me elaborate. Psychology, or logic of mind, involves BOTH rational and emotional processes. To consider one exclusively defeats the purpose of understanding. I have not read the article we are talking about so I cannot comment on that article, but an example of what I consider a "Rational Psychology" theory is "Personal Construct Theory" by Kelly.
It is an attractive theory but, in my opinion, it falls far short of describing "logic of mind" as it fails to integrate emotional aspects. I consider learning, concept formation, and creativity to have BOTH rational and emotional attributes, hence it would be better if we studied them as such. I may be creating a dichotomy where there is none. (Rational vs. Emotional). I want to point you to an interesting book "Metaphors we live by" (I forget the authors' names) which in addition to discussing many other ai-related (without mentioning ai) concepts discusses the question of Objective vs. Subjective, which is similar to what we are talking about here, Rational vs. Emotional. Thanks. Samir Shah AT&T Information Systems, Denver. drufl!samir ------------------------------ Date: Fri 21 Oct 83 11:31:59-PDT From: Ken Laws Subject: Overview Reports I previously mentioned a NASA report described in IEEE Spectrum. I now have further information from NTIS. The one mentioned was the last of the following: An Overview of Artificial Intelligence and Robotics: Volume II - Robotics, NBSIR-82-2479, March 1982 PB83-217547 Price $13.00 An Overview of Expert Systems, NBSIR-82-2505, May 1982 (Revised October 1982) PB83-217562 Price $10.00 An Overview of Computer Vision, NBSIR-822582 (or possibly listed as NBSIR-832582), September 1982 PB83-217554 Price $16.00 An Overview of Computer-Based Natural Language Processing, NASA-TM-85635 NBSIR-832687 N83-24193 Price $10.00 An Overview of Artificial Intelligence and Robotics; Volume I - Artificial Intelligence, June 1983 NASA-TM-85836 Price $10.00 The ordering address is United States Department of Commerce National Technical Information Service 5285 Port Royal Road Springfield, VA 22161 -- Ken Laws ------------------------------ Date: Fri 21 Oct 83 11:38:42-PDT From: Ken Laws Subject: Report Sources The NTIS literature I have also lists some other useful sources: University Microfilms, Inc. 300 N.
Zeeb Road Ann Arbor, MI 48106 National Translation Center SLA Translation Center, The John Crerar Library 35 West 33rd Street Chicago, IL 60616 Library of Congress, Photoduplicating Service Washington, D.C. 20540 American Institute of Aeronautics & Astronautics Technical Information Service 555 West 57th Street, 12th Floor New York, NY 10019 National Bureau of Standards Gaithersburg, MD 20234 U.S. Dept. of Energy, Div. of Technical Information P.O. Box 62 Oak Ridge, TN 37830 NASA Scientific and Technical Facility P.O. Box 8757 Balt/Wash International Airport Baltimore, MD 21240 -- Ken Laws ------------------------------ Date: Sun, 23 Oct 83 12:21:54 PDT From: Rik Verstraete Subject: Bibliography (parallelism and consciousness) David Rogers asked me if I could send him some of my ``favorite'' readings on the subject ``parallelism and consciousness.'' I searched through my list, and came up with several references which I think might be interesting to everybody. Not all of them are directly related to ``parallelism and consciousness,'' but nevertheless... Albus, J.S., Brains, Behavior, & Robotics, Byte Publications Inc. (1981). Arbib, M.A., Brains, Machines and Mathematics, McGraw-Hill Book Company, New York (1964). Arbib, M.A., The Metaphorical Brain, An Introduction to Cybernetics as Artificial Intelligence and Brain Theory, John Wiley & Sons, Inc. (1972). Arbib, M.A., "Automata Theory and Neural Models," Proceedings of the 1974 Conference on Biologically Motivated Automata Theory, pp. 13-18 (June 19-21, 1974). Arbib, M.A., "A View of Brain Theory," in Self-Organizing Systems, The Emergence of Order, ed. F.E. Yates, Plenum Press, New York (1981). Arbib, M.A., "Modelling Neural Mechanisms of Visuomotor Coordination in Frog and Toad," in Competition and Cooperation in Neural Nets, ed. Amari, S., and M.A. Arbib, Springer-Verlag, Berlin (1982). Barto, A.G. and R.S. Sutton, "Landmark Learning: An Illustration of Associative Search," Biological Cybernetics Vol. 42(1) pp.
1-8 (November 1981). Barto, A.G., R.S. Sutton, and C.W. Anderson, "Neuron-Like Adaptive Elements that can Solve Difficult Learning Control Problems," Coins Technical Report 82-20, Computer and Information Science Department, University of Massachusetts, Amherst, MA (1982). Begley, S., J. Carey, and R. Sawhill, "How the Brain Works," Newsweek, (February 7, 1983). Davis, L.S. and A. Rosenfeld, "Cooperating Processes for Low-Level Vision: A Survey," Artificial Intelligence Vol. 17 pp. 245-263 (1981). Doyle, J., "The Foundations of Psychology," CMU-CS-82-149, Department of Computer Science, Carnegie-Mellon University, Pittsburgh, PA (February 18, 1982). Feldman, J.A., "Memory and Change in Connection Networks," Technical Report 96, Computer Science Department, University of Rochester, Rochester, NY (December 1981). Feldman, J.A., "Four Frames Suffice: A Provisionary Model of Vision and Space," Technical Report 99, Computer Science Department, University of Rochester, Rochester, NY (September 1982). Grossberg, S., "Adaptive Resonance in Development, Perception and Cognition," SIAM-AMS Proceedings Vol. 13 pp. 107-156 (1981). Harth, E., "On the Spontaneous Emergence of Neuronal Schemata," pp. 286-294 in Competition and Cooperation in Neural Nets, ed. Amari, S., and M.A. Arbib, Springer-Verlag, Berlin (1982). Hayes-Roth, B., "Implications of Human Pattern Processing for the Design of Artificial Knowledge Systems," pp. 333-346 in Pattern-Directed Inference Systems, ed. Waterman, D.A., and F. Hayes-Roth, Academic Press, New York (1978). Hofstadter, D.R., Godel, Escher, Bach: An Eternal Golden Braid, Vintage Books, New York (1979). Hofstadter, D.R. and D.C. Dennett, The Mind's I, Basic Books, Inc., New York (1981). Holland, J.H., Adaptation in Natural and Artificial Systems, The University of Michigan Press, Ann Arbor (1975). Holland, J.H. and J.S. Reitman, "Cognitive Systems Based on Adaptive Algorithms," pp. 313-329 in Pattern-Directed Inference Systems, ed.
Waterman, D.A., and F. Hayes-Roth, Academic Press, New York (1978). Kauffman, S., "Behaviour of Randomly Constructed Genetic Nets: Binary Element Nets," pp. 18-37 in Towards a Theoretical Biology, Vol 3: Drafts, ed. C.H. Waddington, Edinburgh University Press (1970). Kauffman, S., "Behaviour of Randomly Constructed Genetic Nets: Continuous Element Nets," pp. 38-46 in Towards a Theoretical Biology, Vol 3: Drafts, ed. C.H. Waddington, Edinburgh University Press (1970). Kent, E.W., The Brains of Men and Machines, Byte/McGraw-Hill, Peterborough, NH (1981). Klopf, A.H., The Hedonistic Neuron, Hemisphere Publishing Corporation, Washington (1982). Kohonen, T., "A Simple Paradigm for the Self-Organized Formation of Structured Feature Maps," in Competition and Cooperation in Neural Nets, ed. Amari, S., and M.A. Arbib, Springer-Verlag, Berlin (1982). Krueger, M.W., Artificial Reality, Addison-Wesley Publishing Company (1983). McCulloch, W.S. and W. Pitts, "A Logical Calculus of the Ideas Immanent in Nervous Activity," Bulletin of Mathematical Biophysics Vol. 5(4) pp. 115-133 (December 1943). Michalski, R.S., J.G. Carbonell, and T.M. Mitchell, Machine Learning, An Artificial Intelligence Approach, Tioga Publishing Co., Palo Alto, CA (1983). Michie, D., "High-Road and Low-Road Programs," AI Magazine, pp. 21-22 (Winter 1981-1982). Narendra, K.S. and M.A.L. Thathachar, "Learning Automata - A Survey," IEEE Transactions on Systems, Man, and Cybernetics Vol. SMC-4(4) pp. 323-334 (July 1974). Nilsson, N.J., Learning Machines: Foundations of Trainable Pattern-Classifying Systems, McGraw-Hill, New York (1965). Palm, G., Neural Assemblies, Springer-Verlag (1982). Pearl, J., "On the Discovery and Generation of Certain Heuristics," The UCLA Computer Science Department Quarterly Vol. 10(2) pp. 121-132 (Spring 1982). Pistorello, A., C. Romoli, and S. Crespi-Reghizzi, "Threshold Nets and Cell-Assemblies," Information and Control Vol. 49(3) pp. 239-264 (June 1981).
Truxal, C., "Watching the Brain at Work," IEEE Spectrum Vol. 20(3) pp. 52-57 (March 1983). Veelenturf, L.P.J., "An Automata-Theoretical Approach to Developing Learning Neural Networks," Cybernetics and Systems Vol. 12(1-2) pp. 179-202 (January-June 1981). ------------------------------ Date: 20 October 1983 1331-EDT From: Jaime Carbonell at CMU-CS-A Subject: Machine Learning Course [Reprinted from the CMU-AI bboard.] [I pass this on as a list of topics and people in machine learning. -- KIL] The schedule for the remaining classes in the Machine Learning course (WeH 4509, tu & thu at 10:30) is: Oct 25 - "Strategy Acquisition" -- Pat Langley Oct 27 - "Learning by Chunking & Macro Structures" -- Paul Rosenbloom Nov 1 - "Learning in Automatic Programming" -- Elaine Kant Nov 3 - "Language Acquisition I" -- John Anderson Nov 8 - "Discovery from Empirical Observations" -- Herb Simon Nov 10 - "Language Acquisition II" -- John Anderson or Brian MacWhinney Nov 15 - "Algorithm Discovery" -- Elaine Kant or Allen Newell Nov 17 - "Learning from Advice and Instruction" -- Jaime Carbonell Nov 22 - "Conceptual Clustering" -- Pat Langley Nov 29 - "Learning to Learn" -- Pat Langley Dec 1 - "Genetic Learning Methods" -- Stephen Smith Dec 6 - "Why Perceptrons Failed" -- Geoff Hinton Dec 8 - "Discovering Regularities in the Environment" -- Geoff Hinton Dec 13 - "Trainable Stochastic Grammars" -- Peter Brown ------------------------------ End of AIList Digest ******************** 26-Oct-83 11:26:39-PDT,17439;000000000001 Mail-From: LAWS created at 26-Oct-83 10:42:41 Date: Wednesday, October 26, 1983 10:31AM From: AIList Moderator Kenneth Laws Reply-to: AIList@SRI-AI US-Mail: SRI Int., 333 Ravenswood Ave., Menlo Park, CA 94025 Phone: (415) 859-6467 Subject: AIList Digest V1 #82 To: AIList@SRI-AI AIList Digest Wednesday, 26 Oct 1983 Volume 1 : Issue 82 Today's Topics: AI Hardware - Dolphin-Users Distribution List, AI Software - Inference Engine Toolkit for PCs, Metaphysics - Parallelism and
Consciousness, Machine Learning - Readings, Seminars - CSLI & Speech Understanding & Term Rewriting & SYDPOL Languages ---------------------------------------------------------------------- Date: Tue 25 Oct 83 11:56:44-PDT From: Christopher Schmidt Subject: Dolphin-Users distribution list If there are AIList readers who would like to discuss lisp machines at a more detailed level than the credo of AIList calls for, let me alert them to the existence of the Dolphin-Users@SUMEX distribution list. This list was formed over a year ago to discuss problems with Xerox D machines, but it has had very little traffic, and I'm sure few people would mind if other lisp machines were discussed. If you would like your name added, please send a note to Dolphin-Requests@SUMEX. If you would like to contribute or ask a question about some lisp machine or problem, please do! --Christopher ------------------------------ Date: Wed 26 Oct 83 10:26:47-PDT From: Ken Laws Subject: Inference Engine Toolkit for PCs I have been requested to pass on some product availability data to AIList. I think I can do so without violating Arpanet regulations. I am uncomfortable about such notices, however, and will generally require that they pass through at least one "commercially disinterested" person before being published in AIList. I will perform this screening only in exceptional cases. The product is a document on a backward-chaining inference engine toolkit, including source code in FORTH. The inference engine uses a production language syntax which allows semantic inference and access to analytical subroutines written in FORTH. Source code is included for a forward-chaining tool, but the strategy is not implemented in the inference routines. The code is available on disks formatted for a variety of personal computers. For further details, contact Jack Park, Helion, Inc., Box 445, Brownsville, CA 95919, (916) 675-2478.
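[For readers who have not met the technique, the backward-chaining control strategy such a toolkit uses can be sketched in a few lines. Everything below -- the rule set, the fact names, the prove function -- is invented for illustration; it is not the Helion product's FORTH code and does not show its actual rule syntax. -- KIL]

```python
# Propositional backward chaining, a minimal illustrative sketch.
# Each rule maps a goal to the list of subgoals that establish it.
RULES = {
    "starter_motor_turns": ["battery_ok", "ignition_on"],
    "engine_starts": ["starter_motor_turns", "fuel_present"],
}

# Ground facts, taken as given.
FACTS = {"battery_ok", "ignition_on", "fuel_present"}

def prove(goal):
    """A goal holds if it is a known fact, or if some rule concludes
    it and all of that rule's subgoals can in turn be proved."""
    if goal in FACTS:
        return True
    subgoals = RULES.get(goal)
    if subgoals is None:
        return False          # no fact and no rule: the goal fails
    return all(prove(g) for g in subgoals)

print(prove("engine_starts"))  # -> True (all subgoals reduce to facts)
```

[Forward chaining, the other strategy mentioned above, would work in the opposite direction: start from FACTS and repeatedly add any rule's conclusion whose subgoals are all satisfied, until nothing new can be added. -- KIL]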
The toolkit is also available from Mountain View Press, Box 4656, Mountain View, CA 94040. -- Ken Laws ------------------------------ Date: Tuesday, 25 October 1983, 10:28-EST From: John Batali Subject: Parallelism and Consciousness I'm interested in the reasons for the pairing of these two ideas. Does anyone think that parallelism and consciousness necessarily have anything to do with one another? ------------------------------ Date: Tue 25 Oct 83 12:22:45-PDT From: David Rogers Subject: Parallelism and Consciousness I cannot say that "parallelism and consciousness are necessarily related", for one can (at least) simulate a parallel process on a sequential machine. However, just because one has the ability to represent a process in a certain form does not guarantee that this is the most natural form to represent it in; e.g., FORTRAN and LISP are theoretically equally powerful, but who wants to program an expert system in FORTRAN? Top-down programming of knowledge is not (in my opinion) an easy candidate for parallelism; one can hope for large speed-ups in execution, but rarely are the algorithms able to naturally utilize the ability of parallel systems to support interacting non-deterministic processes. (I'm sure I'll hear from some parallel logic programmer on that one). My candidate for developing parallelism and consciousness involves incorporating the non-determinism at the heart of the system, by using a large number of subcognitive processes operating in parallel; this is essentially Hofstadter's concept of consciousness being an epiphenomenon of the interacting structures, and not being explicitly programmed. The reason for the parallelism is twofold. First, I would assume that a system of interacting subcognitive structures would have a significant amount of "random" effort, while the more condensed logic based system would be computationally more efficient.
Thus, the parallelism is partially used to offset the added cost of the more fluid, random motion of the interacting processes. Second, the interacting processes would allow a natural interplay between events based on time; for example, infinite loops are easily avoided through having a process interrupt if too much time is taken. The blackboard architecture is also naturally represented in parallel, as a number of coordinating processes scribble on a shared data structure. Actually, in my mind, the blackboard structure has not been developed fully; I have the image of people at a party in my mind, with groups forming, ideas developed, groups breaking up and reforming. Many blackboards are active at once, and as interest is forgotten, they dissolve, then reform around other topics. Notice that this representation of a party has no simple sequential representation, nor would a simple top level rule base be able to model the range of activities the party can evolve to. How does "the party" decide what beer to buy, or how long to stay intact, or whether it will be fun or not? If I were to model a party, I'd say a parallel system of subcognitive structures would be almost the only natural way. As a final note, I find the vision of consciousness being analogous to people at a party simple and humorous. And somehow, I've always found God to clothe most truths in humor... am I the only one who has laughed at the beautiful simplicity of E=mc^2? David ------------------------------ Date: 22 Oct 83 19:27:33 EDT (Sat) From: Paul Torek Subject: re: awareness [Submitted by Robert.Frederking@CMU-CS-CAD.] [Robert:] I think you've misunderstood my position. I don't deny the existence of awareness (which I called, following Michael Condict, consciousness). It's just that I don't see why you, or anyone else, won't accept that the physical object known as your brain is all that is necessary for your awareness.
I also think you have illegitimately assumed that all physicalists must be functionalists. A functionalist is someone who believes that the mind consists in the information-processing features of the brain, and that it doesn't matter what "hardware" is used, as long as the "software" is the same there is the same awareness. On the other hand, one can be a physicalist and still think that the hardware matters too -- that awareness depends on the actual chemical properties of the brain, and not just the type of "program" the brain instantiates. You say that a robot is not aware because its information-storage system amounts to *just* the states of certain bits of silicon. Functionalists will object to your statement, I think, especially the word "just" (meaning "merely"). I think the only reason one throws the word "just" into the statement is because one already believes that the robot is unaware. That begs the question completely. Suppose you have a "soul", which is a wispy ghostlike thing inside your body but undetectable. And this "soul" is made of "soul-stuff", let's call it. Suppose we've decided that this "soul" is what explains your intelligent-appearing and seemingly aware behavior. But then someone comes along and says, "Nonsense, Robert is no more aware than a rock is, since we, by using a different level of abstraction in thinking about it, can see that his data-structure is *merely* the states of certain soul-stuff inside him." What makes that statement any less cogent than yours concerning the robot? So, I don't think dualism can provide any advantages in explaining why experiences have a certain "feel" to them. And I don't see any problems with the idea that the "feel" of an experience is caused by, or is identical with, or is one aspect of, (I haven't decided which yet), certain brain processes. 
--Paul Torek, umcp-cs!flink ------------------------------ Date: Monday, 24 October 1983 15:31:13 EDT From: Robert.Frederking@CMU-CS-CAD Subject: Re: awareness Sorry about not noticing the functionalist/physicalist distinction. Most of the people that I've discussed this with were either functionalists or dualists. The physicalist position doesn't bother me nearly as much as the functionalist one. The question seems to be whether awareness is a function of physical properties, or something that just happens to be associated with human brains -- that is, whether it's a necessary property of the physical structure of functioning brains. For example, the idea that your "soul" is "inside your body" is a little strange to me -- I tend to think of it as being similar to the idea of hyperdimensional mathematics, so that a person's "soul" might exist outside the dimensions we can sense, but communicate with their body. I think that physicalism is a reasonable hypothesis, but the differences are not experimentally verifiable, and dualism seems more reasonable to me. As far as the functionalist counter-argument to mine would go, the way you phrased it implies that I think that the "soul" explains human behavior. Actually, I think that *all* human behavior can be modeled by physical systems like robots. I suspect that we'll find physical correlates to all the information processing behavior we see. The thing I am describing is the internal experience. A functionalist certainly could make the counter-argument, but the thing that I believe to be important in this discussion is exactly the question of whether the "soul" is intrinsically part of the body, or whether it's made of "soul-stuff", not necessarily "located" in the body (if "souls" have locations), but communicating with it. As I implied in my previous post, I am concerned with the eventual legal and ethical implications of taking a functionalist point of view. 
So I guess I'm saying that I prefer either physicalism or dualism to functionalism, due to the side-effects that will occur eventually, and that to me dualism appears the most intuitively correct, although I don't think anyone can prove any of the positions. ------------------------------ Date: 24 Oct 1983 13:58:10-EDT From: Paul.Rosenbloom at CMU-CS-H Subject: ML Readings [Reprinted from the CMU-AI bboard.] The suggested readings for this Thursday's meeting of the machine learning course -- on chunking and macro-operators -- are: "Learning and executing generalized robot plans" by Fikes, Hart, and Nilsson (AIJ 1972); "Knowledge compilation: The general learning mechanism" by Anderson (proceedings of the 1983 machine learning workshop); and "The chunking of goal hierarchies: A generalized model of practice" by Rosenbloom and Newell (also in the proceedings of the 1983 machine learning workshop). These readings are now (or will be shortly) on reserve in the E&S library. ------------------------------ Date: Mon 24 Oct 83 20:09:30-PDT From: Doug Lenat Subject: CS Colloq 10/25 Terry Winograd & Brian Smith [Reprinted from the SU-Score bboard. Sorry this one is late, but it still may be valuable as the first mention of CSLI on AIList. -- KIL] CS Colloquium, Tuesday, October 25, 4:15 Terman Auditorium Terry Winograd (CSD) and Brian Smith (Xerox PARC) Introducing the Center for the Study of Language and Information This summer a new institute was created at Stanford, made up of researchers from Stanford, SRI, Xerox, and Fairchild working in the study of languages, both natural and formal. Participants from Stanford will include faculty, students and research staff from the departments of Computer Science, Linguistics, and Philosophy. We will briefly describe the structure of the institute, and will present at some length the intellectual vision on which it is based and the content of the current research projects.
------------------------------

Date: 23 Oct 1983 22:14:30-EDT
From: Gary.Bradshaw at CMU-RI-ISL1
Subject: Dissertation defense

[Reprinted from the CMU-AI bboard.]

I am giving my dissertation defense on Monday, October 31 at 8:30 a.m. in Baker Hall 336b. Committee members: Herbert Simon (chair), Raj Reddy, John Anderson, and Brian MacWhinney. The following is the talk abstract:

LEARNING TO UNDERSTAND SPEECH SOUNDS: A THEORY AND MODEL
Gary L. Bradshaw

Current theories of speech perception postulate a set of innate feature detectors that derive a phonemic analysis of speech, even though a large number of empirical tests are inconsistent with the feature detector hypothesis. I will briefly describe feature detector theory and the evidence against it, and will then present an alternative learning theory of speech perception. The talk will conclude with a description of a computer implementation of the theory, along with learning and performance data for the system.

------------------------------

Date: 25 Oct 1983 1510-PDT
From: GOGUEN at SRI-CSL
Subject: rewrite rule seminar

TENTATIVE PROGRAM FOR TERM REWRITING SEMINAR
--------------------------------------------

FIRST TALK: 27 October 1983, Thursday, 3:30-5pm, Jean-Pierre Jouannaud, Room EL381, SRI

This first talk will be an overview: basic mechanisms, solved & unsolved problems, and main applications of term rewriting systems. We will survey the literature, also indicating the most important results and open problems, for the following topics:

  1. definition of rewriting
  2. termination
  3. For non-terminating rewritings: Church-Rosser properties, Sound computing strategies, Optimal computing strategies
  4. For terminating rewritings: Church-Rosser properties, completion algorithm, inductive completion algorithm, narrowing process

Three kinds of term rewriting will be discussed: Term Rewriting Systems (TRS), Equational Term Rewriting Systems (ETRS) and Conditional Term Rewriting Systems (CTRS).
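To give non-specialists a concrete feel for the seminar's subject, here is a minimal sketch of a terminating term rewriting system -- the two Peano-addition rules add(z, y) -> y and add(s(x), y) -> s(add(x, y)) -- written out in modern Python. The tuple encoding of terms and all function names are assumptions of this illustration, not material from the seminar announcement:

```python
def rewrite(term):
    """Apply one innermost-first rewriting pass for Peano addition.

    Term encoding (an assumption of this sketch):
      'z'            -- zero
      ('s', t)       -- successor of t
      ('add', a, b)  -- addition
    Rules: add(z, y) -> y   and   add(s(x), y) -> s(add(x, y)).
    """
    if isinstance(term, tuple):
        head, *args = term
        args = [rewrite(a) for a in args]       # rewrite subterms first
        term = (head, *args)
        if head == 'add':
            x, y = args
            if x == 'z':                        # rule 1
                return y
            if isinstance(x, tuple) and x[0] == 's':
                return ('s', ('add', x[1], y))  # rule 2
    return term

def normal_form(term):
    """Rewrite until no rule applies; termination guarantees this stops."""
    nxt = rewrite(term)
    while nxt != term:
        term, nxt = nxt, rewrite(nxt)
    return term
```

Applying normal_form to the term for 1 + 1, ('add', ('s', 'z'), ('s', 'z')), yields ('s', ('s', 'z')): the system computes 2. Because this rule set is both terminating and Church-Rosser, the normal form is unique whatever strategy is used -- exactly the properties itemized in the seminar program above.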
--------------------------------------------------

Succeeding talks should be more technical. The accompanying bibliographical citations suggest important and readable references for each topic. Do we have any volunteers for presenting these topics?

---------------------------------------------------

Second talk, details of terminating TRS: Knuth and Bendix; Dershowitz TCS; Jouannaud; Lescanne & Reinig, Formalization of Programming Concepts, Garmisch; Huet JACM; Huet JCSS; Huet & Hullot JACM; Fay CADE 78; Hullot CADE 80; Goguen CADE 80.

Third and fourth talk, details of terminating ETRS: Jouannaud & Munoz draft; Huet JACM; Lankford & Ballantine draft; Peterson & Stickel JACM; Jouannaud & Kirchner POPL; Kirchner draft; Jouannaud, Kirchner & Kirchner ICALP.

Fifth talk, details of turning the Knuth-Bendix completion procedure into a complete refutational procedure for first order built in theories, with applications to PROLOG: Hsiang thesis; Hsiang & Dershowitz ICALP; Dershowitz draft "Computing with TRW".

Sixth and seventh talks, non-terminating TRS and CTRS: O'Donnel LNCS; Huet & Levy draft; Pletat, Engels and Ehrich draft; Bergstra & Klop draft.

Eighth talk, terminating CTRS: Remy thesis.

(More time may be needed for some talks.)

------------------------------

Date: 25 Oct 83 1407 PDT
From: Terry Winograd
Subject: next week's talkware - Nov 1 TUESDAY - K. Nygaard

[Reprinted from the SU-SCORE bboard.]

Date: Tuesday, Nov 1 *** NOTE ONE-TIME CHANGE OF DATE AND TIME ***
Speaker: Kristen Nygaard (University of Oslo and Norwegian Computing Center)
Topic: SYDPOL: System Development and Profession-Oriented Languages
Time: 1:15-2:30
Place: Poly Sci Bldg. Room 268. ***NOTE NONSTANDARD PLACE***

A new project involving several universities and research centers in three Scandinavian countries has been established to create new methods of system development, using profession-oriented languages.
They will design computer-based systems that will operate in work associated with professions (the initial application is in hospitals), focussing on the problem of facilitating cooperative work among professionals. One aspect of the research is the development of formal languages for describing the domains of interest and providing an interlingua for the systems and for the people who use them. This talk will focus on the language-design research, its goals and methods. ------------------------------ End of AIList Digest ******************** 27-Oct-83 15:13:41-PDT,11338;000000000001 Mail-From: LAWS created at 27-Oct-83 15:09:56 Date: Thursday, October 27, 1983 2:53PM From: AIList Moderator Kenneth Laws Reply-to: AIList@SRI-AI US-Mail: SRI Int., 333 Ravenswood Ave., Menlo Park, CA 94025 Phone: (415) 859-6467 Subject: AIList Digest V1 #83 To: AIList@SRI-AI AIList Digest Friday, 28 Oct 1983 Volume 1 : Issue 83 Today's Topics: AI Jargon - Definitions, Unification - Request, Rational Psychology - Definition, Conferences - Computers and the Law & FORTH Proceedings, Seminars - AI at ADL & Theorem Proving ---------------------------------------------------------------------- Date: 26 October 1983 1048-PDT (Wednesday) From: abbott at AEROSPACE (Russ Abbott) Subject: Definitions of AI Terms The IEEE is in the process of preparing a dictionary of computer terms. Included will be AI-related terms. Does anyone know of existing sets of definitions? In future messages I expect to circulate draft definitions for comment. ------------------------------ Date: 26 Oct 83 16:46:09 EDT (Wed) From: decvax!duke!unc!bts@Berkeley Subject: Unification Ken, I posted this to USENET a week ago. Since it hasn't shown up in the AIList, I suspect that it didn't make it to SRI [...]. [Correct, we must have a faulty connection. -- KIL] Bruce P.S. As an astute USENET reader pointed out, I perhaps should have said that a unifier makes the terms "syntactically equal". 
I thought it was clear from context.

=====================================================================
From: unc!bts (Bruce Smith)
Newsgroups: net.ai
Title: Unification Query
Article-I.D.: unc.6030
Posted: Wed Oct 19 01:23:46 1983
Received: Wed Oct 19 01:23:46 1983

I'm interested in anything new on unification algorithms. In case some readers don't know what I'm talking about, I'll give a short description of the problem and some references I know of. Experts-- the ones I'm really interested in reaching-- may skip to the last paragraph.

Given a set of terms (in some language) containing variables, the unification problem is to find a 'unifier', that is, a substitution for the variables in those terms which would make the terms equal. Moreover, the unifier should be a 'most general unifier', that is, any other unifiers should be extensions of it. Resolution theorem-provers and logic programming languages like Prolog depend on unification-- though the Prolog implementations I'm familiar with "cheat". (See Clocksin and Mellish's "Programming in Prolog", p. 219.)

Unification seems to be a very active topic. The paper "A short survey on the state of the art in matching and unification problems", by Raulefs, Siekmann, Szabo and Unvericht, in the May 1979 issue of the SIGSAM Bulletin, contains a bibliography of over 90 articles. And, "An efficient unification algorithm", by Martelli and Montanari, in the April 1982 ACM Transactions on Programming Languages and Systems, gives a (very readable) discussion of the efficiency of various unification algorithms. A programming language has even been based on unification: "Uniform-- A language based on unification which unifies (much of) Lisp, Prolog and Act1" by Kahn in IJCAI-81.

So, does anyone out there in network-land have a unification bibliography more recent than 1979? If it's on-line, would you please post it to USENET's net.ai? If not, where can we get a copy?
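To make the query's problem statement concrete, here is a textbook sketch of the most-general-unifier computation in modern Python. The term encoding (variables as strings beginning with '?', compound terms as tuples headed by a functor) and all names are assumptions of this illustration, not code from the posting. It includes the occurs check -- precisely the step that the "cheating" Prolog implementations mentioned above omit for speed:

```python
def unify(t1, t2, subst=None):
    """Return a most-general substitution making t1 and t2 equal, or None.

    Encoding (an assumption of this sketch): variables are strings
    starting with '?'; compound terms are tuples whose first element
    is the functor; anything else is a constant.
    """
    if subst is None:
        subst = {}
    t1, t2 = walk(t1, subst), walk(t2, subst)
    if t1 == t2:
        return subst
    if is_var(t1):
        return bind(t1, t2, subst)
    if is_var(t2):
        return bind(t2, t1, subst)
    if isinstance(t1, tuple) and isinstance(t2, tuple) and len(t1) == len(t2):
        for a, b in zip(t1, t2):               # unify argument lists pairwise
            subst = unify(a, b, subst)
            if subst is None:
                return None
        return subst
    return None                                # functor or arity clash

def is_var(t):
    return isinstance(t, str) and t.startswith('?')

def walk(t, subst):
    """Chase variable bindings until a non-bound term is reached."""
    while is_var(t) and t in subst:
        t = subst[t]
    return t

def bind(var, term, subst):
    if occurs(var, term, subst):               # the check many Prologs skip
        return None
    new = dict(subst)
    new[var] = term
    return new

def occurs(var, term, subst):
    term = walk(term, subst)
    if term == var:
        return True
    if isinstance(term, tuple):
        return any(occurs(var, arg, subst) for arg in term)
    return False
```

For instance, unifying f(?x, b) with f(a, ?y) yields the substitution {?x: a, ?y: b}, while unifying ?x with f(?x) fails by the occurs check -- a "cheating" unifier would instead build a circular term.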
Bruce Smith, UNC-Chapel Hill decvax!duke!unc!bts (USENET) bts.unc@udel-relay (other NETworks) ------------------------------ Date: Wednesday, 26-Oct-83 18:42:21-GMT From: RICHARD HPS (on ERCC DEC-10) Reply-to: okeefe.r.a. Subject: Rational Psychology If you were thinking of saying something about "Rational Psychology" and haven't read the article, PLEASE restrain yourself. It appeared in Volume 4 Issue 3 (Autumn 83) of "The AI Magazine", and is pages 50-54 of that issue. It isn't hard to get AI Magazine. AAAI members get it. I'm not a member, but DAI Edinburgh has a subscription and I read it in the library. I am almost tempted to join AAAI for the AI magazine alone, it is good value. The "Rational" in Rational Psychology modifies Psychology the same way Rational modifies Mechanics in Rational Mechanics or Thermodynamics in Rational Thermodynamics. It does NOT contrast with "the psychology of emotion" but with Experimental Psychology or Human Psychology. Here is a paragraph from the paper in question: " The aim of rational psychology is understanding, just as in any other branch of mathematics. Where much of what is labelled "mathematical psychology" consists of microscopic mathematical problems arising in the non-mathematical prosecution of human psychology, or in the exposition of informal theories with invented symbols substituting for equally precise words, rational psychology seeks to understand the structure of psychological concepts and theories by means of the most fit mathematical concepts and strict proofs, by suspiciously analyzing the informally developed notions to reveal their essence and structure, to allow debate on their interpretation to be phrased precisely, with consequences of choices seen mathematically. The aim is not simply to further informal psychology, but to understand it instead, not necessarily to solve problems as stated, but to see if they are proper problems at all by investigating their formulations. 
" There is nothing in this, or any other part of the paper, that would exclude the study of emotions from Rational Psychology. Indeed, unless or until we encounter another intelligent race, Rational Psychology seems to offer the only way to telling whether there are emotions that human beings cannot experience. My only criticism of Doyle's programme (note spelling, I am not talking about a computer program) is that I think we are as close to a useful Rational Psychology as Galileo was to Rational Mechanics or Carnot was to Rational Thermodynamics. I hope other people disagree with me and get cracking on it. Any progress at all in this area would be useful. ------------------------------ Date: Thu, 27 Oct 83 07:50:56 pdt From: ihnp4!utcsrgv!dave@Berkeley Subject: Computers and the Law Dalhousie University is sponsoring a computer conference under CONFER on an MTS system at Wayne State University in Michigan. The people in the conference include lawyers interested in computers as well as computer science types interested in law. Topics of discussion include computer applications to law, legal issues such as patents, copyrights and trade secrets in the context of computers, CAI in legal education, and AI in law. For those who aren't familiar with Confer, it provides a medium which is somewhat more structured than Usenet for discussions. People post "items", and "discussion responses" are grouped chronologically (and kept forever) under the item. All of the files are on one machine only. The conference is just starting up. Dalhousie has obtained a grant to fund everyone's participation, which means anyone who is interested can join for free. Access is through Telenet or Datapac, and the collect charges are picked up by the grant. If anyone is interested in joining this conference (called Law:Forum), please drop me a line. Dave Sherman The Law Society of Upper Canada Osgoode Hall Toronto, Ont. 
Canada M5H 2N6
(416) 947-3466
decvax!utzoo!utcsrgv!dave@BERKELEY (ARPA)
{ihnp4,cornell,floyd,utzoo} !utcsrgv!dave (UUCP)

------------------------------

Date: Thu 27 Oct 83 10:22:48-PDT
From: WYLAND@SRI-KL.ARPA
Subject: FORTH Convention Proceedings

I have been told that there will be no formal proceedings of the FORTH convention, but that articles will appear in "FORTH Dimensions", the magazine/journal of the FORTH Interest Group. This journal publishes technical articles about FORTH methods and techniques, algorithms, applications, and standards. It is available for $15.00/year from the following address:

  FORTH Interest Group
  P.O. Box 1105
  San Carlos, CA 94070
  415-962-8653

As you may know, Mountain View Press carries most of the available literature for FORTH, including the proceedings of the various technical conferences such as the FORTH Application Conferences at the University of Rochester and the FORML conferences. I highly recommend them as a source of FORTH literature. Their address is:

  Mountain View Press, Inc.
  P.O. Box 4656
  Mountain View, CA 94040
  415-961-4103

I hope this helps.

Dave Wyland
WYLAND@SRI

------------------------------

Date: Wednesday, 26 October 1983 14:55 edt
From: TJMartin.ADL@MIT-MULTICS.ARPA (Thomas J. Martin)
Subject: Seminar Announcement

PLACE:   Arthur D. Little, Inc.
         Acorn Park (off Rte. 2 near Rte. 2/Rte. 16 rotary)
         Cambridge MA
DATE:    October 31, 1983
TIME:    8:45 AM, ADL Auditorium
TOPIC:   "Artificial Intelligence at ADL -- Activities, Progress, and Plans"
SPEAKER: Dr. Karl M. Wiig, Director of ADL AI Program

ABSTRACT: ADL's AI program has been underway for four months. A core group of staff has been recruited from several sections in the company and trained. Symbolics 3600 and Xerox 1100 machines have been installed and are now operational. The seminar will discuss research in progress at ADL in: expert systems, natural language, and knowledge engineering tools.
------------------------------ Date: Wed 26 Oct 83 20:11:52-PDT From: Doug Lenat Subject: CS Colloq, Tues 11/1 Jussi Ketonen [Reprinted from the SU-SCORE bboard.] CS Colloquium, Tuesday, November 1, 4:15pm Terman Auditorium (refreshments at 3:45 at the 3rd floor lounge of MJH) SPEAKER: Dr. Jussi Ketonen, Stanford University CS Department TITLE: A VIEW OF THEOREM-PROVING I'll be discussing the possibility of developing powerful expert systems for mathematical reasoning - a domain characterized by highly abbreviated symbolic manipulations whose logical complexity tends to be rather low. Of particular interest will be the proper role of meta theory, high-order logic, logical decision procedures, and rewriting. I will argue for a different, though equally important, role for the widely misunderstood notion of meta theory. Most of the discussion takes place in the context of EKL, an interactive theorem-proving system under development at Stanford. It has been used to prove facts about Lisp programs and combinatorial set theory. I'll describe some of the features of the language of EKL, the underlying rewriting system, and the algorithms used for high-order unification with some examples. 
------------------------------ End of AIList Digest ******************** 28-Oct-83 09:11:12-PDT,13629;000000000001 Mail-From: LAWS created at 28-Oct-83 09:10:16 Date: Friday, October 28, 1983 8:59AM From: AIList Moderator Kenneth Laws Reply-to: AIList@SRI-AI US-Mail: SRI Int., 333 Ravenswood Ave., Menlo Park, CA 94025 Phone: (415) 859-6467 Subject: AIList Digest V1 #84 To: AIList@SRI-AI AIList Digest Friday, 28 Oct 1983 Volume 1 : Issue 84 Today's Topics: Metaphysics - Split Consciousness, Halting Problem - Discussion, Intelligence - Recursion & Parallelism & Consciousness ---------------------------------------------------------------------- Date: 24 Oct 83 20:45:29-PDT (Mon) From: pur-ee!uiucdcs!uicsl!dinitz @ Ucb-Vax Subject: Re: consciousness and the teleporter - (nf) Article-I.D.: uiucdcs.3417 See also the 17th and final essay by Daniel Dennett in his book Brainstorms [Bradford Books, 1978]. The essay is called "Where Am I," and investigates exactly this question of "split consciousness." ------------------------------ Date: Thu 27 Oct 83 23:04:47-MDT From: Stanley T. Shebs Subject: Semi-Summary of Halting Problem Discussion Now that the discussion on the Halting Problem etc has died down, I'd like to restate the original question, which seems to have been misunderstood. The question is this: consider a learning program, or any program that is self-modifying in some way. What must I do to prevent it from getting caught in an infinite loop, or a stack overflow, or other unpleasantnesses? For an ordinary program, it's no problem (heh-heh), the programmer just has to be careful, or prove his program correct, or specify its operations axiomatically, or . But what about a program that is changing as it runs? How can *it* know when it's stuck in a losing situation? The best answers I saw were along the lines of an operating system design, where a stuck process can be killed, or pushed to the bottom of an agenda, or whatever. Workable, but unsatisfactory. 
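As a sidebar on one narrow but tractable case: when the monitored computation is a deterministic iteration of a state-transition function, a repeated state (and hence an infinite loop) can be detected without storing the history of visited states, so the monitor need not grow with the run. This is Floyd's two-pointer cycle-finding method, sketched below in modern Python; the function names and step bound are illustrative assumptions, not anything from the discussion:

```python
def reaches_cycle(f, x0, max_steps=10**6):
    """Detect a repeated state when iterating f from x0.

    Floyd's method: advance a 'tortoise' one step and a 'hare' two
    steps per iteration; they meet iff the orbit of x0 enters a cycle.
    Uses O(1) extra storage -- no history of states is kept.
    """
    tortoise, hare = f(x0), f(f(x0))
    steps = 1
    while tortoise != hare:
        if steps >= max_steps:
            return False          # gave up within the bound; no cycle seen
        tortoise = f(tortoise)    # one step
        hare = f(f(hare))         # two steps
        steps += 1
    return True                   # tortoise == hare: the orbit is cyclic
```

For example, iterating x -> (x + 1) mod 5 from 0 is reported as cyclic, while x -> x + 1 is not (within the bound). This does not, of course, touch the general Halting Problem; it handles only the special case of a closed, deterministic state loop.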
In the case of an infinite loop (that nastiest of possible errors), the program can only guess that it has created a situation where infinite loops can happen. The most obvious alternative is to say that the program needs an "infinite loop detector". Ted Jardine of Boeing tells a story where, once upon a time, some company actually tried to do this - write a program that would detect infinite loops in any other program. Of course, this is ludicrous; it's a version of the Halting Problem. For loops in a program under a given length, yes; arbitrary programs, no. So our self-modifying program can manage only a partial solution, but that's ok, because it only has to be able to analyze itself and its subprograms. The question now becomes: can a program of length n detect infinite loops in any program of length <= n ? I don't know; you can't just have it simulate itself and watch for duplicated states showing up, because the extra storage for the in-between states would cause the program to grow! and you have violated the initial conditions for the question. Some sort of static analysis could detect special cases (like the Life blinkers mentioned by somebody), but I doubt that all cases could be done this way. Any theory types out there with the answer?

Anyway, I *don't* think these are vacuous problems; I encountered them when working on a learning capability for my parser, and "solved" them by being very careful about rules that expanded the sentence, rather than reducing (really just context-sensitive vs context-free). Am facing it once again in my new project (a KR language derived from RLL), and this time there's no way to sidestep! Any new ideas would be greatly appreciated.

Stan Shebs

------------------------------

Date: Wed, 26 Oct 1983 16:30 EDT
From: BATALI%MIT-OZ@MIT-MC.ARPA
Subject: Transcendental Recursion

I've just joined this mailing list and I'm wondering about the recent discussion of "consciousness."
While it's an interesting issue, I wonder how much relevance it has for AI. Thomas Nagel's article "What is it like to be a bat?" argues that consciousness might never be the proper subject of scientific inquiry because it is by its nature, subjective (to the max, as it were) and science can deal with only objective (or at least public) things. Whatever the merits of this argument, it seems that a more profitable object of our immediate quest might be intelligence. Now it may be the case that the two are the same thing -- or it may be that consciousness is just "what it is like" to be an intelligent system.

On the other hand, much of our "unconscious" or "subconscious" reasoning is very intelligent. Consider the number of moves that a chess master doesn't even consider -- they are rejected even before being brought to consciousness. Yet the action of rejecting them is a very intelligent thing to do. Certainly someone who didn't reject those moves would have to waste time considering them and would be a worse (less intelligent?) chess player. Conversely it seems reasonable to suppose that one cannot be conscious unless intelligent.

"Intelligent" like "strong" is a dispositional term, which is to say it indicates what an agent thus described might do or tend to do or be able to do in certain situations. Whereas it is difficult to give a sharp boundary between the intelligent and the non-intelligent, it is often possible to say which of two possible actions would be the more intelligent. In most cases, it is possible to argue WHY the action is the more intelligent. The argument will typically mention the goals of the agent, its abilities, and its knowledge about the world. So it seems that there is a fairly simple and common understanding of how the term is applied: An action is intelligent just in case it well satisfies some goals of the agent, given what the agent knows about the world.
An agent is intelligent just in case it performs actions that are intelligent for it to perform. A potential problem with this is that the proposed account requires that the agent often be able to figure out some very difficult things on the way to generating an intelligent action: Which goal should I satisfy? What is the case in the world? Should I try to figure out a better solution? Each of these subproblems, constitutive of intelligence, seems to require intelligence.

But there is a way out, and it might bring us back to the issue of consciousness. If the intelligent system is a program, there is no problem with its applying itself recursively to its subproblems. So the subproblems can also be solved intelligently. For this to work, though, the program must understand itself and understand when and how to apply itself to its subproblems. So at least some introspective ability seems like it would be important for intelligence, and the better the system was at introspective activities, the more intelligent it would be. The recent theses of Doyle and Smith seem to indicate that a system could be COMPLETELY introspective in the sense that all aspects of its operation could be accessible and modifiable by the program itself. But I don't know if it would be conscious or not.

------------------------------

Date: 26 Oct 1983 1537-PDT
From: Jay
Subject: Re: Parallelism and Consciousness

Anything that can be done in parallel can be done sequentially. Parallel computations can be faster, and can be easier to understand/write. So if consciousness can be programmed, and if it is as complex as it seems, then perhaps parallelism should be exploited. No algorithm is inherently parallel.
j'

------------------------------

Date: Thu 27 Oct 83 14:01:59-EDT
From: RICKL%MIT-OZ@MIT-MC.ARPA
Subject: Parallelism & Consciousness

    From: BUCKLEY@MIT-OZ
    Subject: Parallelism and Consciousness

    -- of what relevance is the issue of time-behavior of an algorithm
    to the phenomenon of intelligence, i.e., can there be in principle
    such a beast as a slow, super-intelligent program?

gracious, isn't this a bit chauvinistic? suppose that ai is eventually successful in creating machine intelligence, consciousness, etc. on nano-second speed machines of the future: we poor humans, operating only at rates measured in seconds and above, will seem incredibly slow to them. will they engage in debate about the relevance of our time-behavior to our intelligence? if there cannot in principle be such a thing as a slow, super-intelligent program, how can they avoid concluding that we are not intelligent?

-=*=- rick

------------------------------

Mail-From: DUGHOF created at 27-Oct-83 14:14:27
Date: Thu 27 Oct 83 14:14:27-EDT
From: DUGHOF@MIT-OZ
Subject: Re: Parallelism & Consciousness
To: RICKL@MIT-OZ
In-Reply-To: Message from "RICKL@MIT-OZ" of Thu 27 Oct 83 14:04:28-EDT

About slow intelligence -- there is one and only one reason to have intelligence, and that is to survive. That is where intelligence came from, and that is what it is for. It will do no good to have a "slow, super-intelligent program", for that is a contradiction in terms. Intelligence has to be fast enough to keep up with the world in real time. If the superintelligent AI program is kept in some sort of shielded place so that its real-time environment is essentially benevolent, then it will develop a different kind of intelligence from one that has to operate under higher pressures, in a faster-changing world. Everybody has had the experience of wishing they'd made some clever retort to someone, but thinking of it too late.
Well, if you always thought of those clever remarks on the spot, you'd be smarter than you are. If things that take time (chess moves, writing good articles, developing good ideas) took less time, then I'd be smarter. Intelligence and the passage of time are not unrelated. You can't slow your processor down and then claim that your program's intelligence is unaffected, even if it's running the same program. The world is marching ahead at the same speed, and "pure, isolated intelligence" doesn't exist.

------------------------------

Date: Thu 27 Oct 83 14:57:18-EDT
From: RICKL%MIT-OZ@MIT-MC.ARPA
Subject: Re: Parallelism & Consciousness

    From: DUGHOF@MIT-OZ
    Subject: Re: Parallelism & Consciousness
    In-Reply-To: Message from "RICKL@MIT-OZ" of Thu 27 Oct 83 14:04:28-EDT

    About slow intelligence -- there is one and only one reason to
    have intelligence, and that is to survive.... It will do no good
    to have a "slow, super-intelligent program", for that is a
    contradiction in terms. Intelligence has to be fast enough to
    keep up with the world in real time.

are you claiming that if we someday develop super-fast super-intelligent machines, then we will no longer be intelligent? this seems implicit in your argument, and seems itself to be a contradiction in terms: we *were* intelligent until something faster came along, and then after that we weren't.

or if this isn't strong enough for you -- you seem to want intelligence to depend critically on survival -- imagine that the super-fast super-intelligent computers have a robot interface, are malevolent, and hunt us humans to extinction in virtue of their superior speed & reflexes. does the fact that we do not survive mean that we are not intelligent? or does it mean that we are intelligent now, but could suddenly become un-intelligent without we ourselves changing (in virtue of the world around us changing)?

doubtless survival is important to the evolution of intelligence, & that point is not really under debate.
however, to say that whether something is or is not intelligent is a property dependent on the relative speed of the creatures sharing your world seems to make us un-intelligent as machines and programs get better, and amoebas intelligent as long as they were the fastest survivable thing around. -=*=- rick ------------------------------ Date: Thu, 27 Oct 1983 15:26 EDT From: STRAZ%MIT-OZ@MIT-MC.ARPA Subject: Parallelism & Consciousness Hofstadter: About slow intelligence -- there is one and only one [...] Lathrop: doubtless survival is important to the evolution of intelligence, & that point is not really under debate. Me: No, survival is not the point. It is for the organic forms that evolved with little help from outside intelligences, but a computer that exhibits a "slow, super-intelligence" in the protective custody of humans can solve problems that humans might never be able to solve (due to short attention span, lack of short-term memory, tedium, etc.) For example, a problem like where to best put another bridge/tunnel in Boston is a painfully difficult thing to think about, but if a computer comes up with a good answer (with explanatory justifications) after thinking for a month, it would have fulfilled anyone's definition of slow, superior intelligence. ------------------------------ Date: Thu, 27 Oct 1983 23:35 EDT From: MINSKY%MIT-OZ@MIT-MC.ARPA Subject: Parallelism & Consciousness That's what you get for trying to define things too much. 
------------------------------

End of AIList Digest
********************

31-Oct-83 09:45:38-PST,18547;000000000001
Mail-From: LAWS created at 31-Oct-83 09:44:18
Date: Monday, October 31, 1983 9:18AM
From: AIList Moderator Kenneth Laws
Reply-to: AIList@SRI-AI
US-Mail: SRI Int., 333 Ravenswood Ave., Menlo Park, CA 94025
Phone: (415) 859-6467
Subject: AIList Digest V1 #85
To: AIList@SRI-AI

AIList Digest             Monday, 31 Oct 1983      Volume 1 : Issue 85

Today's Topics:
  Intelligence

----------------------------------------------------------------------

Date: Fri 28 Oct 83 13:43:21-EDT
From: RICKL%MIT-OZ@MIT-MC.ARPA
Subject: Re: Parallelism & Consciousness

    From: MINSKY@MIT-OZ
    That's what you get for trying to define things too much.

what do i get for trying to define what too much?? though obviously, even asking that question is trying to define your intent too much, & i'll only get more of whatever i got for whatever it was i got it for.

-=*=-

------------------------------

Date: 28 Oct 1983 12:02-PDT
From: ISAACSON@USC-ISI
Subject: Re: Parallelism & Consciousness

    From Minsky:
    That's what you get for trying to define things too much.

Coming, as it does, out of the blue, your comment appears to negate the merits of this discussion. The net effect might simply be to bring it to a halt. I think that it is, inadvertent though it might be, unkind to the discussants, and unfair to the rest of us who are listening in.

I agree. The level of confusion is not insignificant and immediate insights are not around the corner. However, in my opinion, we do need serious discussion of these issues. I.e., questions of subcognition vs. cognition; parallelism, "autonomy", and epiphenomena; algorithmic programability vs. autonomy at the subcognitive and cognitive levels; etc. etc. Perhaps it would be helpful if you give us your views on some of these issues, including your views on a good methodology for discussing them.
-- JDI

------------------------------

Date: 30 Oct 83 13:27:11 EST (Sun)
From: Don Perlis
Subject: Re: Parallelism & Consciousness

    From: BUCKLEY@MIT-OZ

    -- of what relevance is the issue of time-behavior of an algorithm
    to the phenomenon of intelligence, i.e., can there be in principle
    such a beast as a slow, super-intelligent program?

    From: RICKL%MIT-OZ@mit-mc

    gracious, isn't this a bit chauvinistic? suppose that ai is
    eventually successful in creating machine intelligence,
    consciousness, etc. on nano-second speed machines of the future: we
    poor humans, operating only at rates measured in seconds and above,
    will seem incredibly slow to them. will they engage in debate about
    the relevance of our time-behavior to our intelligence? if there
    cannot in principle be such a thing as a slow, super-intelligent
    program, how can they avoid concluding that we are not intelligent?
    -=*=- rick

It seems to me that the issue isn't the 'appearance' of intelligence of one being to another--after all, a very slow thinker may nonetheless think very effectively and solve a problem the rest of us get nowhere with. Rather I suggest that intelligence be regarded as effectiveness, namely, as coping with the environment. Then real-time issues clearly are significant. A supposedly brilliant algorithm that 'in principle' could decide what to do about an impending disaster, but which is destroyed by that disaster long before it manages to grasp that there is a disaster, or what its dimensions are, perhaps should not be called intelligent (at least on the basis of *that* event). And if all its potential behavior is of this sort, so that it never really gets anything settled, then it could be looked at as really out of touch with any grasp of things, hence not intelligent.
Now this can be looked at in numerous contexts; if for instance it is applied to the internal ruminations of the agent, eg as it tries to settle Fermat's Last Theorem, and if it still can't keep up with its own physiology, ie, its ideas form and pass by faster than its 'reasoning mechanisms' can keep track of, then there too it will fail, and I doubt we would want to say it 'really' was bright. It can't even be said to be trying to settle Fermat's Last theorem, for it will not be able to keep that in mind. This is in a sense an internal issue, not one of relative speed to the environment. But considering that the internal and external events are all part of the same physical world, I don't see a significant difference. If the agent *can* keep track of its own thinking, and thereby stick to the task, and eventually settle the theorem, I think we would call it bright indeed, at least in that domain, although perhaps a moron in other matters (not even able to formulate questions about them).

------------------------------

Date: Sun 30 Oct 83 16:59:12-EST
From: RICKL%MIT-OZ@MIT-MC.ARPA
Subject: Re: Parallelism & Consciousness

[...]

    From: Don Perlis

    It seems to me that the issue isn't the 'appearance' of
    intelligence of one being to another....Rather I suggest that
    intelligence be regarded as effectiveness, namely, as coping with
    the environment....

From this & other recent traffic on the net, the question we are really discussing seems to be: ``can an entity be said to be intelligent in and of itself, or can an entity only be said to be intelligent relative to some world?''. I don't think I believe in "pure, abstract intelligence, divorced from the world". However, a consequence of the second position seems to be that there should be possible worlds in which we would consider humans to be un-intelligent, and I can't readily think of any (can anyone else?).
Leaving that question as too hard (at least for now), another question we have been chasing around is: ``can intelligence be regarded as survivability (or more generally as coping with an external environment)?''. In its strong form this position equates the two, which seems too strong. Amoebas cope quite well and have survived for unimaginably longer than we humans, but are generally acknowledged to be un-intelligent (if anyone cares to dispute this, please do). Survivability and coping with the environment, alone, therefore fail to adequately capture our intuitions of intelligence. -=*=- rick ------------------------------ Date: 30 Oct 1983 18:46:48 EST (Sunday) From: Dave Mankins Subject: Re: Intelligence and Competition By the survivability/adaptability criteria the cockroach must be one of the most intelligent species on earth. There's obviously something wrong with those criteria. ------------------------------ Date: Fri 28 Oct 83 14:19:36-PDT From: Ken Laws Subject: Definition of Intelligence I like the idea that the intelligence of an organism should be measured relative to its goals (which usually include survival, but not in the case of "smart" bombs and kamikaze pilots). I don't think that goal-satisfaction criteria can be used to establish the "relative intelligence" of organisms with very different goals. Can a fruit fly be more intelligent than I am, no matter how well it satisfies its goals? Can a rock be intelligent if its goals are sufficiently limited? To illustrate this in another domain, let us consider "strength". A large bulldozer is stronger than a small one because it can apply more brute force to any job that a bulldozer is expected to do. Can we say, though, that a bulldozer is "stronger" than a pile driver, or vice versa? Put another way: If scissors > paper > rock > scissors ..., does it make any sense to ask which is "best"?
I think that this is the problem we run into when we try to define intelligence in terms of goals. This is not to say that we can define it to be independent of goals, but goal satisfaction is not sufficient. Instead, I would define intelligence in terms of adaptability or learning capability in the pursuit of goals. An organism with hard-wired responses to its environment (e.g., a rock, a fruit fly, MACSYMA) is not intelligent because it does not adapt. I, on the other hand, can be considered intelligent even if I do not achieve my goals as long as I adapt to my environment and learn from it in ways that would normally enhance my chances of success. Whether speed of response must be included as a measure of intelligence depends on the goal, but I would say that, in general, rapid adaptation does indicate greater intelligence than the same response produced slowly. Multiple choice aptitude tests, however, exercise such limited mental capabilities that a score of correct answers per minute is more a test of current knowledge than of ability to learn and adapt within the testing period. Knowledge relative to age (IQ) is a useful measure of learning ability and thus of intelligence, but cannot be used for comparing different species. I prefer unlimited-time "power" tests for measuring both competence and intelligence. The Turing test imposes a single goal on two organisms, namely the goal of convincing an observer at the other end of a tty that he/it is the true human. This will clearly only work for organisms capable of typing at human speed and capable of accepting such a goal. These conditions imply that the organism must have a knowledge of human psychology and capabilities, or at least a belief (probably incorrect) that it can "fake" them. Given such a restricted situation, the nonhuman organism is to be judged intelligent if it can appropriately modify its own behavior in response to questioning at least as well as the human can.
(I would claim that a nonadapting organism hasn't a chance of passing the test, and that this is just what the observer will be looking for.) I do not believe that a single test can be devised which can determine the relative intelligences of arbitrary organisms, but the public wants such a test. What shall we give them? I would suggest the following procedure: For two candidate organisms, determine a goal that both are capable of accepting and that we consider related to intelligence. For an interesting test, the goal must be such that neither organism is specially adapted or maladapted for achieving it. The goal might be absolute (e.g., learn 100 nonsense syllables) or relative (e.g., double your vocabulary). If no such goal can be found, the organisms cannot be ranked relative to each other. If a goal is found, we can rank them along the dimension of the indicated behavior and we can infer a similar ranking for related behaviors (e.g., verbal ability). The actual testing for learning ability is relatively simple. How can we test a computer for intelligence? Unfortunately, a computer can be given a wide variety of sensors and effectors and can be made to accept almost any goal. We must test it for human-level adaptability in using all of these. If it cannot equal human ability on nearly all measurable scales (e.g., game playing, verbal ability, numerical ability, learning new perceptual and motor skills, etc.), it cannot be considered intelligent in the human sense. I know that this is exceedingly strict, but it is the same test that I would apply to decide whether a child, idiot savant, or other person were intelligent. On the other hand, if I could not match the computer's numerical and memory capabilities, it has the right to judge me unintelligent by computer standards. The intelligence of a particular computer program, however, should be judged by much less stringent standards. I do not expect a symbolic algebra program to learn to whistle Dixie.
If it can learn, without being programmed, a new form of integral faster than I can, or if it can find a better solution than I can in any length of time, then I will consider it an intelligent symbolic algebra program. Similar criteria apply to any other AI program. I have left open the question of how to measure adaptability, relative importance of differing goals, parallel satisfaction of multiple goals, etc. I have also not discussed creativity, which involves autonomous creation of new goals. Have I missed anything, though, in the basic concept of intelligence? -- Ken Laws ------------------------------ Date: 30 Oct 1983 1456-PST From: Jay Subject: Re: Parallelism & Consciousness From: RICKL%MIT-OZ@MIT-MC.ARPA ... the question we are really discussing seems to be: ``can an entity be said to be intelligent in and of itself, or can an entity only be said to be intelligent relative to some world?''. I don't think I believe in "pure, abstract intelligence, divorced from the world". ... another question we have been chasing around is: ``can intelligence be regarded as survivability (or more generally as coping with an external environment)?''. [...] I believe intelligence to be the ability to cope with CHANGES in the environment. Take desert tortoises: although they are quite young compared to amoebas, they have been living in the desert some thousands, if not millions, of years. Does this mean they are intelligent? NO! Put a freeway through their desert and the tortoises are soon dying. Increase the rainfall and they may become unable to compete with the rabbits (which will take full advantage of the increase in vegetation and produce an increase in rabbit-ation). The ability to cope with a CHANGE in the environment marks intelligence. All a tortoise need do is not cross a freeway, or kill baby rabbits, and then they could begin to claim intelligence. A similar argument could be made against intelligent amoebas.
A possible problem with this view is that biospheres can be counted intelligent: in the desert an increase in rainfall is handled by an increase in vegetation, and then in herbivores (rabbits) and then an increase in carnivores (coyotes). The end result is not the end of a biosphere, but the change of a biosphere. The biosphere has successfully coped with a change in its environment. Even more ludicrous, an argument could be made for an intelligent planet, or solar system, or even galaxy. Notice that an organism that does not change when its environment changes, perhaps because it does not need to, has not shown intelligence. This is, of course, not to say that that particular organism is un-intelligent. Were the world to become unable to produce rainbows, people would change little, if at all. My behavioralism is showing, j' ------------------------------ Date: Sun, 30 Oct 1983 18:11 EST From: JBA%MIT-OZ@MIT-MC.ARPA Subject: Parallelism & Consciousness From: RICKL%MIT-OZ at MIT-MC.ARPA However, a consequence of the second position seems to be that there should be possible worlds in which we would consider humans to be un-intelligent, and I can't readily think of any (can anyone else?). Read the Heinlein novel entitled (I think) "Have Spacesuit, Will Travel." Somewhere in there a race tries to get permission to kill humans wantonly, arguing that they're basically stupid. Of course, a couple of adolescent humans who happen to be in the neighborhood save the day by proving that they're smart. (I read this thing a long time ago, so I may have the story and/or title a little wrong.) Jonathan [Another story involves huge alien "energy beings" taking over the earth. They destroy all human power sources, but allow the humans to live as "cockroaches" in their energy cities. One human manages to convince an alien that he is intelligent, so the aliens immediately begin a purge. Who wants intelligent cockroaches?
-- KIL] ------------------------------ Date: Sun 30 Oct 83 15:41:18-PST From: David Rogers Subject: Intelligence and Competition From: RICKL%MIT-OZ@MIT-MC.ARPA I don't think I believe in "pure, abstract intelligence, divorced from the world". However, a consequence of the second position seems to be that there should be possible worlds in which we would consider humans to be un-intelligent, and I can't readily think of any (can anyone else?). From: Jay ...Take desert tortoises, [...] Combining these two comments, I came up with this: ...Take American Indians, although they are quite young compared to amoebas, they have been living in the desert some thousands of years. Does this mean they are intelligent? NO! Put a freeway (or some barbed wire) through their desert and they are soon dying. Increase cultural competition and they may be unable to compete with the white man (which will take full advantage of their lack of guns and produce an increase in white-ation). The ability to cope with CHANGE in the environment marks intelligence. I think that the stress on "adaptability" makes for some rather strange candidates for intelligence. The Indians were developing a cooperative relationship with their environment, rather than a competitive one; I cannot help but think that our cultural stress on competition has biased us towards competitive definitions of intelligence. Survivability has many facets, and competition is only one of them, and may not even be a very large one. Perhaps, before judging intelligence by how systems cope with change, we should also ask how they cope with stasis. While it is popular to think about how the great thinkers of the past arose out of great trials, I think that more of modern knowledge came from times of relative calm, when there was enough surplus to offer a group of thinkers time to ponder.
David ------------------------------ End of AIList Digest ******************** 31-Oct-83 10:09:00-PST,17734;000000000001 Mail-From: LAWS created at 31-Oct-83 10:07:06 Date: Monday, October 31, 1983 9:53AM From: AIList Moderator Kenneth Laws Reply-to: AIList@SRI-AI US-Mail: SRI Int., 333 Ravenswood Ave., Menlo Park, CA 94025 Phone: (415) 859-6467 Subject: AIList Digest V1 #86 To: AIList@SRI-AI AIList Digest Monday, 31 Oct 1983 Volume 1 : Issue 86 Today's Topics: Complexity Measures - Request, Obituary - Alfred Tarski, Seminars - Request for Synopses, Discourse Analysis - Representation, Review - JSL Review of GEB, Games - ACM Chess Results, Software Verification - VERUS System Offered, Conferences - FGCS Call for Papers ---------------------------------------------------------------------- Date: 24 October 1983 17:02 EDT From: Karl A. Nyberg Subject: writing analysis I am interested in programs that people might know of that give word distributions, sentence lengths, etc., so as to gauge the complexity of articles. I'd also like to know if anyone could point me to any models that specify that complexity in terms of these sorts of measurements. Let me know if any programs you might know of are particular to any text formatter, programming language, or operating system. Thanks. -- Karl -- [Such capabilities are included in recent versions of the Unix operating system. -- KIL] ------------------------------ Date: Sun 30 Oct 83 16:46:39-CST From: Lauri Karttunen Subject: Alfred Tarski [Reprinted from the UTexas-20 bboard.] Alfred Tarski, the father of model-theoretic semantics, died last Wednesday at the age of 82. ------------------------------ Date: Fri, 28 Oct 83 21:29:41 pdt From: sokolov%Coral.CC@Berkeley Subject: Re: talk announcements in net.ai Ken, I would like to submit this message as a suggestion to the AIlist readership: This message concerns the rash of announcements of talks being given around the country (probably the world, if we include Edinburgh). 
I am one of those people who like to know what is going on elsewhere, so I welcome the announcements. Unfortunately, my appetite is only whetted by them. Therefore, I would like to suggest that, WHENEVER possible, summaries of these talks should be submitted to the net. I realize that this isn't always practical; nevertheless, I would like to encourage people to submit these talk reviews. Jeff Sokolov Program in Cognitive Science and Department of Psychology UC Berkeley sokolov%coral@berkeley ...!ucbvax!ucbcoral:sokolov ------------------------------ Date: 29 Oct 83 1856 PDT From: David Lowe Subject: Representation of reasoning I have recently written a paper that might be of considerable interest to the people on this list. It is about a new form of structuring interactions between many users of an interactive network, based on an explicit representation of debate. Although this is not a typical AI problem, it is related to much AI work on the representation of language or reasoning (for example, the representation of a chain of reasoning in expert systems). The representation I have chosen is based on the work of the philosopher Stephen Toulmin. I am also sending a version of this message to HUMAN-NETS, since one goal of the system is to create a lasting, easily-accessed representation of the interactions which occur on discussion lists such as HUMAN-NETS or AIList. A copy of the paper can be accessed by FTP from SAIL (no login required). The name of the file is PAPER[1,DLO]. You can also send me a message (DLO @ SAIL) and I'll mail you a copy. If you send me your U.S. mail address, I'll physically mail you a carefully typeset version. Let me know if you are interested, and I'll keep you posted about future developments.
The following is an abstract: ++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++ THE REPRESENTATION OF DEBATE AS A BASIS FOR INFORMATION STORAGE AND RETRIEVAL By David Lowe Computer Science Department Stanford University, Stanford, CA 94305 Abstract Interactive computer networks offer the potential for creating a body of information on any given topic which combines the best available contributions from a large number of users. This paper describes a system for cooperatively structuring and evaluating information through well-specified interactions by many users with a common database. A working version of the system has been implemented and examples of its use are presented. At the heart of the system is a structured representation for debate, in which conclusions are explicitly justified or negated by individual items of evidence. Through debates on the accuracy of information and on aspects of the structures themselves, a large number of users can cooperatively rank all available items of information in terms of significance and relevance to each topic. Individual users can then choose the depth to which they wish to examine these structures for the purposes at hand. The function of this debate is not to arrive at specific conclusions, but rather to collect and order the best available evidence on each topic. By representing the basic structure of each field of knowledge, the system would function at one level as an information retrieval system in which documents are indexed, evaluated and ranked in the context of each topic of inquiry. At a deeper level, the system would encode knowledge in the structure of the debates themselves. This use of an interactive system for structuring information offers many further opportunities for improving the accuracy, accessibility, currency, conciseness, and clarity of information. ------------------------------ Date: 28 Oct 83 19:06:50 EDT (Fri) From: Bruce T.
Smith Subject: JSL review of GEB The most recent issue (Vol. 48, Number 3, September 1983) of the Journal of Symbolic Logic (JSL) has an interesting review of Hofstadter's book "Godel, Escher, Bach: an eternal golden braid". (It's on pages 864-871, a rather long review for the JSL. It's by Judson C. Webb, a name unfamiliar to me, amateur that I am.) This is a pretty favorable review-- I know better than to start any debates over GEB-- but what I found most interesting was its emphasis on the LOGIC in the book. Yes, I know that's not all GEB was about, but it was unusual to read a discussion of it from this point of view. Just to let you know what to expect, Webb's major criticism is Hofstadter's failure, in a book on self-reference, to discuss Kleene's fixed-point theorem, which fuses these two phenomena so closely together. The fixed-point theorem shows (by an adaptation of Godel's formal diagonalization) that the strangest imaginable conditions on functions have solutions computed by self-referential machines making essential use of their own Godel-numbers, provided only that the conditions are expressible by partial recursive functions. He also points out that Hofstadter didn't show quite how shocking Godel's theorems were: "In short, Godel discovered the experimental completeness of a system that seemed almost too weak to bother with, and the theoretical incompleteness of one that aimed only at experimental completeness." Enough. I'm not going to type the whole 7.5 pages. Go look for the newest issue of the JSL-- probably in your Mathematics library. For any students out there, membership in the Association for Symbolic Logic is only $9.00/yr and includes the JSL. Last year they published around 1000 pages. It's mostly short technical papers, but they claim they're going to do more expository stuff.
The address to write to is The Association for Symbolic Logic P.O. Box 6248 Providence, RI 02940 ============================================ Bruce Smith, UNC-Chapel Hill ...!decvax!duke!unc!bts (USENET) bts.unc@CSnet-Relay (from other NETworks) ------------------------------ Date: 27 October 1983 1130-EDT From: Hans Berliner at CMU-CS-A Subject: ACM chess results [Reprinted from the CMU-C bboard.] The results of the ACM World Computer Chess Championship are: CRAY BLITZ - 4 1/2 1st place BEBE - 4 2nd AWIT - 4 3rd NUCHESS - 3 1/2 4th CHAOS - 3 1/2 5th BELLE - 3 6th There were lots of others with 3 points. Patsoc finished with a score of 1.5 - 3.5. It did not play any micros and was usually outgunned by 10 mip mainframes. There was a lot of excitement in the last 3 rounds. In round 3 NUCHESS defeated Belle (the first time Belle had lost to a machine). In round 4 Nuchess drew Cray Blitz in a long struggle when they were both tied for the lead and remained so at 3 1/2 points after this round. The final round was really wild: BEBE upset NUCHESS (the first time it had ever beaten Nuchess) just when NUCHESS looked to have a lock on the tournament. CRAY Blitz won from Belle when the latter rejected a draw because it had been set to play for a win at all costs (Belle's only chance, but this setting was a mistake as CRAY BLITZ also had to win at all costs). In the end AWIT snuck into 3rd place in all this commotion, without having ever played any of the contenders. One problem with a Swiss pairing system used for tournaments where only a few rounds are possible is that it only brings out a winner. The other scores are very much dependent on what happens in the last round. Belle was using a new modification in search technique which, based on the results, could be thought of as a mistake. Probably it is not, though possibly the implementation was not the best. In any case Thompson apparently thought he had to do something to improve Belle for the tournament.
In any case, it was not a lost cause for Thompson. He shared this year's Turing award with Ritchie for developing UNIX, received a certificate from the US Chess Federation for the first non-human chess master (for Belle), and a $16,000 award from the Common Wealth foundation for the invention award of the year (software) for his work on UNIX, C, and Belle. Lastly, it is interesting to note that this is the 4th world championship. They are held 3 years apart, and no program has won more than one of them. ------------------------------ Date: Mon, 17 Oct 83 10:41:19 CDT From: wagner@compion-vms Subject: Announcement: VERUS verification system offered Use of the VERUS Verification System Offered -------------------------------------------- VERUS is a software design specification and verification system produced by Compion Corporation, Urbana, Illinois. VERUS was designed for speed and ease of use. The VERUS language is an extension of the first-order predicate calculus designed for a software engineering environment. VERUS includes a parser and a theorem prover. Compion now offers use of VERUS over the MILNET/ARPANET. Use is for a maximum of 4 weeks. Each user is provided with: 1. A unique sign-on to Compion's VAX 11/750 running VMS 2. A working directory 3. Hard-copy user manuals for the use period. If you are interested, contact Fran Wagner (wagner@compion-vms). Note that the new numerical address for compion-vms is 10.2.0.55. Please send the following information to help us prepare for you to use VERUS: your name organization U.S. mailing address telephone number network address whether you are on the MILNET or the ARPANET whether you are familiar with VMS whether you have a DEC-supported terminal desired starting date and length of use We will notify you when you can log on and send you hard-copy user documents including a language manual, a user's guide, and a guide to writing state machine specifications.
After the network split, VERUS will be available over the MILNET and, by special arrangement, over the ARPANET. __________ VERUS is a trademark of Compion Corporation. DEC, VAX, and VMS are trademarks of Digital Equipment Corporation. ------------------------------ Date: 26 Oct 1983 19:34:39-EDT From: mac%mit-vax @ MIT-MC Subject: FGCS Call for Papers CALL FOR PAPERS FGCS '84 International Conference on Fifth Generation Computer Systems, 1984 Institute for New Generation Computer Technology November 6-9, 1984 Tokyo, Japan The scope of technical sessions of this conference encompasses the technical aspects of new generation computer systems which are being explored particularly within the framework of logic programming and novel architectures. This conference is intended to promote interaction among researchers in all disciplines related to fifth generation computer technology. The topics of interest include (but are not limited to) the following: PROGRAM AREAS Foundations for Logic Programs * Formal semantics/pragmatics * Computation models * Program analysis and complexity * Philosophical aspects * Psychological aspects Logic Programming Languages/Methodologies * Parallel/Object-oriented programming languages * Meta-level inferences/control * Intelligent programming environments * Program synthesis/understanding * Program transformation/verification Architectures for New Generation Computing * Inference machines * Knowledge base machines * Parallel processing architectures * VLSI architectures * Novel human-machine interfaces Applications of New Generation Computing * Knowledge representation/acquisition * Expert systems * Natural language understanding/machine translation * Graphics/vision * Games/simulation Impacts of New Generation Computing * Social/cultural * Educational * Economic * Industrial * International ORGANIZATION OF THE CONFERENCE Conference Chairman : Tohru Moto-oka, Univ of Tokyo Conference Vice-chairman : Kazuhiro Fuchi, ICOT Program Chairman
: Hideo Aiso, Keio Univ Publicity Chairman : Kinko Yamamoto, JIPDEC Secretariat : FGCS'84 Secretariat, Institute for New Generation Computer Technology (ICOT) Mita Kokusai Bldg. 21F 1-4-28 Mita, Minato-ku, Tokyo 108, Japan Phone: 03-456-3195 Telex: 32964 ICOT PAPER SUBMISSION REQUIREMENTS Four copies of manuscripts should be submitted by April 15, 1984 to : Prof. Hideo Aiso Program Chairman ICOT Mita Kokusai Bldg. 21F 1-4-28 Mita, Minato-ku Tokyo 108, Japan Papers are restricted to 20 double-spaced pages (about 5000 words) including figures. Each paper must contain a 200-250 word abstract. Papers must be written and presented in English. Papers will be reviewed by international referees. Authors will be notified of acceptance by June 30, 1984, and will be given instructions for final preparation of their papers at that time. Camera-ready papers for the proceedings should be sent to the Program Chairman prior to August 31, 1984. Intending authors are requested to return the attached reply card with tentative subjects. GENERAL INFORMATION Date : November 6-9, 1984 Venue : Keio Plaza Hotel, Tokyo, Japan Host : Institute for New Generation Computer Technology Outline of the Conference Program : General Sessions Keynote speeches Report of research activities on Japan's FGCS Project Panel discussions Technical sessions (Parallel sessions) Presentation by invited speakers Presentation of submitted papers Special events Demonstration of current research results Technical visit Official languages : English/Japanese Participants: 600 Further information: Conference information will be available in December, 1983. **** FGCS PROJECT **** The Fifth Generation Computer Systems (FGCS) Project, launched in April, 1982, is planned to span about ten years.
It aims at realizing more user-friendly and intelligent computer systems which incorporate inference and knowledge base management functions based on innovative computer architecture, and at contributing thereby to future society. The Institute for New Generation Computer Technology (ICOT) was established as the central research institute of the project. The ICOT Research Center began its research activities in June, 1982 with the support of government, academia and industry. ------------------------------ End of AIList Digest ********************