but his first step: he proved he existed.

  > Please be kinder to Rene next time.  He would not rest well if he
  > thought his method could be generally perceived to state the
  > opposite of what he meant.
  >
  >                         David Rubin
  >                         {allegra|astrovax|princeton}!fisher!david

He *believed* that the real world could be proved to exist, but he
certainly didn't prove it logically.  It's been a while since I read
his "proof", but I seem to remember something like this:

He proves he exists, as a thinking entity, because the one thing he
can't deny is that he thinks.  He also experiences the external world
through his senses; this can either be real, he decides, or the action
of a "deceiving demon" (maya, illusion).  Fine so far.  How does he
then "prove" that the outside world is real as opposed to some
"deception"?  Because God is Good.  He proves this quite logically; he
simply has some very questionable axioms...

This is similar to his thoughts on mind-body dualism.  He reached the
conclusion that the mind (soul?) and body were of separate substances,
and therefore could not interact.  But of course they did, and faced
with a nice, rational conclusion and "facts" that disagreed with it, he
of course retained his conclusion, giving as explanation that the mind
and body couldn't interact, except in the pineal gland.  [Huh?]  Kind
of suggests that there's something wrong with mind-body dualism.

[Interesting how these netnews discussions cross-fertilize.  To
net.ai'ers: Note that this says nothing against the existence of the
mind, but indicates that maybe there is no real duality (the universe
is one...), or maybe no real body (hmm...)]

        John Owens
        ...!{ {duke mcnc}!ncsu!uvacs houxm brl-bmd scgvaxd }!edison!jso

------------------------------

Date: 31 Jul 84 2:44:54-EDT (Tue)
From: hplabs!tektronix!uw-beaver!cornell!vax135!ukc!west44!gurr@Ucb-Vax.arpa
Subject: Re: Should The Turing test be modified with the times?
Article-I.D.: west44.276

I think that we're all missing something here - the Turing test was not
designed to test how like a human a machine could be, but to test
whether or not a machine could appear to think.  Adding facilities such
as a video link merely turns the test into an imitation game, which is
not what the test was designed for.

Personally, I think the test is totally inconclusive and irrelevant.
It gives merely a subjective, qualitative answer to a question which we
cannot answer satisfactorily about other people, or even about
ourselves (from some of the items on USENET, I'm sure some people don't
think :-) !!!).

     mcvax                         "Hello shoes.  I'm sorry
          \                         but I'm going to have to
           ukc!west44!gurr          stand in you again!"
          /
    vax135

           Dave Gurr, Westfield College, Univ. of London, England.

------------------------------

Date: 27 Jul 84 08:42:24 PDT (Friday)
From: Hoffman.es@XEROX.ARPA
Subject: Re: Ph.D. and 'understanding'

From H. E. Booker, in a piece in "Science" magazine (maybe around
summer 1973):

  "At the conclusion of an ideal undergraduate education, a man's brain
  works well.  He is convinced, not that he knows everything or even
  that he knows everything in a particular field, but that he stands a
  reasonable chance of understanding anything that someone else has
  already understood.  Any subject that he can look up in a book he
  feels that he too can probably understand.  On the other hand, if he
  cannot look it up in a book, he is uncertain what to do next.  This
  is where graduate education comes in.
  Unlike the recipient of a Bachelor's Degree, the recipient of a
  Doctor's Degree should have a reasonable confidence in his ability to
  face what is novel and to continue doing so throughout life."

        --Rodney Hoffman

------------------------------

Date: Fri, 27 Jul 1984  17:34 EDT
From: HENRY%MIT-OZ@MIT-MC.ARPA
Subject: Seminar - LISP Debugger

[Forwarded from the MIT bboard by SASW@MIT-MC.]

             Steps Toward Better Debugging Tools For Lisp

                           Henry Lieberman

                    Thursday, 2 August 1984, 3 PM
               7th floor playroom, 545 Technology Square

Although contemporary Lisp systems are renowned for their excellent
debugging facilities, better debugging tools are still urgently needed.
A basic flaw in the tools found in most implementations is that they
are oriented toward inspection of specific pieces of program or data,
and they offer little help in localizing bugs within a large body of
code.  Among conventional tools, a stepper is the best aid for
visualizing the operation of a procedure in such a way that a bug can
be found without prior knowledge of its location.  But steppers have
not been popular, largely because they are often too verbose and
difficult to control.

We present a new stepper for Lisp, Zstep, which integrates a stepper
with a real-time full-screen text editor to display programs and data.
Zstep presents evaluation of a Lisp expression by visually replacing
the expression by its value, conforming to an intuitive model of
evaluation as a substitution process.  The control structure of Zstep
allows a user to "zoom in" on a bug, examining the program first at a
very coarse level of detail, then at increasingly finer levels until
the bug is located.  Zstep keeps a history of evaluations, and can be
run either forward or backward.  Zstep borrows several techniques from
the author's example-oriented programming environment, Tinker,
including a novel approach to handling error conditions.  A videotaped
demonstration of Zstep will be shown.

------------------------------

Date: Wed 25 Jul 84 18:38:01-PDT
From: Dikran Karagueuzian
Subject: Workshop - Hardware Design Verification

[Forwarded from the CSLI Newsletter by Laws@SRI-AI.]

              WORKSHOP ON HARDWARE DESIGN VERIFICATION

The IFIP Working Groups 10.2 and 10.5 have issued a call for papers to
be delivered at a workshop to be held on November 26 and 27, 1984, at
the Technical University of Darmstadt, F.R. Germany.  The workshop is
on hardware design verification and will cover all aspects of
verification methods for hardware systems, including temporal logic,
language issues, and the application of AI techniques, as well as other
areas.  The workshop committee is chaired by Hans Eveking, Institut
fuer Datentechnik, Technical University of Darmstadt, D-6100 Darmstadt,
Fed. Rep. Germany, (49) (6151) 162075, and includes Stephen Crocker,
Aerospace Corporation, P.O. Box 92957, Los Angeles, California 90009.
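
[The Zstep abstract above describes evaluation as a substitution
process: each expression is visually replaced by its value, and a
history of evaluations can be replayed forward or backward.  The
fragment below is only a minimal sketch of that idea, written in Python
rather than Lisp; it is not Zstep itself, and the names (step_eval,
trace) are invented for illustration.]

    # Toy stepper: evaluate a nested tuple expression and record each
    # reduction as an (expression, value) pair, so the "history of
    # evaluations" can be replayed forward or backward.

    def step_eval(expr, trace):
        """Evaluate expressions such as ('+', 1, ('*', 2, 3))."""
        if isinstance(expr, (int, float)):
            return expr                      # atoms evaluate to themselves
        op, left, right = expr
        lval = step_eval(left, trace)
        rval = step_eval(right, trace)
        value = {'+': lval + rval, '-': lval - rval, '*': lval * rval}[op]
        trace.append((expr, value))          # expression "replaced" by value
        return value

    if __name__ == '__main__':
        history = []
        step_eval(('+', 1, ('*', 2, 3)), history)
        for expr, value in history:            # run "forward"
            print(expr, '==>', value)
        for expr, value in reversed(history):  # run "backward"
            print('undo:', expr, '==>', value)
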
------------------------------

End of AIList Digest
********************

Date: Wed 1 Aug 1984 09:01-PDT
From: AIList Moderator Kenneth Laws
Reply-to: AIList@SRI-AI
US-Mail: SRI Int., 333 Ravenswood Ave., Menlo Park, CA  94025
Phone: (415) 859-6467
Subject: AIList Digest V2 #98
To: AIList@SRI-AI


AIList Digest           Wednesday, 1 Aug 1984      Volume 2 : Issue 98

Today's Topics:
  Expert Systems - Archaeology and PROSPECTOR,
  Image Processing - Request for Algorithms,
  Logic Programming - Public-Domain Theorem Provers,
  AI Languages - Frame-Based Languages,
  AI Hardware - Facom Alpha,
  LISP - Georgia Tech Lisp & Aztec C & Franz on P-E 3230,
  Seminar - Nonmonotonic Reasoning Using Dempster's Rule

----------------------------------------------------------------------

Date: 26 Jul 84 9:09:00-PDT (Thu)
From: pur-ee!uiucdcs!uiucuxc!chandra @ Ucb-Vax.arpa
Subject: Req: Info on Archaeological Expert - (nf)
Article-I.D.: uiucuxc.28900002

Help!!!

I am a graduate student trying to build an archaeologist's assistant.
This program is supposed to contain knowledge about human habitation
patterns, anthropological aspects, etc.

This note is a request for any info on the application of
knowledge-based programs to archaeological surveying.  I faintly
remember having seen a reference on this topic long ago.  I am
currently thinking of using some of the ideas used in PROSPECTOR.

Any ideas, comments, cues?

Thanks,
Navin Chandra
(outside Illinois) Phone 1-800-872-2375 (extension 413)
(in Illinois)      Phone 1-800-252-7122 (extension 413)

------------------------------

Date: Sun 29 Jul 84 10:34:30-PDT
From: Ken Laws
Subject: Archaeology and PROSPECTOR

PROSPECTOR is a pretty fair hierarchical inference system, but be
advised that it provides no spatial reasoning mechanisms.  The basic
consultation mode asks questions about geologic conditions at a single
position or "site".  In map mode, it uses map data to provide the same
information independently for every point on the map -- there is no
spatial analysis or carry-over from one point to the next.  You can add
decisions based on criteria such as being "near a fault", but the
reasoning mechanisms have no way of determining "nearness"
automatically unless you provide a "nearness map"; neither can they
reason about one site being nearer than its neighbors.  These
deficiencies could be fixed, but the existing PROSPECTOR is not a
spatial reasoning system.

                                        -- Ken Laws

------------------------------

Date: 27 Jul 84 11:44:30-PDT (Fri)
From: ihnp4!drutx!zir @ Ucb-Vax.arpa
Subject: image processing
Article-I.D.: drutx.763

I am trying to track down source code for image processing routines,
such as digital filters, noise filters, dithering filters, shape
recognition and visual database design.  Any and all responses will be
appreciated.  I will post results in Net.sources if there is sufficient
interest.

Thanks for your time,
Mark Zirinsky
AT&TIS, Denver 31d48
(303) 538-1063

------------------------------

Date: 30 Jul 1984 13:36-PDT
From: dietz%USC-CSE@USC-ECL.ARPA
Subject: Public Domain Theorem Provers

I'm trying to find out what's available in the public domain in the way
of theorem proving programs and subroutine packages.  If you have such,
please send a note to:

        Paul Dietz
        dietz%usc-cse@usc-ecl

------------------------------

Date: Sun, 29 Jul 84 22:01 EDT
From: Tim Finin
Subject: Frame-Based Languages

I am investigating some implementation techniques for frame-based
representation languages with inheritance.
Most such languages do inheritance at "access time" and may or may not
keep a local copy of the inherited data.  I am trying to determine
which languages and/or implementations of languages have instead done
the inheritance at "definition time" by making some kind of explicit
local copy of, or pointer to, the inherited information.  I am
particularly interested in finding out if any languages have done this
in a general way that would allow changes in the attributes of a
generic object to be properly inherited by its current descendants.

        Tim

------------------------------

Date: Mon 30 Jul 84 18:23:17-EDT
From: Wayne McGuire
Subject: Facom Alpha

MIS Week for 8/1/84 (p. 18) reports the following:

  "Fujitsu Ltd. last week announced shipment next March of Japan's
  first Lisp machine, named Facom Alpha, claiming it is four times
  faster than the Symbolics 3600 in executing artificial intelligence
  programs such as expert systems.

  "The Alpha, carrying a price tag of $90,930, was said to be a
  back-end processor connectable with a Fujitsu mainframe or the
  company's S-3000 super minicomputer.  It runs 'Utilisp,' a local
  version of the Lisp language developed by Tokyo University."

What catches one's eye is the claim that the Facom Alpha is four times
faster than the Symbolics 3600.  Reading the popular computer press
these days could easily give one the impression that Japan is about to
trounce the U.S. in the development of both supercomputers and AI
systems.  Does anyone on AIList know whether this claim about the Facom
Alpha's speed has any grounding in reality?

-- Wayne McGuire --

------------------------------

Date: Mon 30 Jul 84 17:34:23-CDT
From: CMP.BARC@UTEXAS-20.ARPA
Subject: Yet Another Lisp Dialect?

I recently received a rather indirect inquiry concerning a Lisp dialect
called "Georgia Tech Lisp".  Could anyone out there provide or direct
me to some information about this variant and its idiosyncrasies?

        Dallas Webster (CMP.BARC@UTexas-20)

------------------------------

Date: 31 Jul 84 10:09:44-PDT (Tue)
From: hplabs!pesnta!lpi3230!steve @ Ucb-Vax.arpa
Subject: Franz Lisp running on Perkin Elmer 3230 Unix
Article-I.D.: lpi3230.142

Franz Lisp (Opus 38.79) is now running on a Perkin Elmer 3230 under
their version 2.4 Unix (a V7 version).  Soon after PE delivers their
promised System 5.2, it will be ported to that system.

For the many of you who have never heard of Perkin Elmer, they used to
be called Interdata, and an Interdata machine was the first machine to
which Unix was ported, in the mid seventies.  The 3230 has about 90% of
the speed of a VAX-780 for the price of a 750.

For the few of you who actually HAVE a PE machine and want to use Franz
Lisp, send me mail.  We haven't yet decided under what terms to make it
available.  The port was too time consuming and expensive to just give
it away, but we aren't in business and do not have the manpower to
really market and support it.  Maybe PE will distribute it on a third
party basis at a reasonable cost.

        Steve Burbeck
        Linus Pauling Institute
        440 Page Mill Road
        Palo Alto, CA 94306
        (415) 327-4064
        hplabs!{analog,pesnta}!lpi3230!steve

------------------------------

Date: 28 Jul 1984 2132-CDT
From: Usadacs at STL-HOST1.ARPA
Subject: LISP in Aztec C, Public Domain

Ref: AI Digest, V2 #90

"LISP in Aztec C" is available from SIMTEL20 via FTP.  MICRO:

        A. C. McIntosh, USADACS@STL-HOST1.
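
[The frame-based-languages query above contrasts inheritance done at
"access time" with inheritance done at "definition time".  The sketch
below illustrates the difference in plain Python; it is not any
particular frame language, and the class and slot names are invented
for the example.]

    # Access-time inheritance: every slot lookup walks the parent
    # chain, so later changes to a generic frame are seen by all of
    # its descendants.
    class AccessTimeFrame:
        def __init__(self, parent=None, **slots):
            self.parent = parent
            self.slots = dict(slots)

        def get(self, name):
            if name in self.slots:
                return self.slots[name]
            if self.parent is not None:
                return self.parent.get(name)
            raise KeyError(name)

    # Definition-time inheritance: inherited slots are copied once,
    # when the frame is defined; later changes to the parent are NOT
    # seen unless the copies are explicitly updated (the propagation
    # problem raised in the query).
    class DefinitionTimeFrame:
        def __init__(self, parent=None, **slots):
            self.slots = dict(parent.slots) if parent else {}
            self.slots.update(slots)

        def get(self, name):
            return self.slots[name]

    if __name__ == '__main__':
        elephant = AccessTimeFrame(color='gray', legs=4)
        clyde = AccessTimeFrame(parent=elephant, name='Clyde')
        elephant.slots['color'] = 'pink'
        print(clyde.get('color'))   # 'pink': change seen at access time

        elephant2 = DefinitionTimeFrame(color='gray', legs=4)
        clyde2 = DefinitionTimeFrame(parent=elephant2, name='Clyde')
        elephant2.slots['color'] = 'pink'
        print(clyde2.get('color'))  # 'gray': definition-time copy is stale
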
------------------------------

Date: Mon 30 Jul 84 15:14:35-PDT
From: Juanita Mullen
Subject: Seminar - Nonmonotonic Reasoning Using Dempster's Rule

[Forwarded from the Stanford SIGLUNCH distribution by Laws@SRI-AI.]

DATE:     Friday, August 3, 1984
LOCATION: Chemistry Gazebo, between Physical & Organic Chemistry
TIME:     12:05
SPEAKER:  Matt Ginsberg
          Heuristic Programming Project
          Stanford University
TOPIC:    Non-monotonic Reasoning Using Dempster's Rule

Rich's suggestion that the arcs of semantic nets be labeled so as to
reflect confidence in the properties they represent is investigated in
greater detail.  If these confidences are thought of as ranges of
acceptable probabilities, existing statistical methods can be used
effectively to combine them.  The framework developed also seems to be
a natural one in which to describe higher levels of deduction, such as
"reasoning about reasoning".

------------------------------

End of AIList Digest
********************

Date: Thu 2 Aug 1984 10:54-PDT
From: AIList Moderator Kenneth Laws
Reply-to: AIList@SRI-AI
US-Mail: SRI Int., 333 Ravenswood Ave., Menlo Park, CA  94025
Phone: (415) 859-6467
Subject: AIList Digest V2 #99
To: AIList@SRI-AI


AIList Digest            Thursday, 2 Aug 1984      Volume 2 : Issue 99

Today's Topics:
  AI Funding - Call for Questions,
  LISP - IBM 4341 Implementation?,
  Applications - Design and Test,
  Journal - Symbolic Computation,
  Book - Successful Dissertations and Theses by David Madsen,
  Intelligence - Turing Test & Understanding,
  Software Validation - Expert Systems,
  Seminar - Speech Recognition Using Lexical Information

----------------------------------------------------------------------

Date: 1 Aug 84 09:54 PDT
From: stefik.pa@XEROX.ARPA
Subject: Call for Questions: AAAI panel on SC

DARPA's Strategic Computing initiative is going to be a major source of
funding for AI research (as well as other Computer Science research) in
the next several years.  The project has been hailed as "just in time"
by people concerned with the levels and directions of funding for
research in Computer Science.  It has also attracted the criticism of
those who are worried about the effect of military goals on funding, or
about the dangers of trying to guide research too much.

Next Friday morning at the AAAI conference in Austin, there will be a
panel session during which several members of the DARPA staff will
present the goals, ideas, and scale of this program.  The presentation
will be followed by a question and answer period with me as moderator.
Some of the questions will come "live" from the audience.

Because the SC project will affect our research community in many ways,
I would like to make sure that the questions address a broad enough
range of issues.  To this end I am now soliciting questions from the
community.  I will select a sampling of "sent-in" questions to try to
provide a balance across issues of concern to the community -- anything
from funding levels, to research objectives, to 5th generation
comparisons, to the pace of the research, to expectations by the
military, to statements that have appeared in the press, etc.

Please send questions to me -- Stefik@Xerox.Arpa.  Keep them short.  I
don't want to wade through long paragraphs in search of a coherent
question.  Think of questions that could fit easily on a 35 mm slide --
say 25 words or so.  I expect to choose from these sent-in questions
for about half of the Q/A period.
        Mark

------------------------------

Date: 30 Jul 84 9:50:07-PDT (Mon)
From: ihnp4!mhuxl!ulysses!unc!mcnc!philabs!cmcl2!lanl-a!cib @ Ucb-Vax.arpa
Subject: Query - LISP for IBM 4341?
Article-I.D.: lanl-a.11272

I would be very grateful for information on LISP dialects for the IBM
4341, and sources thereof.  Thank you.

------------------------------

Date: Thu 2 Aug 84 10:43:10-PDT
From: Ken Laws
Reply-to: AIList-Request@SRI-AI
Subject: IEEE Design & Test

The July issue of IEEE Computer Graphics mentions that IEEE Design &
Test of Computers is seeking submissions for a special August 1985
issue on artificial intelligence techniques in design and test.  They
particularly solicit material on AI in design automation, CAD, and CAT,
and on expert systems, automatic design systems, test generation and
system diagnosis, natural-language CAD interfaces, and special-purpose
hardware to support AI systems.  Submit four copies by December 1 to
Guest Editor Donald E. Thomas, ECE Department, Carnegie-Mellon
University, Pittsburgh, PA 15213, (412) 578-3545.

                                        -- Ken Laws

------------------------------

Date: Thu 12 Jul 84 13:49:29-CDT
From: Bob Boyer
Subject: New Journal/Call for Papers

The Journal of Symbolic Computation (published by Academic Press,
London) will publish original articles on all aspects of the
algorithmic treatment of symbolic objects (terms, formulae, programs,
algebraic and geometrical objects).  The emphasis will be on the
mathematical foundation, correctness and complexity of new sequential
and parallel algorithms for symbolic computation.  However, the
description of working software systems for symbolic computation, of
general new design principles for symbolic software systems, and of
applications of such systems for advanced problem solving are also
within the scope of the journal.

Manuscripts should be sent in triplicate to:

        B. Buchberger, Editor
        Journal of Symbolic Computation
        Johannes-Kepler-Universitaet
        A-4040 Linz, Austria

Associate Editors: W. Bibel, J. Cannon, B. F. Caviness,
J. H. Davenport, K. Fuchi, G. Huet, R. Loos, Z. Manna, J. Nievergelt,
D. Yun.

------------------------------

Date: Wed 1 Aug 84 09:50:42-PDT
From: C.S./Math Library
Subject: Successful Dissertations and Theses by David Madsen

[Forwarded from the Stanford bboard by Laws@SRI-AI.]

Successful Dissertations and Theses: a guide to graduate student
research from proposal to completion, by David Madsen (LB2369.M32 1983
c.3), is currently on the New Books Shelf in the Math/CS Library.

        HL

------------------------------

Date: 25 Jul 84 9:54:00-PDT (Wed)
From: pur-ee!uiucdcs!ea!mwm @ Ucb-Vax.arpa
Subject: Re: Should The Turing test be modified w - (nf)
Article-I.D.: ea.500002

> What I am wondering is "should the Test be modified to Our times?"

I don't think so; at least not with the video link you mentioned.  A
key element in the Turing Imitation Game was that it hid the handicaps
suffered by the computer, leaving only the (possible) intelligence
exposed.  If you could modify it without subtracting that property,
then I'd say yes.  It just isn't clear that that can be done.

> I can see it now, over a crude link, we discover that we cannot tell
> the difference between man and machine, then we hook up a video
> link, and the difference 'becomes apparent.'

If that were the case, it would seem that the "apparent difference"
would be identical to the difference you get between a blind man and a
sighted man.  Are we therefore to conclude that the blind are only
artificially intelligent?
> --eugene miya
>   NASA Ames Research Center

------------------------------

Subject: Seminar - Speech Recognition Using Lexical Information

[Forwarded from the CSLI Newsletter by Laws@SRI-AI.]

              LEXICAL ACCESS USING PARTIAL INFORMATION

                     By Daniel P. Huttenlocher
                Massachusetts Institute of Technology

      Friday, August 3, 2 p.m., in the Trailers' Conference Room
                        next to Ventura Hall

ABSTRACT: Current approaches to speech recognition rely on classical
pattern matching techniques which utilize little or no language
knowledge.  We have recently proposed a model of word recognition which
uses speech-specific knowledge to access words on the basis of partial
information.  These partial descriptions serve to partition a large
lexicon into small equivalence classes using sequential phonetic and
prosodic constraints.  The representation is attractive for speech
recognition systems because it allows all but a small number of word
candidates to be excluded using only a crude description of the
acoustic signal.  For example, if the word ``splint'' is represented
according to the broad phonetic string
[fricative][stop][liquid][vowel][nasal][stop], there are only two
matching words in the 20,000-word Webster's Pocket Dictionary,
``splint'' and ``sprint.''  Thus, a partial representation can both
greatly reduce the space of possible word candidates, and be relatively
insensitive to variability in the speech signal across utterance
situations.  This talk will discuss a set of studies examining the
power of such partial lexic