Date: Fri 2 Sep 1988 23:42-EDT
From: AIList Moderator Nick Papadakis
Reply-To: AIList@mc.lcs.mit.edu
Us-Mail: MIT LCS, 545 Tech Square, Rm# NE43-504, Cambridge MA 02139
Phone: (617) 253-6524
Subject: AIList Digest V8 #75
To: AIList@mc.lcs.mit.edu

AIList Digest            Saturday, 3 Sep 1988      Volume 8 : Issue 75

Queries:
  Newell's Knowledge Level
  Machine Translation

Responses:
  Prolog, etc. (2)
  How do I learn about AI, Prolog, and/or Lisp (2)
  The "A/D->ROM->D/A" sigmoid idea by Antti

----------------------------------------------------------------------

Date: Thu, 01 Sep 88 16:36:12 GMT
From: IT21%SYSB.SALFORD.AC.UK@MITVMA.MIT.EDU
Subject: Newell's Knowledge Level

From: Andrew Basden, I.T. Institute, University of Salford, Salford.

Please can anyone help clarify a topic?

In 1982 Allen Newell published a paper, 'The Knowledge Level' (Artificial
Intelligence, v.18, p.87-127), in which he proposed that there is a level
of description above and separate from the Symbol Level.  He called this
the Knowledge Level.  I have found it a very important and useful concept
in both Knowledge Representation and Knowledge Acquisition, largely
because it separates knowledge from how it is expressed.

But in my view Newell's paper contains a number of ambiguities and
apparent minor inconsistencies, as well as an unnecessary adherence to
logic and goal-directed activity, which I would like to sort out.  As
Newell says, "to claim that the knowledge level exists is to make a
scientific claim, which can range from dead wrong to slightly askew, in
the manner of all scientific claims."  I want to find a refinement of it
that is a bit less askew.

Surprisingly, in the six years since the idea was introduced there has
been very little discussion of it in AI circles.  In psychology circles
there has likewise been little detailed discussion, and there the
concepts are only similar, not identical, and bear different names.  SCI
and SSCI together give only 26 citations of the paper, of which only four
discuss the concepts in any way; most merely use various concepts from
Newell's paper to support their own statements.  Even in these four there
is little clarification or development of the idea of the Knowledge
Level.

So I am turning to the AILIST bulletin board.  Has anyone out there any
understanding of the Knowledge Level that can help in this process?
Indeed, is Allen Newell himself listening to the board?

Some of the questions I have are as follows:

1. Some (e.g. Dennett) mention 3 levels, while Newell mentions 5.  Who is
   'right' - or rather, what is the relation between them?

2. Newell says that logic is at the Knowledge Level.  Why?  I would have
   put it, like mathematics, very firmly in the Symbol Level.

3. Why the emphasis on logic?  Is it necessary to the concept, or just
   one form of it?  What about extra-logical knowledge, and how does his
   'logic' include non-monotonic logics?

4. The definition of the details of the Knowledge Level is in terms of
   the goals of a system.  Is this necessary to the concept, or is it
   just one possible form of it?  There is much knowledge that is not
   goal-directed.

Alexander et al. and Clancey both question Newell's adherence to logic
and goals, but do not discuss the case in detail.

Can anyone shed any light?  I have further questions, which I will put
directly to some of those who reply.  Or (please tell me) should I put
them on the board?  And would anyone like a summary from me of my
findings?

Thank you, in advance.

Andrew Basden
Information Technology Institute, University of Salford, Salford, UK.
JANET: abasden@uk.ac.salf.b
Phone: (44) 61 736 5843 x510;  Telex: 668680 (Sulib)
Fax: (44) 61 745 7808

------------------------------

Date: Fri, 2 Sep 88 15:59:50 PDT
From: Lynn Gazis
Subject: machine translation

Could someone send me some good references on machine translation?
Please send mail directly to me, as I often have trouble keeping up with
the list.

Lynn Gazis
sappho@sri-nic.arpa

------------------------------

Date: 1 Sep 88 05:29:19 GMT
From: quintus!ok@unix.sri.com (Richard A. O'Keefe)
Subject: Re: Prolog, etc.

In article <1034@mtund.ATT.COM> newton@mtund.ATT.COM (Newton Lee) writes:
>In a previous article, Paul Fishwick writes:
>> Does anyone know of a PD version of Prolog that will run under UNIX?
>> It must come with source since we would like to be able to use it on
>> any UNIX machine (including Gould, SUN, VAX, etc.).  We currently have
>
>We use C-Prolog on our UNIX machines (VAX, MIPS, 3B20, UNIX PC, etc.).
>It is based on the Prolog system written in IMP by Luis Damas (and
>Lawrence Byrd) for the ICL 2900 computers.  For more info, contact
>Fernando Pereira, EdCAAD, Dept. of Architecture, University of Edinburgh.
>
>Newton Lee
>AT&T Bell Laboratories

C Prolog is not public domain and never has been.  Fernando hasn't been
at EdCAAD for about five years; he is currently at SRI Cambridge.  EdCAAD
is still the place to ask about C Prolog.

You might find Stony Brook Prolog more what you're looking for.  It's
covered by a GNU-style "copyleft", but that shouldn't bother a .edu site.
The contact is Saumya Debray: debray@arizona.edu.

I'd be tempted to mention that Q------ Prolog is really great, more than
worth the price, but it doesn't run on Goulds, so I shan't (:-).

By "any UNIX machine", I hope Fishwick means "any 32-bit byte-addressed
virtual-memory machine running V.2 or later or 4.1BSD or later".  A 286
running Xenix is a UNIX machine, but don't expect porting C Prolog or
SB Prolog to it to be trivial.

------------------------------

Date: 2 Sep 88 17:49:32 GMT
From: aplcen!jhunix!apl_aimh@mimsy.umd.edu (Marty Hall)
Subject: Re: Prolog, etc.

In a previous article, fishwick@fish.cis.ufl.edu writes:
>Does anyone know of a PD version of Prolog that will run under UNIX?
>It must come with source .....

SB Prolog is a public-domain, compilable Clocksin & Mellish (C&M) Prolog
with source included.  They say that it runs on "Berkeley UNIX or related
operating systems"; I know that it compiles and runs fine on a Sun under
SunOS 3.x.  The University of Arizona will ship you 1600 bpi tar tapes
for "distribution costs" of $20 in N. America, $40 overseas.  I am
unaware of any anonymous ftp sites.

   SB-Prolog Distribution
   Department of Computer Science
   University of Arizona
   Tucson, AZ 85721

Regards-
- Marty Hall
-------
apl_aimh@jhunix.hcf.jhu.edu    Artificial Intelligence Laboratory, MS 100/601
...uunet!jhunix!apl_aimh       AAI Corporation
apl_aimh@jhunix.bitnet         PO Box 126
(301) 683-6455                 Hunt Valley, MD 21030

------------------------------

Date: 1 Sep 88 11:37:31 GMT
From: pur-phy!sawmill!mdbs!kbc@ee.ecn.purdue.edu (Kevin Castleberry)
Subject: Re: How do I learn about AI, Prolog, and/or Lisp

> Microsoft has a Lisp for MS-DOS (supposedly it is Common
> Lisp, but again, I haven't played with it).

Is this true?  Microsoft has a Lisp?

Technical support for mdbs products (KMAN, a relational db environment;
GURU, an expert system development environment; and MDBS III, a
post-relational, high-performance dbs; our products run under VMS, UNIX,
OS/2 and MSDOS)
is available by emailing to:

   support@mdbs.uucp
   {rutgers,ihnp4,decvax,ucbvax}!pur-ee!mdbs!support

The mdbs BBS can be reached at (317) 447-6685
(300/1200/2400 baud, 8 data bits, 1 stop bit, no parity).

Kevin Castleberry (kbc)
Director of Customer Services
Micro Data Base Systems Inc.
P.O. Box 248
Lafayette, IN 47902
(317) 448-6187
For sales call: (800) 344-5832

------------------------------

Date: 2 Sep 88 19:08:26 GMT
From: uhccux!todd@humu.nosc.mil (Todd Ogasawara)
Subject: Re: How do I learn about AI, Prolog, and/or Lisp

In article <984@mdbs.UUCP> kbc@mdbs.UUCP (Kevin Castleberry) writes:
>> Microsoft has a Lisp for MS-DOS (supposedly it is Common
>> Lisp, but again, I haven't played with it).
>Is this true? Microsoft has a lisp?

Yes, Microsoft has a Lisp that it licenses from a firm in Honolulu called
Soft WareHouse.  Soft WareHouse sells the same product under the name
muLISP-87.  muLISP itself is NOT a Common Lisp.  However, it comes with a
support library (source code in Lisp included) that adds the Common Lisp
functions to muLISP.  They also have an optional incremental compiler (I
think this option is $100 or so; I haven't bought it myself).

muLISP is no replacement for a big, expensive Lisp workstation.  But if
you want a small, inexpensive, relatively speedy Lisp development system,
I recommend you look at this package.  It is small and fast enough to use
on my 4.77MHz 8088-based Toshiba T-1000 when I feel like doing some Lisp
programming away from my office in the shade of a tree.

Soft WareHouse also has an interesting license.  It reads: "the software
shall be run on at most five (5) computers residing in a single building
or facility, under the control of END USER."  Pretty reasonable, I think.

Todd Ogasawara, U. of Hawaii Faculty Development Program
UUCP:     {uunet,ucbvax,dcdwest}!ucsd!nosc!uhccux!todd
ARPA:     uhccux!todd@nosc.MIL
BITNET:   todd@uhccux
INTERNET: todd@uhccux.UHCC.HAWAII.EDU  <== I'm told this rarely works

------------------------------

Date: Thu, 1 Sep 88 16:17:04 CDT
From: lugowski@ngstl1.csc.ti.com
Subject: response to the "A/D->ROM->D/A" sigmoid idea by Antti

Concerning the "analog/digital --> ROM --> digital/analog" neural
sigmoids:

Over here in Texas, Gary Frazier (Central Research Labs, Texas
Instruments) and I (AI Laboratory, same) have played with a very similar
idea for over a year now.  We would have loved to keep it to ourselves a
bit longer in order to quietly work out its implications, writing a nice,
understated little paper about what it buys and what it doesn't, but --
sigh -- Antti's note from the prettier end of Europe forces our hand:

1. Consider using RAM instead of ROM.  This allows you to learn the
   sigmoid, if you're so inclined, or otherwise modify it in real time.

2. Leave off the A/D and D/A conversions (for speed's sake) if there's a
   way to compute the thing in analog (often there is).

3. Consider functions rather different from sigmoids, and consider uses
   other than neural summation for network node activities.

4. Expect interesting system properties to emerge from this rather
   innocent-looking hardware move.

More on this in our forthcoming paper.  Some clues for those who want to
think this through in the interim: (1) implementations for neural
Darwinism?  (2) more bang for the hyper"plane" buck?  (3) faster
convergence than pure gradient descent in weight space?

Well, we could always turn out to be totally off base on this, but here's
the goods just in case we're not.  Comments?  Anyone else tinkering
thusly?
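For concreteness, the table-lookup activation is easy to play with in
software before committing anything to hardware.  A minimal sketch in C
follows; the 8-bit index width, the [-8, 8] input range, and all of the
identifiers are illustrative assumptions, not anything from the TI or
Antti designs:

    /* Illustrative sketch of the "A/D -> table -> D/A" activation idea.
       An n-bit "A/D" quantizes the summed input, the table holds a
       precomputed sigmoid, and the "D/A" is simply reading the entry
       back out.  Because the table is a writable array (RAM rather than
       ROM), the stored function can be altered, or even learned, at run
       time.  Compile with:  cc sigmoid_table.c -lm                     */

    #include <math.h>
    #include <stdio.h>

    #define BITS    8
    #define SIZE    (1 << BITS)       /* 256 table entries              */
    #define IN_MIN  (-8.0)            /* assumed input range to cover   */
    #define IN_MAX  ( 8.0)

    static double table[SIZE];        /* the "RAM" holding the function */

    /* Fill the table with any activation function; here a sigmoid. */
    static void load_sigmoid(void)
    {
        int i;
        for (i = 0; i < SIZE; i++) {
            double x = IN_MIN + (IN_MAX - IN_MIN) * i / (SIZE - 1);
            table[i] = 1.0 / (1.0 + exp(-x));
        }
    }

    /* "A/D": map a continuous input to a table index, clipping at the ends. */
    static int quantize(double x)
    {
        int i = (int)((x - IN_MIN) / (IN_MAX - IN_MIN) * (SIZE - 1) + 0.5);
        if (i < 0)        i = 0;
        if (i > SIZE - 1) i = SIZE - 1;
        return i;
    }

    /* One node activation: look the value up instead of computing exp(). */
    static double activate(double net_input)
    {
        return table[quantize(net_input)];
    }

    int main(void)
    {
        load_sigmoid();
        printf("f(0.0) = %f\n", activate(0.0));  /* roughly 0.5, modulo
                                                    quantization error   */
        printf("f(4.0) = %f\n", activate(4.0));  /* about 0.98           */
        return 0;
    }

Swapping a different function into the table (point 3 above) is then just
a different load routine, with no change to the nodes that consult it.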
--
Marek Lugowski
AI Lab, DSEG, Texas Instruments
P.O. Box 655936, M/S 154
Dallas, Texas 75265
lugowski@resbld.csc.ti.com

------------------------------

End of AIList Digest
********************