---------

Date: 22 May 86 17:57:28 GMT
From: tektronix!tekgen!stever@ucbvax.berkeley.edu (Steven D. Rogers)
Subject: RE: LIFE references

Another more general book that mentions the game of Life in the broader context of games and life:

  Laws of the Game, How the Principles of Nature Govern Change
  by Manfred Eigen and Ruthild Winkler, Harper Colophon Books, 1981

It was sort of advertised as a "Godel, Escher, Bach" of games. I don't think it quite made that level, but it is an interesting book.

------------------------------

Date: 13 May 86 23:50:41 GMT
From: ihnp4!alberta!tony@ucbvax.berkeley.edu (Tony Marsland)
Subject: International Computer Chess Association Journal

The current (March 1986) issue of the ICCA Journal has been received. Aside from the following three technical articles, there are reports on Ken Thompson's 5-piece endgame studies, showing that several endgames are won in more than 50 moves, plus the usual reviews and short articles. There is also an extensive study of most commercially available chess machines by a Swedish group. This list is the most accurate and scientific estimate of the relative playing strength of those programs.

The major articles are
  "A review of game-tree pruning" by T.A. Marsland
  "An overview of machine learning in computer chess" by S.S. Skiena
  "A data base on data bases" by H.J. van den Herik and I.S. Herschberg

Information on the availability of this journal has been posted before.

------------------------------

Date: 22 May 86 19:05:00 GMT
From: pur-ee!uiucdcs!kadie@ucbvax.berkeley.edu
Subject: $1,000,000 Prize

This might be of general interest:

/* May 17, 1986 by chen@uiucdcsb.CS.UIUC.EDU in uiucdcs:uiuc.ai */
/* ---------- "$1,000,000 for a program" ---------- */

The following was posted in net.game.go. In case you don't know about Go, it is an ancient oriental board game played between two players on a 19 by 19 grid. The best Go program so far is no better than an intelligent novice who has received only one week of intensive training.

/* May 14, 1986 by alex@sdcrdcf.UUCP in uiucdcsb:net.games.go */
/* ---------- "Million $ prize" ---------- */

I think this is big news for the go community. The Chinese Wei Chi (go in Chinese) Association (TWCA) in Taipei, Taiwan, in conjunction with one of Taiwan's largest computer companies, has put 2 million US dollars in trust as prize money for computer go. The top standing prize is 1 million dollars for any computer go program that defeats the reigning junior champion in Taiwan. The prize offer is good for 15 years. (BTW, if you are wondering how they raised the prize money, take a look at all the cheap IBM PC clones around.) The prize money is much more interesting than the Fredkin prize. There are other prizes for the computer go champion, etc.

The TWCA is the first organization offering prize money for computer-computer and computer-human competition, as far as I and the computer go pioneer Bruce know; Bruce appeared in TWCA's first computer tournament last January. Bruce lost twice and did not place in the top five. That tournament offered 2 to 3 thousand in prize money to the winner. His first loss was to a go program written in BASIC running on an Apple. Bruce was winning convincingly until the Apple program made a suicide move, which is legal under Chinese rules but not under Japanese rules. Bruce's program went into a loop. The judge allowed Bruce to fix his code on the spot as long as he could make the move before his clock ran out.
(They did not want Bruce to lose because he was the main attraction, and I believe they paid him some appearance fee.) But Bruce did not fix it right within the 30 minutes he had. I did not stick around for his second loss. Bruce's program was running on an 8MHz PC clone.

If you are interested in entering the next competition, which is in November, you had better get the rule book on the Chinese rules, which differ slightly from the Japanese rules in areas like suicide moves and scoring. The last competition was restricted to personal computers, although I find a big disparity in computer power between a Macintosh and an Apple II. However, I don't think computing power is the main bottleneck right now. If there are enough people interested, I can get additional details about the tournament.

Also, a junior champion in Taiwan is about 1 dan in Chinese amateur rating, which is about 5-6 dan in US and Japanese amateur rating. Bruce's program was last rated 19Q in Japanese human tournament play. He said he might push it to 11-12Q by November. I think Bruce has good technique, but his potential is limited by his knowledge of go. But at any rate, you have your work cut out for you.

Alex Hwang

/* End of text from uiucdcsb:net.games.go */
/* End of text from uiucdcs:uiuc.ai */

------------------------------

Date: Thu 15 May 86 18:24:51-PDT
From: John Myers
Subject: IT*S Grammar

Sir: I am writing to protest the continual misuse of the word "its" for the third person neuter posessive, when everyone knows its the contraction for "it is". I's hair stands on end everytime I see someone use it in they's sentence. Ive even heard one grammarian state that he's book says that personal pronouns all have a special posessive case form that doesnt use "apostrophe-S"--hes off he's rocker! Youre well aware whatll happen to you's reading material if this becomes common. Were going to have to keep we's guard up, until its clear that peopleve gotten this straight! Its dreadful!! Not only that, some peoplere even forming they's contractions with an apostrophe. When they have a word phrase such as "it is", and they want to write it's contraction, they's spelling is "it's"!! Ill never see where they couldve gotten such atrocious grammar from, when if theyre unsure of how to use "its", they only have to look it's meaning up in they's dictionary!!

Instructor: "My word! Where's your grammar, boy?"
Youth: "Watching soap on the TV."

John Myers~~

------------------------------

Date: Thu 15 May 86 18:56:44-PDT
From: John Myers
Subject: Etymology of Foo-Bar

Item of interest: FUBAR was originally an acronym for "Fouled" Up Beyond All Recognition, stemming from the W.W.II era. It is related to SNAFU, and such short-lived acronyms as FUBIO, FUBISO, GFU, JANFU, MFU, SAMFU, SAPFU, SNEFU, SUSFU, TARFU, and TUIFU.

Source: A Dictionary of Euphemisms & Other Doubletalk, Rawson.

------------------------------

Date: 13 May 86 15:41:39 GMT
From: tektronix!uw-beaver!bullwinkle!rochester!rocksanne!sunybcs!ellie
      !colonel@ucbvax.berkeley.edu (Col. G. L. Sicherman)
Subject: Re: Plan 5 for Inner Space

> Answers: about nine months, plus a few years training. And hospitals are
> charging on the order of $1000 now; but the care and feeding of the project
> will cost more. You do get a tax break.

Warning: the U.S. government no longer allows private ownership of these units. Possession is permitted but subject to a long-term time limitation which is determined on a case-by-case basis.

"Well, Doctor Eccles, how are the men feeling? Any cases of frozen feet?"
"Duh, you didn't order any cases of frozen feet." -- Col. G. L. Sicherman UU: ...{rocksvax|decvax}!sunybcs!colonel CS: colonel@buffalo-cs BI: csdsicher@sunyabva ------------------------------ End of AIList Digest ******************** From vtcs1::in% Wed May 28 14:31:06 1986 Date: Wed, 28 May 86 14:30:59 edt From: vtcs1::in% (LAWS@SRI-AI.ARPA) To: ailist@sri-ai.arpa Subject: AIList Digest V4 #131 Status: R AIList Digest Tuesday, 27 May 1986 Volume 4 : Issue 131 Today's Topics: Queries - Functional Programming and AI & Parallel Logic Programming & Information Modeling for Real-Time/Asynch Processes AI Tools - PROLOGs & Common LISPs & Common LISP Style Standards, Expert Systems - Economics of Development and Deployment ---------------------------------------------------------------------- Date: 21 May 86 13:14:00 EST From: "CUGINI, JOHN" Reply-to: "CUGINI, JOHN" Subject: Functional programming and AI Here's a (dumb?) question for assorted AI wizards: how (if at all) does functional programming support AI type applications? By "functional programming", I mean the ability of a language to treat functions (or some other embodiment of an algorithm) as a data object: something that can be passed from one routine to another, created or modified, and then applied, all at run-time. Lisp functions are an example, as is C_Prolog's ability to construct predicates from lists with the =.. operator, and the OPS5 "build" action. Do working AI programs really exploit these features a lot? Eg, do "learning" programs construct unforeseen rules, perhaps based on generalization from examples, and then use the rules? Or is functional programming just a trick that happens to be easy to implement in an interpreted language? Thanks for any thoughts on this... John Cugini Institute for Computer Sciences and Technology National Bureau of Standards ------------------------------ Date: 25 May 86 14:26:49 GMT From: wisdom.BITNET!jaakov@ucbvax.berkeley.edu (Jacob Levy) Subject: Parallel Logic Programming Dear fellow AIListers and PrologListers, I'm interested in obtaining the latest references you may have to articles concerned with Parallel Logic Programming languages. If you have recently written an article concerned with parallel execution of Prolog or about a committed-choice non-deterministic LP language, I'm interested to read it, or at least to receive a pointer to the article. By RECENT I mean articles which have been published in 1985 and 1986 or which are about to appear. I am interested in any and all sub-topics of the fields listed above. Thank you very much ahead of time for your response, Rusty Red (AKA Jacob Levy) BITNET: jaakov@wisdom ARPA: jaakov%wisdom.bitnet@wiscvm.ARPA CSNET: jaakov%wisdom.bitnet@csnet-relay UUCP: (if all else fails..) ..!ucbvax!jaakov%wisdom.bitnet ------------------------------ Date: 24 May 86 00:10:04 GMT From: amdcad!cae780!leadsv!rtgvax!ramin@ucbvax.berkeley.edu Subject: Information Modeling for Real-Time/Asynch processes Sorry about all the cross-postings but I'm trying for the widest circulation short of net.general (:-) I am looking for any pointers to literature/specifications/ideas for Modeling of asynchronous and/or real-time systems. These would be very high-level design specification tools to help model parallel real-time events and systems. Intuitively, at least I think the way to go is Temporal Logics (hence the net.philosophy posting...) however, that seems to be currently applied only to hardware design (CIRCAL et al). 
------------------------------

Date: 25 May 86 14:26:49 GMT
From: wisdom.BITNET!jaakov@ucbvax.berkeley.edu (Jacob Levy)
Subject: Parallel Logic Programming

Dear fellow AIListers and PrologListers,

I'm interested in obtaining the latest references you may have to articles concerned with Parallel Logic Programming languages. If you have recently written an article concerned with parallel execution of Prolog or about a committed-choice non-deterministic LP language, I'm interested to read it, or at least to receive a pointer to the article. By RECENT I mean articles which have been published in 1985 and 1986 or which are about to appear. I am interested in any and all sub-topics of the fields listed above. Thank you very much ahead of time for your response,

Rusty Red (AKA Jacob Levy)

BITNET: jaakov@wisdom
ARPA:   jaakov%wisdom.bitnet@wiscvm.ARPA
CSNET:  jaakov%wisdom.bitnet@csnet-relay
UUCP:   (if all else fails..) ..!ucbvax!jaakov%wisdom.bitnet

------------------------------

Date: 24 May 86 00:10:04 GMT
From: amdcad!cae780!leadsv!rtgvax!ramin@ucbvax.berkeley.edu
Subject: Information Modeling for Real-Time/Asynch processes

Sorry about all the cross-postings but I'm trying for the widest circulation short of net.general (:-). I am looking for any pointers to literature/specifications/ideas for modeling of asynchronous and/or real-time systems. These would be very high-level design specification tools to help model parallel real-time events and systems. Intuitively, at least, I think the way to go is Temporal Logics (hence the net.philosophy posting...); however, that seems to be currently applied only to hardware design (CIRCAL et al). The problem with the standard dataflow diagram and associated descriptive systems is their failure to capture at least simultaneous (ideally, parallel) events. On the other hand, the rigor with which one would want to model such an event lends itself to creative Knowledge Representation techniques (hence net.ai and net.cog-eng...) and even possibly many-valued logics...?

To put it in some more perspective, the model would be of some complicated industrial processes that up to now have been modeled in a synchronous, i.e. serialized, fashion. I would like to see if there are any references out there to attempts at asynchronous modeling. Would definitely repost (to where? (:-) if there are enough responses... Thanks much...

ramin

: alias: ramin firoozye'              : USps: Systems Control Inc.   :
: uucp: ...!shasta  \                 :       1801 Page Mill Road    :
:       ...!lll-lcc  \                :       Palo Alto, CA 94303    :
:       ...!ihnp4     \...!ramin@rtgvax : ^G: (415) 494-1165 x-1777  :

------------------------------

Date: 16 May 86 10:53:22 GMT
From: allegra!mit-eddie!genrad!panda!husc6!harvard!seismo!mcvax!ukc!kcl-cs
      !glasgow!dunbar@ucbvax.berkeley.edu (Neil Dunbar)
Subject: Re: looking for Prolog

> I'm looking for a version of Prolog. The machines available to me
> include an AT&T 7300 (Unix PC), AT&T 3B5, AT&T 3B2, Plexus P/60, Plexus
> P/35, IBMPC, and AT&T 6300PC (IBMPC compatible). I've spoken with
> someone from AT&T who suggests that Quintus may be porting to the 7300.
> I've spoken with someone from Quintus who says there is no port and no
> contract at this time. I've heard of something called C-Prolog, but
> don't know for sure what it is. ...

Don't Borland make a version of Prolog to run on the PC, Turbo Prolog? If you want a compiler there is the Arity compiler, again for MS-DOS systems, but it costs a few thousand (dollars or pounds, depending on which side of the Atlantic you're on). C-Prolog V1.2 is the current Prolog interpreter system from the University of Edinburgh, running on our 11/780 under Unix. I don't know if it can be ported onto the machines you describe, but you never know, anything's possible. If you want to learn Prolog, try Clocksin & Mellish, "Programming in Prolog", which is an excellent tutorial guide.

Hope this helps,
Neil Dunbar.

------------------------------

Date: Sat, 24 May 86 02:35:00 +0200
From: enea!zyx!jeg@seismo.CSS.GOV
Subject: Re: Logic/Functional Languages?

In article <8605200626.AA27699@ucbvax.Berkeley.EDU> you write:
>Does anyone on the list know of available languages incorporating both
>logic and functional programming (preferably in a Unix 4.2 environment
>or possibly an IBM/PC)? ...

Answer to the questions:

1.) Does anyone on the list know of available languages incorporating both logic and functional programming...?
2.) Some version of Prolog embedded within Common Lisp...?
3.) Has anyone produced any large applications with these hybrid systems? Are the benefits derived from the systems *significant* (over using, say, vanilla lisp or prolog)?

Hewlett-Packard have informally introduced HP Prolog to some customers and the official introduction is scheduled to be sometime in August. HP Prolog resides on top of HP Common Lisp, and this development environment therefore incorporates both Common Lisp and Prolog. Since I am affiliated with HP, the following information is biased and might sound like an advertisement, but I'll try to answer the third question without breaking too many ethical rules for the net.
The HP Development Environment is based on HP-UX (Unix V.2) and the HP 9000 series 300, a 68020-based machine, with HP's window system.

Top level for the Development Environment:
 - A complete EMACS editor with some enhancements.
 - A general browser.

Main features of the Development Environment are:
 - The high level of integration
 - The ability to use both Common Lisp and Prolog in the same process and on the same objects, and to mix Common Lisp and Prolog code.

HP Common Lisp has:
 - Interpreter and compiler
 - Objects package
 - Ability to call C/Pascal/Fortran
 - Debugger
 - Interrupt handler

HP Prolog consists of two different environments:
 - A "Common Lisp compatible" S-expression syntax
 - Edinburgh C-Prolog syntax

HP Prolog has:
 - Interpreter
 - Incremental compiler
 - Block optimizing compiler
 - Debugger

Main features of HP Prolog are:
 - A much extended Prolog
 - Ability to mix Prolog and Common Lisp
 - Macros
 - Packages
 - Mode declarations
 - Declarative determinism
 - Integration in the environment
 - A well-designed and complete I/O system
 - Other minor features like strings, graphics etc.
 - An extended Definite Clause Grammar (DCG)
 - Respectable performance

The Prolog system will soon be available, with or without the Common Lisp system, on other vendors' machines. Quite large applications on this system are currently under development. There is definitely a significant advantage in being able to mix Common Lisp and Prolog. Common Lisp and Prolog have different advantages and complement rather than exclude each other.

Jan-Erik Gustavsson, ZYX AB, Styrmansgatan 6, 114 54 Stockholm, Sweden
Phone: + 46 - 8 - 65 32 05
...mcvax!enea!zyx!jeg

------------------------------

Date: 18 May 86 00:52:32 GMT
From: allegra!mit-eddie!genrad!panda!husc6!harvard!caip!lll-crg!seismo
      !mcvax!enea!kuling!martin@ucbvax.berkeley.edu (Erik Martin)
Subject: Re: Common LISP style standards.

In article <2784@jhunix.UUCP> ins_amrh@jhunix.UUCP writes:

> - How do you keep track of the side effects of destructive functions
>   such as sort, nconc, rplaca, mapcan, delete-if, etc?

Don't use them. I use destructive operations only when I need circular objects or when I need to speed up a program. In the latter case I write it strictly functionally first and then substitute 'delete' for 'remove', and so on. This should not affect the semantics of the program if it is 'correctly' written from the beginning. But it's really a task for the compiler, so you shouldn't need to think about it.

> - When should you use macros vs. functions?

I only use macros when I need a new syntax or an 'unusual' evaluation of the arguments (like FEXPRs in Franz and MacLisp).

> - How do you reference global variables? Usually you enclose it
>   in "*"s, but how do you differentiate between your own vars and
>   Common LISP vars such as *standard-input*, *print-level*, etc?

Always "*"s. No differentiation.

> - Documentation ideas?

An 'overview' description in the file header, more detailed ones on top of each function. Very few comments inline; use long function and variable names instead. Documentation strings in global variables and top-level (user) functions.

> - When to use DOLIST vs MAPCAR?

Quite obvious. Use DOLIST when you want to scan through a list, i.e. just look at it. At the end of the list it returns NIL or the optional return form. You can also return something with an explicit RETURN. Use MAPCAR when you want to build a *new* list with a function applied to each element.
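[A small illustration of that last distinction, with made-up data, in Common Lisp: MAPCAR builds and returns a new list, while DOLIST just walks the list for its side effects or for an early answer.]

  ;; MAPCAR: build a *new* list, one result per element.
  (mapcar #'(lambda (n) (* n n)) '(1 2 3 4))        ; => (1 4 9 16)

  ;; DOLIST: walk a list, possibly leaving early with RETURN.
  (defun first-negative (numbers)
    "Return the first negative number in NUMBERS, or NIL."
    (dolist (n numbers)
      (when (minusp n)
        (return n))))

  (first-negative '(3 0 -2 7))                      ; => -2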
> - DO vs LOOP?

Write what you mean. If you mean 'repeat until doomsday' (without any variables bound) then use LOOP.

> - Indentation/format ideas? Or do you always write it like the
>   pretty-printer would print it?

A lot of white space in the code. The rest is very personal and hard to set up rules for. Nice editors usually have good ideas about how it should look.

> - NULL vs ENDP, FIRST vs CAR, etc. Some would say "FIRST" is
>   more mnemonic, but does that mean you need to use
>   (first (rest (first X))) instead of (cadar X) ??

Again, write what you mean. If you mean 'is this the end of the list we are just working with?' then use ENDP; if you mean 'is this NIL (an empty list)?', use NULL; and if you mean 'is this false?' use NOT. Write FIRST if you mean the first element of a list, SECOND for the second, THIRD for the third... and combinations of these when appropriate. At some limit this gets very messy though, and C*R is better. But in that case you perhaps should write your own accessor functions. When working with conses I always use CAR and CDR.

My general rule is: write what you mean and leave the task of efficiency to the implementation and compiler.

Per-Erik Martin

--
Per-Erik Martin, Uppsala University, Sweden
UUCP: martin@kuling.UUCP  (...!{seismo,mcvax}!enea!kuling!martin)

------------------------------

Date: Sat, 24 May 86 08:23:25 est
From: munnari!psych.uq.oz!ross@seismo.CSS.GOV (Ross Gayler)
Subject: economics of expert systems - summary of replies

A while back I put out a request for information on the economics of the development and deployment of expert systems. This is a summary of the replies I have received.

I received around ten replies, most of which were of the 'please let me know' variety. Some of these went to some length to indicate that they felt this was an important area. It does seem that there is a need for this information and it either doesn't exist or somebody is not sharing it. There were three substantive replies, which told of:

1. A company which attempted to develop three expert systems. One took twice as long to develop as the FORTRAN program it replaced, the second was too slow to be usable, and the third was abandoned for lack of an expert.

2. A successful family of expert systems that are widely used in-house. The point made here was that the development cost was an insignificant fraction of the cost of packaging the product for deployment and the continuing cost of training the users.

3. A pointer to the November 1985 IEEE Transactions on Software Engineering, which was a special issue on "Artificial intelligence and software engineering". I found the articles by Doyle, Bobrow, Balzer, and Neches et al. to be the most relevant to my needs. Doyle argues that the productivity advantage of the artificial intelligence approach comes from the tools and techniques used to construct the product, not from the ultimate form of the product itself. The other papers do not explicitly address the modelling of costs. However, an implicit model is discernible from the areas they choose to emphasize.

I will send a request to the software engineering list and see if I can get any joy there. If not, it looks like I might be forced to do some work for myself. What I would like is a predictive model which will give me the costs to implement and deploy an expert system or conventional system as functions of various features of the problem, the tools available, and the development and deployment environments.
As I do not have any empirical data the best I can aim for is a set of statements on the qualitative shapes of the cost curves for various factors. Using these curves backwards would allow me to say what problem characteristics are a lot more conducive to an expert system solution being cheaper than a conventional solution. I will probably start with the cost models in Tom de Marco's book, "Controlling software projects" and try to identify expert systems analogues of the cost factors he identifies for conventional systems. If I manage to get anywhere with this I will let you know. Ross Gayler | ACSnet: ross@psych.uq.oz Division of Research & Planning | ARPA: ross%psych.uq.oz@seismo.css.gov Queensland Department of Health | CSNET: ross@psych.uq.oz GPO Box 48 | JANET: psych.uq.oz!ross@ukc Brisbane 4001 | UUCP: ..!seismo!munnari!psych.uq.oz!ross AUSTRALIA | Phone: +61 7 227 7060 ------------------------------ End of AIList Digest ******************** From vtcs1::in% Thu May 29 00:49:23 1986 Date: Thu, 29 May 86 00:49:17 edt From: vtcs1::in% (LAWS@SRI-AI.ARPA) To: ailist@sri-ai.arpa Subject: AIList Digest V4 #132 Status: R AIList Digest Wednesday, 28 May 1986 Volume 4 : Issue 132 Today's Topics: Queries - AI Survey & AI Applications in Simulation & Neural Networks, Brain Theory - Chaotic Neural Networks, Logic Programming - Functional Programming & Prolog Variables, AI Tools - VAX LISP on VMS and ULTRIX, Binding - Sussex Cognitive Studies, Literature - Object-Oriented Programming Book, Psychology - Doing AI Backwards ---------------------------------------------------------------------- Date: 23 May 86 15:32:39 GMT From: mcvax!ukc!reading!onion.cs.reading.AC.UK!scm@SEISMO (Stephen Marsh) Subject: A survey on AI I am currently doing a survey on the attitudes and beliefs of people working in the field of AI. It would be very much appreciated if you could take the time to save this notice, edit in your answers and post me back your reply. If there are any interesting results, I'll send them to the net sometime in the future. -Thanks 1. Do you, or have you, undertaken any research in the field of Artificial Intelligence?..... 2. In which country was the research undertaken?..... 3. For how long did your research continue?..... 4. If you are not currently working in the field of AI, when was the period of your research?..... 5. What area of research did your work cover? (eg IKBS)..... 6. Were you satisfied with the results of your research?.... 7. Did your research make you feel that in the long term AI was not going to succeed in creating an intelligent machine?.. 8. Do you find the progress of research in AI in the last 5 years?...... 10 years?..... 25 years?..... acceptable? 9. What do you consider the main objectives of AI?..... 10. Excluding financial pressures, do you consider that AI researchers should reconsider the direction of their work?..... 11. Do you consider that the current areas of research will eventually result in an 'intelligent' machine?..... 12. Do you consider that the current paradigm of humans producing cleverly-written computer programs can ever fulfil the initial aim of AI of producing an intelligent machine in the accepted sense of the word 'intelligent'?..... 13. Should a totally new approach to producing an intelligent machine be found, not based simply on sets of sophisticated programming techniques?..... scm@onion.cs.reading.ac.uk Steve Marsh Dept of Computer Science, PO Box 220, University of Reading, Whiteknights, READING ,UK. 
------------------------------ Date: 23 May 86 05:12:27 GMT From: shadow.Berkeley.EDU!omid@ucbvax.berkeley.edu (Omid Razavi) Subject: AI applications in simulation I am interested in the applications of AI in simulation. Specially, I'd like to know if there are expert system environments today that would support simulation modeling and provide features similar to those of standard simulation languages such as GASP IV and SIMSCRIPT. Also, references to technical articles related to this subject is greatly appreciated. Omid Razavi omid@shadow.berkeley.edu ------------------------------ Date: 17 May 86 14:39:34 GMT From: hplabs!qantel!lll-lcc!lll-crg!seismo!mcvax!ukc!warwick!gordon@ucbvax .berkeley.edu Subject: Re: neural networks This may be a bit of a tangent, but I feel it might have some impact on the current discussion. The mathematical theory of chaotic systems is currently an active area of research. The main observation is that models of even very simple systems become chaotic in a very small space of time. The human brain is far from being a simple system, yet the transition to chaos rarely occurs. There must be a self-correcting element within the system itself, as it is often perturbed by myriad external stimuli. Is the positive feedback mentioned in article <837@mhuxt.UUCP> thought to be similar to the self-correcting mechanisms in the brain? Gordon Joly -- {seismo,ucbvax,decvax}!mcvax!ukc!warwick!gordon ------------------------------ Date: 23 May 86 14:51:53 GMT From: hplabs!hplabsc!kempf@ucbvax.berkeley.edu (Jim Kempf) Subject: Re: neural networks > The mathematical theory of chaotic systems ... > Gordon Joly -- {seismo,ucbvax,decvax}!mcvax!ukc!warwick!gordon Not having seen <837@mhuxt.UUCP>, I can't comment on the question. However, I do have some thoughts on the relation between chaos in dynamical systems and the brain. The "chaotic" dynamical behavior seen in many simple dynamical systems models is often restricted to a small region of the state space. By a kind of renormalization procedure, this small region might be topologically shrunk, so that, from a more macroscopic view, the chaotic region actually looks more like a point attractor. Another possibility is that complex systems like the brain are able to perform a kind of ensemble averaging to filter out chaos. Sorry if this sounds like speculation. Jim Kempf kempf@hplabs ------------------------------ Date: Tue, 27 May 86 18:10:25 PDT From: narain@rand-unix.ARPA Subject: Functional and Logic Programming Reply to Paul Fishwick regarding a language which incorporates both functional and logic programming, (AIList digest v.4 #124.): In a recent paper "A technique for doing lazy evaluation in logic" I describe a method of defining functions in a logic-based language such as Prolog. It is shown how we can keep Prolog fixed, but define functions in such a way that their interpretation by Prolog directly yields lazy evaluation. This contrasts with conventional approaches for doing lazy evaluation which keep the programming style fixed but modify the underlying interpreter. More generally the technique can be viewed as a natural and efficient method of combining functional and logic programming. The paper appeared in 1985 IEEE Symposium on Logic Programming, and a substantially expanded version of it is to appear in the Journal of Logic Programming. Sanjai Narain Rand Corp. ------------------------------ Date: 22 May 86 07:46:51 GMT From: amdcad!lll-crg!booter@ucbvax.berkeley.edu Subject: Prolog and Thank you WOW! 
I didn't realize so many folks out there have played with prolog. I received all sorts of replies, most very useful in explaining the instantiation of variables to values (I hope I worded it properly). PASCAL doesn't prepare you for it and I write LISP code by the grace of God(it just works, I dunno why!). A major problem I had was in the idea of reconsulting a file. I just kept loading copies of files in there and of course would get the same error message as it seemed to be reading the first one over and over. I have passed that phase now and am endeavoring to master the idea of using the "cut". You'd all be proud of me, I wrote a very simple version of the computer that talks back (called "doctor" or "eliza"). I still like LISP better, but at least I am no longer swearing at the terminal. Thank you all very much E ***** ------------------------------ Date: 27 May 86 15:48:00 EST From: "LOGIC::ROBBINS" Reply-to: "LOGIC::ROBBINS" Subject: VAX LISP is supported on both VMS and ULTRIX VAX LISP V2.0 (DEC's current release of Common Lisp) is supported on VMS and ULTRIX. I hope that this clears up any confusion resulting from two incorrect messages that appeared in this list recently concerning VAX LISP. Rich Robbins Digital Equipment Corporation 77 Reed Rd. HL02-3/E09 Hudson, MA 01749 Arpanet: Robbins@Hudson.Dec.Com ------------------------------ Date: Thu, 22 May 86 08:39:30 gmt From: Aaron Sloman Subject: Sussex Cognitive Studies mail address This is to confirm that the Sussex Cognitive Studies Netmail address has finally(?) settled down to UK.AC.SUSSEX.CVAXA. Arpanet users can try: aarons@cvaxa.sussex.ac.uk (UK uses the reverse of ARPA order) or, if that doesn't work: aarons%uk.ac.sussex.cvaxa@ucl-cs or aarons%uk.ac.sussex.cvaxa@cs.ucl.uk.ac or via UUCP: ...mcvax!ukc!cvaxa!aarons Other users at this address include Chris Mellish (chrism), Margaret Boden(maggieb), Ben du Boulay (bend), Jim Hunter (jimh), Gerald Gazdar(geraldg), John Gibson (johng), David Hogg (daveh), and the new POPLOG Project manager Alan Johnson (alanj). Aaron Sloman ------------------------------ Date: Tue 13 May 86 18:37:50-PDT From: Doug Bryan Subject: object-oriented programming books [Forwarded from the Stanford bboard by Laws@SRI-AI.] Brad Cox's book "Object-Oriented Programming: An Evolutionary Approach" is now out. The book is published by Addison Wesley. doug ------------------------------ Date: 18 May 86 05:39:39 GMT From: ernie.Berkeley.EDU!tedrick@ucbvax.berkeley.edu (Tom Tedrick) Subject: Doing AI backwards (from machine to man) More on Barry Kort's "Problem of the right-hand tail" (ie social persecution of those with high intelligence). Here is the way I look at the problem. In order to function in society, it is necessary for most individuals to operate in a more or less routine manner, performing certain acts in a repetitive manner. I have been trying to work backwards from models of computation, abstracting certain principles and results in order to obtain models with a wider application, including social behavior. This is somewhat the reverse direction from that taken by those working in Artificial Intelligence, who study intelligent behavior in order to find better ways for machines to function. I am studying how machines function in order to find better ways for humans to function. Anyway, most people in society functioning more or less automatically, they handle input in such a way that only information relevant to their particular problems is assimilated. 
Input is interpreted according to the pre-existing patterns in their minds. It is as if it was formatted input in fortran, anything that doesn't conform to certain patterns is interpreted nonsensically. The people in the "right-hand tail", IQ distribution-wise, are there primarily due to greater capacity for independent thought, abstract thought, capacity to reason for themselves (or so I claim). Thus these individuals are more likely to have original ideas which don't conform to the pre-existing patterns in the minds of the more average individuals. The average individual will become disturbed when presented with information which he cannot fit into his particular format. And with good reason, since his role is to function as an automaton, more or less, he would be less efficient if he spent time processing information unrelated to his tasks. So by presenting original information to the average individuals in society, the "rightie" is likely to be attacked for disturbing the status quo. To use the machine analogy, the "righties" are more like programmers, who alter the existing software, where the "non-righties" are like machines which execute the instructions they already have in storage. The analogy can be pushed in various ways. We can think of each individual as being both programmer and machine, the faculty of independent judgement and the self being the programmer or system analyst, while the brain is the computing agent to be programmed. The individual is constantly debugging and rewriting the code for his brain, by the choices he makes which become habits, and so on. Also, in interactive protocols where various individuals exchange information, each is tampering with the software of the other. I currently have been working out a strategy for dealing with those I live with who talk too much. It is like having a machine which keeps spewing out garbage every time you give it some input. My current strategy is to carry a little card saying "I am observing silence. I will answer questions in writing." This seems to work very well, it is as if this form of input goes through another channel which does not stimulate so much garbage in response. Or its like saying "the network is down today, so sorry." One last tangent. Note that in studying models of computation one of the primary costs is the cost of memory. We can turn this observation to good use in studying human behavior. For example, suppose your wife asks you to pick up some milk at the store after work. This seems a reasonable enough request, on the surface. But if you think of the cost in terms of memory, suppose short term memory is extremely limited and you have to keep the above request stored in short term memory all day. In effect you are reducing your efficiency in all the tasks you perform all day long, since you have less free space in your short term memory. Thus we see again how women have a brilliant gift for asking seemingly innocent favors which are really enormously costly. The subtle nature of the problem makes it difficult to pin down the real poison in their approach. [Anything held in short-term memory for five seconds automatically enters long-term memory as well. If the man chooses to keep refreshing it in STM, perhaps due to poor LTM retrieval strategies, he needs to take a course in memory techniques -- it's hardly the woman's fault. -- KIL] You can use various strategies in order to deal with this problem. 
One is to use some external form of storage (like writing it down in a datebook), and having a daemon which periodically wakes up and tells you to look in your external storage to see if anything important is there. Of course this also has its costs. By virtue of the relative newness of computer science, I think there will be opportunities for applying the lessons we have learned about machine behavior to other fields for some time to come. (Since it is only recently that the need for rigorous treatment of models of computation has induced us to really make some progress in understanding these things.) ------------------------------ End of AIList Digest ******************** From vtcs1::in% Thu May 29 00:49:34 1986 Date: Thu, 29 May 86 00:49:29 edt From: vtcs1::in% (LAWS@SRI-AI.ARPA) To: ailist@sri-ai.arpa Subject: AIList Digest V4 #133 Status: R AIList Digest Wednesday, 28 May 1986 Volume 4 : Issue 133 Today's Topics: Reviews - Spang Robinson Report, Volume 2 No. 5 & International Journal of Intelligent Systems, Logic Programming - Benchmarking KBES-Tools, Policy - Abstracts of Technical Talks, Seminars - Analogical and Inductive Reasoning (SU) & Reasoning about Semiconductor Fabrication (SU) & Levels of Knowledge in Distributed Computing (SU) ---------------------------------------------------------------------- Date: WED, 20 apr 86 17:02:23 CDT From: E1AR0002%SMUVM1.BITNET@WISCVM.WISC.EDU Subject: Spang Robinson Report, Volume 2 No. 5 Summary of Spang Robinson Report, May 1986 Volume 2, 1986 __________________________________________________________________________ AI at Darpa, the U. S. Department of Defense's Advanced Research Projects Agency This year, DARPA will devote $60 million dollars to AI research. 26 million of this is for basic AI research not included in Strategic computing, 22 million is for technology base research in Strategic Computing and 25 million is for large prototype applications in Strategic computing. In 1985, 47.5 percent of the research went to industry with 40.7 to universities with the remainder going to government agencies and federal contract research institutes. Oak Ridge National Labs is developing a system to assist in the analysis of budgets. 
List of DARPA projects in AI

Autonomous Land Vehicle project
  Integration - Martin Marietta
  Terrain Data Base - ETL
  Vision Based Navigation - University of Maryland
  ALV Route Planning Research - Hughes Laboratory
  Telepresence System - Vitalink

Navy Battle Management
  Force Requirements Expert System - TI
  Spatial Data Management System - CCA
  Combat Action Team - Naval Ocean Systems Center, CMU
  Fleet Command Center Battle Management - NOSC
  Commander's Display Technology - MIT

Pilot's Associate (two teams)
  Team 1: Lockheed, General Electric, Goodyear Aerospace, Teknowledge, CMU, Search Technologies Defense Systems
  Team 2: McDonnell Aircraft, TI

AirLand Battle Management System
  Technology definition - MIT
  Soldier-Machine Interface - Lockheed
  Natural Language Training Aid - Cognitive Systems
  AI Planning System - Advanced Decision Systems
  Message Fusion - LOGICON
  Knowledge Engineering - BDM
  Butterfly Benchmarking - BRL/Los Alamos Labs

Interpretation of Reconnaissance Images (SAIC, Advanced Decision Systems, TASC, MRJ, Mark Resources, Hughes Aircraft)

Multiprocessor System Architectures
  Tree Machines - Columbia University
  Software Workbench - CMU
  Programmable Systolic Array - CMU
  ADA Compiler Systems - FCS, Inc
  Synchronous Multiprocessor Architecture - Georgia Tech
  High Performance Multiprocessor - University of California at Berkeley
  VLSI design - University of Southern California
  Common Lisp Framework - USC-ISI
  Data Flow Emulation Facility - MIT
  Massive Memory Machine - Princeton University
  Connection Machine - Thinking Machines

Natural Language (BBN, System Development Corporation, University of Massachusetts, University of Pennsylvania, USC-ISI, New York University, SRI)

Expert System Technology (BBN, General Electric, Intellicorp, University of Massachusetts, Teknowledge, Ohio State University, Stanford University)

Speech Understanding
  A "250 word speaker-independent system with a large vocabulary" was demonstrated in 1986.
  Real Time Speech - BBN
  Continuous Speech Understanding - CMU
  Auditory Modelling - Fairchild
  Acoustic Phonetic-Based Speech - Fairchild
  Speech Data Base - TI
  Acoustic Phonetics - MIT
  Tools for Speech Analysis - MIT
  Speech Data Base - MIT
  Robust Speech Recognition - Lincoln Labs
  Speech Co-Articulation - NBS
  Speaker Independence - SRI

Computer Vision
  Optical Avoidance and Path Planning - Hughes Research Laboratory
  Parallel Algorithms - CMU
  Terrain Following - CMU
  Dynamic Image Interpretation - University of Massachusetts
  Target Motion and Tracking - USC
  Reasoning, Scene Analysis - Advanced Decision Systems
  Parallel Algorithms - MIT
  Spatial Representation Modelling - SRI
  Parallel Environments - University of Rochester

Also: Compact Lisp Machine - Texas Instruments

__________________________________________________________________________

Japan Watch

ICOT is developing a new personal-use Prolog workstation called PSI-II, which will be smaller and faster than the first version, PSI-I. PSI-II is targeted to cost $55,500. 60 PSI units have already been installed and version 2.0 of the operating system has been released.

Sega Enterprises will market in mid-April a Prolog-based personal computer for CAI for children in elementary school. Nippon Steel Corporation and Mitsubishi have been testing PROLOG for process control software. At the Information Processing Society of Japan's national convention, 30 percent of the papers were AI related.
Fujitsu has a scheduling system for computers which will be used with a total of 140 CPUs and peripherals for software development in Fujitsu's Numazu Works. Mitsubishi Electric has announced an expert system for making estimates of machinery products. NEC says it will use TMS or dependency-directed backtracking in its PECE system, and it will be used in diagnosis.

__________________________________________________________________________

Other:

Teknowledge announced revenue of $4 million and income of $180 thousand for its third fiscal quarter.

Symbolics has released version 7.0 of its LISP software.

Kurzweil has raised seven million in its third round of venture capital.

IBM has announced an expert system environment for MVS which is similar to their product running under VM.

Battelle is developing a natural language interface for databases which is independent of domain and DBMS. It runs on a Xerox LISP machine and interfaces with a DBMS on a mainframe. They also have a package for PCs which links with a mainframe and is available in French and German.

Digitalk's Smalltalk environment, Methods, can now communicate with remote UNIX computers.

A toolkit for the design of voice or telephone application packages, which interfaces with TI-Speech technology, has been announced by Denniston.

Intermetrics is beta testing its Common LISP 370 for IBM mainframes. It includes interfaces with C and Fortran.

A District Court found that Artelligence's OPS5+ product was developed by Computer Thought employees during their employment with Computer Thought. Computer Thought has a judgment and permanent injunction against Artelligence.

MIT has started a project to explore the relationship between symbolic and numeric computing, called Mixed Computing.

------------------------------

Date: Fri 23 May 86 14:09:08-PDT
From: C.S./Math Library
Subject: Math/CS Library--New Journal-International Journal of Intelligent Systems

[Forwarded from the Stanford bboard by Laws@SRI-AI.]

We have just received volume 1, number 1, spring 1986 of the International Journal of Intelligent Systems. Ronald R. Yager is the editor and it is published by John Wiley and Sons. The editorial board includes the following people: Hans Berliner, Ronald Brachman, Richard Duda, Marvin Minsky, Judea Pearl, Dimitri Poselov, Azriel Rosenfeld, Lotfi Zadeh, Jin Wen Zhang, and Hans Zimmerman, along with others.

The following articles are included in the first issue: Constructs And Phenomena Common To The Semantically-Rich Domains by Beth Adelson; An Intelligent Computer Vision System by Su-shing Chen; Hierarchical Representation Of Problem-Solving Knowledge In A Frame-Based Process Planning System by Dana S. Nau and Tien-Chien Chang; Toward General Theory of Reasoning With Uncertainty. 1. Nonspecificity and Fuzziness by Ronald R. Yager; and a review of Heuristics: Intelligent Strategies for Computer Problem Solving (by Judea Pearl), by Henri Farreny and Henri Prade.

Manuscripts should be submitted to the editor, Dr. Ronald R. Yager, International Journal of Intelligent Systems, Machine Intelligence Institute, Iona College, New Rochelle, New York 10801. The journal will be published quarterly and will keep a balance between the theoretical and applied, as well as provide a venue for experimental work.

Harry Llull

------------------------------

Date: 29 Apr 1986 18:51-EDT
From: VERACSD@USC-ISI.ARPA
Subject: Benchmarking KBES-Tools

[Forwarded from the Prolog Digest by Laws@SRI-AI.]

I have come across some recent benchmarks from NASA (U.S.
Gov't MEMORANDUM from the FM7/AI Section, April 3, 1986) which compared various KBES tools' (ART, OP, KEE & CLIPS) times for solving the MONKEY-AND-BANANA problem. (This toy problem is explained in detail along with OPS source in Brownston et. al.'s "Programming Expert Systems in OPS5".) Although the benchmarks include backward-chaining solutions to the problem in both KEE and ART (along with forward chaining counterparts), there is no PROLOG implementation in the comparison. I am very interested in a PROLOG comparison, and am in the process of implementing one. Unfortunately, I am not (yet) a competent PROLOG programmer and am currently learning my way around PROLOG on a DEC-20. Consequently, any advice/suggestions re implementing this benchmark and timing it effectively would be be useful & appreciated. (By the way, the time to beat is 1.2 secs. for a forward-chaining implementation using ART on a 3640 with 4MB main-memory.) I would be glad to share the results with anyone who offers assistance. (Or for that matter with whomever is interested.) ------------------------------ Date: Tue, 27 May 1986 20:52 EDT From: Dr. Alex Bykat Subject: Re: Abstracts of Technical Talks Published on AI-LIST In AIList V4 #120 Peter R.Spool writes: >Date: 9 May 86 10:24:22 EDT >From: PRSPOOL@RED.RUTGERS.EDU >Subject: Abstracts of Technical Talks Published on AI-LIST > > None of us surely, can attend all of the talks announced via the >AI-LIST. The abstracts which appear have served as a useful pointer for >me to current research in many different areas. I trust this has been >true for many of you as well. These abstracts could serve this secondary >purpose even better, if those people who post these abstracts to the >network, made an effort to include two addtional pieces of information >in them: > 1) A Computer Network address of the speaker. > 2) One or more references to any recently published material > with the same, or similar content to the talk. >I know that this information would help me enormously. I assume the >same is true of others. > Let me echo Peter's request. On a number of occasions I had to bother the speakers' hosts requesting precisely that kind of information. While many of the hosts respond graciously and promptly, no doubt they are busy enough without fending off such requests. A. Bykat Center of Excellence - Computer Applications University of Tennessee Chattanooga, TN 37402 Acknowledge-To: Dr. Alex Bykat [Unfortunately, the people who compose these seminar notices seldom read AIList. Those of you who wish to influence the notice formats should contact the authors directly. -- KIL] ------------------------------ Date: Mon 26 May 86 14:57:24-PDT From: Stuart Russell Subject: Seminar - Analogical and Inductive Reasoning (SU) PhD Orals Announcement Analogical and Inductive Reasoning Stuart J. Russell Department of Computer Science Stanford University Tuesday June 3rd 9.15 a.m. Building 370 Room 370 I show the need for the application of domain knowledge in analogical reasoning, and propose that this knowledge must take the form of a new class of rule called a "determination". By giving determinations a first-order definition, they can be used to make valid analogical inferences; I have thus been able to implement determination-based analogical reasoning as part of the MRS logic programming system. In such a system, analogical reasoning can be more efficient than rule-based reasoning for some tasks. 
Determinations appear to be a common form of regularity in the world, and form a natural stage in the acquisition of knowledge. My approach to the study of analogy can be extended to the general problem of the use of knowledge in induction, leading to the beginning of a domain-independent theory of inductive reasoning. If time permits, I will also show how the concept of determinations leads to a justification and quantitative analysis of analogy by similarity. ------------------------------ Date: Tue 27 May 86 14:56:47-PDT From: Christine Pasley Subject: Seminar - Reasoning about Semiconductor Fabrication (SU) CS529 - AI In Design & Manufacturing Instructor: Dr. J. M. Tenenbaum Title: Modeling and Reasoning about Semiconductor Fabrication Speakers: John Mohammed and Michael Klein From: Schlumberger Palo Alto Research and Shiva Multisystems Date: Wednesday, May 28, 1986 Time: 4:00 - 5:30 Place: Terman 556 Abstract for John Mohammed's talk: As part of a larger effort aimed at providing symbolic, computer-aided tools for semiconductor fabrication experts, we have developed qualitative models of the operations performed during semiconductor manufacture. By qualitativiely simulating a sequence of these models we generate a description of how a wafer is affected by the operations. This description encodes the entire history of processing for the wafer and causally relates the attributes that describe the structures on the wafer to the processing operations responsible for creating those structures. These causal relationships can be used to support many reasoning tasks in the semiconductor fabrication domain, including synthesis of new recipes, and diagnosis of failures in operating fabrication lines. Abstract for Michael Klein's talk: Current integrated circuit (IC) process computer-aided design (CAD) tools are most useful in verifying or tuning IC processes in the vicinity of an acceptable solution. However, these highly compute-intensive tools are often used too early and too often in the design cycle. Cameo, an expert CAD system, assists IC process designers in synthesizing photolithography step descriptions before using other CAD tools. Cameo has a modular knowledge base containing knowledge for all levels of the synthesis process, including heuristic knowledge as well as algorithms, formulas, graphs, and tables. It supports the parallel development of numerous design alternatives in an efficient manner and links to existing CAD tools such as IC process simulators. Visitors welcome! ------------------------------ Date: Tue, 27 May 86 17:52:01 pdt From: Vaughan Pratt Subject: Seminar - Levels of Knowledge in Distributed Computing (SU) Speaker: Rohit Parikh Date: Thursday, June 5, 1986 Time: 9:30-10:45 Place: MJ352 Title: Levels of Knowledge in Distributed Computing Abstract: It is well known that the notion of knowledge is a useful one for understanding distributed computing and in particular, synchronous and asynchronous communication can be distinguished by the possibility or impossibility of common knowledge being achieved. We show that knowledge of facts in distributed systems can be at various levels, these levels are partially ordered, and that a characterisation of these levels can be given which brings together knowledge, regular sets and well partial orderings (not the same as well founded partial orderings). 
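[For readers outside this area, the "levels" in the abstract above can be read in the standard epistemic-logic notation, which is not taken from the talk itself: E p means "everyone in the group knows p", higher levels iterate E, and common knowledge is the limit of the hierarchy. Roughly:

  E^0 p      =  p
  E^(k+1) p  =  E (E^k p)        -- "everyone knows that E^k p"
  C p        =  p  and  E p  and  E^2 p  and  ...

The classical observation is that C p cannot be achieved when communication may be delayed or lost, which is one way of separating synchronous from asynchronous systems.]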
------------------------------ End of AIList Digest ******************** From vtcs1::in%<> Fri May 30 18:39:26 1986 Date: Fri, 30 May 86 18:39:21 edt From: vtcs1::in%<> (LAWS@SRI-AI.ARPA) To: ailist@sri-ai.arpa Subject: AIList Digest V4 #134 Status: R AIList Digest Friday, 30 May 1986 Volume 4 : Issue 134 Today's Topics: Query - MIT Research on Symbolic/Numeric Processing, AI Tools - Functional Programming and AI & Common LISP Style, References - Neural Networks & Lenat's AM, Linguistics - 'Xerox' vs. 'xerox', Psychology - Doing AI Backwards & Learning ---------------------------------------------------------------------- Date: Wed, 28 May 86 14:34:04 PDT From: SERAFINI%FAE@ames-io.ARPA Subject: MIT research on symbolic/numeric processing >>AIList Digest Volume 4 : Issue 133 >>From: E1AR0002%SMUVM1.BITNET@WISCVM.WISC.EDU >>Subject: Spang Robinson Report, Volume 2 No. 5 >>MIT has started a project to explore the relationship >>between symbolic and numeric computing, called Mixed Computing. Does anybody have more info about this project? Reply to serafini%far@ames-io.ARPA Thanks. ------------------------------ Date: 29 May 86 11:32:00 edt From: Walter Hamscher Subject: Functional programming and AI Date: 21 May 86 13:14:00 EST From: "CUGINI, JOHN" Reply-to: "CUGINI, JOHN" Do working AI programs really exploit these features a lot? Eg, do "learning" programs construct unforeseen rules, perhaps based on generalization from examples, and then use the rules? Or is functional programming just a trick that happens to be easy to implement in an interpreted language? I think this is a slightly odd characterization of `functional programming.' Maybe I'm confused, but I always thought a `functional language' meant (in a nutshell) that there are no side effects. In contrast, the one important `side effect' you're talking about here is constructing a function at runtime and squirreling it away in a knowledge base, to be run later. In theory you could do the squirreling by passing around the whole state of the world and non-destructively modifying that datastucture as you go, but that's orthogonal to what you seem to be talking about (besides being painful). Whatever it's called -- this indistinguishability between code and data -- it's true that it's a ``trick,'' but I think it's an important one. In fact as I think about it now, every AI program I've ever seen _at_some_point_ passes functions around, sticks them in places like on property lists as demons, and/or mashes together portions of bodies of different functions and sticks the resulting lambda-expression somewhere to run later (Well, maybe Mycin didn't (but Teiresias did)). As far as learning programs that construct functions, it's all in the eyes of the interpreter. A rule that is going to be run by a rule interpreter counts as a kind of function (it's just not necessarily in LISP per se). So, since Tom Mitchell's LEX (for example) builds and modifies the bodies of heuristic rules which later get applied to the integration problem, it falls in this category. Tom Diettrich's EG does something like this too. I'm sure there are jillions of other examples but I'm not that deep into machine learning. And of course there's always AM (which by now should be familiar to all readers of AiList) which (among other things) did random structure modifications to LISP functions, then ran them to see what they did. 
For example, it might start with the following definition of EQUAL:

  (defun EQUAL (a b)
    (cond ((eq a b) t)
          ((and (consp a) (consp b))
           (and (EQUAL (car a) (car b))
                (EQUAL (cdr a) (cdr b))))
          (t nil)))

To generalize the function, it drops one of the conjunctions and changes its name (including the recursive call):

  (defun SOME-NEW-FUNCTION (a b)
    (cond ((eq a b) t)
          ((and (consp a) (consp b))
           (SOME-NEW-FUNCTION (cdr a) (cdr b)))
          (t nil)))

Lo and behold, SOME-NEW-FUNCTION is a new predicate meaning something like "same length list." So there's an existence proof at least.

Walter Hamscher

------------------------------

Date: 15 May 86 17:42:18 GMT
From: tektronix!uw-beaver!ssc-vax!bcsaic!michaelm@ucbvax.berkeley.edu (michael maxwell)
Subject: Re: Common LISP style standards.

In article <3787@utah-cs.UUCP> shebs@utah-cs.UUCP (Stanley Shebs) writes:
>Sequence functions and mapping functions are generally preferable to
>handwritten loops, since the Lisp wizards will probably have spent
>a lot of time making them both efficient and correct (watch out though;
>quality varies from implementation to implementation).

I'm in a little different boat, since we're using Franz rather than Common Lisp, so perhaps the issues are a bit different when you're using Monster, I mean Common, Lisp... so at the risk of rushing in where angels etc.:

A common situation we find ourselves in is the following. We have a long list, and we wish to apply some test to each member of the list. However, at some point in the list, if the test returns a certain value, there is no need to look further: we can jump out of processing the list right there, and thus save time. Now you can jump out of a do loop with "(return )", but you can't jump out of a mapc (mapcar etc.) with "return." So we wind up using "do" in a lot of places where it would otherwise be natural to use "mapcar". I suppose I could use "catch" and "throw", but that looks so much like "goto" that I feel sinful if I use that solution... Any style suggestions?

--
Mike Maxwell
Boeing Artificial Intelligence Center
...uw-beaver!uw-june!bcsaic!michaelm
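[One common answer, sketched here in Common Lisp rather than Franz: the sequence functions SOME and FIND-IF already stop at the first element that settles the question, so no explicit escape is needed; failing that, DOLIST with RETURN exits cleanly without CATCH/THROW. The helper FIRST-MATCH and the data below are invented for illustration.]

  ;; SOME returns the first non-NIL value of the test and stops there.
  (some #'(lambda (x) (and (numberp x) (> x 100) x))
        '(a 12 foo 250 999))                        ; => 250

  ;; FIND-IF returns the first element satisfying the test.
  (find-if #'oddp '(2 4 7 8))                       ; => 7

  ;; Or an explicit loop that bails out early:
  (defun first-match (test list)
    "Return the first element of LIST satisfying TEST, or NIL."
    (dolist (x list)
      (when (funcall test x)
        (return x))))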
------------------------------

Date: 27 May 86 21:37:58 GMT
From: ulysses!mhuxr!mhuxn!mhuxm!mhuxf!mhuxi!mhuhk!mhuxt!houxm!mtuxo!mtfmt
      !brian@ucbvax.berkeley.edu (B.CASTLE)
Subject: Neural Networks

For those interested in some historical references on neural network function, the following may be of interest:

Dynamics:

NUNEZ, P.L. (1981). ELECTRIC FIELDS OF THE BRAIN. The Neurophysics of EEG. Oxford University Press, NY.
  This book contains a pretty good overview of EEG, and also contains an interesting model of brain dynamics based on neural network connectivity.

Learning:

OJA, E. (1983). SUBSPACE METHODS OF PATTERN RECOGNITION. Research Studies Press, Ltd., Letchworth, Hertfordshire, England. (John Wiley and Sons, Inc., New York.)
  (For those with a PR background, and those having read and understood Kohonen.)

KOHONEN, T. (1977) - ASSOCIATIVE MEMORY. A System-Theoretical Approach. Springer-Verlag, Berlin.
            (1980) - CONTENT ADDRESSABLE MEMORIES. Springer-Verlag, Berlin.
            (1984) - SELF-ORGANIZATION AND ASSOCIATIVE MEMORY. Springer Series in Info. Sci. 8. Springer-Verlag, New York.
  These works provide a basic introduction to the nature of CAM systems (frame-based only), and the basic philosophy of self-organization in such systems.

SUTTON, R.S. and A.G. BARTO (1981). "Toward A Modern Theory of Adaptive Networks: Expectation and Prediction." Psychological Review 88(2):135.
  This article provides an overview of the 'tuning' of synaptic parameters in self-organizing systems, and a reasonable bibliography.

Classic:

MINSKY, M. and S. PAPERT (1969). PERCEPTRONS. An Introduction to Computational Geometry. MIT Press, Cambridge, MA.
  This book should be read by all neural network enthusiasts.

In a historical context, the Hopfield model is important insofar as it uses Monte Carlo methods to generate the network behavior. There are many other synchronous and asynchronous neural network models in the literature on neuroscience, biophysics, and cognitive psychology, as well as computer and electrical engineering. I have amassed a list of over a hundred books and articles, which I will be glad to distribute if anyone is interested. However, keep in mind that the connection machines and chips are still very far from approaching neural networks in functional capability and diversity.

brian castle @ att (MT 2D-217 middletown, nj, 07748)
(...!allegra!orion!brian)
(...!allegra!mtfmt!brian)

------------------------------

Date: Thu, 29 May 1986 01:07 EDT
From: "David D. Story"
Subject: Need Ref for "Automated Mathematician" by Doug Lenat

Discussion of "Automated Mathematician": His thesis was in "Knowledge Based Systems on Artful Dumbness" - McGraw-Hill - 1982, ISBN 0-07-015557-7. Wrong again... Oh well, try this one. The price is 20-odd bucks.