Date: Sun 7 Feb 1988 23:39-PST
From: AIList Moderator Kenneth Laws
Reply-To: AIList@KL.SRI.COM
Us-Mail: SRI Int., 333 Ravenswood Ave., Menlo Park, CA 94025
Phone: (415) 859-6467
Subject: AIList V6 #28 - XLISP, Genetic Algorithms, Methodology
To: AIList@KL.SRI.COM
Status: RO


AIList Digest            Monday, 8 Feb 1988      Volume 6 : Issue 28

Today's Topics:
  Query - Knowledge Pro & CLOS Compilation & SWE References &
    CAI & Neural-Net Survey,
  AI Tools - XLISP 1.7 & Genetic Algorithms,
  Binding - Rick Riolo,
  Methodology - Diversity & Interviewing Experts

----------------------------------------------------------------------

Date: Sat, 6 Feb 88 10:23:59 EST
From: Brady@UDEL.EDU
Subject: Query - Knowledge Pro

The product Knowledge Pro has been heavily advertised in AI Expert and
in other publications, but I have seen no product reviews. I would
appreciate hearing about user experiences. Also, if any product reviews
have been published, please tell me where. I will summarize replies and
post them to the net. My intended use would be in computer aided
instruction. Thank you in advance.

------------------------------

Date: 6 Feb 88 16:24:50 GMT
From: pitt!cisunx!jasst3@cadre.dsl.pittsburgh.edu  (Jeffrey A. Sullivan)
Subject: CLOS Compilation Question

I have recently gotten the PCL code from the Xerox parcvax. In the
defsys file, the variable *pcl-files* lists a file called "low.lisp"
which is not anywhere in the directories. What should be done about
this? The PCL code will not compile without a low.lisp, even though I
have the coral-low file, which is my machine-specific version. Should I
rename coral-low.lisp to low.lisp? (I don't think so; both are listed
separately and one depends on the other.) Can someone either send me
the low.lisp file or tell me what should be in it?

I have created an empty low.lisp file and compilation has progressed
(apparently) normally, but memory constraints have kept the full PCL
from finishing, so I don't yet know whether that will work.

Thanks in advance,

--
..........................................................................
Jeff Sullivan                           University of Pittsburgh
pitt!cisunx!jasst3                      Intelligent Systems Studies Program
jasper@PittVMS (BITNET)                 Graduate Student

------------------------------

Date: Sat, 6 Feb 88 21:57 EDT
From: LEWIS%cs.umass.edu@RELAY.CS.NET
Subject: request for refs on SWE for AI

I'm currently taking a seminar on Software Engineering and AI. It's
supposed to be balanced, but right now we've found many more papers on
applying AI to software engineering than on software engineering
applied to AI. Does anyone have suggested papers on programming
techniques, language design, environments, methodology, etc., for AI or
LISP?

Thanks,

David D. Lewis                          CSNET: lewis@cs.umass.edu
COINS Dept.                             BITNET: lewis@umass
University of Massachusetts, Amherst
Amherst, MA  01003

------------------------------

Date: 2 Feb 88 16:58:26 GMT
From: dalcs!aucs!870158a@uunet.uu.net  (Benjamin Armstrong)
Subject: Becoming CAI literate

I have, of late, become fascinated by the as yet unexplored
possibilities for the use of computers at all levels of our education
systems. A book called "Mindstorms" by Seymour Papert has been most
influential in inspiring me to seek out and digest as much information
regarding computers in education as I can. I have not yet, however,
found discussions on the net concerning such topics as: the design and
evaluation of educational software; the effects of introducing
computers into the schools on the social organization of classrooms;
"computer as teacher" vs.
"computer as learning tool"; and the availability of microcomputers to students. I hope that someone out there will either offer me some opinions on the above topics or direct me to a newsgroup where such discussions take place. [The newsgroup is AI-ED@SUMEX.STANFORD.EDU. -- KIL] ------------------------------ Date: 5 Feb 88 04:39:58 GMT From: nuchat!uhnix1!cosc2mi@uunet.uu.net (Francis Kam) Subject: neural-net I am working on the learning aspects of the neural net model in computing and would like to know what's happening in the rest of the neural net community in the following areas: 1) neural net models 2) neural net learning rules 3) experimental (analog, digital, optical) results of any kind with figures; 4) neural net machines (commercial, experimental, any kind); 5) any technical reports in these areas; For information exchange and discussion purpose, please send mail to mkkam@houston.edu. Thank you. ------------------------------ Date: Fri, 5 Feb 88 13:35:16 MST From: t05rrs%mpx1@LANL.GOV (Dick Silbar) Subject: XLISP 1.7 In V6 #19 Bill Delaney asks where he can get XLISP 1.5. XLISP 1.7 can be obtained from the Pioneer Valley PC User's Group on floppy diskette for $6 plus $5 one-year membership fee ($15 if outside US or Canada) plus $1 postage ($5 if outside US or Canada). I have not used 1.7 much myself, but this version comes with examples and much better documentation than 1.5. It includes, I believe, a C source listing. The PVPCUG is at P.O. Box H, North Amherst, MA 01059. ------------------------------ Date: Fri, 5 Feb 88 10:08:02 PST From: rik@sdcsvax.ucsd.edu (Rik Belew) Subject: A short definition of Genetic Algorithms Mark Goldfain asks: Would someone do me a favor and post or email a short definition of the term "Genetic Learning Algorithm" or "Genetic Algorithm" ? I feel like Genetic Algorithms has two, not quite distinct meanings these days. First, there is a particular (class of) algorithms developed by John Holland and his students. This GA(1) has at its most distinctive feature the "cross-over" operator, which Holland has gone to some effort to characterize analytically. Then there is a broader class GA(2) of genetic algorithms (sometimes also called "simulated evolution") that bear some loose resemblence to population genetics. These date back to at least Fogel, Owen and Walsh (1966). Generally, these algorithms make use of only a "mutation" operator. The complication comes with work like Ackley's thesis (CMU, 1987) which refers to Holland's GA(1), but which is most accurately described as a GA(2). Richard K. Belew rik@cs.ucsd.edu Computer Science & Engr. Dept. (C-014) Univ. Calif - San Diego San Diego, CA 92093 ------------------------------ Date: 5 Feb 88 18:22:21 GMT From: g451252772ea@deneb.ucdavis.edu (0040;0000003980;0;327;142;) Subject: Re: Cognitive System using Genetic Algo I offer definitions by (1) aspersion (2) my broad characterization (3) one of J Holland's shortest canonical characterizations and (4) application. (1) GA are anything J Holland and/or his students say they are. (But this _is_ an aspersion on a rich, subtle and creative synthesis of formal systems and evolutionary dynamics.) (2) Broadly, GA are an optimization method for complex (multi-peaked, multi- dimensional, ill-defined) fitness functions. They reliably avoid local max/min, and the search time is much less than random search would require. Production rules are employed, but only as mappings from bit-strings (with wild-cards) to other bit strings, or to system outputs. 

------------------------------

Date: 5 Feb 88 18:22:21 GMT
From: g451252772ea@deneb.ucdavis.edu  (0040;0000003980;0;327;142;)
Subject: Re: Cognitive System using Genetic Algo

I offer definitions by (1) aspersion, (2) my broad characterization,
(3) one of J. Holland's shortest canonical characterizations, and
(4) application.

(1) GA are anything J. Holland and/or his students say they are. (But
this _is_ an aspersion on a rich, subtle and creative synthesis of
formal systems and evolutionary dynamics.)

(2) Broadly, GA are an optimization method for complex (multi-peaked,
multi-dimensional, ill-defined) fitness functions. They reliably avoid
local max/min, and the search time is much less than random search
would require. Production rules are employed, but only as mappings from
bit-strings (with wild-cards) to other bit-strings, or to system
outputs. System inputs are represented as bit-strings. The rules are
used stochastically, and in parallel (at least conceptually; I
understand several folk are doing implementations, too).

A pretty good context paper for perspective (tho weak on the definition
of GA!) is the Nature review 'New optimization methods from physics and
biology' (9/17/87, pp. 215-19). The author discusses neural nets,
simulated annealing, and one example of GA, all applied to the TSP, but
comments that "... a thorough comparison ... _would be_ very
interesting" (my emphasis).

(3) J. Holland, "Genetic algorithms and adaptation", pp. 317-33 in
ADAPTIVE CONTROL OF ILL-DEFINED SYSTEMS, 1984, Ed. O. Selfridge,
E. Rissland, M. A. Arbib. Page 319 has:

   "In brief, and very roughly, a genetic algorithm can be looked upon
   as a sampling procedure that draws samples from the set C; each
   sample drawn has a value, the fitness of the corresponding genotype.
   From this point of view the population of individuals at any time t,
   call it B(t), is a _set_ of samples drawn from C. The genetic
   algorithm observes the fitnesses of the individuals in B(t) and uses
   this information to generate and test a new set of individuals,
   B(t+1). As we will soon see in detail, the genetic algorithm uses
   the familiar "reproduction according to fitness" in combination with
   crossing over (and other genetic operators) to generate the new
   individuals. This process progressively biases the sampling
   procedure toward the use of _combinations_ of alleles associated
   with above-average fitness. Surprisingly, in a population of size M,
   the algorithm effectively exploits some multiple of M^3 combinations
   in exploring C. (We shall soon see how this happens.) For
   populations of more than a few individuals this number, M^3, is
   vastly greater than the total number of alleles in the population.
   The corresponding speedup in the rate of searching C, a property
   called _implicit parallelism_, makes possible very high rates of
   adaptation. Moreover, because a genetic algorithm uses a distributed
   database (the population) to generate new samples, it is all but
   immune to some of the difficulties -- false peaks, discontinuities,
   high-dimensionality, etc. -- that commonly attend complex problems."

Well, _I_ shall soon close here, but first the few examples of
applications that I know of. (The situation reminds me of the joke
about the two rubes visiting New York for the first time, getting off
the bus with all of $2.50. What to do? One takes the money, disappears
into a drugstore and reappears having bought a box of Tampax. Quoth he,
"With Tampax, you can do _anything_!") Anyway:

  o As noted, the TSP is a canonical candidate.

  o A student of Holland has implemented a control algorithm for a gas
    pipe-line center, which monitors and adaptively controls flow rates
    based on cyclic usages and arbitrary, even ephemeral, constraints.

  o Of course, some students have done some real (biological)
    population genetics studies, which I note are a tad more plausible
    than the usual haploid, deterministic equations.

  o Byte mag. has run a few articles, e.g. 'Predicting International
    Events' and 'A Bit-Mapped Classifier' (both 10/86).

  o Artificial animals are being modelled in artificial worlds. (When
    will the Vivarium let some of their animated blimps ("fish") be so
    programmed?)

Finally, I noted above that the production rules take system inputs as
bit-strings. This representation allows for induction, and opens up a
large realm of cognitive science issues, addressed by Holland et al. in
their newish book, INDUCTION.
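To make the bit-string-and-wild-card point concrete: a rule's condition
is a string over {0, 1, #}, with # matching either bit, and a rule is
eligible to fire whenever its condition matches the current input
message. A small Common Lisp sketch follows; the rules, the message,
and the action names are invented for illustration, and a real
classifier system would let the matching rules compete (e.g. by
strength) rather than all fire.

;;; Illustrative only -- not code from any actual classifier system.

(defun matches-p (condition message)
  "True when every non-# position of CONDITION agrees with MESSAGE."
  (every (lambda (c m) (or (char= c #\#) (char= c m)))
         condition message))

(defparameter *rules*                   ; hypothetical condition/action pairs
  '(("1#0#" . open-valve)
    ("##11" . close-valve)
    ("0###" . no-op)))

(defun matching-actions (message)
  "Actions of every rule whose condition matches MESSAGE."
  (loop for (condition . action) in *rules*
        when (matches-p condition message)
          collect action))

;; (matching-actions "1101")  =>  (OPEN-VALVE)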
Hope this helps. I really would like to hear about other application
areas; pragmatic issues are still unclear in my mind also, but as is
apparent, the GA model has intrinsic appeal.

Ron Goldthwaite / UC Davis, Psychology and Animal Behavior

'Economics is a branch of ethics, pretending to be a science; ethology
is a science, pretending relevance to ethics.'

------------------------------

Date: 3 Feb 88 18:13:15 GMT
From: umich!dwt@umix.cc.umich.edu  (David West)
Subject: Re: Classifier System Testbed

In article <241@wright.EDU> joh@wright.EDU (Jae Chan Oh) writes:
>Does anyone know where Rick Riolo (a former grad. student at Univ. of
>Mich.) is located at present, or how can I reach him by email...

You should be able to reach him at Rick_Riolo@ub.cc.umich.edu (case not
significant).
   -David.

------------------------------

Date: Fri, 5 Feb 88 11:15:48 EST
From: Jim Hendler
Subject: Diversity

While I realize that it is incredibly headstrong for an upstart like me
to feel compelled to echo the words of someone like McCarthy, I wanted
to quickly reply to his note about there being room for many approaches
to AI with a resounding ``Hurrah.''  I do, however, want to add one
thing: not only is there room for different approaches, but it may be
crucial to examine methodologies which are hybrids of the differing
techniques -- perhaps the whole can be stronger than the sum of the
parts. The notion of logic, connectionism, cognitive modeling, etc. as
different `paradigms,' using the strong meaning of that term, seems to
me to be dangerously divisive. The problem is so hard that it is
difficult to believe any one of the current approaches could possibly
hold all the answers.

Finally, let me briefly note that it is possible to create these sorts
of mixed-paradigm systems. Not only has my own work shown the
possibility of reconciling differing approaches to activation-spreading
(integrating a connectionist network and a semantic network in such a
way that they communicate via a marker-passing-like
spreading-activation mechanism), but some of the recent work in
connectionist natural language processing* and work in structured
connectionism** also seem to indicate that systems blending the
technologies hold promise. Thus, instead of viewing things as a horse
race with each entrant ridden by its own set of jockeys, we should try
to harness the steeds together for maximum horsepower.

 -Jim Hendler
  Dept. of Computer Science
  UMCP

*  Jordan Pollack's recent doctoral thesis provides an excellent
   discussion of many of these systems.
** The work at Rochester by Feldman et al. and the work of Shastri, now
   at UPenn, are good starting places for more information on the
   structured connectionist approaches to traditional AI tasks.
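The spreading-activation idea mentioned above can be sketched in a few
lines of Common Lisp. The network, the decay factor, and the step count
below are all invented for the example -- this is not Hendler's system
or anyone else's, only the basic flavour: activation injected at source
nodes flows along links, and nodes reachable from several sources
accumulate the most activation, which is the intersection-finding
effect that marker passing exploits.

;;; Toy illustration only -- a made-up network and parameters.

(defparameter *links*
  '((dog    . (mammal pet))
    (cat    . (mammal pet))
    (mammal . (animal))
    (pet    . (household))))

(defun neighbors (node)
  (cdr (assoc node *links*)))

(defun spread (sources &key (decay 0.5) (steps 3))
  "Return a hash table mapping node -> activation after spreading."
  (let ((activation (make-hash-table)))
    (dolist (s sources)
      (setf (gethash s activation) 1.0))
    (dotimes (i steps activation)
      (let ((updates '()))
        ;; collect contributions first, then apply them, so the table
        ;; is never modified while being iterated over
        (maphash (lambda (node act)
                   (dolist (n (neighbors node))
                     (push (cons n (* act decay)) updates)))
                 activation)
        (dolist (u updates)
          (incf (gethash (car u) activation 0.0) (cdr u)))))))

;; After (spread '(dog cat)), MAMMAL and PET -- reached from both
;; sources -- carry more activation than nodes reached from only one.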

------------------------------

Date: 5 Feb 88 02:58:18 GMT
From: fordjm@byuvax.bitnet
Subject: RE: interviewing experts

Note: The following article is from both Larry E. Wood and John M. Ford
of Brigham Young University.

We have also recently read Evanson's AI Expert article on interviewing
experts and feel that some discussion of this topic would prove useful.
Relative to Steve Smoliar's reactions, we feel it is appropriate to
begin with a disclaimer of sorts. As cognitive psychologists, we hope
those reading Evanson's article will not judge the potential
contributions of psychologists by what they find there. Some of the
points Evanson chooses to emphasize seem counterintuitive (and perhaps
counterproductive) to us as well. We attribute this in part to his
being a practicing clinician rather than a specialist in cognitive
processes.

On a more positive note, as relative newcomers (two years) to the newly
emerging field of knowledge engineering, we do believe that there are
social science disciplines which can make important contributions to
the field. These disciplines include cognitive science research
methodology, educational measurement and task analysis, social science
survey research, anthropological research methods, protocol analysis,
and others. While knowledge elicitation for the purpose of building
expert systems (or other AI applications) has its own special set of
problems, we believe that these social science disciplines have
developed methods which knowledge engineers can adapt to the task of
knowledge elicitation and documentation.

Two examples of such interdisciplinary "borrowing" which are presently
influencing knowledge engineering are the widespread use of protocol
analysis methods (see a number of articles in this year's issues of the
International Journal of Man-Machine Studies) and the influence of
anthropological methods and perspectives (alluded to by Steve Smoliar
in his previous posting and represented in the work of Marianne
LaFrance; see also this year's IJMMS). It is our belief that there are
other areas in the social sciences which can make important
contributions, but which are not yet well known in AI circles.

This is *not* intended as a blanket endorsement of approaches to
knowledge elicitation based on social science disciplines. We do,
however, believe that it is important for practicing knowledge
engineers to attend to methodologies developed outside of AI so that
they can spend their time refining and extending their application to
AI rather than "reinventing the wheel." We have a paper in preparation
which addresses some of these issues.

Larry E. Wood                           John M. Ford
woodl@byuvax.bitnet                     fordjm@byuvax.bitnet

------------------------------

End of AIList Digest
********************