From in%@vtcs1 Tue Mar 17 07:59:02 1987 Date: Tue, 17 Mar 87 07:58:54 est From: vtcs1::in% To: ailist@sri-stripe.arpa Subject: AIList Digest V5 #80 Status: R AIList Digest Monday, 16 Mar 1987 Volume 5 : Issue 80 Today's Topics: Seminars - Machine Learning: Unifying Principles, Progress (GMR) & Search Algorithms (CMU) & Anatomy of a Case-Based Inference (CMU), Conference - AI and Law (Program and Registration Info) ---------------------------------------------------------------------- Date: Thu, 12 Mar 87 16:41 EST From: "R. Uthurusamy" Subject: Seminar - Machine Learning: Unifying Principles, Progress (GMR) Seminar at the General Motors Research Laboratories in Warren, Michigan. Friday, March 27, 1987 at 10 a.m. MACHINE LEARNING : UNIFYING PRINCIPLES and RECENT PROGRESS RYSZARD S. MICHALSKI Director of the Artificial Intelligence Laboratory and Professor of Computer Science and Medical Information Science University of Illinois, Urbana-Champaign, Illinois 61801 Machine learning, a field concerned with developing computational theories of learning and constructing learning machines, is now one of the most active research areas in artificial intelligence. An inference-based theory of learning will be presented that unifies the basic learning strategies. Special attention will be given to inductive learning strategies, which include learning from examples and learning from observation and discovery. We will show that inductive learning can be viewed as a goal-oriented and resource-constrained inference process. This process draws upon the learner's background knowledge, and involves a novel type of inference rules, called 'inductive inference' rules. In contrast with truth-preserving deductive rules, inductive rules are falsity-preserving. Several projects conducted at our AI Laboratory at Illinois will be briefly reviewed, and illustrated by examples from implemented programs. Non-GMR personnel interested in attending please contact R. Uthurusamy [ samy@gmr.com ] 313-986-1989 ------------------------------ Date: 12 Mar 1987 0717-EST From: Rich Thomason Subject: Seminar - Search Algorithms (CMU) COMPUTER SCIENCE COLLOQUIUM PITT/CMU SPEAKER: David Mutchler (Naval Research Laboratory) TITLE: What Search Algorithm Gives Optimal Average-Case Performance When Search Resources Are Highly Limited? DATE: March 13, 1987 TIME: 1:00 - 2:00 P.M. PLACE: 228 Alumni Hall, University of Pittsburgh Searching the state-space for an acceptable solution is a fundamental activity for many AI programs. Complete search of the state-space is typically infeasible. Instead, one relies on whatever heuristic information is available. Here is one interesting question that then arises: Given n search resources, how can one dynamically utilize those resources to achieve (on average) as good a solution as possible? In this talk, I will: (1) present a probabilistic model in which to study this question; (2) state two theorems that together answer the above question in the context of that model; (3) explain how branching processes and branching random walks are used to prove the theorems. Here is a brief description of the model I will be using. A least-cost root-to-leaf path is sought in a random tree. The tree is known to be binary and complete to depth N. Arc costs are independently set either to 1 (with probability p) or to 0 (with probability 1-p). The cost of a leaf is the sum of the arc costs on the path from the root to that leaf. 
The searcher (scout) can learn n arc values; after having done so, a leaf must be selected. It is easy to see how the leaf should be chosen. The interesting question is that: how should the scout dynamically allocated the n search resources to minimize the average cost of the leaf selected? ------------------------------ Date: 13 Mar 87 15:59:55 EST From: Marcella.Zaragoza@isl1.ri.cmu.edu Subject: Seminar - Anatomy of a Case-Based Inference (CMU) AI SEMINAR TOPIC: "The Anatomy of a Case-Based Inference" SPEAKER: Janet Kolodner, Georgia Tech WHEN: Tuesday, March 17, 1987, 3:30 p.m. WHERE: Wean Hall 5409 ***If you wish to meet with the speaker on Tuesday,*** please call Marce at x8818 ABSTRACT: Case-based reasoning is reasoning done on the basis of one or a set of previous experiences (or cases), rather than from general reasonable rules. Case-based inference is an inference made from a previous experience. In this medium, we can look at how case-based inference can be controlled, requirements for making a careful case-based inference, and what support mechanisms are necessary to make case-based inference feasible. ------------------------------ Date: Fri, 13 Mar 87 19:01:56 EST From: hafner%corwin.ccs.northeastern.edu@RELAY.CS.NET Subject: Conference - AI and Law (Program and Registration Info) The First International Conference on Artificial Intelligence and Law May 27-29, 1987 Northeastern University, Boston, Massachusetts Sponsored by: The Center for Law and Computer Science Northeastern University In Co-operation with ACM SIGART Schedule of Activities: Wednesday, May 27 8:30 a.m. - 12:30 p.m. - Tutorials 2:00 p.m. - 6:00 p.m. - Research Presentations (see list below) 7:00 p.m. - 10:00 p.m. - Welcoming Reception - NU Faculty Center Thursday and Friday, May 28-29 8:30 a.m. - 6:00 p.m. - Research Presentations (continued) Thursday evening, May 28 - 7:00 p.m. - Gala Banquet at the Colonnade Hotel Tutorials: A. "Introduction to Artificial Intelligence (For Lawyers)." Edwina L. Rissland, Associate Professor of Computer and Information Sciences, University of Massachusetts at Amherst, and Lecturer in Law, Harvard Law School, will present the fundamentals of AI from the perspective of a legal expert. B. "Applying Artificial Intelligence to Law: Opportunities and Challenges." Donald H. Berman, Richardson Professor of Law, and Carole D. Hafner, Associate Professor of Computer Science, Northeastern University, will survey the past accomplishments and current goals of research in AI and Law. Panels: "The Impact of Artificial Intelligence on the Legal System." Moderated by Cary G. deBessonet, Director of the Law and Artificial Intelligence Project, Louisiana State Law Institute. "Modeling the Legal Reasoning Process: Formal and Computational Approaches." Moderated by L. Thorne McCarty, Professor of Computer Science and Law, Rutgers University. List of Research Presentations: (final schedule is not yet determined) "Expert Systems in Law: The Datalex Project" Graham Greenleaf, Andrew Mowbray, Alan L. Tyree Faculty of Law, University of Sydney, AUSTRALIA "The Application of Expert Systems Technology to Case-Based Law" J.C. Smith, Cal Deedman Faculty of Law, University of British Columbia, CANADA "Legal Reasoning in 3-D" Marvin Belzer Advanced Computational Methods Center University of Georgia, USA "Explanation for an Expert System that Performs Estate Planning" Dean A. Schlobohm, Donald A. 
Waterman Moraga, California, USA "Expert Systems in Law: Out of the Research Laboratory and into the Marketplace" Richard E. Susskind Ernst & Whinney London, ENGLAND "An Expert System for Screening Employee Pension Plans for the Internal Revenue Service" Gary Grady, Ramesh S. Patil Internal Revenue Service Washington, D.C. USA "Conceptual Legal Document Retrieval Using the RUBRIC System" Richard M. Tong, Clifford A. Reid, Peter R. Douglas, Gregory J. Crowe Advanced Decision Systems Mountain View, California USA "Conceptual Retrieval and Case Law" Judith P. Dick Faculty of Library and Information Science, University of Toronto Toronto, Ontario CANADA "A Process Specification of Expert Lawyer Reasoning" D. Peter O'Neill Harvard Law School Cambridge, Massachusetts USA "Conceptual Organization of Case Law Knowledge Bases" Carole D. Hafner The Center for Law and Computer Science, Northeastern University Boston, Massachusetts USA "A Case-Based System for Trade Secrets Law" Edwina L. Rissland Kevin D. Ashley Department of Computer and Information Science, University of Massachusetts, Amherst, Massachusetts USA "But, See, Accord: Generating Blue Book Citations in HYPO" Kevin D. Ashley, Edwina L. Rissland Department of Computer and Information Science University of Massachusetts, Amherst Massachusetts USA "A Connectionist Approach to Conceptual Information Retrieval" Richard K. Belew Computer Science and Engineering Department, Univ. of California San Diego, California USA "System = Program + Programmers + Law" Naftaly H. Minsky, David Rozenshtein Department of Computer Science, Rutgers University New Brunswick, New Jersey USA "A Natural Language Based Legal Expert System Project for Consultation and Tutoring -- The LEX Project" F. Haft, R.P. Jones, Th. Wetter IBM Heidelberg Scientific Centre Heidelberg, WEST GERMANY "Handling of Significant Deviations from Boilerplate Text in the SPADES System" Gary Morris, Keith Taylor, Maury Harwood Internal Revenue Service Washington, D.C. USA "Legal Data Modeling: The Prohibited Transaction Exemption Analyst" Keith Bellairs Management Science Department, University of Minnesota Minneapolis, Minnesota USA "Reasoning about `Hard' Cases in Talmudic Law Steven Weiner Somerville, Massachusetts USA "Designing Text Retrieval Systems for `Conceptual Searching'" Jon Bing Norwegian Research Center for Computers and Law Oslo, NORWAY "Support for Policy Makers: Formulating Legislation with the Aid of Logical Models" T.J.M. Bench-Capon Department of Computing, Imperial College London, ENGLAND "Further Comments on McCarty's Semantics for Deontic Logic" Andrew J.I. Jones University of Oslo Oslo, NORWAY "Experiments Using Expert Systems Technology for Teaching Law: Special Knowledge Representation Approaches in DEFAULT and EVAN" Roger D. Purdy School of Law, The University of Akron Akron, Ohio USA "OBLOG-2: A Hybrid Knowledge Representation System for Defeasible Reasoning" Thomas F. Gordon FS-INFRE, GMD Sankt Augustin, WEST GERMANY "ESPLEX: A Rule and Conceptual Model for Representing Statutes" Carlo Biogioli, Paola Mariana, Daniela Tiscornia Istituto per la Documentazione Giuridica Florence, ITALY "A PROLOG Model of the Income Tax Act of Canada" David M. Sherman Maintnix Services Thornhill, Ontario CANADA "Some Problems in Designing Expert Systems to Aid Legal Reasoning" Layman E. Allen, Charles S. Saxon Law School, The University of Michigan Ann Arbor, Michigan USA "Precedent-Based Legal Reasoning and Knowledge Acquisition in Contract Law: A Process Model" Seth R. 
Goldman, Michael G. Dyer, Margot Flowers Artificial Intelligence Laboratory, University of California, Los Angeles Los Angeles, California USA "Logic Programming for Large Scale Applications in Law: A Formalism of Supplementary Benefit Legislation" T.J.M. Bench-Capon, G.O. Robinson, T.W. Routen, M.J. Sergot Department of Computing, Imperial College London, ENGLAND ___________________________________________________________________________ Program Committee Conference Information ----------------- ---------------------- L.Thorne McCarty, Chair Prof. Carole D. Hafner, Conference Chair Donald H. Berman (617) 437-5116 Michael G. Dyer Ms. Rita Laffey, Registration Anne v.d. L. Gardner (617) 437-3346 Edwina L. Rissland Marek J. Sergot Housing Information Special Conference Rates are available at the following hotels: (Mention "Northeastern University Computers and Law Conference") 1. The Colonnade Hotel - $75 single/$95 double + tax ($8 parking) 120 Huntington Avenue, Boston, MA (617) 424-7000 2. The Midtown Hotel - $58 single/$63 double + tax (includes free parking) 220 Huntington Avenue, Boston, MA (617) 262-1000 or 1-800-343-1177 Both of these hotels are less than a 10-minute walk from the Conference. Rooms have also been arranged at Boston University dormitories, a 20-minute walk from the conference, or a 10-minute bus ride and a 5-minute walk. The rates are $29 single/$24 (per person) double. To reserve a room in the dormitory, use the attached registration form. SPACE IS LIMITED - RESERVE EARLY!! Conference Registration Fee (does not include tutorial or banquet) Regular Full-time Student ------- ----------------- Received by April 20 $95 $55 Received after April 20 $135 $85 Gala Banquest - May 28 ($40/person) Tutorial Fee: ($50 with conference registration $100 otherwise) Dormitory Fee ($29/night single, $24/night double) ------------------------------ End of AIList Digest ******************** From in%@vtcs1 Tue Mar 17 07:59:14 1987 Date: Tue, 17 Mar 87 07:59:05 est From: vtcs1::in% To: ailist@sri-stripe.arpa Subject: AIList Digest V5 #81 Status: R AIList Digest Monday, 16 Mar 1987 Volume 5 : Issue 81 Today's Topics: Conference - 2nd SUNY Grad. Conf. on CS (Review) ---------------------------------------------------------------------- Date: Fri, 13 Mar 87 14:19:02 EST From: "William J. Rapaport" Subject: Conference - 2nd SUNY Grad. Conf. on CS (Review) SECOND ANNUAL SUNY BUFFALO GRADUATE CONFERENCE ON COMPUTER SCIENCE William J. Rapaport Department of Computer Science SUNY Buffalo Buffalo, NY 14260 rapaport@buffalo.csnet On 10 March 1987, the graduate students in the Department of Computer Science at SUNY Buffalo held their second annual Graduate Conference on Computer Science. (For a report on the first one, see SIGART No. 99, pp. 22-24.) This time, the conference took on an international flavor, with talks by graduate students from the University of Toronto and the University of Rochester, in addition to talks by our own students. Once again, the conference was flawlessly mounted. The conference was sponsored by the SUNY Buffalo Department of Computer Science, the SUNY Buffalo Computer Science Graduate Student Association, the SUNY Buffalo Graduate Student Association, and the Niagara Frontier Chapter of the ACM. Approximately 150 people from area colleges and industry attended. 
A SUNY Buffalo Department of Computer Science Technical Report with extended abstracts of the talks (James Geller & Keith Bettinger (eds.), _UBGCSS-87: Proceedings of the Second Annual UB Graduate Conference on Computer Science_, Technical Report 87-04, March 1987) is available by contacting the chair of the organizing committee, Scott Campbell, Department of Computer Science, SUNY Buffalo, Buffalo, NY 14260, campbl@buffalo.csnet. Following are the abstracts of the talks. Ted F. Pawlicki SUNY Buffalo "The Representation of Visual Knowledge" This paper reports on preliminary research into the representation of knowledge necessary for visual recognition. The problem is broken down into three parts: the actual knowledge that needs to be represented, the form that the representation should take, and how the knowledge itself and its representation should combine to facilitate the visual recognition task. The knowledge chosen to represent is a formalization of the theory of Recognition by Component. The representation chosen is a semantic network. John M. Mellor-Crummey University of Rochester "Parallel Program Debugging with Partial Orders" Parallel programs are considerably more difficult to debug than sequential programs, because successive executions of a parallel program often do not exhibit the same behavior. Instant Replay is a new technique for reproducing parallel-program executions. Partial orders of significant events are recorded during program execution and used to enforce equivalence of execution replays. This technique (1) requires less time and space to save information for program replay than other methods, (2) is independent of the form of interprocess communication, (3) provides for replay of an entire program, rather than individual processes, (4) introduces no centralized bottlenecks, and (5) does not require synchronized clocks or globally-consistent logical time. Some performance results of a prototype on the BBN Butterfly [TM] Parallel Processor will be presented, and it will be shown how Instant Replay can be used in the debugging cycle for parallel programs. Timothy D. Thomas and Susan J. Wroblewski SUNY Buffalo "Efficient Trouble Shooting in an Industrial Environment" Our work involves designing and implementing a real-time system for trouble shooting in an industrial environment. The system emulates the kind of problem-solving knowledge and behavior typical of a human expert after years of on-the-job experience. Our system, PASTE (Process Analysis for Solving Trouble Efficiently), is to be used in a real-time environment. It is because of this constraint that the design of an efficient system was of great importance. PASTE has a number of efficiency techniques that eliminate redundancy in remedy suggestion and that decrease response time. Ching-Huei Wang SUNY Buffalo "ABLS: An Object Recognition System for Locating Address Blocks on Mail Pieces" ABLS (Address Block Location System), a system for locating address blocks on mail pieces, represents both a specific solution to postal automation and a general framework for coordinating a collection of specialized image-processing tools to opportunistically detect objects in images. Images that ABLS deals with range from those having a high degree of global spatial structure (e.g., carefully prepared letter mail envelopes which conform to specifications) to those with no structure (e.g., magazines with randomly pasted address labels). 
Its problem-solving architecture is based on the blackboard model and utilizes a dependency graph, knowledge rules, and a blackboard. Diane Horton and Graeme Hirst University of Toronto "Presuppositions as Beliefs: A New Approach" Most existing theories of presupposition implicitly assume that presuppositions are facts and that all agents involved in a discourse share belief in the presuppositions that it generates. We argue that these assumptions are unrealistic and can be eliminated by treating each presupposition as the belief of an agent. We describe a new model, including an improved definition of presupposition, that takes this approach. The new model is more realistic and can handle cases of presupposition projection that could not be handled otherwise. Norman D. Wahl and Susan E. Miller SUNY Buffalo "Hypercube Algorithms to Determine Geometric Properties of Digitized Pictures" This research focuses on implementing algorithms to solve geometric problems of digitized pictures on hypercube multiprocessors. Specifically, in this paper, we present algorithms and paradigms for solving the connected component labeling problem. Work is ongoing to complete implementations of these algorithms and obtain running times on the Intel iPSC and Ncube hypercubes. The goal of this study is to determine under what circumstances (if any) each of the various algorithms is most appropriate. Deborah Walters and Ganapathy Krishnan SUNY Buffalo "Bottom-up Image Analysis for Color Separation" A system for automatic color separation for use in the printing industry is described. The goal of this research was to automate the labor-intensive preprocessing required before a graphics system can process the image. This system makes no assumptions about the semantic content of the image. The processing is entirely bottom-up and is based on image features used by the human visual system during the early stages of processing. The image is convolved with oriented edge operators, and the responses are stored in the Rho-Space representation. A number of parallel operations are performed in Rho-Space, and the image is segmented into perceptually significant parts, which can then be colored using an interactive graphics system. Bart Selman University of Toronto "Vivid Representations and Analogues" Levesque introduced the notion of a vivid knowledge representation. A vivid scheme contains complete knowledge in a tractable form. A closely related concept is that of an analogical representation or analogue. Sloman characterizes analogues as representations that are in some sense direct models of the domain, as opposed to representations consisting of a description in some general language. The prototypical example of an analogical representation is a pictorial representation, which is also an important source of vivid knowledge. We are studying these types of representations for their possible application in computationally tractable knowledge-representation systems. In particular, we are studying how information in a non-analogical (or non-vivid) form can be translated into an analogical (or vivid) form, using for example defaults and prototypes. This talk will cover the properties of vivid and analogical representations, a description of their relationship to each other, and some initial ideas on the translation process. 
Soteria Svorou SUNY Buffalo "The Semantics of Spatial Extension Terms in Modern Greek" In recent years, there have been increasing efforts to uncover the nature of the human mind by studying the structure of its building blocks: concepts. Partaking in this enterprise, this study explores the domain of spatial extension categories by looking at the way language treats them. It shows that lexical contrasts of Modern Greek in the domain of spatial extension reflect the perceptual strategies of "orientation" and "Gestalt" and their interaction with the concept of "boundedness", which speakers employ in the description of everyday objects. Yong Ho Jang and Hing Kai Hung SUNY Buffalo "Semantics of a Recursive Procedure with Parameters and Aliasing" We consider a subset of an Algol-like programming language that includes blocks and recursive procedures, with value and location parameter passing. We develop the operational and denotational semantics for both static and dynamic scope, with their different aliasing mechanisms. The main advantage of our approach is that the denotational semantics is compositional and can systematically handle the various scope and aliasing features. Josh D. Tenenberg and Leo B. Hartman University of Rochester "Naive Physics and the Control of Inference" Hayes proposed the naive physics program in order eventually to address problems involving the control of inference. At the time of the proposal, progress toward solutions of these problems seemed impeded by the lack of a well-defined body of knowledge of challenging size. The building of a formally interpretable encoding of the common-sense knowledge that people use to deal with the physical world seemed to fill this need. It was argued that the knowledge be expressed in first-order logic or an equivalent language in order to separate declarative information from control information. We argue here that no finite encoding of a formal theory can be completely separated from control choices by virtue of there being well-defined measures of the depth of a theorem in the deductive closure of a theory. In addition, any control choice is a commitment to a particular set of statistical properties of the problems an agent faces, and the measurement of such properties is required to evaluate these choices. Zhigang Xiang SUNY Buffalo "Multi-Level Model-Based Diagnostic Reasoning" Diagnostic systems capable of reasoning from _functional_ and _structural_ knowledge are _model-based_ systems. The uniqueness of our work is that problems of diagnosis that need not only functional and _logical_ structural knowledge but also _spatial_ structural knowledge are to be the focus. Towards this goal, we propose a framework for organizing, representing, and reasoning with an integrated knowledge base that includes multiple levels of abstraction of the physical system. More specifically, a physical system is decomposed into physical and logical components. Analogical (geometrical) and propositional (topological) spatial structural information are associated with physical components. The latter is mutually related to logical components. Functional relationships are established between logical components. Logical reasoning infers the functional status of logical components, whereas spatial reasoning performs fault localization. The framework is carried out using semantic-network representations. The implementation is independent of any given domain of application. 
The system, when given a description of a physical system's spatial structure, logical structure, and functional relationships between logical components, performs logical as well as spatial reasoning to locate faulty components, lesions, etc., from symptoms and findings. Domain-specific examples include circuitry fault localization and neuroanatomic localization. ------------------------------ End of AIList Digest ******************** From in%@vtcs1 Wed Mar 18 03:00:30 1987 Date: Wed, 18 Mar 87 03:00:22 est From: vtcs1::in% To: ailist@sri-stripe.arpa Subject: AIList Digest V5 #82 Status: R AIList Digest Wednesday, 18 Mar 1987 Volume 5 : Issue 82 Today's Topics: Queries - Public-Domain Planners & Kyoto Common Lisp & Toshiba Voice Recognition Chip & OPS5 Public-Domain Code & Rete match, OPSxx, Charles Forgy & AI in Space Stations & Connectionist Computing vs Distributed Computing, Announcement - DEC 10 and PDP-6 History Project, Review - Borning's "Computers...& Nuclean-War" in NY Times ---------------------------------------------------------------------- Date: Mon, 16 Mar 87 09:07:14 -0100 From: unido!gmdzi!hertz@seismo.CSS.GOV (Joachim Hertzberg) Subject: Hertzberg2 Does anybody have reimplementations (or the original implementations or micro-implementations) on standard-machines (VAX, SYMBOLICS, ...) of one of the "classical" AI-planners publicly available (for non-profit organizations)? Joachim Hertzberg GMD Postfach 1240 5205 ST. AUGUSTIN W-GERMANY hertz%xps@gmdzi.usenet ------------------------------ Date: Mon, 16 Mar 87 15:30:19 est From: michael@dolphin.BU.EDU (Michael Forte) Subject: Kyoto Common Lisp My name is Michael Forte. I work in the Computer Graphics Laboratory at the Boston University Academic Computing Center. Presently, I am researching the uses of Lisp for graphics systems, particularly for user interfaces. We are using a Celerity 1260 dual processor machine for our graphics number crunching. Unfortunately, there is no Lisp available for this machine yet, and Celerity says it will be quite a while before Common Lisp (my preference, for portability reasons) is available for their line of machines. I was referred to you for information on the availablity of Kyoto Common Lisp. We do have a very good C compiler, and if Kyoto Lisp is written in C I could probably port it to our machine easily. Would you please send me information about Kyoto Common Lisp, including pricing, educational discounts, etc. You may write to me at: Michael Forte Computer Graphics Lab Academic Computing Center 111 Cummington Street Boston University Boston, Mass. 02215 if email is not appropriate. I may also be reached by phone at 617-353-2780. Thanks. ------------------------------ Date: 16 Mar 87 18:58:04 GMT From: ssc-vax!bcsaic!michaelm@BEAVER.CS.WASHINGTON.EDU (Michael Maxwell) Subject: Re: Toshiba voice recognition chip In article <1895@hoptoad.uucp> gnu@hoptoad.uucp (John Gilmore) writes: >A recent article in Newsbytes Japan mentions: > > Toshiba's Voice Recognition LSI -- Toshiba (Tokyo) has developed > a powerful LSI for recognizing human speech. This new product > recognizes a variety of spoken sounds with 95% accuracy. > Toshiba plans to use this LSI for a voice input system for its > word processors. This is a rather meaningless statement, even for a press release. How many sounds? What kind of sounds (individual phones (~=letters), words, phrases, whistles etc.)? 
If it's talking about speech sounds (as opposed to any sounds the human vocal tract can make), what is the size of the vocabulary one can build with it? Do words have to be separated by silence? Does it work in real time? Is it even trainable? (I can imagine having to talk to my computer with a Japanese accent :-) If anyone knows more about this... There are lots of voice recognition boards out there. Most are fairly primitive, which is part of the reason we haven't them used more. Need I say that my employer doesn't necessarily share my opinion? -- Mike Maxwell Boeing Advanced Technology Center arpa: michaelm@boeing.com uucp: uw-beaver!uw-june!bcsaic!michaelm ------------------------------ Date: 16 Mar 87 00:22:54 GMT From: clyde!masscomp!wang7!eric@rutgers.rutgers.edu (eric) Subject: OPS5 PUBLIC DOMAIN CODE HELP!!!!!!!!!! I NEED TO GET A COPY OF THE COMMON LISP OPS5 INTERPRETER IN THE PUBLIC DOMAIN. I need it yesterday! Please email to me. I will give you my first born son or considerable gratitude. Eric Van Tassell clyde!bonnie!masscomp!wang7!eric clyde!bonnie!masscomp!dlcdev!eric dlcdev!eric@eddie.mit.edu ------------------------------ Date: 15 Mar 87 23:54:46 GMT From: clyde!masscomp!wang7!eric@rutgers.rutgers.edu (eric) Subject: Rete match, OPSxx, Charles Forgy Hello, I'm a grad student at B.U. doing some investigation of production systems. I am interested in any and all information people on the net may have relating to the rete match algorithm and the OPS family of languages. Also does anyone know if Dr. Charles Forgy can be e-mail to on the net? Thanks in advance. Remember if you email quickly, the life you save may be mine. Eric Van Tassell clyde!bonnie!masscomp!wang7!eric clyde!bonnie!masscomp!dlcdev!eric harvard!mit-eddie!dlcdev!eric dlcdev!eric@eddie.mit.edu [CLF@G.CS.CMU.EDU -- KIL] ------------------------------ Date: 17 Mar 87 16:20:57 GMT From: uwai!mehta@rsch.wisc.edu (Shekhar Mehta.) Subject: AI - its use in Space stations I would like to know how AI would be useful for space stations. I am particularly interested in its application considering the distance between the space craft and earth ( and therefore there being finite time for commands to given from earth stations). How and in what way will AI deal with this problem. I would like to get some pointers as to where to begin searching for AI's application in space ( particularly the space station). shekhar mehta mehta@ai.wisc.edu ------------------------------ Date: 16 Mar 87 12:40:15 GMT From: Dekang Lindek Reply-to: lindek@cs.strath.ac.uk (Dekang Lindek) Subject: diff "connectionist computing" "distributed computing" Any one know the result of the title of this article? advThanksance. !-@-#-$-%-^-&-*-(-)-!-@-#-$-%-^-&-*-(-)-!-@-#-$-%-^-&-*-(-) Dekang Lin Dept. of CS Univ. of Strathclyde 26 Richmond St. Glasgow, G1 1XH, U.K. 
lindek%cs.strath.ac.uk@ucl-cs.arpa ....!seismo!mcvax!ukc!strath-cs!lindek ------------------------------ Date: 16 Mar 1987 1311-EST From: "Joe Dempster, DTN: 336.2252 AT&T: 609.665.8711" Subject: Announcement of the DEC 10 and PDP-6 history project (PROJECT-10262) This message originates from 2 sources: Les Earnest Computer Science Department STANFORD UNIVERSITY Stanford, CA 94305 415.723.9729 ARPA: LES@SAIL.STANFORD.EDU Joe Dempster DIGITAL EQUIPMENT CORPORATION 6 Cherry Hill Executive Campus Route 70 Cherry Hill, NJ 08002 609.665.8711 ARPA: DEMPSTER@MARLBORO.DEC.COM (MARKET) The goal of this project is to publish an analysis and history of the evolution, implementation and use of Digital's 36 bit systems. This period began with the PDP-6 in 1964 and continues today with TOPS 10/20 development, which is scheduled to end in 1988. We are working aggressively to finish the project, and have it published, by March/April 1988. This will require that the completed manuscript be ready to go into the publication cycle by August 1987! The project will attempt to answer the following questions: 1. In what markets/applications were these systems used? 2. Who were the users of these systems and what impact did roughly 2,500 TOPS 10/20 systems have on their organizations? 3. Who were the principle system architects of these systems? What features, and if there had been sufficient time to implement them, would have significantly improved the architecture? 4. What impact did the decision to continue to examine design extensions to the architecture have on the usefulness and acceptability of these systems. This is in contrast to a more common practice today to work from a detailed design specification, sometimes dated, building follow-on systems which provide increased performance through the use of new component technologies and packaging techniques. 5. What part of the overall design (TOPS10/20) was technology dependent and what can still be considered "unequaled" in relation to other computer architectures still undergoing active development? 6. What type of development environment (both HW and SW) supported and contributed to the evolution of 36 bit systems? 7. What influence did TOPS 10/20 have on other vendors system development? This history will undoubtedly be assembled from many sources and participants. Some information will be anecdotal; there will be interviews with the people involved (users and developers) and technical papers will be solicited. Of course there will also be the packaging and assembly of facts as we see them. The result will hopefully have sufficient depth to serve as: 1. An introductory or advanced text on system design and hardware/system software implementation. 2. A analysis of the success and difficulties of marketing complex systems into a very crowded market of competing alternatives. 3. A catharsis for those of us who have contributed to the development and use these systems and who will now move onto new computing architectures and opportunities. In addition to interviewing directly 25-50 developers, users and product managers we will continue to work to identify contributors and significant events up to when the final draft is submitted to the publisher. Two "topics" are already under development: 1. Rob Gingell from SUN is working on a paper which looks at extensions to TOPS 20 which would have enhanced its capabilities. 2. Frank da Cruz and Columbia are summarizing 10 years of experience and development of TOPS 20 systems. 
Some effort will also be made to detail the process which lead to their selection of a follow-on architecture to TOPS 20. There is a need to develop additional topics which represent the use and application of the technology (TOPS 10/20) in other areas. Specific recommendations are welcome as are proposals to develop them. A short abstract should accompany any such proposal. Every effort will be made to work with individuals or organizations interested in making such a contribution. There will be a standalone (no network connections) DECSYSTEM 2020 (YIPYIP) dedicated to supporting the project. This system has a 3 line hunt group, with all lines accessible from a single number (201.874.8612). Both YIPYIP and MARKET will have "public" directories for remote login (DEMPSTER.PROJECT-10262 LCGLCG). MARKET can be accessed by modem (617.467.7437), however disk quota is limited. MARKET's primary purpose is ARPAnet TELNET access. YIPYIP is a dedicated PROJECT-10262 system. MAIL can also be sent to DEMPSTER on either system. YIPYIP and MARKET will keep a running summary of ideas and comments up on Columbia's BBOARD software. KERMIT also runs on each system for uploads. SAIL.STANFORD.EDU will support ARPAnet transfers to a "public" area: FTP CONNECT SAIL.STANFORD.EDU SEND AFN.EXT DSK: AFN.EXT [PUB,LES] SAIL runs WAITS, an operating system similiar to TOPS 10. File names are limited to 6 characters and extensions limited to 3. Implementation details: 1. User input is welcomed and desired from all application and geographic areas. 2. Input from past and present developers is also desired. 3. Throughout the project a secondary goal will be to build a list of users/locations (installation date, duration and disposition) of PDP-6 and KA, KI, KL and KS systems. Serial numbers, if available, are requested. 4. We anticipate that this project will generate a large volume of information (which we hope will arrive electronically). Some information, for any number of reasons, may not be in line with the project's stated goals. Therefore, all notes, interview material and submissions will be donated to the Computer Museum in Boston at the the completion of the project to be available for future reference and research. Ideas, contributions, suggestions and criticism are welcome. As these 36 bit systems were the products of a multitude of people, so too will be the writing of their history. ------------------------------ Date: 16 Mar 87 17:37:54 GMT From: jon@june.cs.washington.edu (Jon Jacky) Subject: AI books and paper (Borning's "Computers...& N-war) in NY TIMES Eric Sandberg-Diment's regular column in the business section of the NEW YORK TIMES, called "The Executive Computer", this Sunday (3/15/87, p. F18, National Edition) reviews two popular books on computing: Grant Fjermdahl's THE TOMORROW MAKERS and Theodore Roszak's THE CULT OF INFORMATION (he criticizes both as being extreme views). At the end, Sandberg-Diment adds: The artificial intelligence community and, in fact, the entire computer cabal are nevertheless trying to mislead us into accepting the notion that the difference between the "mind" of the computer and the mind of man is merely a matter of degree, and that not only will this difference be eliminated in short order, but soon people will rank second to computers in their cognitive abilities and responsiveness. 
In contemplating this thesis, the article "Computer system reliability and nuclear war," by Alan Borning in the February 1987 issue of Communications of the ACM ($12 from the ACM, Order Dept., POB 64145, Baltimore, MD 21264) is must reading. Published in a journal not normally decipherable by the average individual, it is probably the clearest essay to date on why the Strategic Defense Initiative is both inevitable and doomed to failure. Here is an instance where information filtering cannot be gainsaid, for there is no way the nonspecialist could successfully draw on the 140-plus sources the author used as background for his thesis. The article is also one that leaves the reader with a sense of fatalism, along with perhaps an unspoken addendum to Samuel Johnson's observation that "the future is purchased by the present" -- how expensive it all will be in terms of humanity. At a time when there is a very real danger of our subjugating ourselves to machines to an extent far greater than already realized, readings such as these may well be all that keep our minds from becoming irreversibly enslaved." -Jonathan Jacky University of Washington ------------------------------ End of AIList Digest ******************** From in%@vtcs1 Thu Mar 19 02:44:28 1987 Date: Thu, 19 Mar 87 02:44:16 est From: vtcs1::in% To: ailist@sri-stripe.arpa Subject: AIList Digest V5 #83 Status: R AIList Digest Thursday, 19 Mar 1987 Volume 5 : Issue 83 Today's Topics: Seminars - EFFIGY: Symbolic Execution of Programs (IBM) & Applying Precedents in Case-Based Reasoning (Rochester) & Learning Decomposition Methods (CMU) & Circumscriptive Theories (SU), Conferences - TI's AI Satellite Symposium III & AAAI Workshop on Battle Management ---------------------------------------------------------------------- Date: Wed, 18 Mar 87 18:21:19 PST From: IBM Almaden Research Center Calendar Subject: Seminar - EFFIGY: Symbolic Execution of Programs (IBM) IBM Almaden Research Center 650 Harry Road San Jose, CA 95120-6099 EFFIGY: SYMBOLIC EXECUTION OF PROGRAMS J. C. King, IBM Almaden Research Center Computer Science Coll. Thurs., March 26 3:00 P.M. Room: Front Aud. Long ago and far away, a group in IBM Yorktown Heights devised a computer system called "EFFIGY" which executed computer programs "symbolically." On a recent archeological dig in musty old CMS files, I stumbled upon what appeared to be a genuine EFFIGY MODULE. After a time, with a new FILEDEF and a long forgotten LINK, I was able to execute the model, just as the ancients did. It was amazing for me to remember how advanced civilization was, even then (12-15 years ago). For some reason, the art of symbolic execution never caught on in a big way, and it has nearly been lost. For those of the younger generation, who have never seen the chanting and chest beating of the symbolic executors (sexers for short), I will try to recreate some of that ancient spirit. Especially with the new projection system in the Front Auditorium, which is capable of showing computer terminal output on-line, I can demonstrate this EFFIGY system, as it was only possible to do before in a one-on-one situation in a Yorktown cave. The sexers had discovered that the same leverage obtained by using algebra to understand and prove things about arithmetic can be applied to computer programs. If one executes a program using mathematical symbols, instead of numbers, as program inputs, the same algebraic leverage can be obtained. 
Of course, the dynamic aspects of program execution makes this process tantalizingly non trivial. Combining the well-known concepts of program execution and algebra, the notions of "proving the correctness of programs" and "inductive assertions" can be easily understood without knowingly resorting to heavy mathematical concepts. Host: R. Williams (Refreshments at 2:45 P.M.) ------------------------------ Date: Mon, 16 Mar 87 16:46:45 EST From: tim@linc.cis.upenn.edu (Tim Finin) Subject: Seminar - Applying Precedents in Case-Based Reasoning (Rochester) Colloquium Computer and Information Science University of Pennsylvania "Applying Relevant Precedents in a Case-Based Reasoning System" Kevin D. Ashley Department of Computer and Information Science University of Massachusetts at Amherst The law is an excellent domain to study Case-Based Reasoning (``CBR") problems since it espouses a doctrine of precedent in which prior cases are the primary tools for justifying legal conclusions. The law is also a paradigm for adversarial CBR; there are ``no right answers", only arguments pitting interpretations of cases and facts against each other. This talk will demonstrate techniques employed in the HYPO program for representing and applying case precedents and hypothetical cases to assist an attorney in evaluating and making arguments about a new fact situation. HYPO performs case-based reasoning and, in particular, models legal reasoning in the domain of trade secrets law. HYPO's key elements include: (1) a structured case knowledge base (``CKB") of actual legal cases; (2) an indexing scheme (``dimensions") for retrieval of relevant precedents from the CKB; (3) techniques for analyzing a current fact situation (``cfs"); (4) techniques for ``positioning" the cfs with respect to relevant precedent cases in the CKB and finding the most on point cases (``mopc"); (5) techniques for manipulating cases (e.g., citing, distinguishing, hybridizing); (6) techniques for perturbing the cfs to generate hypotheticals that test the sensitivity of the cfs to changes, particularly with regard to potentially adverse effects of new damaging facts coming to light and existing favorable ones being discredited; and (7) the use of ``3-ply" argument snippets to dry run and debug an argument. An extended example of HYPO in action on a sample trade secrets case will be presented. The example will demonstrate how HYPO uses ``dimensions", ``case-analysis-record" and ``claim lattice" mechanisms to perform indexing and relevancy assessment of precedent cases dynamically and how it compares and contrasts cases to come up with the best precedents pro and con a decision. March 20, 1987 3:00 to 4:30 Room 216 Refreshments Available 2:30-3:00 Faculty Lounge ------------------------------ Date: 18 Mar 87 01:11:28 EST From: Steven.Minton@cad.cs.cmu.edu Subject: Seminar - Learning Decomposition Methods (CMU) This week's speaker is Sridhar Mahadevan. As usual, the seminar is in 7220 Wean on Friday at 3:15. Come one, come all. LEARNING DECOMPOSITION METHODS TO IMPROVE HIERARCHICAL PROBLEM-SOLVING PERFORMANCE Previous work in machine learning on improving problem-solving performance has usually assumed a @i(state-space) or "flat" problem-solving model. However, problem-solvers in complex domains, such as design, usually employ a hierarchical or problem-reduction strategy to avoid the combinatorial explosion of possible operator sequences. 
Consequently, in order to apply machine learning to complex domains, hierarchical problem-solvers that automatically improve their performance need to designed. One general approach is to design an @i(interactive) problem-solver -- a @i(learning apprentice) -- that learns from the problem-solving activity of expert users. In this talk we propose a technique, VBL, by which such a system can learn new problem-reduction operators, or @i(decomposition methods), based on a verification of the correctness of example decompositions. We also discuss two important limitations of the VBL technique -- intractability of verification and specificity of generalization -- and propose solutions to them. ------------------------------ Date: 18 Mar 87 1142 PST From: Vladimir Lifschitz Subject: Seminar - Circumscriptive Theories (SU) CIRCUMSCRIPTIVE THEORIES Vladimir Lifschitz Thursday, March 19, 4pm Bldg. 160, Room 161K The use of circumscription for formalizing commonsense knowledge and reasoning requires that a circumscription policy be selected for each particular application: we should specify which predicates are circumscribed, which predicates and functions are allowed to vary, what priorities between the circumscribed predicates are established, etc. The circumscription policy is usually described either informally or using suitable metamathematical notation. In this talk a simple and general formalism will be proposed which permits describing circumscription policies by axioms, included in the knowledge base along with the axioms describing the objects of reasoning. ------------------------------ Date: Mon, 16 Mar 87 14:33:02 cst From: "Michael T. Gately" Subject: Conference - TI's AI Satellite Symposium III This is a short extension to Dan Cerys' message of 12-MAR-87 regarding the TI Artificial Intelligence Satellite Symposium. First, the phone number, 1-800-527-3500 can be used to answer many questions; such as how to rent a satellite antenna, what type of video equipment is necessary for different audience sizes, etc. Second, take note of the unusual time shifting for different time zones across North America. Finally, the following is a list of cities which already have public sites planned. Please call the toll-free number as soon as possible to reserve a seat. AI Satellite Symposium III "AI Productivity Roundtable" April 8, 1987 Eastern/Rocky Mountain Time Zones (Daylight Times) 9:00 - 13:00 Pacific/Central Time Zones (Daylight Times) 8:00 - 12:00 AI Symposium II condensation April 8, 1987 Eastern/Rocky Mountain Time Zones (Daylight Times) 14:00 - 15:30 Pacific/Central Time Zones (Daylight Times) 13:00 - 14:30 Atlanta GA Austin TX Boston MA Chicago IL Cleveland OH Dallas TX Dayton OH Denver CO Detroit MI Hartford CT Houston TX Huntsville AL Kansas City KS Los Angeles CA Miami FL Milwaukee WI Minneapolis MN Montreal Canada New York NY Philadelphia PA Raleigh/Durham NC San Diego CA San Francisco CA San Jose CA Seattle WA St. 
Louis MO Summit NJ Toronto Canada Washington DC ------------------------------ Date: Mon, 16 Mar 87 12:26:17 est From: elsaesser%mwcamis@mitre.ARPA Subject: Conference - AAAI Workshop on Battle Management Issues Concerning AI Applications To Battle Management University of Washington Thursday, July 16, 1987 Sponsored by AAAI Success in applying AI technologies to battle management (e.g., production and blackboard systems for sensor fusion, constraint propagation for non-temporal planning tasks) has generated growing interest in the defense community in developing intelligent battle management aids, workstations, and systems. Along with this growing interest, there has been an order of magnitude increase in funding for battle management AI projects (e.g., Army-DARPA's Air-Land Battle Management, SAC-JSTPS-RADC-DARPA's Survivable Adaptive Planning Experiment). Past successes belie the lag of the AI community in solving technical issues associated with these projects. These issues include those associated with cooperating knowledge-based systems, distributed problem solving, uncertainty management, non-monotonic reasoning, planning, real-time performance requirements (i.e., the need for parallel or other advanced architectures), and the ability of users to maintain understanding and control of the automation. The purpose of this workshop is to gather together researchers who are attempting to find solutions to these and related issues and to discuss the current state of these arts. We believe that not enough has been done in these key areas areas, and that one result of the workshop might be some road map of how the community ought to proceed. The issues are so numerous and the area is large enough that we feel the initial workshop will only allow us to delineate how much has been done and what needs to be done in key areas. Thus, the goal is both to articulate where the major gaps are and which ones have a reasonable chance of solution in some believable time-frame. Interested persons should submit an extended abstract of not more than six pages to either person listed below (no on-line submissions please) on an AI subject of relevance to the above workshop objectives not later than 1 May 1987. Authors will be notified of acceptances by 1 June 1987, along with information relative to the workshop administration. R. Peter Bonasso Chris Elsaesser (703) 883 6908 (703) 883 6563 bonasso@mitre elsaesser%mwcamis@MITRE MITRE Washington AI Center Mail Stop W410 7525 Colshire Drive McLean, VA 22102 ------------------------------ End of AIList Digest ******************** From in%@vtcs1 Tue Mar 24 02:35:49 1987 Date: Tue, 24 Mar 87 02:35:43 est From: vtcs1::in% To: ailist@sri-stripe.arpa Subject: AIList Digest V5 #84 Status: R AIList Digest Monday, 23 Mar 1987 Volume 5 : Issue 84 Today's Topics: Queries - LOOPS Newgroup & Benchmarks for Production Systems & Expert Systems on AT&T PC6300, Comments - Toshiba Voice Recognition Chip, AI Tools - Genetic Algorithms, Book - Expert Systems: The User Interface, Paper - Categories and Counterfactuals, Funding - White Papers on Basic Research in AI ---------------------------------------------------------------------- Date: 19 Mar 87 15:13:08 GMT From: uwai!beverly@rsch.wisc.edu (Beverly Seavey) Subject: LOOPS newgroups Does anyone know how to subscribe to the LOOPS newgroup at Berkeley? 
------------------------------ Date: 19 Mar 87 10:22:18 GMT From: mcvax!ukc!icdoc!cdsm@seismo.css.gov (Chris Moss) Subject: Benchmarks for production systems Could anyone send me or point me to an up-to-date listing of any benchmark figures for production systems. In particular the monkey and bananas problem in the OPS5 book is often quoted but I don't have any figures. Thanks, Chris Moss. ------------------------------ Date: 20 Mar 87 16:31:35 GMT From: ihnp4!homxb!houxm!mtuxo!mtgzy!mas@ucbvax.Berkeley.EDU Subject: Expert Systems on AT&T PC6300 We are currently writing the requirements for an "expert" system to do wiring designs using AT&T PC6300's. We are exploring various languages and shells that may work best in our domain. One of our requirements for the language/shell is a good interface to C routines and graphic libraries. Is there any body out there who can give us their experience with the available expert system tools, specifically in terms of space, speed, and customizing capabilities. Thanks in advance. Masood Shariff & John Kee AT&T Middletown, NJ 07748 ihnp4!mtgzy!mas ------------------------------ Date: Wed, 18 Mar 87 02:12:33 EST From: Alex.Waibel@CAD.CS.CMU.EDU Subject: Toshiba Voice Recognition Chip With respect to the inquiry about the Toshiba Voice Recognition Chip, here's two words of caution: First off, recognition performance claims in percent are nice to know, but in general should be taken with a grain of salt. These numbers are HEAVILY dependent on whether speech was recorded in a quiet or noisy environment, whether the speaker is cooperative or not, whether the test was done speaker-dependently or independently, whether the vocabulary in question is ambiguous (BOOK, COOK, TOOK) or not (BOOK, UNIVERSITY). Most of the current systems are also isolated word systems, i.e., one must make pauses between words. Whether such a system will work or not therefore relly depends on your particular recognition task and environment. Japanese has also two convenient properties: Words are mostly consonant-vowel sequences, and the Japanese writing system (Kana) consists of essentially sequences of syllable symbols. Toshiba and other Japanese manufacturers therefore have systems that allow the speaker to speak one of the (in the order of 100 or so (including some alternates) kanas at a time and have the word processor then convert a sequence of kanas into a kanji (the chinese word symbol). Now, unfortunately, this doesn't carry over easily into English. Since English syllables employ complex consonants clusters, there are more in the order of 20,000 English syllables (with 100,000 possible), which makes for a substantially harder recognition task. Also speaking these syllables in isolation is a lot less natural than in Japanese since our writing system isn't syllable based. The corresponding recognition of phonemes in stead of syllables in English is a VERY hard problem with good recognition accuracy hard to come by. Toshiba and other manufacturers (in Japan and the USA) have also whole word based systems, but most of them require training of the system, i.e, all words in the vocabulary must be read in at least once by the user. I've seen the systems at Toshiba and they do indeed do impressive work, but as far as hooking it up to your home computer and talking away in English, I'm afraid the story is still a little more complicated than that. 
Alex Waibel, CMU ------------------------------ Date: 17 Mar 87 21:15:40 GMT From: hpda!hpcllla!hpclisp!coulter@ucbvax.Berkeley.EDU (Michael Coulter) Subject: Re: Genetic Algorithms John Holland is (or was) at U. of Mich. and has written a very nice book on genetic algorithms. I once took a class on the subject which he taught. If you need more information (title, publisher, isbn number, etc.), send me a note and I'll see if I can find my copy of the book. -- Michael Coulter ...hpda!hpcllld!coulter ------------------------------ Date: 21 Mar 87 18:05:00 GMT From: uiucdcsm!matheus@a.cs.uiuc.edu Subject: Re: Genetic Algorithms Proceedings of an International Conference on Genetic Algorithms and their Applications. John Grefenstette, editor July 24-26, 1985, Carnegie-Mellon University Sponsored by: Texas Instruments, Inc. U.S. Navy Center for Applied Research in Artificial Intelligence (NCARAI) ------ Some additional references: John Holland, "Escaping Brittleness: The Possibilities of General-Purpose Learning Algorithms Applied to Parallel Rule-Based Systems." In, Machine Learning, Vol II, Michalski, Carbonell, Mitchell, (Eds.), 1986. ------ Larry Rendell, "Conceptual Knowledge Acquisition in Search." In, Computational Models of Learning, L. Bolc (Ed.), Springer-Verlag, 1987. ------ David Goldberg, "Computer-aided Gas Pipeline Operation using Genetic Algorithms and Rule Learning." Ph.D. dissertation, University of Michigan, 1983. Christopher J. Matheus Inductive Learning Group University of Illinois. ------------------------------ Date: 20 Mar 87 20:55:00 GMT From: convex!bernhart@a.cs.uiuc.edu Subject: Re: Genetic Algorithms I'm delighted to find someone interested in genetic algorithms. I'm glad I decided to wander through some notes files. About 10 years ago I did some work in this area using adaptive hashing as my application. My faculty advisor turned me on to the subject. Another student did some work with pattern generation and published a paper on the subject. His name is Gary Rogers, and last I knew he was teaching at the Swiss Federal Institute. I'll try to find a copy of the paper - I just moved so am a little(?!) disorganized. Two books that will be of interest to you are: Holland, John H. Adaptation in Natural and Artificial Systems: An Introductory Analysis with Applications to Biology, Control, and Artificial Intelligence. Ann Arbor: The University of Michigan Press, 1975. Holland is a professor of computer science at the University of Michigan. His book references a number of dissertations. Holland, John H., Holyoak, Keith J., Nisbett, Richard E., and Thagard, Paul R. Induction: Processes of Inference, Learning, and Discovery. Cambridge, MA: The MIT Press, 1986. I just got this book a month or two ago and haven't had a chance to look at it what with moving and all. However, after just glancing through it, I see there is material on genetic algorithms and classifier systems. I just happened to order it because I saw an ad in an MIT Press circular and figured a John Holland book would interest me. The other authors are U of M faculty also, two in psychology and one in philosophy. I'm interested in pursuing my research in this area again. Last Fall I starting doing a computerized literature search through my company's Information Center. I didn't come up with anything, but I probably didn't just hit the right databases at first. I couldn't continue the search because funding for those activies was cut. 
Your note is the first reference I've seen to any conference on genetic algorithms. I'd love to get my hands on those proceedings, too! Who sponsored the conference? Where was it held? If I learn anything more, I'll respond here. If you find out any more, I'll look out for a follow- up response from you. I'd like to hear of any progress you make in your research. My most recent activities have been in the Ada arena, and I'm planning to convert my genetic modeling work of the past into Ada. I think it's going to work out very well. Good luck with your pursuits! Marcia Bernhardt Convex Computer Corporation 701 N. Plano Rd. Richardson, TX 75081 convex!bernhart ------------------------------ Date: Wed, 18 Mar 87 15:37:31 EST From: Jim Hendler Subject: Book - Expert Systems: The User Interface In response to several messages I've received at late asking questions about a forthcoming book, here's some info: The book, Expert systems: the user interface, will be published by Ablex and is not due out until this summer (we hope to hit the conferences but cannot guarantee it). Queries can be addressed to me or, preferably, to Ablex publishers. Below is the table of contents. If you are desperate for a copy of some chapter, please send your requests directly to the first author. There are no pre-release copies of the entire book available. -Jim Hendler (hendler@brillig.umd.edu) Expert Systems: The User Interface J. Hendler (Editor) Ablex Publishing Corp. Contents Preface -- Ben Shneiderman Hendler, J.A. and Lewis, C. Designing Interfaces for Expert Systems Musen, M.A., Fagan, L.A., and Shortliffe, E.H. Graphical Specification of Procedural Knowledge for an Expert System Tuhrim, S., Reggia, J.A. and Floor, M. Expert System Development: Letting the domain specialist directly author knowledge bases. Mittal, S., Bobrow, D.J. and DeKleer, J. DARN: Towards a Community Memory for Diagnosis and Repair Tasks Nau, D.S. and Gray, M. Hierarchical Knowledge Clustering: A way to represent and Use Problem-Solving Knowledge Baroff, J., Simon, R., Gilman, F and Shneiderman, B. Direct Manipulation User Interfaces for Expert Systems Fickas, S. Development Tools For Rule Based Systems Hayes, P.J. Using a Knowledge Base to Drive an Expert System Interface with a Natural Language Component Faneuf, R. and Zirk, S. A UIMS for Building Metaphoric User Interfaces Chandrasekaran, B, Tanner, M.C., and Josephson, J.R. Explanation: The role of control strategies and deep models Jacob, R.J.K. and Froscher, J.N. Facilitating Change in Rule-based Systems Stelzner, M. and Williams, M.D. The Evolution of Interface Requirements for Expert Systems Lehner, P.E. and Kraij, M.M. Cognitive Impacts Of The User Interface ------------------------------ Date: Tue, 17 Mar 87 00:16:40 est From: french@farg.umich.edu (Bob French) Subject: categories and counterfactuals The Role of Categories in the Generation of Counterfactuals: A Connectionist Interpretation by Robert M. French and Mark Weaver Department of Electrical Engineering and Computer Science University of Michigan Ann Arbor, Michigan 48109 Tel. (313) 763-5875 Keywords: counterfactuals, norm theory, connectionism, categories Abstract This paper proposes that a fairly standard connectionist category model can provide a mechanism for the generation of counterfactuals -- non-veridical versions of perceived events or objects. A distinction is made between evolved counterfactuals, which generate mental spaces (as proposed by Fauconnier), and fleeting counterfactuals, which do not. 
This paper explores only the latter in detail. A connection is made with the recently proposed counterfactual theory of Kahneman and Miller; specifically our model shares with theirs a fundamental rule of counterfactual production based on normality. The relationship between counterfactuals and the psychological constructs of ``schema with correction'' and ``goodness'' is examined. A computer simulation in support of our model is included. The paper has been submitted to the Cognitive Science Society Conference 1987 to be held in Seattle, WA. in July. Anyone interested in a copy of the paper, should get in touch with Bob French as follows: french@farg.umich.edu ------------------------------ Date: Thu, 19 Mar 87 8:21:42 EST From: "Dr. Ron Green" (ARO | mort) Subject: White Papers on Basic Research in AI The Army Research Office would be interested in receiving short white papers on proposed "Basic Research" in AI. The pepers should discuss a planned three year research effort with technical content discussing merits of research topic. Mail the white papers to the following address: US Army Research Office P.O. Box 12211 Electronics Division(Attn: Dr. C. Ronald Green) Research Triangle Park, NC 27709-2211 Topics of interest are purely AI as well as related topics as applied to Computer Science. I would prefer the "white papers" as opposed to a deluge of telephone calls. E-mail responses will also be acceptable. green@brl.arpa Thanks Ron Green ------------------------------ End of AIList Digest ******************** From in%@vtcs1 Tue Mar 24 02:36:02 1987 Date: Tue, 24 Mar 87 02:35:54 est From: vtcs1::in% To: ailist@sri-stripe.arpa Subject: AIList Digest V5 #85 Status: R AIList Digest Monday, 23 Mar 1987 Volume 5 : Issue 85 Today's Topics: Seminars - Connectionist Networks as Models of Human Learning (SRI) & Signals to Symbols in Neural Networks (UCB), Courses - Approaches to AI (SU) & Problem Solving, Learning, and Hardware Design (SU), Conference - AAAI-87 Workshop on Real-Time Processing ---------------------------------------------------------------------- Date: Wed, 18 Mar 87 16:19:08 PST From: lansky@sri-venice.ARPA (Amy Lansky) Subject: Seminar - Connectionist Networks as Models of Human Learning (SRI) Anyone interested in giving a talk, please contact Amy Lansky -- LANSKY@SRI-AI. EVALUATING "CONNECTIONIST" NETWORKS AS MODELS OF HUMAN LEARNING Mark A. Gluck (GLUCK@SU-PSYCH) Stanford University 11:00 AM, MONDAY, March 23 SRI International, Building E, Room EJ228 We used adaptive network (or "connectionist") theory to extend the Rescorla-Wagner/LMS rule for associative learning to phenomena of human learning and judgment. In three experiments, subjects learned to categorize hypothetical patients with particular symptom patterns as having certain diseases. When one disease is far more likely than another, the model predicts that subjects will substantially overestimate the diagnosticity of the more valid symptom for the Rare disease. This illusory diagnosticity is a learned form of "base-rate neglect" which has frequently been observed in studies of probability judgments. The results of Experiments 1 and 2 provided support for this prediction in contradistinction to predictions from probability matching, exemplar retrieval, or simple prototype learning models. Experiment 3 addressed representational issues in the design of the network models. 
When patients always have four symptoms (chosen from four opponent pairs) rather than the statistically equivalent presence/absence of each of four symptoms, as in Experiment 1, the network model predicts a pattern of results quite different from Experiment 1. The results of Experiment 3 were again consistent with the Rescorla-Wagner/LMS learning rule as embedded within an connectionist network. VISITORS: Please arrive 5 minutes early so that you can be escorted up from the E-building receptionist's desk. Thanks! ------------------------------ Date: Fri, 20 Mar 87 10:05:47 PST From: admin%cogsci.Berkeley.EDU@berkeley.edu (Cognitive Science Program) Subject: Seminar - Signals to Symbols in Neural Networks (UCB) BERKELEY COGNITIVE SCIENCE PROGRAM Cognitive Science Seminar - IDS 237B Tuesday, March 31, 11:00 - 12:30 2515 Tolman Hall Discussion: 12:30 - 1:30 2515 Tolman Hall ``From Signals to Symbols in Neural Network Models'' Terrence J. Sejnowski Division of Biology California Institute of Technology At the earliest stages of sensory processing and at the final common motor pathways, neural computation is best described as signal processing. Somewhere in the nervous system these signals are used to form internal representations and to make decisions that appear symbolic. A first step toward under- standing the transition from signals to symbols can be made by studying the development and internal structure of massively- parallel nonlinear networks that learn to solve difficult signal identification and categorization problems. The concept of ``feature detector'' is explored in a problem concerning sonar target identification that appears to be solved by humans and network models in similar ways. The concept of a ``semi- distributed population code'' is illustrated by the problem of pronouncing English text in which invariant internal codes emerge not at the level of single processing units, but at the level of cell assemblies. --------------------------------------------------------------- UPCOMING TALKS Apr. 28: Eran Zaidel, Psychology Dept., Brain Research Insti- tute, UCLA --------------------------------------------------------------- ELSEWHERE ON CAMPUS SESAME Colloquium: Robbie Case, Ontario Institute for Studies in Education, Monday, March 30, at 4:00 p.m., 2515 Tolman. ------------------------------ Date: Fri 20 Mar 87 18:11:03-PST From: Nils Nilsson Subject: Course - Approaches to AI (SU) [Forwarded from the Stanford bboard by Laws@SRI-STRIPE.] SEMINAR ANNOUNCEMENT CS 520 ARTIFICIAL INTELLIGENCE RESEARCH SEMINAR APPROACHES TO ARTIFICIAL INTELLIGENCE Tuesdays 11:00 a.m. Terman Auditorium (Televised over SITN) Spring Quarter 1987 Convener: Nils Nilsson The student and/or researcher approaching artificial intelligence cannot fail to note that research is guided by a number of different paradigms. Among the most popular are: approaches based on one form or another of symbolic logic; approaches stressing application-specific data structures and programs for representing and manipulating knowledge; approaches involving machine learning; and approaches based on psychological models of human perception and cognition. There are many variants and combinations of all of these, and each has contributed to our broad understanding of how to build intelligent machines. During this seminar series in 1987, leading exponents of these paradigms will describe the main features of his approach, what it has achieved so far, how it differs from other approaches, and what can be expected in the future. 
TENTATIVE SCHEDULE Mar 31: Nils Nilsson (Stanford), ``Overview of Approaches to AI'' Apr 7: Paul Rosenbloom (Stanford), ``AI Paradigms and Cognition'' Apr 14: Bruce Buchanan (Stanford), title to be announced Apr 21: Vladimir Lifschitz (Stanford), ``The Logical Approach to AI'' Apr 28: Martin Fischler/Oscar Firschein (SRI), ``Representation and Reasoning in Machine Vision'' May 5: Richard Fikes (Intellicorp), ``Reasoning in Frame-Based Representation Systems'' May 12: Terry Winograd (Stanford), ``Is There a Standard AI Paradigm?'' May 19: Hubert Dreyfus (UC Berkeley), ``AI at the Crossroads'' May 26: David Rumelhart (UC San Diego), title to be announced [Will deal with ``connectionism''] June 2: Ed Feigenbaum (Stanford), ``AI as an Empirical Science'' June 9: Doug Lenat (MCC), ``The Experimentalist's Approach to AI: from Learning to Common Sense'' ------------------------------ Date: 20 Mar 1987 1446-PST (Friday) From: Tanya Walker Subject: Course - Problem Solving, Learning, and Hardware Design (SU) [Forwarded from the Stanford bboard by Laws@SRI-STRIPE.] ELECTRICAL ENGINEERING DEPARTMENT-EE392H Problem Solving, Learning, and Hardware Design Spring Quarter, 1987 (3 units) Instructor: Professor Daniel Weeise, CIS 207, 5-3711 Time: Tuesday and Thursday 4:15 to 5:30 pm Place: ESMB 138 The aim of this course is to understand state-of-the-art AI techniques for planning, problem solving, and learning. This course is the starting point for investigating "self-configurable" systems capable of becoming expert problem solvers in given domains. Our particular domain of interest is hardware design. The global problem is automatically creating expert hardware designers for different types of hardware. Extant planners, such as Tweak, Molgen, and Soar will be studied first. We will then look at truth maintenance systems. Then we will investigate the learning and generalization methods of Strips, Soar, Hacker, and similar systems. We will briefly discuss domain exploration (a la Hasse and Lenat) and reflection (a la Smith). We will then investigate using general problem solving methods to solve problems from integrated circuit design. Examples include channel routing, leaf cell generation, logic design, and global routing. We will study two expert systems: Joobbani's Weaver system for channel routing, and Kowalski's DAA system for VLSI design. They will be used as examples of expert systems which might be automatically generated. This will be largely a reading and discussion course. Students will be required to write a term paper. Familarity with basic AI techniques will be assumed. Enrollment is by consent of the instructor. ------------------------------ Date: 22 Mar 1987 18:21-EST From: cross@afit-ab.arpa Subject: Conference - AAAI-87 Workshop on Real-Time Processing The AAAI-87 Workshop committee has approved a workshop to be held on Tuesday, July 14, 1987 entitled "Real-Time Processing in Knowledge-Based Systems." A call for participation follows. Workshop on Real-Time Processing in Knowledge-Based Systems AI techniques are maturing to the point where application in knowledge intensive, but time constrained situations is desired. Examples include monitoring large dynamic systems such as nuclear power plants; providing timely advice based on time varying data bases such as in stock market analysis; sensor interpretation and management in hospital intensive care units, or in military command and control environments; and diagnoses of malfunctions in airborne aircraft. 
The goal of the workshop is to gain a better understanding of the fundamental issues that now preclude real-time processing and to provide a focus for future research. Specific issues that will be discussed include: Pragmatic Issues: What is real-time performance? What metrics are available for evaluating performance? Parallel Computation: How can parallel computation be exploited to achieve real-time performance? What performance improvements can be gained by maximizing and integrating the inherent parallelism at all levels in a knowledge-based system (e.g., application through the hardware levels). Knowledge Organization Issues: What novel approaches can be to maximize the efficiency of knowledge retrieval? Meta-Level Problem Solving: How can intelligent problem solving agents reason about and react to varying time-to-solution resources? What general purpose or domain specific examples exist of problem solving strategies employed under different time-to-solution constraints? What are the tradeoffs in terms of space, quality of solution, and completeness of solution. Complexity Issues: How can an intelligent agent reason about the inherent complexity of a problem? Algorithm Issues: What novel problem solving methods can be exploited? How can specialized hardware (for example , content addressable memories) be exploited? To encourage vigorous interaction and exchange of ideas between those attending, the workshop will be limited to approximately 30 participants (and only two from any one organization). The workshop is scheduled for July 14, 1987, as a parallel activity during AAAI 87, and will last for a day. All participants are required to submit an abstract (up to 500 words) and a proposed list of discussion questions. Five copies should be submitted to the workshop chairman by May 1, 1987. The discussion questions will help the workshop participant's focus on the fundamental issues in real-time AI processing. Because of the brief time involved for the workshop, participants will be divided into several discussion groups. A group chairman will present a 30 minute summary of his group's abstracts during the first session. In addition, the committee reserves the right to arrange for invited presentations. Each group will be assigned several questions for discussion. Each group will provide a summary of their groups discussion. The intent of the workshop is to promote creative discussion which will spawn some exciting ideas for research. Workshop Chairman: Stephen E. Cross, AFWAL/AAX, Wright-Patterson AFB OH 45433- 6583, (513) 255-5800. arpanet: cross@afit-ab.arpa Organizing Committee: Dr. Northrup Fowler III, Rome Air Development Center Dr. Barbara Hayes-Roth, Stanford University Dr. Michael Fehling, Rockwell Palo Alto AI Research Lab Ms. Ellen Waldrum, Computer Science Laboratory, Texas Instruments Dr. Paul Cohen, University of Massachusetts at Amherst Invited Talks From: Dr. Michael Fehling, Rockwell Palo Alto AI Research Lab Dr. Barbara Hayes-Roth, Stanford University *Dr. 
Vic Lesser, University of Massachuesetts at Amherst *tentative ------------------------------ End of AIList Digest ******************** From in%@vtcs1 Thu Mar 26 02:50:29 1987 Date: Thu, 26 Mar 87 02:50:23 est From: vtcs1::in% To: ailist@sri-stripe.arpa Subject: AIList Digest V5 #86 Status: R AIList Digest Wednesday, 25 Mar 1987 Volume 5 : Issue 86 Today's Topics: AI Tools - OPS5 Source & Mickey MICE & Lispm/WorkStation Survey ---------------------------------------------------------------------- Date: 18 Mar 87 23:39:52 GMT From: clyde!masscomp!dlcdev!eric@RUTGERS.EDU (eric van tassell) Subject: OPS5 source Thanks to all the people who sent me OPS5! The response overwhelmed my spool directory. If you too would like OPS email to me and I will email to you. Eric Van Tassell Data Language Corp. 617-663-5000 clyde!bonnie!masscomp!dlcdev!eric harvard!mit-eddie!dlcdev!eric dlcdev!eric@eddie.mit.edu ------------------------------ Date: 17 Mar 87 18:00:54 GMT From: ihnp4!alberta!calgary!arcsun!rob@ucbvax.Berkeley.EDU (Rob Aitken) Subject: Mickey MICE The following is a review of Machine Intelligence Corporation's MICE Expert System, which allegedly runs on IBM PC's with MS-DOS 3.1 or greater. Upon receiving the software, I looked at the directory of disk #1 to see if there were any installation instructions. There were none, so I opted for the manual. Under "Getting Started with MICE" there is a discussion of power failures, with such notable statements as "One can reset a tripped circuit breaker to recover the power", but nothing about installation. Four pages later the installation section begins. In fact, as I was to discover later, there are at least three (mutually inconsistent) sections about installation in the manual. Each informs me that I must create a top-level directory called \MICE and copy "all three" diskettes onto the hard drive. Five diskettes came with the package. I was unsure which three, so I copied all five. The Tutorial session, which lists all program variables, including such dandies as "how,9,p10->p10h" indicated that I must run a program called DEFOPT and provide answers to six verbose questions (containing, for example, "The memron description of atomic facts can be used to store customized prompts for the expert advice consultation"). Naturally, DEFOPT asked seven questions. I guessed that I did not want the "Initial Data Feature". A third section of the manual which covers installation states "If you have followed the procedure up to this point" there should be a directory called \POWER containing files MEMRON1 through MEMRON5. None of the other installation sections mentioned this. It turns out, though, that each knowledge base must be in its own directory and all must contain the elusive MEMRON* files. I crashed the system in a variety of amusing ways for half an hour until finally I discovered, buried deep in an appendix, the statement "MICE cannot coexist with any RAM based software". MS-DOS is RAM-based, but I assume that it qualifies as an exception. Nothing else does, however, and so my network software had to go. With everything else gone, MICE began running. I think I liked the crashes better. After using the program for a while I determined that MICE is not an expert system after all, but rather an adventure game. The goal is to navigate through the rules. Easy steps are, for example, "Please indicate whether LIGHTS CAN BE TURNED ON is relevant to the current situation. 
Respond 'y' for varying degree of certainties and 'n' if LIGHTS CAN BE TURNED ON is irrelvant to our discussion". This turns out to mean "Can the lights be turned on?". A more complex part of the game is guessing the secret key for "Please respond to .CIRCUIT BREAKER TRIPPED". The answer turns out to be "on" or "off". If you become expert at the beginner level of the game, expansions can be purchased all the way up to a 1 Megabyte version. As you may have guessed, the demo system diagnoses power failures. I wonder though, in the event of a real power failure, what good is an electronic expert system? Just asking. The clever people at MIC continue by informing us that MICE is implemented in C because its designers believe that LISP and PROLOG are "not adequate for practical applications" (I suspect this is synonymous with "do not provide nearly enough scope for sleazy programming") and because of the "efficiency of the UNIX operating system". PC's run MS-DOS, not Unix, so I am unsure of the relevance, let alone the veracity, of the preceding statement. In conclusion, MICE is a pathetic expert system. Any self-respecting organization would be embarrassed to be associated with it. There are plenty of cheaper ways to get a good laugh. Rob Aitken {...alberta,...ubc-vision}!calgary!arcsun!rob P.S. Since writing this, MICE has ceased to function altogether, producing messages like "Attempting to close file that failed to open" and writing greek letters all over the screen. Disclaimer: The Alberta Research Council neither affirms nor refutes the above review. ------------------------------ Date: 20 Mar 87 00:55:52 GMT From: mcnc!duke!ravi@seismo.css.gov (Ravi Subrahmanyan) Subject: Re: Mickey MICE (another review) I agree. $20 for MICE is a ripoff. Things they don't tell you in the ad: 1) You need a mouse 2) You need to print out the docs to use it (I became sufficiently discouraged that I didn't waste the paper) It would be nice to have a good system based on semantic nets, but this is not it. The list of features was too good to be true anyway. Michael Lee Gleicher (-: If it looks like I'm wandering Duke University (-: around like I'm lost . . . Now appearing at : duke!ravi (-: Or P.O.B. 5899 D.S., Durham, NC 27706 (-: It's because I am! ------------------------------ Date: Mon, 23 Mar 87 15:05:58 PST From: TAYLOR%PLU@ames-io.ARPA Subject: Summary of Best Lispm/WorkStation Responses This is a summary of responses I received to a request for opinions and experience on the best Lisp Machine (Lispm) or AI workstation. I received techical summaries and internal reports from marketing reps of the following companies: Symbolics, TI and Integrated Solutions (AI workstations). Except for the Symbolics vs. Explorer and Symbolics vs. Xerox comparisons which appeared on the net last year, I received no extensive comparisons of two or more Lispms/WorkStations. I did get responses (positive & negative) from users with opinions (op) and/or experience (ex) on a particular machine. First a short summary, then some detailed comments. machine configuration type positive negative ----------------------- ---- -------- -------- Apple Macintosh II op 1 0 Hewlett-Packard 350 workstation op 2 0 HP-UX, integrated Lisp/Prolog Intel 3086 op 1 0 Golden Common Lisp LMI - 0 0 Sun 3/160, 2/160 diskless ex 3 0 Sun 3/280 server, 16-20 MB memory Symbolics - 0 0 Tektronix 4400 AI Work Station op 1 0 VAX AI Work Station ex 2 1/2 1 1/2 Xerox 1186 0 2 As you can see the response was not great. 
Now some detailed comments: ------------------------------------------------------- From: Malcolm Slaney Organization: Schlumberger Palo Alto Research In article <8703010658.AA21849@ucbvax.Berkeley.EDU> you write: >11. A SUN without disks is useless. No, No, No, No, No. But you must have enough memory so you don't swap. I have a Sun3/160 on my desk with 16M of memory.....I NEVER PAGE or SWAP!!!!! If you do start paging then things lose real fast. Franz Lisp image are small and you can probably get by with less. I think the reason that memory is so critical to the current generation of Sun Lisp's is because of their swapping garbage collection. Every few minutes it must touch every page of your dynamic area. If you have to go to disk then chances are you will flush one of the pages you are currently using (isn't least recently used wonderful???). I have seen Lucid and Franz Common Lisp running anywhere between .5 and 4 times a Symbolics machine. Things are even faster with a Sun3/260. It is safe to say that Sun Lisps have caught up with Symbolics machines on speed. Now if they just had the environment.... [for program development? - wmt] P.S. I keep a Sun on my desk because that is my religion of choice...but whenever I have a real hairy problem debugging my lisp I run to the Symbolics machine. -------------------------------------------------------- From: IN%"beane%bartok.DEC@decwrl.dec.com" 13-FEB-1987 07:06 I suppose somebody working at Digital can be expected to have a very positive opinion of the AI VAXstation, but I do, even so. I especially like the ability to run lots of completely independent processes, especially VMS ones doing mail, file transfer, access to other resources in the network without any LISP overhead. The editor is especially good, compared to other editors on the VAX. I much prefer it over EMACS, which I have stopped using. I've written several editor extensions (eg., menus for common commands) which I'll be glad to send you in hardcopy (to get the screen images). Oh, yeah, I've never used any other machine, so no comparison, just praise. -------------------------------------------------- From: IN%"@charon.mit.edu:meltsner@athena.mit.edu" 25-FEB-1987 19:09 We use Vaxstation II's + Vax LISP (Ultrix) here, and I'm fairly happy. But -- 1) Lucid's LISP is twice as fast on the same machine, given enough memory. 2) Ultrix LISP's don't yet support the window system, although the VMS ones do. 3) DEC memory and disks are notoriously over-priced. Consider buying a minimal system in a BA123 box, and getting an Emulex disk ($3000 MIT price for a 140 meg drive+ controller, installed) and a third-part memory board. Personally, I like the Vaxstation. The machine feels fairly solid, the software doesn't crash and everything install very easily. DEC service is expensive, but ubiquitous. In general, the Symbolics stuff is wonderful if you are willing to devote the guru time to keeping it running. DEC has an acceptable product which seems much easier to support. I have never managed a cluster of LISPM's, but I do manage our 5 machines in not much more than 3-4hours/week (I don't do backups....). --------------------------------- From: "Christoph M. Hoffmann" We used the 1180 and 1186 Xerox dandilions at Cornell for a year or so in our research on solid modeling. We wern't too thrilled about them, because of the poor floating point handling and because they were very hard to learn. 
The trouble with floating point was software related: The compiler boxed everything, so it made a lot of work for the garbage collector. Also the fp precision turned out troublesome. The net effect was that we couldn't work with surfaces of (algebraic) degree 4 or higher, which excluded for example the torus. Here at Purdue we now use Symbolics machines, and so does Cornell. --------------------------------- Well, after all that, what are we going to do? We now have a configuration of a Symbolics file server, serving: 4 - Symbolics 3640's 2 - Symbolics 3620's 1 - Symbolics 3670 2 - LMI 2+2 (3 lisp, 1 unix) 1 - Xerox 1186 1 - TI Explorer We also have a beta-test machine from Integrated Inference Machines and a VAX 780/VMS with Franz, Interlisp, OPS-5, MRS, ITP, etc. We are tentatively thinking of expanding our facility by adding: 3 - Xerox 1186's. Reasons : ease of learning, superior windowing, Common Loops, NoteCards, Xerox PARC innovation, inexpensive 1 - TI Explorers. Reasons : ease of learning, different yet similar to Symbolics, integrated Lisp/Prolog, source code 2 - Sun 3/260 diskless and Sun 3/260 server. Reasons: fast, NeWS, numeric speed, work station versatility, easy access to other languages It is worth noting that Symbolics, TI Explorer & Sun 3/260 are all in the same price ballpark. Xerox is considerably less, however it is not designed for the development of large systems. Hope that this is of benefit - Will P.S. Please send me your comments - thanks -------------------------------------------------------------------------- Will Taylor - Sterling Software, MS 244-17, AI Research & Applications Branch NASA-Ames Research Center, Moffett Field, CA 94035 arpanet: taylor@ames-pluto.ARPA usenet: ..!ames!plu.decnet!taylor phone : (415)694-6525 ------------------------------ End of AIList Digest ******************** From in%@vtcs1 Sat Mar 28 03:00:24 1987 Date: Sat, 28 Mar 87 03:00:14 est From: vtcs1::in% To: ailist@sri-stripe.arpa Subject: AIList Digest V5 #87 Status: R AIList Digest Wednesday, 25 Mar 1987 Volume 5 : Issue 87 Today's Topics: Policy - American Militarism, Comments - Limitations of AI/Expert Systems & Explanation Capability, Application - Analysis of Unknown Data ---------------------------------------------------------------------- Date: Fri, 20 Mar 87 14:28:46 +0100 From: mcvax!cwi.nl!tomi@seismo.CSS.GOV (Tetsuo Tomiyama) Subject: Policy - American Militarism In Article 1432 of mod.ai, Chris Elsaesser (elsaesser%mwcamis@MITRE.ARPA) submits a Call for Papers > Issues Concerning AI Applications To Battle Management > > University of Washington > Thursday, July 16, 1987 > > Sponsored by AAAI > > Success in applying AI technologies to battle management (e.g., production > and blackboard systems for sensor fusion, constraint propagation for > non-temporal planning tasks) has generated growing interest in the defense > community in developing intelligent battle management aids, workstations, > and systems. Along with this growing interest, there has been an order of > magnitude increase in funding for battle management AI projects (e.g., > Army-DARPA's Air-Land Battle Management, SAC-JSTPS-RADC-DARPA's > ....... First of all, I really feel doubts about the policy of AAAI to sponsor such a BLOODY nonsense, but since this is not the right place to criticize AAAI, I don't write about it. (Since I am a member of AAAI, I reserve my right to do so, though.) Now, I am strongly against such a posting circulated ALL AROUND THE WORLD through the net. 
Of course, I personally do hate such BLOODY research and at least I won't do such things. But, this is absolutely my personal opinion and I know that anyway I don't have power big enough to stop it. So, as far as it remains an AMERICAN MATTER, I don't bother guys over there. However, mod.ai is broadcasted to all over the world and I really do not want to see OUR computer networks are used to promote such BLOODY NONSENSE which may contribute only to destroying everything. I think this kind of postings should be even prohibited from the world wide net distribution. You (in plural) should be aware that there are lots of people who work hard for peace and many scientists and engineers are against the use of modern technology for military purposes even in the American AI community. --Tetsuo Tomiyama (UUCP: tomi@cwi.nl) [I don't believe that this message is offensive to the general AI community. I regret that it offends you, but can't censor all such material to suit your preferences. I can only offer you the same channel for stating your own position. AIList is a global channel. It could be limited to just the Arpanet, as it once was, but that would not be in the best interest of all involved. -- KIL] ------------------------------ Date: Mon, 23 Mar 87 18:04:56 +0100 From: mcvax!cwi.nl!tomi@seismo.CSS.GOV (Tetsuo Tomiyama) Subject: Re: Re: Submission to mod.ai [...] I am not saying that you are prohibited from military research. I am saying that it is all up to you whether you take part in military research or not. However, since there are people who do not like military research, just like there are people who do not want to see pornography in public or who do not want to get somebody else's smokes in a public space, you should not at least promote military research in public. I propose, therefore, to submit postings relevant to militarism should NOT be PROHIBITED but at least requested to be MARKED as military related article at the responsibility of original authors (rather than by the moderator), just like advertisements from tobacco companies, so that if I don't want to read it I can skip it. [...] ... so please give us a method to recognize military related articles as soon as possible. --Tetsuo Tomiyama (UUCP: tomi@cwi.nl) [This is the first time this matter has come up in four years of AIList. It does not seem to be a problem for the vast majority of readers, but you are welcome to try convincing submitters of defense-related messages to add a keyword to their headers. AIList is primarily an Arpanet discussion list. The Arpanet was developed by the military, is supported by the military, and is intended for defense-related communication among military contractors. One could assume that all Arpanet messages are military in nature, although that heuristic does not seem very useful in the case of AIList. What is really needed here is an intelligent mail reader that screens your messages and adds the appropriate keywords. -- KIL] ------------------------------ Date: Mon, 23 Mar 87 13:35:16 cst From: lugowski%resbld%ti-csl.csnet@RELAY.CS.NET Subject: Oxymoron: Real-time Knowledge-Based Nurse/Nuclear Plant Operator Regarding the following... Date: 22 Mar 1987 18:21-EST From: cross@afit-ab.arpa Subject: Conference - AAAI-87 Workshop on Real-Time Processing Workshop on Real-Time Processing in Knowledge-Based Systems AI techniques are maturing to the point where application in knowledge intensive, but time constrained situations is desired. 
Examples include monitoring large dynamic systems such as nuclear power plants... sensor interpretation and management in hospital intensive care units... Desired by whom? I wouldn't trust AI techniques with monitoring large dynamic systems of the class of a medium-sized municipal toilet. I would certainly want out of any ICU where my fragile well-being did not depend on an ICU nurse, overworked as though he or she may be. The AI community has had up to now the good sense of relegating its really questionable achievements to the battlefield, where they are fondly appreciated. Let's not get too greedy by introducing the battlefield to our rather safe nuclear plants and ICUs. -- Marek Lugowski Texas Instruments lugowski%crl1@ti-csl.csnet ------------------------------ Date: 22 Mar 87 21:14:25 GMT From: tektronix!tekcrl!vice!tekfdi!videovax!dmc@ucbvax.Berkeley.EDU (Donald M. Craig) Subject: Re: AI Project Information Request Well, I'm probably over reacting to what will end up being nothing more than a spelling checker, but I find the thought of having creative writing graded by a computer program appalling. It's particularly pernicious in the public school system, where penalties for failure to conform to some computer program's judgement of style and content are brought to bear. The best and most universal writing is about the human condition. What does a computer program (or indeed its artificially intelligent author) know about that? What would it do with... James Joyce? William S. Burroughs? Anthony Burgess? Ogden Nash? What would happen to literary experiment? Would there be an image processing version that graded Picasso? It's bad enough that some smartass robot comes up to me at trade shows pedalling product, or some auto-dialer phones me while I'm in the shower to sell carpet cleaner, but these uppity machines I can be rude to and ignore. The one that's marking my school essays I cannot. In law I have the right to be judged by a jury of my peers. In school I demand that same right. I will NOT be judged by a machine. Yours for a better tomorrow, Don Craig Whose opinions are his own. -- Don Craig dmc@videovax.Tek.COM Tektronix Television Systems ... tektronix!videovax!dmc ------------------------------ Date: 23 Mar 87 15:22:32 GMT From: mcvax!ukc!warwick!gordon@seismo.css.gov (Gordon Joly) Subject: Explanation and Justification. In answer to the question "does an expert system need to be able to explain itself to be useful", consider teaching. Anyone who has taught knows, to teach something (ie to explain it to a class), you really need to understand the issues, before you can begin to get them across. Also, in the process of teaching itself, ones own understanding is often deepened. Gordon Joly -- {seismo,ucbvax,decvax}!mcvax!ukc!warwick!gordon ------------------------------ Date: 18 Mar 87 20:48:17 GMT From: hpcea!hpfcdc!hpldola!hpldolm!ben@hplabs.hp.com (Benjamin Ellsworth) Subject: Re: analysis of unknown data I have two comments on this discussion; the first is general the second is specific. My first comment on this whole discussion, as I understand it, is that it is silly. We are being asked to find "the" meaning of some large file without any context for the file. Is it text? Is it integer data? Is it floating point data? Is it encrypted in any way? The search for meaning in the absence of context is a waste of time. (In essence, I agree with M. B. Brilliant as follows.) What is meaningful in one context is often not meaningful in another. However, sometimes, it is. 
A file full of integer measurement data will usually be indistinguishable from a file of a bit-mapped color image. A bunch of integers is a bunch of integers (unless some *recognizable* context information is included). If you take a group of integers and make a pretty picture with them, what will you do when I tell you that they were process measurements from a ball-bearing factory? What will you do when you interpret a Mandelbrot image as a bad lot of wafers in an otherwise well controlled fab? I'm sure that you would like to say that you can't make a pretty picture with ball bearing data. Perhaps not in every case, but I know of a gentleman who *sells* "art" generated from HP stock performance data. He has given some stock data meaning in a new context. The best response to this question was the one from Mr. Adrian who suggested that you look for the context(s) that the file was used in. If you can't find the correct context, you cannot ascertain the correct meaning. If the data exists in a vacuum, you can choose whatever context that you wish and with enough massaging you can make the data meaningful. Second comment: > Testing for randomness might be the first test; sure would save Random is too loose of a term. Are they "random" samples from a uniform distribution, or "random" samples from a Gaussian distribution? In either case is the distribution a real population, or a mathematical model of a distribution function? I don't want to sound like a flame, but testing for randomness is ridiculous! You *cannot* prove a set of data to be "random." In fact the key to some encryption schemes is to make a dataset appear "random" to most simple minded tests. This does not mean that there is no information in the data. It just means that the context of the information is well hidden from such simple minded filters. What you are saying when you say that you will test for randomness is that you will test to see if the data is meaningful in any known context. Do you know all possible contexts? Will you live long enough to test for all of them? What happens when the data is meaningful in more than one context? --------- Benjamin Ellsworth hplabs!hpldola!ben (303) 590-5849 P.O. Box 617 Colorado Springs, CO 80901 2+2=4 (void where prohibited, regulated, or otherwise restricted by law) ------------------------------ Date: 23 Mar 87 19:01:10 GMT From: dave@mimsy.umd.edu (Dave Stoffel) Subject: Re: analysis of unknown data In article <11160001@hpldolm.HP.COM>, ben@hpldolm.HP.COM (Benjamin Ellsworth) writes: > My first comment on this whole discussion, as I understand it, is that > it is silly. We are being asked to find "the" meaning of some large > file without any context for the file. Is it text? Is it integer > data? Is it floating point data? Is it encrypted in any way? The > search for meaning in the absence of context is a waste of time. Maybe I am at fault for inadequately describing the problem, but it is neither silly nor a waste of time. Apart from these two comments and the later one about test for randomness being ridiculous, Ben's comments are helpful in further detailing the possibilities. > What is meaningful in one context is often not meaningful in another. > However, sometimes, it is. A file full of integer measurement data will > usually be indistinguishable from a file of a bit-mapped color image. > A bunch of integers is a bunch of integers (unless some *recognizable* > context information is included). 
If you take a group of integers and > make a pretty picture with them, what will you do when I tell you that > they were process measurements from a ball-bearing factory? What will > you do when you interpret a Mandelbrot image as a bad lot of wafers > in an otherwise well controlled fab? > I'm sure that you would like to say that you can't make a pretty > picture with ball bearing data. Perhaps not in every case, but I know > of a gentleman who *sells* "art" generated from HP stock performance > data. He has given some stock data meaning in a new context. I wouldn't like to say you can't have multiple representations of a set of data poin However, one man's "art" is simply another man's pictoral or imagic presentation of stock data. (Particularly if the raw stock data was not convaluted by the artist). In fact, it might be a useful presentation for certain kinds of trend analysis. > The best response to this question was the one from Mr. Adrian > who suggested that you look for the context(s) that the file > was used in. If you can't find the correct context, you cannot > ascertain the correct meaning. If the data exists in a vacuum, you can > choose whatever context that you wish and with enough massaging you > can make the data meaningful. Certainly there is a pitfall in the analytic process; one may "discover" meaning that was not the intent of the creator of the data. So it goes, sometimes. "finding the correct context" and "finding the meaning" are the same thing! > Random is too loose of a term. Are they "random" samples from a > uniform distribution, or "random" samples from a Gaussian distribution? > In either case is the distribution a real population, or a mathematical > model of a distribution function? > I don't want to sound like a flame, but testing for randomness is > ridiculous! You *cannot* prove a set of data to be "random." In fact > the key to some encryption schemes is to make a dataset appear "random" > to most simple minded tests. This does not mean that there is no > information in the data. It just means that the context of the > information is well hidden from such simple minded filters. Hmm. I think what I mean is that if the data set appears to be a Gaussian distribution, then I'm not going to apply any other tests. > What you are saying when you say that you will test for randomness is > that you will test to see if the data is meaningful in any known > context. Do you know all possible contexts? Will you live long enough > to test for all of them? What happens when the data is meaningful in > more than one context? I can't possibly imagine all conceivable or theoretic contexts. I can imagine too many to try. I am looking for an analytic process that is more efficient than enumerating all the context tests I can imagine. If multiple context tests yield "reasonable" representations, I might just have to flip a coin or allow for all interpretations. I never said that the data has no context! I simply said that I don't know a-priori what its context is. It *is* the case that data points can be analysed in the absence of knowledge of the structure of the function which produced them. The object is to detect patterns, if possible, and search for "meaningful" interpretations. Some of the discussion of this subject sounds like the participants are frustrated by these two facts: 1. I *won't* live long enough to apply every possible context test. (Discovery by enumeration). and 2. 
they don't know of any more efficient methodology than discovery by enumeration, ergo the problem is silly or a waste of time. -- Dave Stoffel (703) 790-5357 seismo!mimsy!dave dave@Mimsy.umd.edu Amber Research Group, Inc. ------------------------------ End of AIList Digest ******************** From in%@vtcs1 Sat Mar 28 03:01:09 1987 Date: Sat, 28 Mar 87 03:01:04 est From: vtcs1::in% To: ailist@sri-stripe.arpa Subject: AIList Digest V5 #88 Status: R AIList Digest Thursday, 26 Mar 1987 Volume 5 : Issue 88 Today's Topics: Conference - Genetic Algorithms, Seminars - NUPRL as a Framework for Defining Logics (UPenn) & Natural Deduction Meets Schubert's Steamroller (SMU) & Parallel Production System Algorithms (UTexas) ---------------------------------------------------------------------- Date: Tue, 24 Mar 87 14:04:55 est From: John Grefenstette Subject: Genetic Algorithms Copies of the Proceedings of the First International Conference on Genetic Algorithms, held at Carnegie-Mellon in 1985 can be obtained by sending me your US Mail address. The 2nd GA Conference, sponsored by AAAI, the Navy Center for Applied research in AI, and Bolt Beranek and Newman, will be held July 28-31, 1987, at MIT. For registration forms and info concerning local arrangements, contact: Mrs. Gayle M. Fitzgerald Conference services Office Room 7-111 MIT 77 Massachusetts Ave. Cambridge, MA 02139 If you would like to submit a paper to the Conference, please send three copies of the paper to: John J. Grefenstette Navy Center for Applied Research in AI Naval Research Lab Washington, DC 20375-5000 (202) 767-2685 Arpanet: gref@NRL-AIC.ARPA The program committee will review papers starting April 10. Final camera ready versions will be due May 30. -- JJG ------------------------------ Date: Mon, 23 Mar 87 08:24:18 EST From: tim@linc.cis.upenn.edu (Tim Finin) Subject: Seminar - NUPRL as a Framework for Defining Logics (UPenn) From: dale%linc.cis.upenn.edu@cis.upenn.edu Math/CS Logic Seminar University of Pennsylvania RECENT RESULTS ABOUT NUPRL: USING NUPRL AS A FRAMEWORK FOR DEFINING LOGICS. Robert Constable Cornell University Abstract: Nuprl can be used to define natural deduction style logic. We will also mention other recent results about the Nuprl type theory such as those about representing partial functions. Math/Physics Building (DRL) 4th floor Math Seminar Room Monday 23 March 87, 10:30am ------------------------------ Date: Tue, 24 Mar 1987 18:21 CST From: Leff (Southern Methodist University) Subject: Seminar - Natural Deduction Meets Schubert's Steamroller (SMU) Past Seminar, Southern Methodist University, Department of Computer Science Natural Deduction meets Schubert's Steamroller Frank Vlach Texas Instruments Schubert's Steamroller is a test problem for automatic theorem provers that has attracted a lot of attention recently, and has proved difficult for resolution theorem provers. A human theorem prover would prove Schubert's Steamroller using a `natural' but mechanical and totally non-creative method that is readily programmable and quite different from resolution. Hand computations indicate that this strategy is much less complex than resolution for Schubert's Steamroller and a number of similar problems. An implementation is in progress in order to compare this method with resolution (and other methods) over a wide range of problems. 
This strategy also has the advantage that it requires no preprocessing of formulas (such as Skolemization or conversion to clausal form), and lends itself to the generation of natural proofs, readable by humans. ------------------------------ Date: Tue 24 Mar 87 14:23:26-CST From: Adam Farquhar Subject: Seminar - Parallel Production System Algorithms (UTexas) The COMPUTER SCIENCES GRADUATE STUDENT COUNCIL PRESENTS Daniel P. Miranker Recent Developments in Parallel Production System Algorithms at the CSGSC BROWN-BAG SEMINAR Friday, March 27, 12:00 Noon Tay 2.106 All Students and Faculty are invited. Okay to bring your lunch. The development of a parallel production system interpreters may be seperated into three nearly independent facets, low-level matching, partitioning of the rule base and synchronizing the partitions. This talk will address the partitioning issue. A problem associated with parallelizing production system execution is that on any given production system cycle only a small subset of the rules require processing. Worse, on a given cycle, often the processing requirements for a single rule will completely dominate the execution time. "Copy and constrain" is a method by which the processing requirements for matching a single rule may be distributed over many processors. This method has been shown to very effectively reduce the variance of the match times of different rules. Further, this method has implications for the fault tolerant execution of production systems. It appears, due to increased processor utilization, that fault tolerance may be introduced into a parallel production system interpreter without modification of the hardware and without significant performance degradation. ------------------------------ End of AIList Digest ******************** From in%@vtcs1 Sat Mar 28 03:02:00 1987 Date: Sat, 28 Mar 87 03:01:55 est From: vtcs1::in% To: ailist@sri-stripe.arpa Subject: AIList Digest V5 #89 Status: R AIList Digest Saturday, 28 Mar 1987 Volume 5 : Issue 89 Today's Topics: Queries - ECOOP'87 & OPS5 for the SUN 3's & AI Expert Sources & Object Recognition, AI Tools - Genetic Algorithms, Expert Systems - Explanation and Justification & Capabilities, Application - Text Critiquing ---------------------------------------------------------------------- Date: 25 Mar 87 15:45 CET From: Gabriel_Barta_DEC%EUROKOM@MIT-MULTICS.ARPA Reply-to: Gabriel_Barta_DEC%EUROKOM@MIT-MULTICS.ARPA Subject: Info on Advanced Program ECOOP'87, June 15.-17. We would like to get more information on the seminar mentioned above. Where can we register, who is the organizer. Please send info to @ECC.DEC: isakson or phone Nikola Storp [49[(89)9591-1122 at Digital Equipment GmbH, Munich. Best regards, Nikola Storp ------------------------------ Date: 25 Mar 87 18:02:52 GMT From: clyde!mcdchg!wucs1!wucs2!posdamer@rutgers.rutgers.edu (Jeff Posdamer) Subject: OPS5 for the SUN 3's We are seeking a source for a compiled version of OPS5 that will run on the SUN 3/160. Any help would be appreciated. Please reply by e-mail to: ..!{ihnp4,seismo}!wucs!posdamer Thanks! ------------------------------ Date: 25 Mar 87 18:15:42 GMT From: ssc-vax!bcsaic!phyllis@BEAVER.CS.WASHINGTON.EDU (Phyllis Melvin) Subject: 1986 AI Expert Sources Can someone tell me where to get copies of November and December AI Expert magazine sources? 
-- Phyllis Melvin uucp: ...uw-beaver!uw-june!bcsaic!phyllis (206)865-3293 arpanet: phyllis@boeing.com ["imagen!turner"@ucbvax.berkeley.edu has been sending the source files to the Usenet comp.ai stream and may also be placing them in comp.sources. Bitnet redistribution is being handled by Streiff%HARTFORD.BITNET@WISCVM.WISC.EDU. I have last year's sources in file AIE.SRC and the Jan-Mar sources in AIE2.SRC; Arpanet readers can FTP them from directory on SRI-STRIPE. -- KIL] ------------------------------ Date: 27 Mar 87 12:26:44 EST From: BIESEL@RED.RUTGERS.EDU Subject: Information request for object recognition papers. I would appreciate pointers and references to current work in object recognition. My group is beginning work in the automation of visual database design for real-time image generators. These databases consist of polygonal approximations of real-world objects (everything from houses to bushes). Currently, individual objects are constructed by hand, using models, maps, photographs, graph paper, geometry, and lots of time and patience. We would like to develop a modeling station which can extract the basic geometry of objects from sets of photographs, and which can produce good approximations to polygonal models of the regular structures, such as buildings and other cultural features, which it recognizes in the source photographs. We expect that such a system will require some operator assistance for resolving ambiguities, at least initially, but even such a system would be of great help in the modeling task. Although we have some papers of current work, please assume that we are completely ignorant about who is doing what, and what the state of the art is, and forward all references to me. I realize that there are probably several netlists which are relevant, but I've not kept in touch with these. Pointers to the more active and relevant of these are also appreciated. I will summarize the responses if they are sufficiently general for this audience, and if the volume of replies warrants it. Many thanks in advance. Heiner BIESEL@RUTGERS [The best collections of papers are the DARPA Image Understanding Workshops. The February '87 proceedings have been made available to the general public. Much of this work is oriented toward aerial cartography (as well as target recognition). Other good papers have appeared in recent vision conferences such as PRIP/CVPR/ICCV and in journals such as IEEE PAMI and CVGIP. Some of the most pertinent work is being carried out at SRI by Pascal Fua and Andy Hanson. They have developed ways of extracting rectilinear objects (i.e., buildings of complex shape) and are extending their techniques to identify roads and vegetation. One of the inputs to their system is a segmentation map derived from my own work in computer vision. -- KIL] ------------------------------ Date: 25 Mar 87 17:42:00 GMT From: convex!bernhart@a.cs.uiuc.edu Subject: Re: Genetic Algorithms The Proceedings of the conference are copyrighted by John J. Grefenstette the editor. At the time of the conference (and perhaps now) he was at Vanderbilt University. You could contact him about procuring the book, or contact John Holland, the conference chairman, at the University of Michigan. Your university library should be able to assist with procurement of these proceedings and any doctoral dissertations you might need. They probably have extensive inter-library loan resources. Again, good luck! Marcia Bernhardt Convex Computer Corp. 
------------------------------ Date: 26 Mar 87 18:21:31 GMT From: allegra!dougf@ucbvax.Berkeley.EDU (Doug Foxvog) Subject: Re: Genetic Algorithms In article <63800001@convex> bernhart@convex.UUCP writes: > >Your note is the first reference I've seen to any conference on genetic >algorithms. I'd love to get my hands on those proceedings, too! Who >sponsored the conference? Where was it held? If I learn anything more, >I'll respond here. If you find out any more, I'll look out for a follow- >up response from you. I'd like to hear of any progress you make in your >research. > The "International Conference on Genetic Algorithms & their Applications" was held July 24-26, 1985, at Carnegie-Mellon University. It was jointly sponsored by Texas Instruments & the US Navy Center for Applied Research in Artificial Intelligence (NCARAI). The editor was Professor John Grefenstette at Vanderbilt University. I took a course on Genetic algorithms from Professor Grefenstette last year. However, I believe that he has moved to another school by now. Vanderbilt should be able to point you to him, and he has copies of the proceedings. -- doug foxvog ihnp4!allegra!lcuxlj!dougf if only Bell Labs would agree with my opinions... For NSC line eaters: Names of drug dealing CIA agents working on TEMPEST for NRO encrypted above. ------------------------------ Date: 27 Mar 87 12:49 EST From: denber.wbst@Xerox.COM Subject: Re: Explanation and Justification "does an expert system need to be able to explain itself to be useful" No. - Michel ------------------------------ Date: Fri, 27 Mar 87 10:20:41 GMT From: Martyn Thomas Reply-to: ...seismo!mcvax!ukc!praxis!mct (Martyn Thomas) Subject: Re: Oxymoron: Real-time Knowledge-Based Nurse/Nuclear Plant Operator In article <8703250728.AA21290@ucbvax.Berkeley.EDU> lugowski%resbld@ti-csl.CSNET writes: > I wouldn't trust AI techniques with monitoring large dynamic >systems of the class of a medium-sized municipal toilet. I would certainly >want out of any ICU where my fragile well-being did not depend on an ICU >nurse, overworked as though he or she may be. The AI community has had up >to now the good sense of relegating its really questionable achievements to >the battlefield, where they are fondly appreciated. Let's not get too greedy >by introducing the battlefield to our rather safe nuclear plants and ICUs. > > -- Marek Lugowski > Texas Instruments > lugowski%crl1@ti-csl.csnet I strongly agree. Any safety-critical system should have certain characteristics: it should be rigorously specified (AT LEAST the safety aspects); it should be possible to reason rigorously about the implementation, to convince others that it matches the specification; it should be developed using QC/QA techniques that guarantee an audit trail so that any faults discovered after development can be traced to their cause. These considerations dictate the use of mathematically rigorous methods, and a certified Quality Assurance regime. Does anyone know of an AI system which measures up? Please reply by mail - I'll summarise. Martyn Thomas mct%praxis.uucp@ukc.ac.uk Praxis Systems plc ...seismo!mcvax!ukc!praxis!mct 20 Manvers Street, Tel: +44 225 335855 BATH BA1 PX England. Fax: +44 225 65205 (Groups 2&3) ------------------------------ Date: 25 Mar 87 01:44:00 GMT From: kadie@b.cs.uiuc.edu Subject: Re: AI Project Information Request Automatic checking and automatic grading are different things. I think <<* 3. 
WEAK: I think *>>^ automatic computer checking is a good thing, especially for spelling and simpler grammar. But there is no reason to grade automatically, just let the students ^<<* 23. SENTENCE BEGINS WITH BUT *>> work on their papers (with the automatic checker) until they are satisfied. <<* 21. PASSIVE VOICE: are satisfied. *>>^ <<* 17. LONG SENTENCE: 24 WORDS *>>^ Then have them turn in their work and the final computer critique to a human grader. The situation is similar to programming, where the compiler automatically checks the syntax. It would be unthinkable to make people turn in programs without letting them compile the programs first. On the other hand it would unthinkable to leave a syntax error in when the compiler tells you right were it is. <<** SUMMARY **>> READABILITY INDEX: 10.42 Readers need a 10th grade level of education to understand. STRENGTH INDEX: 0.41 The writing can be made more direct by using: - the active voice - shorter sentences DESCRIPTIVE INDEX: 0.65 The use of adjectives and adverbs is within the normal range. JARGON INDEX: 0.00 SENTENCE STRUCTURE RECOMMENDATIONS: 1. Most sentences contain multiple clauses. Try to use more simple sentences. << UNCOMMON WORD LIST >> The following words are not widely understood. Will any of these words confuse the intended audience? CRITIQUE 1 SYNTAX 2 UNTHINKABLE 2 << END OF UNCOMMON WORD LIST >> Carl Kadie University of Illinois at Urbana-Champaign UUCP: {ihnp4,pur-ee,convex}!uiucdcs!kadie CSNET: kadie@UIUC.CSNET ARPA: kadie@M.CS.UIUC.EDU (kadie@UIUC.ARPA) ------------------------------ End of AIList Digest ******************** From in%@vtcs1 Tue Mar 31 02:44:55 1987 Date: Tue, 31 Mar 87 02:44:45 est From: vtcs1::in% To: ailist@sri-stripe.arpa Subject: AIList Digest V5 #90 Status: R AIList Digest Saturday, 28 Mar 1987 Volume 5 : Issue 90 Today's Topics: Neural Networks - Newsletters, Discussion Lists - Symbolic Math List & Impact of Information Services, Policy - Censorship & Militarism, Review - Spang Robinson Report, March 1987 ---------------------------------------------------------------------- Date: 12-MAR-1987 From: GATELY%CRL1@TI-CSL.CSNET Subject: Newsletters for Neural Networks [Forwarded from the Neuron Digest by Laws@SRI-STRIPE.] This message is ment simply to inform the reader of two monthly newsletters which seem to be focusing on neural networks. The first is named "Intelligence," is edited by Edward Rosenfeld, and is available for $295 per year (published monthly). The address for more information (and perhaps a free copy) is POBox 20008, New York, NY 10025, (212) 749-8048. The second newsletter is titled "Neurocomputers," is edited by Derek F. Stubbs, and is available (on a new member basis?) for US$24 (USA, Canada, and Mexico) or US$32 (all other countries) per year (published bi-monthly). The address is: NEUROCOMPUTERS, Gallifrey Publishing, POBox 155, Vicksburg, Michigan 49097. Intelligence seems to be an older (seasoned) newsletter, dealing with all aspects of AI - but focusing on neural networks. The issue of Neurocomputers that I have (V1 #1) has a wide variety of NN items (news, books, results). I have no ties with either of these newsletters! ------------------------------ Date: Thu, 26 Mar 1987 19:11 CST From: Leff (Southern Methodist University) Subject: Symbolic Math List I am now taking over symbolic math list editor/moderator responsibilities. Please send your submissions to sym-list%smu@csnet-relay. Sym-list-request%smu@csnet-relay will be for administrative messages. 
People with access to bitnet may wish to send mail to my personal account: E1AR0002@SMUVM1. However, in the event of a change of moderator, only the [Arpanet] addresses will be automatically forwarded. I will be copying materials posted in the USENET group sci.math.symbolic to the ARPANET/CSNET and BITNET mailing lists and vice versa. Needless to say, I will filter out irrelevant or otherwise inappropriate materials from the mailing list. Routine queries, such as those asking for ordering information on various symbolic math systems, will be handled personally and not forwarded. ------------------------------ Date: Thu, 26 Mar 1987 19:11 CST From: Leff (Southern Methodist University) Subject: Impact of Information Services Source: Information Week, March 23, 1987, Page 15 In a survey of DP managers at the largest companies, 92% said that overnight delivery had high impact on their operations while only 39% saw E-mail that way. 75% saw facsimile transmission as high impact and 9% said video conferencing was. ------------------------------ Date: Wed, 25 Mar 87 11:31:11 PST From: pyramid!ctnews!mitisft!markb@decwrl.DEC.COM Subject: censoring mod.ai ? In article <8703231704.AA12675@boring.cwi.nl>, tomi@cwi.nl (Tetsuo Tomiyama) writes: While I sympathize with your distaste for military matters, I suggest that it is in everyone's best interest to be constantly aware of what the military is doing with AI. Hiding one's head in the sand will not make matters any better. So keep the articles coming. The 'n' key works just fine if you're really offended. ------------------------------ Date: Thu, 26 Mar 87 09:46:24 pst From: marks@ads.ARPA (Phil Marks) Subject: AMERICAN-MILITARISM > > Date: Fri, 20 Mar 87 14:28:46 +0100 > From: mcvax!cwi.nl!tomi@seismo.CSS.GOV (Tetsuo Tomiyama) > Subject: Policy - American Militarism > > Now, I am strongly against such a posting circulated ALL AROUND THE > WORLD through the net. [...] I think > this kind of postings should be even prohibited from the world wide > net distribution. [...] > > I propose, therefore, to submit postings relevant to militarism should > NOT be PROHIBITED but at least requested to be MARKED as military > related article at the responsibility of original authors (rather than > by the moderator), just like advertisements from tobacco companies, so > that if I don't want to read it I can skip it. re AMERICAN-MILITARISM: Very Interesting...that we should get such an opinion from a Japanese. A review of recent history shows that Japan's main contribution to the 20th century has been a series of brutal attempts to subjugate its neighbors (China, Korea, the Philippines, etc). The only reason that Japan was not able to impose its barbarianism on these peoples was AMERICAN-MILITARISM. If it had not been for AMERICAN-MILITARISM the infamous and cowardly attack on Pearl Harbor might have ultimately lead to the subjection of America to the same atrocities as Japan's other victims. It was the Americans (including the American military) which rebuilt Japan from a devastated military dictatorship and tried to give the Japanese people a chance at the opportunities and responsibilities of freedom...a lesson which is apparently totally lost on mr tomiyama. It is common practice today for the adherents of all stripes of totalitarianism to decry AMERICAN-MILITARISM because it is the ONLY thing which stands between them and their goal of world domination. If they can get us to reduce our strength and vigilance then they can resume where they left off 40 years ago. 
Philip Marks [That's a bit strong, isn't it? Mr. Tomiyama can hardly be accused of desiring world domination just because he's an ardent pacifist. I'm sure that many Japanese have learned the lessons you mention. The new generations of Japanese are no more responsible for, or necessarily prone to, the excesses of past leaders than I am responsible for the past mistreatment of Native Americans, Negroes, or Orientals in this country. AIList is a good forum for debating the linkage of AI and militarism, but let's not debate militarism per se. And for the record, I am the one who chose the title "American Militarism". I think the original was just called "Submission for mod.ai". My choice of title still seems appropriate, but I'm sorry if it rankled anyone. -- KIL] ------------------------------ Date: Fri, 27 Mar 87 02:31 EDT From: STANKULI%cs.umass.edu@RELAY.CS.NET Subject: AI militarism perhaps i am stepping out of my league here, but i feel that Ken Laws and AILIST should be encouraged to circulate information about all AI-related applications, especially whatever military issues can be circulated among us. tactics is the use of force (a semantics on a fundamental physical property), and the evolutionary development of it has been an ongoing process which has been around as long as there has been animal life on this planet. strategy is a metalanguage on tactics; and, according to clausewitz (1832, On War), policy is a metalanguage on strategy. human lives have been lost or saved through the application of these principles. to think that an artificial intelligence can avoid or ignore the fact that force can break structure is naive. hide-your-head-in-the-sand moralities which seek to deny the validity of tactics almost always begin with the preamble "IF nobody used force..." democratic societies are directly based on the ability of the populace to make informed decisions on policy. censorship (the selective hiding of information) is an inherent evil in a society which tries to distribute political power across the widest possible base. i believe that atomic weapons have been so judiciously unused because of the widespread knowledge of their lethality. the only tactical use of them took place when their existence was security classified and controlled by a few people. if AI has the power we believe it does, then the safest use is that which circulates the information to the widest possible audience-- especially to those who have do-not-use viewpoints. rather than trying to embarrass our military sponsors when they do share with us insights into tactical AI uses, we should encourage such rare openness. the danger lies in power that can exist which is kept secret. if AI does not have the power we believe it does, then little is lost in the publication of plausible fiction, and we all gain by the integrity of an open knowledge base. stan [EOF] AI-Military.Mai ------------------------------ Date: 26 March 1987 1527-PST (Thursday) From: thode@nprdc.arpa Reply-to: thode@nprdc.arpa Subject: Censorship of AIList submissions Tetsuo Tomiyama (mcvax!cwi.nl!tomi@seismo.CSS.GOV) in a recent posting complained about a call for papers for AI Applications to Battle Management because of its relationship to US military research. He suggests that the rest of us somehow mark any military research related submissions to discussion lists like the AIList so that he can easily identify them and avoid reading them. I thought Ken Laws' response was appropriate: > AIList is primarily an Arpanet discussion list.
The Arpanet > was developed by the military, is supported by the military, > and is intended for defense-related communication among > military contractors. One could assume that all Arpanet > messages are military in nature, although that heuristic > does not seem very useful in the case of AIList. What is > really needed here is an intelligent mail reader that screens > your messages and adds the appropriate keywords. -- KIL I would go a bit further than Ken. What is REALLY needed is an intelligent (human) mail reader. Readers of net mail who are afraid of what they might read shouldn't read anything. This reminds me of those who want to censor books, prohibit free speech, and otherwise govern the way we live our lives because they don't like what might be read or said. Freedom of speech (and electronic postings) should be anyone's right. If you don't like what someone writes, don't read it--but keep your hands off my (and others') rights to read and say what we want. --Walt Thode (thode@NPRDC) ------------------------------ Date: Thu, 26 Mar 1987 19:11 CST From: Leff (Southern Methodist University) Subject: Review - Spang Robinson Report, March 1987 Spang Robinson Report, Volume Number 3, March 1987, Summary Thereof The main article discusses AI and Database Technology with the results of an interview with James Neiser of Ashton Tate. Ashton Tate is going to be concentrating on non-AI decision rules, with natural language and expert systems to be considered later. The newsletter also includes a two-page table listing various companies' plans and products in the database-AI integration area. Other items of note in this article include:
Symantec has sold 40,000 copies of their system, which is a data base system with natural language.
Cullinet has agreed to acquire the company selling a COBOL-based expert system shell.
Man-Machine Systems is marketing G-Base for the LMI Lambda and TI Explorers, which allows the interfacing of LISP and PROLOG to the database.
IBM has created a natural language and Prolog front end to SQL.
__________________________________________________________ New Applications of Expert Systems:
Canon - copier maintenance system
Ishikawajima Heavy Industry - engine failure analysis system
Yasukawa Electric Manufacturing System - large crane analysis system
Iwai Mechanical Industry - plant failure analysis system
Technical Collaborates - expert system for architects in the area of disaster/safety regulations (in planning)
Takenaka Engineering - construction, surveying, (in development)
__________________________________________________________ Shorts Fuji Xerox will be distributing PARC Smalltalk in Japan and ASR will be marketing ExSys in Japan. Medical Information Systems has a network allowing people to use medical expert systems that is accessible via Fujitsu's VAN service. Level Five Insight 2+ can access DBase II files. The senior marketer at Applied Expert Systems, Richard Karash, has left. Larry Geisel is leaving the Carnegie Group CEO position, possibly to start another company. __________________________________________________________ The newsletter also contains a review of the recent IEEE conference on AI applications. Also reviews of the CRI Directory of Expert Systems and SEAI's Expert Systems 1986: An Assessment of Technology and Applications.
------------------------------ End of AIList Digest ******************** From in%@vtcs1 Tue Mar 31 02:45:19 1987 Date: Tue, 31 Mar 87 02:45:09 est From: vtcs1::in% To: ailist@sri-stripe.arpa Subject: AIList Digest V5 #91 Status: R AIList Digest Monday, 30 Mar 1987 Volume 5 : Issue 91 Today's Topics: AI Tools - TMYCIN: Free EMYCIN-like ES Tool, CAD - Solid Modeling & CAD/CAM/Robotics/Vision Policy, Comments - American Militarism & Ad Hominem Arguments ---------------------------------------------------------------------- Date: Fri 27 Mar 87 17:39:04-CST From: Gordon Novak Jr. Subject: TMYCIN: Free EMYCIN-like ES Tool The following two messages contain the code and documentation for a small EMYCIN-like expert system tool called TMYCIN (for Tiny EMYCIN). TMYCIN is written in Common Lisp (in a rather "old" Lisp style to make it easy to port to other dialects). Since it is only about 10 pages of code, it does not implement all of the features of EMYCIN, but it does cover some of the most-used features. The implementation is a new one, written from scratch, so it is different internally from EMYCIN; however, I have tried to follow EMYCIN conventions where possible. TMYCIN was originally written for use in an AI and Expert Systems course taught at Hewlett Packard. While it is not an "industrial strength" ES tool, others may find it useful for teaching or for self-study. The Artificial Intelligence Laboratory at the University of Texas at Austin receives major support from the U.S. Army Research Office under contract DAAG29-84-K-0060. The A.I. Lab has also benefitted from major equipment grants from Hewlett Packard and Xerox. Enjoy... Gordon Novak [Remember the AI Expert sources? I had to set a policy of not distributing large amounts of code. Granted, the two digests worth of code and examples is much smaller, but the same principle seems to apply. I would also prefer not to be responsible for such distributions because people ask for the code with fair regularity; I then have to keep it on disk or repeatedly pull it from tape, and my company has to bear the cost. I'm open to suggestions for how code should be handled, but AIList doesn't seem to be the place. (Usenet has a comp.sources, and there is a Unix code distribution, but Arpanet really has no mechanism other than contacting the author or FTPing his files.) -- KIL] ------------------------------ Date: 27 Mar 87 20:12:10 GMT From: ssc-vax!thornton@BEAVER.CS.WASHINGTON.EDU (Ken Thornton) Subject: Solid Modeling Unfortunately, there is no CAD/CAM, Robotics, or Automation newsgroups so I decided to post here. I am interested in hearing from people who know about solid modeling systems and have experience using them. Specific questions I'm interested in are: What is generally preferred, constructive solid geometry (CSG) representations or boundary represesentations (B-rep)? Of the available commercial systems, is CSG or B-rep more predominant? I am specifically interested in generating procedures for a robotic vision system to automatically inspect a part, given a solid model of the part. In addition to the actual part model, it would be necessary to have information about specific features, relationships between features, feature tolerances, and object surface reflectance. From what I understand, commercial systems do not provide this information in the output file representation of the part. More than anything, I'm interested in stimulating some discussion about solid modeling and related computer graphics algorithms. 
If such a discussion is considered inappropriate to this newsgroup, I might be interested in forming another group or starting a mailing list, if anyone is interested. Ken -- Ken Thornton {decvax,ihnp4}!uw-beaver!ssc-vax!ssc-bee!thornton Boeing Aerospace PO Box 3999 MS 2E-73 Seattle, WA 98124-2499 "A little learning is a dang'rous thing" - Alexander Pope ------------------------------ Date: 29 Mar 87 04:14:36 GMT From: rpics!chassin@seismo.css.gov (Dave Chassin) Subject: Re: Solid Modeling In article <798@ssc-bee.ssc-vax.UUCP>, thornton@ssc-vax.UUCP (Ken Thornton) writes: > > > Unfortunately, there is no CAD/CAM, Robotics, or Automation newsgroups > so I decided to post here. I guess it as good a place as any... > > I am interested in hearing from people who know about solid modeling systems > and have experience using them. Specific questions I'm interested in are: > > What is generally preferred, constructive solid geometry (CSG) > representations > or boundary represesentations (B-rep)? > > Of the available commercial systems, is CSG or B-rep more predominant? Preference really depends on application (see below), as for predominance, it depends on what system you using. B-rep modeling is the predominant form of geometric data representation on microcomputers. This is mainly because of memory/speed restrictions that have existed since the dawn of micros (things are changing but not yet enough, and not fast enough). CSG is far more common on minis and mainframes for the same reasons, but also because data is much more easily manipulated, and more logically in terms of geometric thinking (unions, intersections, cutting, etc). My preference (as an architect) is to use CSG for conceptual manipulations, and B-rep for detailed representations. Each have their limitations, and if anyone is interested, we can discuss these at great length sometime later. > > I am specifically interested in generating procedures for a robotic vision > system to automatically inspect a part, given a solid model of the part. > In addition to the actual part model, it would be necessary to have > information about specific features, relationships between features, > feature tolerances, and object surface reflectance. From what I understand, > commercial systems do not provide this information in the output file > representation of the part. CSG seems to me to be the most readily applied to this type of work. The reason is that CSG can naturally indicate whether two parts geometrically intersect each other, for example. However surface features like color and reflectances are not inherently applied to CSG modeling, although I imagine this could be developed, and might even be worth while. B-rep seems to be a bit more of a problem in terms of manipulating relationships between parts. I think that you have another problem when you get involved with robotic vision, and this is something that I've never thought about in terms of robotics, but I am working on in terms of architectonics (architectural modeling of sorts). That is that you will need to create some sort of algorithm for generating a 3D model from 2D information received by the cameras. Essentially the idea is to analyse a pair of images, extract the boundary data, assemble a 2D 'image' for each view, project the two images together into a 3D 'image', and finally take the resulting B-rep data and convert it to CSG type data, which can then be correlated with the previous frame and the motor algorithms to properly direct the parts into their desired positions. 
Piece of cake, eh... Each of these steps involve some very complicated and SLOW computing. I've worked out the basics for the first 4 steps, but have a long way to go still. In any case I would love to talk more about the ins and outs of this type of analysis because this is the main focus of my work for the next year or so. By the way, it's all being done on a Sun 2/120 and 2 AT clones... ...wish me luck!!! I know there are some people who have already done some work in these areas, but it has always amazed me how little is in fact published. I have NO, get that, NO references relating to 3D reconstructions other than the following, and these have nothing to do with computer application thereof: Wittcower & Carter, "The perspective of Piero della Francesca's Flagellation", COURTAULD INSTITUTES, vol.16, 1953 In this article the authors explain the method they used for reconstruct the actual architectural space that Piero painted. The mathematics of perspective are treated, and discussed. Since this is obviously not directly related to the subject I would greatly appreciate any sources anyone might know of. They are rare, and those that I have found, uninspiring. So, anyway, I encourage further discussion of this topic as it is a very difficult one, and it will, I believe, in the long run test what we computer graphics buffs are really made of. This problem goes beyond simply one of analysis, to become one of representation and ordering. The results, or lack thereof, will reveal much more about how we perceive and order what we see. This is the heart of the problem. _____________________ David P. Chassin Rensselaer Polytechnic Institute | School of Architecture __+__ Troy, NY 12181 / _ \ USA | | | | /=======/ = \=======\ (518) 266-6461 | _ | _ | _ | | | | | | | | | | | chassin@csv.rpi.edu | = | | | | = | ======================================================================= The above is my opinion, and mine alone. The organization I belong to may refute these statements at any time. They are however more likely to take credit for them. ======================================================================= ------------------------------ Date: Sun 29 Mar 87 21:55:16-PST From: Ken Laws Reply-to: AIList-Request@SRI-AI.ARPA Subject: Policy - CAD/CAM/Robotics/Vision I hate to turn away AI-related discussions, but CAD/CAM, Robotics, and Vision are large enough areas that they should have their own lists. I've heard that there is a CADinterest^.es@Xerox.COM list (reachable via seismo from csnet) that discusses VLSI design, CAD workstations, etc. There are Arpanet lists for graphics and for workstations, as well as Vision-List@ADS for machine-vision discussions. I don't know of any robotics list, although Vision-List has carried related messages. (Vision workers are often interested in path planning and other robotic issues.) Some of the aforementioned lists have been inactive lately. You could either "take over" one of their discussions for awhile or start a new list that combines your own interests. I'm told that an AI-Hardware list will be formed soon -- perhaps CAD/CAM will be of interest there. -- Ken Laws BTW, there is indeed a literature on combining 2-D views to construct 3-D objects -- both for converting mechanical drawings to solid models and for extracting buildings from aerial imagery. I don't have references handy, but Tom Strat here at SRI published some papers on this a year or so ago. Underwood also worked on this problem, as have others. 
There is also a vast literature on combining stereo views to obtain 3-D models for robotic inspection and grasping. ------------------------------ Date: Sat 28 Mar 87 11:58:49-EST From: Richard A. Cowan Subject: Re: American Militarism (This is a condensed version of a response I sent to Tetsuo Tomiyami:) Although I may share your sentiments regarding the military emphasis of computer science, I agree with Ken Laws that the the mention of military applications in AILIST is appropriate and should not be censored out, for two reasons. First, if there is military research going on in AI on the automated battlefield, I think it is better that this be acknowledged openly than hidden from view. Keeping military work shielded from view merely makes this work more difficult to criticize. Acknowledging its presence allows affected communities (such as the AI community) to openly debate the nature of such work and reach a community decision. Second, I don't think your analogy (to showing pornography in public) holds. The harmful effects of showing pornography (not erotica, but degrading, sexually exploitative material) come directly from showing it, but the harmful effects of military work do not come from merely acknowledging its presence. I think it would be more constructive to engage people on the AILIST in discussions of the implications of military AI. If people responded by saying that discussion about the effects of AI research on society are irrelevant to the list because they are political questions, *then* you might have something to gripe about. Why? Because scientists and engineers (especially those who receive public funds) have a responsibility to society to consider the implications of their work. Therefore, discussion of the implications of military AI (or civilian AI) is totally appropriate, and should not be suppressed in one of the major forums for communication used by AI scientists. (Though it is certainly appropriate for a moderator to cut out stuff to prevent flaming from getting out of control.) Now that Artificial Intelligence, having found uses in society, is no longer an ivory tower avocation, politics is not extraneous to AI. Rather, as the AAAI conference on "Issues Concerning AI Applications To Battle Management" shows, AI *is* political. -rich ------------------------------ Date: Sat, 28 Mar 87 13:52:49 EST From: cross@nrl-css.arpa (Chuck Cross) Subject: ad hominem arguments Phil Marks' reply to Tetsuo Tomiyama begins: ``Very interesting...that we should get such an opinion from a Japanese'' [the dots are his]. I can think of nothing more offensive in a discussion than using a person's race or national origin to ridicule his position. It is the worst kind of ad hominem argumentation. Chuck Cross ------------------------------ End of AIList Digest ******************** From in%@vtcs1 Wed Apr 1 16:05:09 1987 Date: Wed, 1 Apr 87 16:04:54 est From: vtcs1::in% To: ailist@sri-stripe.arpa Subject: AIList Digest V5 #92 Status: R AIList Digest Tuesday, 31 Mar 1987 Volume 5 : Issue 92 Today's Topics: AI Tools - How to FTP TMYCIN, Policy - TMYCIN Code and Military AI, Conference - Computing and Political and Social Issues, Query & Replies - Daemons, AI Tools - Genetic Algorithms, CAD - Solid Modeling ---------------------------------------------------------------------- Date: Mon 30 Mar 87 10:42:04-CST From: Gordon Novak Jr. Subject: How to FTP TMYCIN An earlier msg to this list offered TMYCIN, a small EMYCIN-like expert system tool. 
Due to a policy of not distributing code on AILIST, the code was excised by the editor. The TMYCIN files are located on R20.UTEXAS.EDU ; Arpanet sites should be able to get them, from directory , by anonymous FTP. If you can't get them by FTP, let me know and I'll attempt to mail them. The single file TMYCIN.ALL contains all the material appended together. That may be the easiest to FTP. The individual files are TMYCIN.CL , TMDOC.DOC , TMTEST.CL . TMYCIN.CL is the largest, at about 22K chars. Personally, I think AILIST should change its policy on distributing source code; it would benefit a lot of people and be more valuable than much of the material that appears on AILIST now. I have had poor luck in mailing code to non-Arpanet sites. So long as code is identified as such so that people can skip it if they don't want to see it, I don't see the objection to including it. The bibliography lists are each as big as the TMYCIN code, and lots of them are put on the AILIST. If anyone wants to move the TMYCIN files to repositories of code where people can get it more easily, that is fine with me. And let me add my support to the idea of starting such a repository on the Arpanet. Cheers, Gordon ------------------------------ Date: Mon, 30 Mar 87 10:08:31 pst From: Eugene Miya N. Subject: TMYCIN code and Military AI The following should be regarded as opinion and commentary rather than the expression of fact. First, the dissemination of Tiny MYCIN sounds interesting. I wish I had a convenient Common LISP machine to try it out. I realize that TMYCIN is just a tool, but I wonder how good a tool it really is. What is the value of learning about this tool (to learn about ESs) when you don't have an expert around (rhetorical question, obviously some value)? I ask this because I have a relative who does research in pathology and got his PhD in bacteriology at BYU (any connection with Dugway Proving Ground is not coincidence). Anyway, anyone in the South SF Bay with a Common LISP machine which we can try this code out on? Second, I too am torn about the posting of military AI material. My immediate response was against it. But Ken Laws (The voice of reason) made some good points: similar to ones I made about why the UC should keep the weapons labs (to prevent them from disappearing from public view, but I don't like the idea of a U doing this work). The comments by another reader (which had nothing to do with AI) just go to reinforce certain observations of latent (?) rascism within military circles and is a prejudice I occasionally get (when I get asked to attend certain military meetings as an outside reviewer). For some people, WWII has not ended, nor I guess has the Civil War for some. Recent trade developments again have people talking about Economic Pearl Harbors (Fujitsu-Fairchild and other chip agreements). I know a decorated, one armed Senator and our local House Representative who are watching developments very carefully. (The Rep is on the side of the USA, but will defend against any and all attacks against a people who were interned during a prior war.) Watch it people. --eugene miya ------ Asian Italian ancestory(?) NASA Ames Research Center eugene@ames-aurora.ARPA "You trust the `reply' command with all those different mailers out there?" "Send mail, avoid follow-ups. If enough, I'll summarize." 
{hplabs,hao,ihnp4,decwrl,allegra,tektronix,menlo70}!ames!aurora!eugene ------------------------------ Date: 30 Mar 87 16:58:04 GMT From: jon@june.cs.washington.edu (Jon Jacky) Subject: Battle Management at AAAI, mod.ai postings policy, "Militarism" I think it is fine that you ran the AAAI Battle Management Workshop announcement. I have grave reservations about a lot of that stuff; nevertheless it is useful even for critics to be informed of what's going on in the area. Also, it is important to note the role of the military in supporting so much AI research. If anything, there is too little rather than too much acknowledgement of this fact in the AI community. The original announcement noted the recent "order of magnitude increase in funding for battle management AI projects," but that is only the tip of the iceberg -- a very large body of apparently more generic AI work is also funded by the Department of Defense. Much of that is putatively basic research, but the motivation for the funding, as described to Congress and the Secretaries of Defense, emphasises potential weapons applications. This relationship should be frankly acknowledged, rather than concealed or glossed over. A very important question is, does this source of funding and this kind of motivation for even "basic" AI research make any difference, either for the content of the technical work, or for the larger matters of war and peace? These issues will be addressed at another event (here comes the plug). On Sunday, July 12 in Seattle, the day before the AAAI conference, the Seattle Chapter of Computer Professionals for Social Responsibility is sponsoring a one day-conference concerning computing and political and social issues. The keynote speakers will be Bob Kahn, who now heads the nonprofit Corporaton for National Research Initiatives and who until 1985 was director of the Information Processing Techniques Office at DARPA, and Terry Winograd. We are accepting papers until April 1. (If you have something you would like to submit but are worried about making the deadline, or would just like to attend, call or write to me). -Jonathan Jacky University of Washington jon@june.cs.washington.edu (206)-548-4117 ------------------------------ Date: 29 Mar 87 14:43:44 GMT From: ihnp4!cord!gwr@ucbvax.Berkeley.EDU (GW Ryan) Subject: daemons... where's the name from? this came up in a class last week; we came up with a few interesting ideas but no real answers. Why are "daemons" called "daemons"? that is, what is the derivation of that name? We got answers like "something to do with Maxwell's daemon" and "maybe if you say the magic words (i.e. satisfy the conditions to fire the daemon) then the daemon wakes up". anybody know the right answer?? mail to me, and I'll summarize to the net. thanks jerry allegra!cord!gwr gwr@cord.garage.nj.att.com ------------------------------ Date: 29 Mar 87 22:20:49 GMT From: flowers@locus.ucla.edu Subject: Re: daemons... where's the name from? >this came up in a class last week; we came up with a few interesting ideas but >no real answers. Why are "daemons" called "daemons"? that is, what is the >derivation of that name? >From "Pattern Recognition by Machine", by Selfridge and Neisser, Scientific American 1960, in describing the Pandemonium model they proposed: In parallel processing all the questions would be asked at once, and all the answers presented simultaneously to the decision maker. Different combinations identify the different letters. 
One might think of all the various features as being inspected by little demons, all of whom then shout the answers in concert to a decision-making demon. From this conceit comes the name "Pandemonium" for parallel processing. This paper was reprinted in the seminal and still useful book _Computers and Thought_, Feigenbaum and Feldman, eds., 1963. Anyway, Selfridge and Neisser have some earlier publications about pattern matching and the Pandemonium model which probably introduced the idea of demons. I don't know if their use of the term was inspired by any prior specific use. Around 1970 demons were utilized and popularized by Charniak's Ph.D. thesis. Margot Flowers, Asst. Prof., UCLA AI Lab Flowers@CS.UCLA.EDU [or Flowers@UCLA-CS for old host tables] ...!{ucbvax|ihnp4}!ucla-cs!flowers (uucp) ------------------------------ Date: 29 Mar 87 23:10:00 GMT From: ihnp4!inuxc!iuvax!port@ucbvax.Berkeley.EDU Subject: Re: daemons... where's the name from? The use of daemon in Unix for a program that `wakes up' and does some task whenever it is required is actually a regular use of the word. It isn't one of those typical computing terms that has an arcane history (one thinks of the derivation of `nroff, grep, awk, winchester,' etc). The word is classical Greek for any kind of spirit or genie -- some kind of minor deity. In Latin they borrowed the Greek word and spelled it daemon (for Greek daimon-ion), to describe such spirits. For the Christians, of course, all such deities were paganisms, so they were viewed as evil. Thus the English word demon has the strong flavor of evil about it. But we also seem to have split the word in two, so now the original pagan meaning has been restored in modern English with a more classical spelling as daemon. The use in `Maxwell's daemon' is in just this sense. Similarly, in the 1950's Selfridge proposed a parallel model of the perception of alphabetic letters that had `daemons' for each letter. They were competing with each other to `find themselves' in the incoming visual features. The one that `shouted' the loudest was the one that caused a `decision demon' to issue a conclusion. The use of this word for independent processes that seem to have a `will of their own' as in operating systems is very appropriate. ------------------------------ Date: 26 Mar 87 15:50:33 GMT From: hpcea!hpfcdc!hpfclp!hillary@hplabs.hp.com (Hillary Davidson) Subject: Re: Genetic Algorithms Concerning GAs.... I am researching genetic algorithms for my Master's thesis work at CSU in Ft. Collins, CO. I am doing this research under Dr. Darrell Whitley. There is a conference on GAs this summer....it is the 2nd International Conference on Genetic Algorithms and Their Applications, sponsored by AAAI and the U.S. Navy Center for Applied Research in AI (NCARAI). It will be on July 28-31, 1987 at MIT in Cambridge, Mass. John Holland is the Conference Chairperson. For more information contact: Mrs. Gayle M. Fitzgerald Conference Services Office Room 7-111 MIT 77 Massachusetts Avenue Cambridge, MA 02139 (617) 253-1703 The first of these conferences was held on July 24-26, 1985 at Carnegie-Mellon U. in Pittsburgh, PA. I obtained a copy of the proceedings by writing the editor at the following address: Dr. John J. Grefenstette Navy Center for Applied Research in AI Naval Research Laboratory Washington, DC 20375-5000 gref@NRL-AIC.ARPA (202) 767-2685 Holland's newest book "Induction: ...." is well written. It expands on the chapter in "Machine Learning, Volume 2" that he wrote.
Hope this info is helpful. Hillary Davidson :-) {hplabs,ihnp4}!hpfcla!hillary ------------------------------ Date: 30 Mar 87 19:52:10 GMT From: puff!upl@rsch.wisc.edu (Future Unix Gurus) Subject: Re: Solid Modeling In article <798@ssc-bee.ssc-vax.UUCP> thornton@ssc-vax.UUCP (Ken Thornton) writes: > > >Unfortunately, there is no CAD/CAM, Robotics, or Automation newsgroups >so I decided to post here. > >I am interested in hearing from people who know about solid modeling systems >and have experience using them. Specific questions I'm interested in are: I am doing a solid modeling based animation system as my senior thesis (on the Amiga 1000). I also hope to eventually release it as a product; it should beat the living daylights out of Caligari. (Modesty is not one of my strong points.) In preparation for the thesis, I have spent the past year and a half researching pertinent issues such as solid modeling techniques. While I am not as informed as someone might be who has been working in the field in the real world (i.e., not a student), I have learned a fair bit. I am also VERY interested in discussing this topic with ANYONE out there! > >What is generally preferred, constructive solid geometry (CSG) representations >or boundary represesentations (B-rep)? The current trend seems to be toward CSG-BREP hybrid systems. BREP is very good for generating wireframes, doing things like mass calculations, and certain approaches to ray tracing. The big problem with BREP is the user interface. We do not have a true 3d output device available yet, and most of the systems for plotting 3d points on 2d displays are awkward, confusing, and time consuming. CSG offers a system in which the user can work with 3d primitives to begin with, at a higher level and in a manner more natural to most people. What most of the systems I've seen do is take input as CSG from the user, and simultaneously perform CSG operations on pre-defined BREP primitives that approximate the CSG ones. There is a good article in the conference proceedings from Siggraph '86 on one way to do these CSG ops on BREP objects. > >Of the available commercial systems, is CSG or B-rep more predominant? > See the above. Realize that I have seen more art-intended systems than CAD-type systems, but they seem to be the same difference. > >More than anything, I'm interested in stimulating some discussion about >solid modeling and related computer graphics algorithms. If such a >discussion is considered inappropriate to this newsgroup, I might be interested >in forming another group or starting a mailing list, if anyone is >interested. GREAT! Let's discuss!
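As a concrete footnote to the CSG/B-rep exchange above, here is a minimal Common Lisp sketch (illustrative only, tied to no particular poster's system or commercial modeler): a CSG solid is a boolean combination of primitives, and point-membership classification (the test that makes unions, intersections, and cuts so natural in CSG) is just a recursion over that combination. A B-rep system would instead store the bounding faces, edges, and vertices, which is why the two representations trade off so differently for wireframe display versus boolean manipulation.

;;; A minimal sketch of the CSG idea discussed above: a solid is a boolean
;;; combination of primitives, and point-membership classification recurses
;;; over the CSG tree.  The primitives and the example part are illustrative.

(defun sphere (cx cy cz r)
  (lambda (x y z)
    (<= (+ (expt (- x cx) 2) (expt (- y cy) 2) (expt (- z cz) 2))
        (* r r))))

(defun box (x0 y0 z0 x1 y1 z1)
  (lambda (x y z)
    (and (<= x0 x x1) (<= y0 y y1) (<= z0 z z1))))

(defun csg (op a b)
  "Combine two solids; OP is :union, :intersection, or :difference."
  (lambda (x y z)
    (let ((in-a (funcall a x y z))
          (in-b (funcall b x y z)))
      (ecase op
        (:union        (or in-a in-b))
        (:intersection (and in-a in-b))
        (:difference   (and in-a (not in-b)))))))

;; Example part: a block with a spherical hole (box minus sphere).
(defparameter *part*
  (csg :difference (box 0 0 0 4 4 4) (sphere 2 2 2 1)))

(funcall *part* 2 2 2)        ; => NIL  (inside the hole)
(funcall *part* 0.5 0.5 0.5)  ; => T    (solid material)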
Jeff Kesselman ihnp4!uwvax!puff!uhura!captain (Captain @ Uhura in the Undergraduate Project Lab) ------------------------------ End of AIList Digest ******************** From in%@vtcs1 Thu Apr 9 04:50:39 1987 Date: Thu, 9 Apr 87 04:50:30 est From: vtcs1::in% To: ailist@sri-stripe.arpa Subject: AIList Digest V5 #93 Status: R AIList Digest Thursday, 2 Apr 1987 Volume 5 : Issue 93 Today's Topics: Seminars - AI, Mathematical Programming, and VLSI Design (Rutgers) & Automating Theory Formation (Rutgers) & Concept Learning (Ames) & Argo: Analogical Reasoning for Design Problems (Rutgers) & Decomposition for Hierarchical Problem Solving (Rutgers), Conferences - Simulation & Protocol Specification ---------------------------------------------------------------------- Date: Wed, 25 Mar 87 18:17:55 EST From: liew@aramis.rutgers.edu (Liew) Subject: Seminar - AI, Mathematical Programming, and VLSI Design (Rutgers) The next design colloquim will be held on Thursday (march 26th) at 1:30pm in TCB 103. Most of you are unfamiliar with the location of TCB 103 so we will meet at Hill 423 at 1:15 and proceed from there. The speaker will be Wayne Wolf of ATT Bell Laboratories and the title of his talk is "Artificial Intelligence, Mathematical Programming and VLSI Design". The suggested readings are: Wolf, Kowalski, McFarland "Knowledge Engineering Issues in VLSI Synthesis", AAAI-86. Brayton, et al., "Multiple-Level Logic Optimization System", ICCAD-86, pp. 356-360. Gregory, et al., "SOCRATES: A System for Automatically Synthesizing and Optimizing Combinational Logic", DAC-86, pp. 79-85. Shin and Sangiovanni-Vincentelli, "MIGHTY: A Rip-Up and Reroute Detailed Router", ICCAD-86, pp. 2-5. Joobani, "WEAVER: A Knowledge-Based Routing Expert", PhD dissertation, CMU, 1985. ---------------------------------------------------------------------- Abstract: Title: Artificial Intelligence, Mathematical Programming, and VLSI Design Speaker: Wayne Wolf, AT&T Bell Laboratories, Murray Hill Artifical intelligence techniques have found their greatest success in diagnosis and classification problems. The application of AI to design problems is relatively new. In this talk I want to consider how the intellectual tools that AI brings to the design problem can best be used by contrasting two paradigms: artificial intelligence and mathematical programming. I will argue that mathematical programming is a more powerful paradigm than AI for a lot of synthesis problems because mathematical programming a) allows better application of brute force; b) encourages us to formulate solvable problems. I will argue that AI is a more powerful paradigm for knowledge representation because it provides a lot of tools for separating particular pieces of knowledge from the engines used to maintain them. The talk will be in three parts: 1) The VLSI design problem: what is hard about VLSI design; what tools people need to make bigger, better designs; what people would do with VLSI synthesis if they had it. 2) Synthesis and search: search in AI and mathematical programming; problem formulation and search; results in application of AI and mathematical programming techniques to some design problems. 3) Synthesis and knowledge representation: why knowledge representation is important; examples of KR problems and solutions from Fred, the database; how AI knowledge representation and mathematical programming complement each other in Lucy, the controller designer. 
------------------------------ Date: 26 Mar 87 17:01:17 EST From: SOO@RED.RUTGERS.EDU Subject: Seminar - Automating Theory Formation (Rutgers) THE III, AN INFORMAL SEMINAR FOR AND BY STUDENTS, --- INITIATES ITS SPRING SEASON --- Title: Automating Theory Formation -- Postulation of Enzyme Kinetic Models and Experimental Design Date: April 7th, Tuesday Time: 11:00 AM Place: Hill 423 Speaker: Von-Wun Soo This is a practice talk for my Ph. D. thesis defense. I would like to present the work that I have been involved for the past five years. I cordially invite you to come, support, and make comments before my final defense. Abstract: In this talk I discuss how expert reasoning in scientific research such as designing biochemical experiments or postulating kinetic mechanisms can be modeled. Broadly speaking, designing an experiment, an important compoent of scientific theory formation, can be viewed as a process of searching and testing plausible decompositions of a hypothesis space. In my thesis, I show how the results of qualitative reasoning and a set partition method can be used to select experimental setups that discriminate a set of plausible models. The interpretation of experimental results, the critiques of previous experiments, and comparisons of similarities and discrepancies among experiments are all related issues that lead us to the automation of scientific discovery. ------------------------------ Date: Thu, 26 Mar 87 21:31:07 PST From: SIMS%PLU@ames-io.ARPA Subject: Seminar - Concept Learning (Ames) Title: LEARNING CONCEPTS TO IMPROVE PERFORMANCE: The Role of Context By: Dr. Richard Keller (KELLER@RED.RUTGERS.EDU) Computer Science Department Rutgers University Where: NASA AMES When: Monday, April 6 Concept learning, like most intelligent behavior, should be influenced by the context in which the behavior takes place. If concept learning occurs in the context of improving the performance of a problem solving system, then the type of concept learned and the form of its description should depend on the goals and the capabilities of the problem solver. Unfortunately, most current inductive learning systems incorporate a set of fixed, implicit assumptions about the problem solver being improved by learning. This causes problems when the original problem solver changes over time, and also makes it difficult to reuse the same inductive system to improve a different problem solver. As an alternative to the inductive framework, I describe a new concept learning framework -- the concept operationalization framework -- which makes contextual assumptions more explicit and easier to change. To illustrate the new framework, I discuss how an existing inductive system (the LEX system [Mitchell et al. 1981]) was converted to a concept operationalization system (the MetaLEX system). In contrast with LEX, MetaLEX adapts more successfully to certain changes in its learning context, learns contextually suitable approximations of its target concept as necessary or expedient, and has the potential to automatically generate its own concept learning tasks to improve its problem solver. ------------------------------ Date: Mon, 30 Mar 87 15:22:26 EST From: liew@aramis.rutgers.edu (Liew) Subject: Seminar - Argo: Analogical Reasoning for Design Problems (Rutgers) There will be a design colloquim on Tuesday March 31st at 10:30 am in Hill 423. The speaker will be Ramon Acosta of MCC and an abstract of his talk is given below. A copy of his paper is in JoAnn Gabinelli's office (Hill 408). 
Argo: An Analogical Reasoning System for Solving Design Problems Michael N. Huhns and Ramon D. Acosta Microelectronics and Computer Technology Corporation AI/KBS and VLSI CAD Programs 3500 West Balcones Center Drive Austin, TX 78759 The static and predetermined capabilities of many knowledge-based design systems prevent them from acquiring design experience for future use. To overcome this limitation, techniques for reasoning and learning by analogy that can aid the design process have been developed. These techniques, along with a nonmonotonic reasoning capability, have been incorporated into Argo, a tool for building knowledge-based systems. Closely integrated into Argo's analogical reasoning facilities are modules for the acquisition, storage, retrieval, evaluation, and application of previous experience. Problem-solving experience is acquired in the form of a problem-solving plan represented as a rule-dependency graph. From this graph, Argo calculates a set of macrorules, each based on an increasingly abstract version of the plan. These macrorules are partially ordered according to an abstraction relation for plans, from which the system can efficiently retrieve the most specific plan applicable for solving a new problem. The use of abstraction in a knowledge-based application of Argo allows the system to solve problems that are not necessarily identical, but just analogous to those it has solved previously. Experiments with an application for designing VLSI digital circuits are yielding insights into how design tools can improve their capabilities and extend their domains of applicability as they are used. ------------------------------ Date: 31 Mar 87 10:36:22 EST From: KAPLAN@RED.RUTGERS.EDU Subject: Seminar - Decomposition for Hierarchical Problem Solving (Rutgers) PhD Oral Qualifying Examination for Mr. S. Mahadevan Mr. Mahadevan's examination is scheduled for Wednesday, April 1 at 10:30 AM in Hill 423. The examination committee is chaired by T. Mitchell, and includes T. McCarty, J. Mostow, and L. Steinberg. DCS faculty are welcome to attend; graduate students are invited to the public portion of the examination. Mr. Mahadevan's dissertation proposal is abstracted below: LEARNING DECOMPOSITION METHODS TO IMPROVE HIERARCHICAL PROBLEM-SOLVING PERFORMANCE Previous work in machine learning on improving problem-solving performance has usually assumed a state-space or "flat" problem-solving model. However, problem-solvers in complex domains, such as design, usually employ a hierarchical or problem-reduction strategy to avoid the combinatorial explosion of possible operator sequences. Consequently, in order to apply machine learning to complex domains, hierarchical problem-solvers that automatically improve their performance need to designed. One general approach is to design an interactive problem-solver -- a learning apprentice -- that learns from the problem-solving activity of expert users. In this talk we propose a technique, VBL, by which such a system can learn new problem-reduction operators, or decomposition methods, based on a verification of the correctness of example decompositions. We also discuss two important limitations of the VBL technique -- intractability of verification and specificity of generalization -- and propose solutions to them. Finally, we present a formalization of the problem of learning decomposition methods based on viewing actions and problems as binary relations on states. 
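For readers who do not use the problem-reduction vocabulary of the abstract above, the following small Common Lisp sketch shows what a decomposition method is in a hierarchical problem-solver: a goal is either primitive (solved directly) or is reduced to subgoals by one of its methods, and the solver recurses. The goals and methods shown are invented for illustration; the proposal itself concerns how such methods can be learned from verified example decompositions, which this sketch does not attempt.

;;; A minimal sketch of hierarchical (problem-reduction) problem solving of
;;; the sort the abstract above assumes.  Goals and methods are illustrative.

(defparameter *primitives* '(place-component route-wire))

(defparameter *methods*
  ;; goal -> list of alternative subgoal decompositions
  '((design-board (do-placement do-routing))
    (do-placement (place-component))
    (do-routing   (route-wire))))

(defun solve (goal)
  "Return a plan tree for GOAL, or NIL if no decomposition succeeds."
  (if (member goal *primitives*)
      goal
      (loop for subgoals in (cdr (assoc goal *methods*))
            for subplans = (mapcar #'solve subgoals)
            when (every #'identity subplans)
              return (cons goal subplans))))

(solve 'design-board)
;; => (DESIGN-BOARD (DO-PLACEMENT PLACE-COMPONENT) (DO-ROUTING ROUTE-WIRE))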
------------------------------ Date: Thu, 26 Mar 1987 19:05 CST From: Leff (Southern Methodist University) Subject: Conferences - Simulation & Protocol Specification The Society for Computer Simulation Eastern Simulation Conferences April 6-9, 1987, Orlando, Florida
AI and Simulation at Johnson Space Center, Robert Salvely (verbal presentation)
Flight Simulator Evaluation of Aircraft Systems Using AI Technology (verbal .. Edward M. Huff, NASA Ames Research Center
An Expert System for Managing Multiple Cooperating Expert Systems (verbal ... A. Gerstenfeld, Geoffrey Gosling, David S. Touretzky Worcester Polytechnic
The Simulation of Simple Analog and Discrete Circuits from a Knowledge Base Representation of Structure and Function NASA, Kennedy Space Center
Acknowledge2: A Knowledge Acquisition System Pradip Dey, Kevin D. Reilly, J. Todd Brown, University of Alabama at Birmingham
Knowledge Corpora Connectivities - Toward the Construction of a Thought Simulator Testbed Alhad M. Chande, Martin Marietta Baltimore Aerospace, Joe Clema, IIT Research Institute
Qualitative Expert Systems: A Demographic Simulator with Heuristic Reasoning Walt Conley, W. Lawrence, U. Sengupta, R. Hartley, M. Coombs, New Mexico State University
Model Management in Knowledge Based Simulation Hawa Singh, Alan Butcher, R. Reddy, West Virginia University
Constraint Directed Reasoning for Simulation Problem Formulation Neena Sathi, Gary Strohm, Thomas Morton, Sean Winters, Carnegie Group Inc.
Knowledge-Based Resource Behavior Allen Matsumoto, V. Baskaran, Beth Marvel, Carnegie Group
Sonar Plexus - Enhancing a Command and Control Simulation with Reasoning Marc R. Halley, Thomas Miller, Craig Hougum, William Mosenthal Analytic Sciences Corporation
Computer System Simulation in Scheme Daniel B. Pliske The Analytical Sciences Corporation
Real Time Intelligent System Analysis by Discrete Event Simulation J. M. Poole, T. M. McDermott, D. P. Glasson, The Analytic Sciences Corporation
The Mobile Intercontinental Ballistic Missile Simulation Douglas Roberts, J. Darrell Morgeson, Jared S. Dreicer, Howard W. Egdorf Los Alamos National Laboratory
SIMSMART: Dynamic Simulation for Automated Control of Complex Industrial Processes Don Waye Applied High Technology Limited
Applicability of AI Techniques to Simulation Models Norman R. Nielsen, SRI International, Victoria P. Gilbert, Intellicorp
Improving Effectiveness of Computer Simulation Modeling with Knowledge-Based Problem-Solving Capability Ronak Shodhan, J. J. Talavage, Purdue University
Expert Systems within Simulations JohnPaul SanGiovanni, Jockey Holley Technologies
A Communication Network Model of the Brain Ray Moses, Boeing Aerospace Company
An Artificial Intelligence (AI) Simulation Based Approach for Aircraft Maintenance Training Lee Keskey, Dave Sykes, Honeywell Inc.
Knowledge Representation in Ada Sumitra M. Reddy, Francis L. Van Scoy, West Virginia University
A Simulator of an Automatic Text Reading System Nikolaos G. Bourbakis, George Mason University, Scott Schneider, IDA
Two-dimensional Image Scanning for Hierarchical Data Structures and Its Simulation Nikolaos G. Bourbakis, George Mason University
Cognitive Learning Theory: A Tool for Modelling and Simulation Donald A. MacCuish, ICSD Corporation
A Computer Simulation Program of Animal Maze Learning Roger Ingliss, Warren Marchioni, Montclair High School
+++++++++++++++++++++++++++++++++++++++++++++++++++++++
Protocol Specification, Testing and Verification: VII May 5-8, 1987, IFIP Protocol Symposium Interconventional Ltd. c/o SWISSAIR CH-8058 Zurich-Airport, Switzerland
Communicating Rule Systems L. F. Mackert & I. Neumeier-Mackert IBM European Network Center, Heidelberg
An Atomic Calculus of Communicating Systems L. Logrippo and A. Obaid University of Ottawa
Fundamental Results for the Verification of Observational Equivalence: A Survey T. Bolognesi, CNUCE, Pisa, S. A. Smolka, SUNY, Stony Brook
Proof of Specification Properties by Using Finite State Machines and Temporal Logic A. R. Cavalli, F. Horn, CNET, Issy-les-Moulineaux
Translation of Formal Protocol Specifications to VLSI Designs A. S. Krishankumar, B. Krishnamurthy, K. Sabnani AT&T Bell Labs, Murray Hill
------------------------------ End of AIList Digest ******************** ------- From in%@vtcs1 Thu Apr 9 04:51:17 1987 Date: Thu, 9 Apr 87 04:51:07 est From: vtcs1::in% To: ailist@sri-stripe.arpa Subject: AIList Digest V5 #94 Status: R AIList Digest Thursday, 2 Apr 1987 Volume 5 : Issue 94 Today's Topics: Queries - Intelligent Database Retrieval & Reliable AI Systems, Humor - 7th Generation Computing Proposal: Basketball and AI, Jargon - Daemons, Comments - Military Sponsorship & Teaching Expert Systems & Policy on Broadcasting Code, Inference - What is the Color of Clyde? ---------------------------------------------------------------------- Date: Tue 31 Mar 87 09:51:26-PST From: Kevin W. Whiting Subject: Wanted: Info. on Tools which intelligently facilitate db retrieval All information about tools (PC based tools particularly), shells, products, and/or projects aimed at adding intelligence to database retrieval would be appreciated. Information on experiences with products such as Guru, Clout, Savvy, Q & A, or software resulting from project work such as ZOG, GUIDON, etc., is desired as well. I had thought there was a summary more or less on this topic posted to the net last fall but can't find it now. If you can point me to it or other summaries - it would be mucho appreciated. Kevin Whiting whiting@stripe.sri.com ------------------------------ Date: 29 Mar 87 17:18:32 EST (Sun) From: dciem!mmt@seismo.CSS.GOV Reply-to: mmt@dciem.UUCP (Martin Taylor) Subject: Re: Oxymoron: Real-time Knowledge-Based Nurse/Nuclear Plant Operator > >I strongly agree. Any safety-critical system should have certain >characteristics: it should be rigorously specified (AT LEAST the safety >aspects); it should be possible to reason rigorously about the >implementation, to convince others that it matches the specification; >it should be developed using QC/QA techniques that guarantee an audit trail >so that any faults discovered after development can be traced to their >cause. > >These considerations dictate the use of mathematically rigorous methods, and >a certified Quality Assurance regime. Does anyone know of an AI system >which measures up? Please reply by mail - I'll summarise. > Does anyone know of a human-controlled safety-critical system that measures up to these criteria? Presumably not, but people don't expect them to do so. To examine the reasons why not might be to get at the heart of what "Artificial Intelligence" is and is not, in relation to human intelligence.
------------------------------ Date: Tue, 31 Mar 87 13:16:03 cst From: lugowski%resbld%ti-csl.csnet@RELAY.CS.NET Subject: 7th generation computing proposal: basketball and AI In the wake of Indiana's capture of the 1987 NCAA men's basketball championship and in the wake of AIList discussions on militarism in AI and real-time safety-critical AI, I propose that the emulation of basketball games would be a good domain for developing all sorts of useful technology, starting with multi-agent planning and ending in real-time control. For starters, one could consider a bird's eye view of the basketball court with moving circles representing the players and the ball. The robotics people could work on the missed dunk. The vision people could work on recognizing timeout signals. The naive physics crowd could model missed free throws. And the speech-to-text and image-to-speech ("this game's so good it speaks for itself") could zero in on play-by-play. Analogies and metaphor folks could distinguish zone defenses from man-to-man, as well as the eigen-cliches of various color commentators. Reasoning under uncertainty could model the referees' calls. And the AI-in-law effort could model Coach Knight's use of the technical foul -- and the connectionist models of sentences -- of his foul language. This endeavor would be plenty difficult. It would offer abundant military applications as well as civilian ones. Moreover, it would provide the AI research community with a common performance yardstick while allowing everyone to do their own thing, from neural networks to expert systems. It would advance science and technology, not to mention the physical fitness of AI experimentalists. It might even do something for Indiana's AI effort and boost CMU's basketball standing. And we could anticipate hearing Marvin Minsky or David Rumelhart from the TV booths of the NCAA tournaments to come -- "The Society of Swoosh", "Backpropagation of Missed Free Throws". There's just one more thing... Um, funding anyone? -- Marek Lugowski (Indiana M.S. '84) Neural Networks Project Texas Instruments Lugowski%CRL1@ti-csl.csnet P.O. Box 655936, M/S 154 (214) 995-4207 Dallas, Texas 75265 "basketball people and AI folks, unite!" [Too late -- it's being done. The following seminar at SRI described a system that tracks soccer players in down-looking imagery and reasons about their actions and intentions. It then generates a play-by-play commentary, being careful not to state anything that the listener could infer from previous statements. -- KIL] Prof. Wolfgang Wahlster of the University of Saarbruecken will give a talk and demonstration of his systems on Friday February 20th at 10 AM. GENERATING NATURAL LANGUAGE DESCRIPTIONS FOR IMAGE SEQUENCES Wolfgang Wahlster Computer Science Department University of Saarbruecken West Germany The aim of the project VITRA (VIsual TRAnslator) is the development of a computational theory of the relation between natural language and vision. In this talk, we will focus on the semantics of path prepositions (like "along" or "past") and their use for the description of trajectories of moving objects, the intrinsic and deictic use of spatial prepositions and the use of linguistic hedges to express various degrees of applicability of spatial relations. First, we describe the implementation of the system CITYTOUR, a German question-answering system that simulates aspects of a fictitious sightseeing tour through a city. Then we show how the system was interfaced to an image sequence analysis system.
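[Editorial aside, not from the VITRA or CITYTOUR implementations: one simple way a graded "degree of applicability" for a spatial relation might be scored, with a linguistic hedge sharpening it. The function names and the distance scale are invented for illustration.]

;; Illustrative sketch only -- a made-up applicability score for "near".
(defun applicability-near (distance &optional (scale 10.0))
  "Return a score in (0,1]: 1.0 at distance 0, falling off as DISTANCE grows."
  (/ 1.0 (+ 1.0 (/ distance scale))))

(defun very (score)
  "One common way to model a hedge such as \"very\": sharpen the score."
  (* score score))

;; (applicability-near 5.0)         => 0.6666667
;; (very (applicability-near 5.0))  => 0.44444448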
From the top of a 35m high building, a stationary TV camera recorded an image sequence of a street crossing on video tape. In 130 selected frames the moving objects were automatically recognized by analyzing displacement vector fields. Our system then answers natural language queries about the recognized events. Finally, we discuss current extensions to the system for the generation of a report on a soccer game that the system is watching. Here we focus on the problem of incremental, real-time text generation and the use of a re-representation component that models the assumed imagination of the listener. ------------------------------ Date: Tue, 31 Mar 1987 09:58 EST From: MINSKY%OZ.AI.MIT.EDU@XX.LCS.MIT.EDU Subject: AIList Digest V5 #92 The term "demon" comes from Oliver Selfridge, via the paper, "Pandemonium: A Paradigm for Learning", published in the Symposium on the Mechanisation of Thought Processes, November 1958. Selfridge's demons were small feature-detecting agents, whose inputs were linearly weighted sums of other signals, with autonomous hill-climbing learning procedures for determining the weights. Selfridge's demons were arranged in hierarchical networks; typical demons were constantly active - and "shrieking" with intensities proportional to their degrees of arousal; the nonlinear part was that certain "decision demons" would "recognize" which of their inputs was most active. ------------------------------ Date: Tue 31 Mar 87 15:10:11-PST From: Rich Alderson Subject: Daemons (and others...) The following definitions are from a file often distributed with Tops-20 EMACS, known there as INFO:JARGON.TXT; its origins are the files GLS; JARGON > at MIT-MC and AIWORD.RF[UP,DOC] at SAIL. This text is from the 1981 version; later, expanded versions eventually were published as _The Hacker's Dictionary_ around 1984. DAEMON (day'mun, dee'mun) [archaic form of "demon", which has slightly different connotations (q.v.)] n. A program which is not invoked explicitly, but which lies dormant waiting for some condition(s) to occur. The idea is that the perpetrator of the condition need not be aware that a daemon is lurking (though often a program will commit an action only because it knows that it will implicitly invoke a daemon). For example, writing a file on the lpt spooler's directory will invoke the spooling daemon, which prints the file. The advantage is that programs which want (in this example) files printed need not compete for access to the lpt. They simply enter their implicit requests and let the daemon decide what to do with them. Daemons are usually spawned automatically by the system, and may either live forever or be regenerated at intervals. Usage: DAEMON and DEMON (q.v.) are often used interchangeably, but seem to have distinct connotations. DAEMON was introduced to computing by CTSS people (who pronounced it dee'mon) and used it to refer to what is now called a DRAGON or PHANTOM (q.v.). The meaning and pronunciation have drifted, and we think this glossary reflects current usage. DEMON (dee'mun) n. A portion of a program which is not invoked explicitly, but which lies dormant waiting for some condition(s) to occur. See DAEMON. The distinction is that demons are usually processes within a program, while daemons are usually programs running on an operating system. Demons are particularly common in AI programs. For example, a knowledge manipulation program might implement inference rules as demons.
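[Editorial sketch, not part of the glossary: roughly the mechanism described here and in the next few sentences, written out in Common Lisp. The fact format, the demon registry, and the parent/ancestor rule are invented for illustration.]

;; A toy demon mechanism: demons are (test . action) pairs watching a fact
;; store; asserting a fact wakes every demon whose test accepts it, and the
;; action may assert further facts, which can in turn wake more demons.
(defvar *facts* '())
(defvar *demons* '())

(defun add-demon (test action)
  "Register a demon as a (TEST . ACTION) pair."
  (push (cons test action) *demons*))

(defun assert-fact (fact)
  "Add FACT to the store, then run every demon whose test matches it."
  (unless (member fact *facts* :test #'equal)
    (push fact *facts*)
    (dolist (d *demons*)
      (when (funcall (car d) fact)
        (funcall (cdr d) fact)))))

;; A demon encoding the inference rule (parent x y) => (ancestor x y):
(add-demon (lambda (f) (eq (first f) 'parent))
           (lambda (f) (assert-fact (list 'ancestor (second f) (third f)))))

(assert-fact '(parent tom bob))   ; also asserts (ancestor tom bob)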
Whenever a new piece of knowledge was added, various demons would activate (which demons depends on the particular piece of data) and would create additional pieces of knowledge by applying their respective inference rules to the original piece. These new pieces could in turn activate more demons as the inferences filtered down through chains of logic. Meanwhile the main program could continue with whatever its primary task was. DRAGON n. (MIT) A program similar to a "daemon" (q.v.), except that it is not invoked at all, but is instead used by the system to perform various secondary tasks. A typical example would be an accounting program, which keeps track of who is logged in, accumulates load-average statistics, etc. At MIT, all free TV's display a list of people logged in, where they are, what they're running, etc. along with some random picture (such as a unicorn, Snoopy, or the Enterprise) which is generated by the "NAME DRAGON". See PHANTOM. PHANTOM n. (Stanford) The SAIL equivalent of a DRAGON (q.v.). Typical phantoms include the accounting program, the news-wire monitor, and the lpt and xgp spoolers. SERVER n. A kind of DAEMON which performs a service for the requester, which often runs on a computer other than the one on which the server runs. Rich Alderson A.Alderson@{Lear, Othello, Hamlet, Macbeth}.Stanford.EDU ------------------------------ Date: 31 Mar 1987 15:25-EST From: DAVSMITH@A.ISI.EDU Subject: Re: AIList Digest V5 #92 My two cents on the Military AI issue. I totally agree with KIL's "voice of Reason" - the only reason for the existence of Arpanet is military sponsorship. I am currently working on the Pilot's Associate project - and am therefore biased in my view. Military applications such as this are excellent for "blowing the fluff away" and finding out which AI technologies are ready for real applications where need has been demonstrated. Perhaps a little later, we can digress on some of those findings. Without the military applications, who in the commercial sector would attempt to put together cooperating expert systems in real-time? [ One could broaden the issue and ask "Who in their right mind would..?"] The sad fact is that a technology in the university lab can look very good on viewgraphs, but you would be surprised at the back-pedalling which occurs when you offer the opportunity to plug into a real application. David Smith DAVSMITH@a.isi.edu ------------------------------ Date: Wed, 1 Apr 87 11:03:56 PST From: Ritchey Ruff Subject: Teaching Expert Systems > > First, the dissemination of Tiny MYCIN sounds interesting. > I wish I had a convenient Common LISP machine to try it out. > I realize that TMYCIN is just a tool, but I wonder how good a tool > it really is. What is the value of learning about this tool > (to learn about ESs) when you don't have an expert around (rhetorical > question, obviously some value)? I ask this because I have a relative > who does research in pathology and got his PhD in bacteriology > at BYU (any connection with Dugway Proving Ground is not coincidence). > > --eugene miya > NASA Ames Research Center > eugene@ames-aurora.ARPA Well, 3 months ago I would have said that an expert system tool in vivo is not much use, but now...I was a TA (teaching assistant) for an expert systems course here at Oregon State last term taught by Tom Dietterich. It was his first time around teaching this subject and so he decided to go at it from a case study/theory viewpoint (if the theory of expert systems isn't oxymorphic :-). 
Thus there was really nothing said about how to implement systems. The term project though WAS to implement a small expert system (4 or 5 weeks to do this, and we DON'T have any expert systems tools - just LISP, PROLOG, and OPS5). The projects were very impressive overall, but the style/organization/etc. were generally dismal. Not in a traditional sense but more from an expert systems sense. The code was documented, modular, etc. but not in a way that made it easy to analyze as an expert system. It was often hard to understand WHAT knowledge the system had from code reading. What is needed is both sides of the coin - the theory/case study, and a how-to-implement course. Having the proper tools is bound to help here, but several projects in the above languages WERE readable as knowledge bases (style makes a difference). IF the TMYCIN tool comes with some GOOD examples (no matter how toy-ish) I think that a person could learn quite a bit about the how-to-code end of expert systems - which is just as important (in its own way) as the theory. --ritchey ruff (reformed couch potato) ruffwork%oregon-state@csnet-relay (soon to be ruffwork@cs.orst.edu) from the Home for the Artificially Intelligent ------------------------------ Date: 31 Mar 87 09:38 PST From: ghenis.pasa@Xerox.COM Subject: Special Postings and Digest Title Regarding the issue of whether source code, bibliographies, etc. should be included in AIList... I realize this would create more work for Moderator Ken Laws, but what if these special postings always went out grouped in SEPARATE ISSUES and the "Subject:" line were made MORE DESCRIPTIVE so readers could skip selectively? Thus instead of getting AIList Digest V5 #92 we could get messages titled: AIList V5 #92 - Source AIList V5 #92 - Bibliography AIList V5 #92 - General or something along those lines (you get the idea) I would like to see source postings back in AIList, maybe the above system can satisfy those who would rather skip them. Any comments? Pablo Ghenis Xerox Artificial Intelligence Systems Educational Services [I could add such a heading, but one result would be longer delays for some material until enough arrived for a full digest. Anyway, I'm not sure I see the savings. My mailer, which is probably the one most used throughout the Arpanet, doesn't display enough of the title for the keywords to be visible. If I read enough of the message to get the full title, I only have to scroll a few more lines to get the Topics listing. A better solution is to have independent mailing lists for different types of material. Even the Stanford bboard is partitioned now, so why not AIList? The only difficulty is that I don't want to maintain multiple mailing lists. It wouldn't be so bad if I had a good database system for converting request messages into additions and deletions, but I have to do it by hand and I'm not eager to double or triple the time this takes. I've heard of a database server for code distributions that might be open to the Arpanet; I will investigate. I am beginning to think, though, that FTP and mail requests are not such a bad thing. Gordon Novak tells me he has had over thirty requests for his code, in addition to any FTPs (which he wouldn't know about). Handling thirty requests is a bit of a hassle, but also a bit of a thrill. It generates professional contacts and keeps people in touch. Why, I can imagine someone disallowing FTP altogether just to keep track of who is getting the code. To go even further, a separate interest list could be established.
And if a code author didn't want the hassle at all, s/he could use AIList to find someone else willing to handle the distribution in return for access to the code. Isn't this better than having an impersonal central server stuffed with obsolete, unmaintained code? Or a broadcast system like AIList? The only real disadvantage is that code may become inaccessible if the author leaves his current site, but copies should be available from somewhere (perhaps via AIList query). -- KIL] ------------------------------ Date: 1 Apr 87 13:40:18 GMT From: Dekang Lindek Reply-to: lindek@cs.strath.ac.uk (Dekang Lin) Subject: Re: What is the color of Clyde? In article <8703021016.AA22995@stracs.cs.strath.ac.uk> lindek@seismo.CSS.GOV@cs.strath.ac.uk (Dekang Lindek) writes: >Look, WORLD, here is a little default reasoning exercise: > >95% of elephants have color grey. >40% of Royal Elephants have color yellow. >Clyde is a Royal Elephant. > >The color of Clyde is likely to be: > a) Grey b) Yellow c) Red d) Unknown > There are several bugs here: 1) 'most likely' should be used in place of 'likely' to make the question clear. 2) 'Unknown' should not be one of the choices because neither 'likely to be unknown' nor 'most likely to be unknown' makes any sense. It is a fact that the color of Clyde is unknown, otherwise we wouldn't need to guess it. 3) 'elephants have color grey' sounds like Next-Generation-Database English. 4) 'Lindek' is his E-name, not surname. 5) (This place is reserved for future use) After fixing the first four bugs, we could make the following inference: #define confidence probability
    proposition                                                       confidence
(1) A Royal Elephant is yellow.                                       .40
(2) A Royal Elephant is not yellow.                                   .60
(3) (An elephant is not yellow) ==> (The elephant is grey)            .95~1.0
(4) (A Royal Elephant is not yellow) ==> (A Royal Elephant is grey)   .95~1.0
(5) A Royal Elephant is grey.                                         .57~.60
(6) Clyde is a Royal Elephant.                                        1.0
((5) follows from (2) and (4): .60 x [.95, 1.0] = [.57, .60].) Conclusion (subject to change without notice): Clyde is most likely to be GREY.[] Discussion: The decision becomes harder to make when the confidence of (1) is inside the interval of (5). Comments: This problem seems too technical to be discussed on the net. An opinion poll in Glasgow will definitely show that the color of Clyde is green even though it is blue on the maps. Dekang Lin Dept. of CS Univ. of StrathClyde 26 Richmond Street Glasgow G1 1XH, U.K. lindek%uk.ac.strath.cs@ucl-cs.arpa ....!seismo!mcvax!ukc!strath-cs!lindek ------------------------------ End of AIList Digest ******************** From in%@vtcs1 Thu Apr 9 04:51:54 1987 Date: Thu, 9 Apr 87 04:51:43 est From: vtcs1::in% <<@stripe.sri.com:LAWS@sri-stripe.arpa>> To: ailist@sri-stripe.arpa Subject: AIList Digest V5 #95 Status: R AIList Digest Thursday, 2 Apr 1987 Volume 5 : Issue 95 Today's Topics: Application - Text Critiquing ---------------------------------------------------------------------- Date: Wed 1 Apr 87 22:35:15-PST From: Ken Laws Reply-to: AIList-Request@SRI-STRIPE.ARPA Subject: Policy - Text Critiquing The following messages are about a system that critiques English prose. It could be argued whether this particular system is within the realm of AI, but the application area does seem to be of interest. This discussion really should be moved to the AI-ED@SUMEX-AIM list (which has just distributed the text grading query that started all this), or perhaps to the NL-KR list. For now, I will continue to distribute to the Arpanet discussions on this topic that have circulated on Usenet.
-- Ken ------------------------------ Date: 27 Mar 87 19:07:40 GMT From: ritcv!rocksvax!rocksanne!sunybcs!colonel@CS.ROCHESTER.EDU (Col. G. L. Sicherman) Subject: Re: automatic checking ) But there is no reason to grade automatically, just let the students ) ^<<* 23. SENTENCE BEGINS WITH BUT *>> ) work on their papers (with the automatic checker) until they are satisfied. ) <<* 21. PASSIVE VOICE: are satisfied. *>>^ ) <<* 17. LONG SENTENCE: 24 WORDS *>>^ "They are satisfied" is in the passive voice? That's what comes of letting computers run things.... "Hey, Rocky! Watch me pull a UNIX program outa m' source directory!" "AGAIN?" "Nothin' up my sleeve ... PRESTO!" IDENTIFICATION DIVISION. PROGRAM-ID. PROCESS-DATA. AUTHOR-NAME. B. J. MOOSE, FROSTBYTE DATA SYS. SOURCE-COMPUTER. IBM-7044. OBJECT-COMPUTER. IBM-7044. . . . "No doubt about it--I gotta get a new source directory!" -- Col. G. L. Sicherman UU: ...{rocksvax|decvax}!sunybcs!colonel CS: colonel@buffalo-cs BI: colonel@sunybcs, csdsiche@ubvms ------------------------------ Date: 30 Mar 87 00:24:00 GMT From: kadie@b.cs.uiuc.edu Subject: Re: AI Project Information Request Several people have ask if the grammar checker I used was real. It is. It is a commercial product for the IBM PC. Here is some more information and an example. I own a spelling checker that I always use. And a grammar and style checker that I sometimes use. I have a lot of confidence in the spelling checker; I take virtually all of its advice. The style checker is not as good. I always consider it's suggestions, but I know that it has missed many grammar and style errors and that not everything it flags is really wrong. Enclosed find its critique of a draft report. This gives a pretty good indication of how well the program works. The program is RIGHTWRITER version 2.0, a Right Soft product by Decisionware, Inc. of 2033 Wood Street, Suite 218, Sarasota, Florida 33577. It runs on IBM PC's and compatible computers. It costs about $100.00. Carl Kadie University of Illinois at Urbana-Champaign UUCP: {ihnp4,pur-ee,convex}!uiucdcs!kadie CSNET: kadie@UIUC.CSNET ARPA: kadie@M.CS.UIUC.EDU (kadie@UIUC.ARPA) (I disclaim any ulterior relationship to Decisionware.) .+c "A Program To Compute Moore's Stable Expansions" .pp Moore has recently proposed a possible-world semantics for autoepistemic logic. His method has the intriguing property of producing multiple expansions, that <<* 16. UNNECESSARY COMMA *>>^ is it list the (finite) theories of what you believe about the world, given the axioms. ^<<* 17. LONG SENTENCE: 27 WORDS *>> For example, if your unbelief in proposition $P$ implies $Q$, and your unbelief in proposition $Q$ implies $P$, then we can theorize that either $P$ is true or alternatively $Q$ true. <<* 17. LONG SENTENCE: 31 WORDS *>>^ <<* 31. COMPLEX SENTENCE *>>^ .pp In Lisp notation the axioms are expressed: .(L (and (imp (not (l 'p)) q) (imp (not (l 'q)) p)) .)L and the conclusion is expressed: .(L (Q) (P) .)L .pp I have written a program that finds the stable expansions of formula in Moore's autoepistemic logic. As might be expected <<* 21. PASSIVE VOICE: be expected *>>^ the program run in time exponential to the number of variables. <<* 32. INCOMPLETE SENTENCE OR MISSING COMMA *>>^ Let's look at some runs: .(L A non-autoepistemic sentence: (expand '(and p (imp p (not q)) (imp (not q) r)) ;; axioms '(p q r) ;; propositions 0) ;; trace level returns: ((P (NOT Q) R)) .)L In other words, the axioms entail that $P$ is true, $Q$ is false, and $R$ is true. 
This is of course just what we expect for this propositional sentence. .pp Here is a trace of the run of the example we saw before: .(L [Figure goes here. -- CMK] .)L .pp The program also identifies cases where no stable expansion exists: .(L [Figure goes here. -- CMK] .)L .pp At higher trace levels, the program provides counter-models to non-grounded theories. For example: .(L (expand '(and (imp (not (l 'p1)) p2) (imp (not (l 'p2)) p3) (imp (not (l 'p3)) p4) (imp (not (l 'p4)) p1)) '(p1 p2 p3 p4) 2) ... (P1 P2 P3 P4) in theory is stable w.r.t. the axioms. S5 is ((P1 P2 P3 P4)) (s5:((P1 P2 P3 P4)) , V:((NOT P1) P2 (NOT P3) P4)) is a model of A Counter-model: (s5:((P1 P2 P3 P4)), V:((NOT P1) P2 (NOT P3) P4)) Theory (P1 P2 P3 P4) is NOT a stable expansion of the axioms ... ((P2 P4) (P1 P3)) .)L .pp In fact it is just this test of groundness that makes Moore's logic different from the logic of Shoham that we will see later. <<* 17. LONG SENTENCE: 24 WORDS *>>^ For example when we give Shoham's gun example to the program it replies that there are no stable <<* 1. REPLACE: that there BY there *>>^ expansions. This is because it does not have Shoham's chronological ignorance criteria with which to choose ungrounded theories. Here is the trace: .(L [Figure goes here. -- CMK] .)L .pp Having no stable expansion and believing nothing are two separate case. Here is a case where the only stable expansion is the theory where nothing is believed. ^<<* 21. PASSIVE VOICE: is believed. *>> .(L [Figure goes here. -- CMK] .)L .pp The program works by enumerating every theory, then constructing the corresponding S5 structure. Next, it tests every world of the S5, if any world fails to support the axioms then it is unstable and the theory is removed from consideration. <<* 21. PASSIVE VOICE: is removed *>>^ <<* 17. LONG SENTENCE: 27 WORDS *>>^ Stable theories are next tested for groundness. This is done <<* 21. PASSIVE VOICE: are next tested *>> <<* 21. PASSIVE VOICE: is done *>>^ by trying every variable assignment $V$. If an assignment makes the axioms true then $V$ must correspond to a world in the S5, or else the theory is not grounded. A theory <<* 21. PASSIVE VOICE: is not grounded. *>>^ <<* 17. LONG SENTENCE: 33 WORDS *>>^ that is both stable and grounded is added to the stable <<* 21. PASSIVE VOICE: is added *>>^ expansion list to be returned at the end of the program. <<* 21. PASSIVE VOICE: be returned *>> <<* 17. LONG SENTENCE: 24 WORDS *>>^ .pp Overall, the program works very well on small problems (four variable problems take only seconds on a SUN). The program accepts any formula that Lisp can evaluate; so very complex formula may be input. However, since the program relies on enumeration, it can not be expanded <<* 21. PASSIVE VOICE: be expanded *>>^ to first-order logic, nor can it be considered practical <<* 21. PASSIVE VOICE: be considered *>>^ unless the problems can be guaranteed to be small. <<* 21. PASSIVE VOICE: be guaranteed *>> <<* 17. LONG SENTENCE: 31 WORDS *>>^ <<* 31. COMPLEX SENTENCE *>>^ <<** SUMMARY **>> READABILITY INDEX: 7.63 Readers need an 8th grade level of education to understand. STRENGTH INDEX: 0.19 The writing can be made more direct by using: - the active voice - shorter sentences - more common words - fewer abbreviations DESCRIPTIVE INDEX: 0.74 The use of adjectives and adverbs is within the normal range. JARGON INDEX: 0.25 SENTENCE STRUCTURE RECOMMENDATIONS: 15. No Recommendations. << UNCOMMON WORD LIST >> The following words are not widely understood. 
Will any of these words confuse the intended audience? AUTOEPISTEMIC 3 AXIOM 2 AXIOMS 40 CHRONOLOGICAL 1 CRITERIA 1 DRIBBLE 1 ENTAIL 1 ENUMERATING 1 ENUMERATION 1 EXPONENTIAL 1 FINITE 1 FIRE4 20 GROUNDNESS 2 IMP 24 INTRIGUING 1 LISP 2 LOAD1 20 MOORE 1 MOORE'S 3 NIL 5 NOISE6 17 P 4 PROPOSITION 2 PROPOSITIONAL 1 PROPOSITIONS 2 Q 4 R 1 SEMANTICS 1 SHOHAM 1 SHOHAM'S 2 THEORIZE 1 UNBELIEF 2 UNGROUNDED 1 V 2 VACUUM5 17 WRT 19 << END OF UNCOMMON WORD LIST >> <<** WORD FREQUENCY LIST **>> A 31 ABOUT 1 ACCEPTS 1 ADD 1 ALSO 1 ALTERNATIVELY 1 AN 1 AND 18 ANY 2 ARE 4 AS 1 ASSIGNMENT 3 AT 3 AUTOEPISTEMIC 3 AXIOM 2 AXIOMS 40 BE 7 BECAUSE 1 BEFORE 1 BELIEVE 3 BOTH 1 [Rest of word frequency list goes here -- CMK] <> ------------------------------ Date: Mon, 30 Mar 87 11:31 EST From: "Linda G. Means" Subject: Re: AI Project Information Request means%gmr.com@relay.cs.net >Date: 25 Mar 87 01:44:00 GMT >From: kadie@b.cs.uiuc.edu >Subject: Re: AI Project Information Request > > >Automatic checking and automatic grading are different things. I think > <<* 3. WEAK: I think *>>^ >automatic computer checking is a good thing, especially for spelling >and simpler grammar. > >But there is no reason to grade automatically, just let the students > ^<<* 23. SENTENCE BEGINS WITH BUT *>> >work on their papers (with the automatic checker) until they are satisfied. > <<* 21. PASSIVE VOICE: are satisfied. *>>^ > <<* 17. LONG SENTENCE: 24 WORDS *>>^ >Then have them turn in their work and the final computer critique to a human >grader. > >The situation is similar to programming, where the compiler >automatically checks the syntax. It would be unthinkable to make people turn >in programs without letting them compile the programs first. On >the other hand it would unthinkable to leave a syntax error in >when the compiler tells you right were it is. > > > <<** SUMMARY **>> > > READABILITY INDEX: 10.42 > Readers need a 10th grade level of education to understand. > > STRENGTH INDEX: 0.41 > The writing can be made more direct by using: > - the active voice > - shorter sentences > > DESCRIPTIVE INDEX: 0.65 > The use of adjectives and adverbs is within the normal range. > > JARGON INDEX: 0.00 > > SENTENCE STRUCTURE RECOMMENDATIONS: > 1. Most sentences contain multiple clauses. > Try to use more simple sentences. > > << UNCOMMON WORD LIST >> >The following words are not widely understood. >Will any of these words confuse the intended audience? > CRITIQUE 1 SYNTAX 2 UNTHINKABLE 2 > << END OF UNCOMMON WORD LIST >> > > >Carl Kadie >University of Illinois at Urbana-Champaign >UUCP: {ihnp4,pur-ee,convex}!uiucdcs!kadie >CSNET: kadie@UIUC.CSNET >ARPA: kadie@M.CS.UIUC.EDU (kadie@UIUC.ARPA) Carl, Your submission regarding grammar/style checkers sends a mixed message to me. The content appears to advocate the use of such systems as tutoring tools. The automatic critique interspersed throughout the text, however, seems to belie your intention. First of all, it failed to point out three blatant errors in the text: > But there is no reason to grade automatically, just let the students work on their papers (with the automatic checker) until they are satisfied. - a comma is an inappropriate conjuntion for the two independent clauses here; a semi-colon would be more appropriate. > On the other hand it would unthinkable to leave a syntax error in when the compiler tells you right were it is. - 'where' is misspelled as "were". - 'be' was omitted before 'unthinkable'. 
Second, I have a number of objections to the types of criticisms the program does make. You characterize automatic style checking as "... a good thing, especially for spelling and simpler grammar". I would call it simple-minded, not simple. The complaint about the use of passive voice in "until they are satisfied" is ridiculous. This is not an example of passive voice at all; it's a predicate adjective. And even if passive voice had been used there, so what? The strength index in the summary gives the text a low grade on the basis of two supposed weaknesses: one occurrance of passive voice, and one sentence which is overly long (24 words). This evaluation is quite misleading to a student, who will subsequently comb his papers for constructions like "they are satisfied" to be purged, and will frantically count words in sentences. No machine or human critic should object to the use of "they are satisfied" in this context (or probably any other). And if you want to evaluate sentence length as an index of readability, number of words is too superficial an index to use. The readability of a sentence is better judged by the embedding of clauses, or syntactic complexity. I think it would be dangerous to have students become obsessed with counting words in sentences while ignoring sentence structure, just because the teacher requires them to use an inadequate computer program as a teaching aid. I could go on and on. And I think I will, because I get so angered by the commercial crap which is passed off to gullible teachers and parents as computer-aided instruction! I don't want my child to learn how to write with a program that scolds him every time he begins a sentence with the word 'but', and tells him that he should "try to use more simple sentences" because "most [of his] sentences contain multiple clauses". There's nothing wrong with multiple clauses, even in most of your sentences. Try writing most sentences with single clauses. The technique will not enhance your writing style, I assure you. Granted, you don't want to embed clauses in your sentences to the depths of Hell. But the difficulty in comprehending sentences with a lot of embedding stems from the syntactic structure, and not simply the number of clauses. (Nyaaa nyaaa, I just started a sentence with 'but'. Did it make your skin crawl as you read it? No, of course not. Some sentences just cry out to begin with 'but', although not ALL your sentences should.) Which sentence do you find more readable: the sentence criticized for its length in your text (number 1 below), or my utterly grammatical and very short sentence number 2? 1. But there is no reason to grade automatically, just let the students work on their papers (with the automatic checker) until they are satisfied. 2. The man the girl the boy loved kissed died. I don't want my child's creativity and personality in his writing to be stifled by a computer program which performs such rigid and superficial analysis. Nor do I want his writing to be limited to an average 10th grade level when he's in 10th grade if his writing ability goes beyond the level of the average 10th grader. Thanks for including your style checker's criticisms in your message. It serves as evidence that those programs may do more harm than good to a child's developing literary talents. 
Linda Means means%gmr.com@relay.cs.net ------------------------------ End of AIList Digest ******************** ------- ------- From in%@vtcs1 Sat Apr 11 11:33:21 1987 Date: Sat, 11 Apr 87 11:33:16 est From: vtcs1::in% To: ailist@stripe.sri.com Subject: AIList Digest V5 #96 Status: R AIList Digest Monday, 6 Apr 1987 Volume 5 : Issue 96 Today's Topics: Code Source - BBS for Micro AI and Geotechnical Applications, Queries - Fuzzy Logic Implementation & OPS5 Examples & Knowledge Representation Languages, Comments - Demons and Censorship, Application - Police Computer Detects Suspects ---------------------------------------------------------------------- Date: 1 Apr 87 23:38:22 GMT From: nbires!isis!csm9a!japplega@ucbvax.Berkeley.EDU (Joe Applegate) Subject: New AI BBS for Micro AI Applications The Colorado School of Mines Consortium for AI Research is sponsoring a public BBS for AI and geotechnical discussions and public domain software. This forum features areas for AI and conventional language development as well as the geologic and geophysical disciplines. Currently 6 meg. of PC based public domain applications are on line (most with source). You can reach this forum at (303) 273-3989 300/1200/2400 baud 8-N-1 24 hrs. Supports XMODEM, TELINK, and YMODEM transfer protocols. Joe Applegate - Colorado School of Mines Computing Center {seismo, hplabs}!hao!isis!csm9a!japplega *** UNIX is a philosophy, not an operating system! *** ------------------------------ Date: Thu, 2 Apr 87 14:06:08 PST From: jain%newton.Berkeley.EDU@berkeley.edu (Pramod Jain) Subject: Information on Fuzzy Logic Implementation wanted We are interested in information on research and development issues in implementation of fuzzy logic, computer architecture based on fuzzy logic, hardware issues in fuzzy control, the fuzzy chip and the like. Any leads, pointers, or references will be greatly appreciated. Reply to jain@newton, or call 415-642-8255. Pramod Jain. ------------------------------ Date: Wed, 1 Apr 87 23:08:00 WET From: John Fitch (Bath Univ.) Subject: OPS5 Examples Does anyone have any examples of OPS5 programs? I have the Monkeys and Bananas, and also the Manhattan mapper. What I want is examples so we can test alternative matching algorithms. Any help or pointers would be appreciated ==John Fitch University of Bath ------------------------------ Date: Thu, 2 Apr 87 16:46:57 EST From: weltyc@csv.rpi.edu (Christopher A. Welty) Subject: Knowledge Representation Languages Pardons if this has been done recently, I've been off ailist for the past month (doctor's orders :-). I'm working on some KR tools - specifically, designing a new representation language - and I am currently discussing with colleagues and students in the project the issues involved. We are looking at various existing KR languages and their merits/faults, but only I and one other person in the project have any real experience with any of these (SRL, KRL, CRL, FRL ...). I thought it would be interesting (and hopefully enlightening for me) to get some input from the net here. I'd like to discuss what other people who are actually using/have used KR systems (like Knowledge Craft, KEE, etc.) think of these systems. It seems that to start an interesting discussion (I thought the consciousness stuff was interesting) you have to make a bold statement that you know people will disagree with and get riled - or you have to be Marvin Minsky and just post a simple message, so maybe I'm going about this the wrong way...but I'll give the soft approach a shot first.
-Chris Welty, RPI weltyc@csv.rpi.edu [I have forwarded a copy of this query to the NL-KR@ROCHESTER.ARPA list, and would expect the main discussion to occur there. -- KIL] ------------------------------ Date: Fri, 3 Apr 87 18:29 PST From: Tom Garvey Subject: Demons and censorship One of the first places I saw "demons" used in the manner that they now exist (as "daemons," a sexier, more classical sounding spelling and a "subordinate deity" into the bargain) was in Carl Hewitt's Planner system/language (actually, I don't know whether there ever was a real implementation of Planner, but Winograd & Sussman (and others, too, I suspect) implemented a version in Lisp called Microplanner). Anyway, demons (as in the Maxwell's Demon sense) were the little processes with associated patterns that watched the "data base," and when an assertion was made, any demon with a pattern that matched the assertion would be activated to do something interesting. These were also known as "antecedent theorems," but that isn't nearly as catchy, and most people referred to them as "demons." All this took place in the early '70s, and I suspect their use predated the "daemon" processes that today screw up our mail, hardcopies, and network access on a wide variety of machines. By the way, I feel that it is an incredible expression of arrogance to assume that we can ever produce a machine intelligence, and I continue to be astounded that you allow messages dealing with that topic. From now on, I would like to request that you filter these messages from the list; if you feel that this approach is too draconian or possibly controversial, perhaps you could just insist that anyone writing a note dealing with intelligence in any form so indicate in the header, and the rest of us (who are, of course, better and more socially conscious than the rest of you) can just skip over them. This note has nothing to do with intelligence in any form. And, of course, "Everything you know is wrong!" Cheers, Tom [I have faith that only half of what I know is wrong. I'll let you know when I find out which half. -- KIL] ------------------------------ Date: Sat, 4 Apr 1987 16:27 CST From: Leff (Southern Methodist University) Subject: Police Computer Detects Suspects From the Daily Campus, Thursday, April 2, 1987, Original Source: Associated Press Grand Prairie, Texas - A computer used by police to detect likely locations for crime pinpointed the likely time and location for a burglar, enabling the police to stake out the area and make the arrest. The computer predicted the time to an accuracy of four hours and the place to within a few blocks.
------------------------------ End of AIList Digest ******************** From in%@vtcs1 Tue Apr 14 15:26:57 1987 Date: Tue, 14 Apr 87 15:26:48 est From: vtcs1::in% To: ailist@stripe.sri.com Subject: AIList Digest V5 #97 Status: R AIList Digest Tuesday, 14 Apr 1987 Volume 5 : Issue 97 Today's Topics: Administrivia - Recent Delivery Problems, Policy - Source Code Postings, Seminars - The Anatomy of AI Tarpits (CMU) & Analogical Transformation Extension (CMU) & Leaning on the World (CMU) & Parallelism for KR Languages (SRI) & Understanding and Personality (SUNY Buffalo) & An Integrated Framework for Factory Scheduling (CMU) ---------------------------------------------------------------------- Date: Mon 13 Apr 87 10:28:38-PDT From: Ken Laws Subject: Recent Delivery Problems AIList distribution has been delayed for about a week due to 1) mailer problems caused by the recent change in Arpanet host names, 2) the time it has taken me to put in a garden, and 3) the birth of my third child, Devon Lee Laws. The mailer and garden problems seem to be fixed now, but please excuse my continued poor response time -- I have to spend a greater percentage of my evenings and weekends looking after my family now. -- Ken ------------------------------ From: "Norbert E. Fuchs" Subject: Source Code Postings Using FTP for source code downloading may be fine for those on the Arpanet, but there are others like myself outside the USA who have no access to an Arpanet site. I support posting source code in AIList - or an equivalent solution - so that everybody has the possibility of downloading. --- nef ------------------------------ Date: 1 Apr 87 15:01:35 EST From: Patricia.Mackiewicz@isl1.ri.cmu.edu Subject: Seminar - The Anatomy of AI Tarpits (CMU) SPECIAL AI SEMINAR TOPIC: "The Anatomy of AI Tarpits" SPEAKER: Phil Agre, MIT WHEN: Monday, April 6, 1987, 1:00 pm WHERE: Doherty Hall 3313 ABSTRACT This is a talk I gave at a recent workshop on Meta-Level Architectures. I originally wrote it to blow off steam at the last dozen AI papers I had read, but I have come to think that it provides a clean, simple explanation of why AI (among other fields) is wedged. My thesis is that AI, as a field, has a pathological attitude toward language. AI repeatedly gets itself into tarpits based on pseudo-technical words like, for example, "planning." The community (or, lately, some sub-field of it) cultivates a habit of seeing "planning" in an activity's slightest intentionality, regularity, deliberateness, or planfulness -- and marginalizing or ignoring anything else. Then it writes (destruct plan ...). I will describe an anatomy of tarpits that lets us predict in remarkable detail how their victims will get stuck. Then I will discuss five examples: the mind, planning, knowledge, variables, and the meta level. Along the way I will suggest some ways out. ************************************************************************** If you are interested in an appointment with Phil Agre please contact Patty at extension 8818 or pah@d. ************************************************************************** ------------------------------ Date: 1 Apr 87 19:51:18 EST From: Steven.Minton@cad.cs.cmu.edu Subject: Seminar - Analogical Transformation Extension (CMU) Wei-Min Shen is giving this week's seminar. As usual, we will meet in 7220 at 3:15 on Friday. 
Here's the abstract: Analogical Transformation Extension and its Applications One of the aspects of learning by analogy is concerned with constructing and generalizing a transformation in the source domain and productively using it in the target domain. In this talk, we will discuss a preliminary approach, ATE, to the problem and its applications to: (1) creating new operators (more general than Macro-Operators) in AI discovery systems; and (2) solving problems in Geometric-Analogy Intelligence-Tests. For the first application, we will discuss in detail an implemented system, ARE. It starts with a small set of creative operations and a small set of heuristics, and uses ATE to create all the concepts attained by Lenat's AM system, and others as well. Besides showing a way to meet the criticisms of lack of parsimony that have been leveled against AM, the ARE system provides a route to discovery systems that are capable of "refreshing" themselves indefinitely by continually creating new operators. For the second application, we will compare the ATE approach with the method used by Evans in his program for solving problems in Geometric-Analogy Intelligence-Tests, and show that the ATE approach can solve the problems more efficiently. This discussion is a report on an ongoing project. We will appreciate any suggestions and comments. In case I cannot answer your hard questions, I will bring some delicious Chinese rice pudding as my defence. ------------------------------ Date: 3 Apr 87 08:36:22 EST From: Patricia.Mackiewicz@isl1.ri.cmu.edu Subject: Seminar - Leaning on the World (CMU) TOPIC: "Leaning on the World" SPEAKER: Phil Agre, MIT WHEN: Tuesday, April 7, 1987, 3:30pm WHERE: Wean Hall 5409 David Chapman and I have been studying the organization of everyday routine activity (things like making breakfast and driving to work) with an eye to understanding the human cognitive architecture. In trying to explain what we've observed, we've been led away from mentalistic metaphors emphasizing containment and boundary (perception, behavior, programs and processes, content-bearing datastructure-like representations) and toward metaphors emphasizing agents' interactions with their worlds. Our central distinction is between an agent's "machinery" and the "dynamics" of its activity. We have found that, for the broad range of routine activity we have studied, a very simple architecture suffices. It consists of an innate "periphery" (along the lines of Marr and Ullman) and a constructed "center". Careful analysis of the reliable patterns of interaction in the agent's world allows the center to be made out of very simple hardware, in fact combinational logic. This simplicity derives largely from a new theory of representation. Where traditional representation schemes posit objectively defined "individuals" in the world, our scheme of "indexical-functional aspects" (or "aspects" for short) parses the nearby materials according to their relationship to the agent's person (i.e., indexically) and purposes (i.e., functionally). Such a scheme generalizes its understanding without putting variables in for constants, so it does not need any hardware for matching, binding, and substitution. Chapman is almost done implementing an instance of this architecture. Pengi is a program that plays the video game Pengo. Pengi's periphery simulates a person looking at a video game monitor. Its center is a fixed combinational network derived from a specification of the salient aspects of the recurring game situations.
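[Editorial sketch, not Chapman and Agre's code: a toy "center" in Common Lisp, shown only to make concrete the idea of a fixed combinational mapping from indexical-functional aspects to actions. The aspect names and actions are invented.]

(defun choose-action (bee-headed-at-me-p ice-cube-between-us-p
                      ice-cube-beside-me-p)
  "Stateless combinational logic: aspect bits in, an action out.
   Nothing is matched, bound, or substituted along the way."
  (cond ((and bee-headed-at-me-p ice-cube-between-us-p) 'kick-the-ice-cube)
        (ice-cube-beside-me-p                           'push-the-ice-cube)
        (bee-headed-at-me-p                             'run-away)
        (t                                              'wander)))

;; (choose-action t t nil) => KICK-THE-ICE-CUBE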
With luck, a demo will be available. Strongly suggested reading (copies may be available): Chapman and Agre, Pengi: An Implementation of a Theory of Situated Activity, submitted to AAAI-87. Chapman and Agre, Abstract Reasoning as Emergent from Concrete Activity, Workshop on Reasoning About Action, 1986. Shimon Ullman, Visual Routines, MIT AI Lab Memo 723, June 1983. ************************************************************************** If you are interested in an appointment with Phil Agre please contact Patty at extension 8818 or pah@d. ************************************************************************** ------------------------------ Date: Fri, 10 Apr 87 12:23:48 PDT From: Amy Lansky Subject: Seminar - Parallelism for KR Languages (SRI) PARALLELISM IN INTERPRETERS FOR KNOWLEDGE REPRESENTATION LANGUAGES Henry Lieberman (HENRY@OZ.AI.MIT.EDU) MIT 11:00 AM, MONDAY, April 13 SRI International, Building E, Room EJ228 While there has been considerable interest in applying parallelism to problems of search in knowledge representation languages, lingering assumptions of sequentiality in the interpreters for such languages still stand in the way of making effective use of parallelism. Most knowledge representation languages have a sequential QUERY-SEARCH-ANSWER loop, the analog of the READ-EVAL-PRINT loop of Lisp, and employ parallelism only in the SEARCH phase, if at all. I will discuss parallel alternatives to sequential interpreters for knowledge representation languages, and new approaches to constructing user interfaces for these languages. These observations arise out of experience with the representation language Omega of Attardi, Simi, and Hewitt. The approach is motivated by a desire to respond to Hewitt's "open systems" critique of logic-based systems, which strives for systems that can deal with inconsistent beliefs, dynamically revise beliefs, and are sensitive to allocation of resources. VISITORS: Please arrive 5 minutes early so that you can be escorted up from the E-building receptionist's desk. Thanks! ------------------------------ Date: 10 Apr 87 22:24:44 GMT From: rocksvax!rocksanne!sunybcs!rapaport@CS.ROCHESTER.EDU (William J. Rapaport) Subject: Seminar - Understanding and Personality (SUNY Buffalo) philosophy of science STATE UNIVERSITY OF NEW YORK AT BUFFALO GRADUATE GROUP IN COGNITIVE SCIENCE JOHN HAUGELAND Department of Philosophy University of Pittsburgh UNDERSTANDING AND PERSONALITY Artificial Intelligence (AI) has inherited a conception of pure understanding from modern philosophy, especially Descartes and Kant. However, developments within AI, specifically with regard to knowledge representation, have partially undermined this conception. It will be argued that they have not gone far enough in this. In particular, ``impurities'' like ego and affects must be included as well. Thursday, April 23, 1987 4:00 P.M. Knox 4, Amherst Campus Co-sponsored by: Department of Computer Science and Colloquium in the History and Philosophy of Science Informal discussion at 8:00 P.M. at Stuart Shapiro's house, 112 Parkledge Drive, Snyder, NY. Call Bill Rapaport (Dept. of Computer Science, 636-3181), Gail Bruder (Dept. of Psychology, 636-3676), or Zeno Swijtink (Dept. of Philosophy, 636-2444) for further information.
------------------------------ Date: 10 Apr 87 13:05:44 EDT From: Patricia.Mackiewicz@isl1.ri.cmu.edu Subject: Seminar - An Integrated Framework for Factory Scheduling (CMU) AI SEMINAR TOPIC: Toward An Integrated Framework For Factory Scheduling SPEAKER: Steve Smith, CMU WHEN: Tuesday, April 14, 1987, 3:30 p.m. WHERE: Wean Hall 5409 ABSTRACT: In this talk we present work aimed at providing an integrated framework for coordinating factory production. An integrated framework is defined as one that merges predictive generation/expansion of the production schedule with reactive schedule management in response to the dynamics of factory operation. We describe OPIS, a knowledge-based scheduling system that advocates a common view of predictive and reactive scheduling as an opportunistic problem solving process. This view is realized by a system architecture that combines constraint propagation and consistency maintenance techniques with heuristics for dynamically focusing the scheduler according to characteristics of current solution constraints. A collection of scheduling methods, varying in the decomposition of the problem that is assumed and the types of constraints and objectives that are emphasized, are defined to provide strategic alternatives. We present experimental evidence of the effectiveness of this approach in generating schedules and give examples of its use in reactively revising them as the situation warrants. We then turn attention to the central assumption of an incrementally maintained schedule as the basis for factory floor decision-making and consider its computational implications. Current work directed toward improving the robustness of predictive schedules and hierarchically distributing the scheduling effort is described. ------------------------------ End of AIList Digest ******************** From in%@vtcs1 Tue Apr 14 15:27:16 1987 Date: Tue, 14 Apr 87 15:27:05 est From: vtcs1::in% To: ailist@stripe.sri.com Subject: AIList Digest V5 #98 Status: R AIList Digest Tuesday, 14 Apr 1987 Volume 5 : Issue 98 Today's Topics: Conference Session - 11th Annual Computer Science Conference, Conferences - Midwest AI and CogSci Society & Philosophy/Psychology Conference ---------------------------------------------------------------------- Date: Sat, 11 Apr 1987 01:47 CST From: Leff (Southern Methodist University) Subject: Conference Session - 11th Annual Computer Science Conference Eleventh Annual Computer Science Conference Texas Woman's University, Denton Texas, Thursday, April 23, 1987 9:00AM, Steve Krueger, Topic: The Texas Instrument AI/LISP Chip -- Its Functional Architecture 10:45AM Embedding Parallelism into an Expert System, L. Haerim 11:15 Topics in the Applications of Prolog, D. Scott Thorp 11:45 A Program to Learn and Play Bridge, S. Starmer, T. Nabors, t. Nute, J. R. Rinewalt Speech Recognition Perspective, D. H. Lin 1:30 Pattern Recognition for Analysis of Inexact Data 2:00 Analysies of Some Strategies for Playing Mastermind Kwok-bun Yue 3:00 Developing an Expert System for Process Planning G. N. 
Black, East Texas State University 4:00 PM, Using a Two Camera System to Computer 3-D Positons by Silvia Monroe ------------------------------ Date: Mon, 6 Apr 87 13:18:29 cdt From: Kris Hammond Subject: Conference - Midwest AI and CogSci Society The First Annual Meeting of The Midwest Artificial Intelligence and Cognitive Science Society The University of Chicago April 24th and 25th The Enrico Fermi Research Institute 5640 Ellis Ave - Room 480 We now have a schedule for the first meeting of Midwest Artificial Intelligence and Cognitive Science Society (MAICSS): Friday, April 24th. 7:00 Welcome to MAICSS 7:15 Keynote Address - Gerald DeJong: Machine Learning at UIUC 8:15 MAICSS reception/dinner Saturday, April 25th. 9:00 AI at Ohio - Ashok Goel 9:30 Ohio Student Talks 10:30 AI at Michigan - Steve Lytinen 11:00 Michigan Student Talks 11:20 The Organization of Natural Movement - Peter Greene: IIT 11:50 IIT Student Talks 12:30 Lunch 1:30 AI at Wisconsin 2:00 AI at Chicago - Kristian Hammond 2:30 Adaptive Feedback Testing System - Ming Rao: UI Circle 2:50 Break 3:20 UIUC Student Talks 4:20 Northwestern Student Talks 5:30 MAICSS Business meeting If you plan on attending, please get in touch with us now. We need to have an accurate head count so we can order food and print up the right number of proceedings. There is no registration fee but we would prefer that people don't just show up at the door without notice. So, if you have not been in touch with us yet, please call (312) 702-8070 and talk to Andrea. If you are planning on coming, DO THIS NOW. We still have space to put up out-of-town graduate students on Friday and Saturday night. But, here again, we need to know before hand who needs space. There is also a definite limit on how much space we have. For non-students, we have arranged for housing at the Hyde-Park Hilton. The number there is (312) 288-5800. They are holding a block of reduced rate rooms for the conference. If you have any other questions concerning the conference, call Kris Hammond at (312) 702-1571 or send mail to kris@gargoyle.uchicago.csnet - for CSnet mail or kris%gargoyle.uchicago.csnet-relay.arpa - for ARPA mail. Thanks and we'll see you there. ------------------------------ Date: 4 Apr 87 05:55:03 GMT From: princeton!mind!harnad@RUTGERS.EDU (Stevan Harnad) Subject: Conference - Philosophy/Psychology Conference Program of the 13th Annual Meeting of the Society for Philosophy and Psychology June 21 -23, University of California, San Diego For program information: William Bechtel (SPP Program Chairman), Philosophy Department, Georgia State University, Atlanta GA 30303-3083 phone: (404)-658-2277 bitnet address: psuvax1!phlpwb%GSUMVS1.BITNET For membership information: Patricia Kitcher, Philosophy Department, University of California-San Diego, La Jolla CA 92093 arpanet address: sdcsvax!ir205%sdcc6 -------- SUNDAY, JUNE 21, 1987 -------- 9:00 - 11:00am SYMPOSIUM: DEPRESSION, COGNITION, AND RATIONALITY Chair: Evalyn Segal, Psychology, San Diego State University Speakers: George Graham, Philosophy, University of Alabama at Birmingham Christopher Peterson, Psychology, University of Michigan Lynn Rehm, Psychology, University of Houston Commentator: Richard Garrett, Philosophy, Bentley College 1:00 - 3:15pm CONCURRENT CONTRIBUTED PAPERS SESSIONS I AND II SESSION I: Behavior and Belief Chair: James Pate, Psychology, Georgia State Unviersity Speaker: Ruth Garrett Millikan, Philosophy, University of Connecticut "What is Behavior? 
or Why Narrow Psychology/Ethology is Impossible" Commentator: John Biro, Philosophy, University of Oklahoma Speaker: David Martel Johnson, Philosophy, York University "'Brutes Believe Not': Why Non-Human Animals Have No Beliefs" Commentator: Carolyn Ristau, Psychology, Vassar SESSION II: Computational Theories of Mind Chair: Owen Flanagan, Philosophy, Wellesley Speaker: David Kirsch, Artificial Intelligence, MIT "The Concept of Computation in Connectionist Systems" Commentator: Brian Cantwell Smith, Computer Science, Xerox PARC Speaker: Joseph Levine, Philosophy, North Carolina State University "Demonstrative Thought" Commentator: La Verne Shelton, Educational Testing Service, Princeton 3:30-5:00pm INVITED LECTURE: LANGUAGES OF THE DEAF Chair: Adele Abrahamsen, Language Research Center, Georgia State Speaker: Howard Poizner, Salk Institute, San Diego "Brain Function for Language: Perspectives from Another Modality" 7:00-10:00pm SYMPOSIUM: ANALOGY AND LEARNING Chair: Paul Thagard, Cognitive Science, Princeton Speakers: Dedre Gentner, Psychology, University of Illinois Doug Medin, Psychology, University of Illinois Keith Holyoak, Psychology, University of California, Los Angeles Commentator: Eva Kittay, Philosophy, SUNY, Stony Brook ----- MONDAY, JUNE 22, 1987 -------- 9:00-11:30am SYMPOSIUM: CONNECTIONISM AND IMAGE SCHEMATIC STRUCTURES Chair: Patricia Churchland, Philosophy, University California, San Diego Speakers: David Rumelhart, Psychology, University of California, San Diego George Lakoff, Linguistics, University of California, Berkeley Mark Johnson, Philosophy, Southern Illinois University Terrence Sejnowski, Biophysics, Johns Hopkins University 12:30-2:45pm CONCURRENT CONTRIBUTED PAPERS SESSIONS III, IV, AND V Session III: Logic and Reasoning Chair: Ralph Kennedy, Philosophy, Wake Forest Speaker: David Sanford, Philosophy, Duke University "Circumstantial Validity" Commentator: John Rust, Psychology, London School of Education Speaker: Howard Margolis, Committee on Public Policy, University of Chicago "Habits of Mind" Commentator: Stuart Silvers, Philosophy, Tilburg University Session IV: Mentalistic Explanations Chair: Speaker: Joseph Thomas Tolliver, Philosophy, University of Maryland "Knowledge Without Truth" Commentator: Kent Bach, Philosophy, San Fransciso State University Speaker: Louise M. Antony, Philosophy, North Carolina State University "Anomalous Monism and the Problem of Explanatory Force" Commentator: Ken Presting, Philosophy, San Francisco State University Session V: Subjective Experience Chair: Hilary Kornblith, Philosophy, Vermont Speaker: James S. Kelly, Philosophy, Miami University "On Quining Qualia" Commentator: Henry Jacoby, Philosophy, East Carolina University Speaker: Richard J. Hall, Philosophy, Michigan State University "Is An Inverted Pain-Pleasure Spectrum Possible?" Commentator: 3:00-5:30pm SYMPOSIUM: CONCEPTUAL AND SEMANTIC CHANGE IN CHILDHOOD AND SCIENCE Chair: Speakers: Annette Karmiloff-Smith, MRC, Cognitive Development Unit Alison Gopnik, Psychology, University of Toronto Susan Carey, Psychology, Massachusetts Institute of Technology Philip Kitcher, Philosophy, University of California, San Diego 8:00-9:00pm PRESIDENTIAL ADDRESS Chair: Alvin Goldman, Philosophy, Arizona Speaker: Stevan Harnad, Behavioral and Brain Sciences "Uncomplemented Categories, or, What Is It Like To Be a Bachelor?" 
-------- TUESDAY, JUNE 23, 1987 --------

9:00-11:00am  SYMPOSIUM: SEMANTICS
  Chair: Richard Jeffrey, Philosophy, Princeton
  Speakers: Mark Johnston, Philosophy, Princeton
            Barbara Hall Partee, Linguistics, U. Massachusetts, Amherst
            Norbert Hornstein, Linguistics, University of Maryland
  Commentator: Stephen Schiffer, Philosophy, University of Southern California

11:15-12:30pm  INVITED LECTURE: Memory and Brain
  Chair:
  Speaker: Larry R. Squire, Psychiatry, University of California, San Diego
    "Memory and Brain: Neural Systems and Behavior"

1:30-3:45pm  CONCURRENT CONTRIBUTED PAPER SESSIONS VI AND VII

SESSION VI: CONCEPTS
  Chair: Bernard Kobes, Philosophy, Arizona State University
  Speaker: Kenneth R. Livingston & Janet Andrews, Psychology, Vassar College
    "Reflections on the Relationship Between Philosophy and Psychology in the
     Study of Concepts: Is there Madness in our Methods?"
  Commentator: Robert McCauley, Philosophy, Emory University
  Speaker: Andrew Woodfield, Philosophy, Bristol
    "A Two-Tiered Model of Concept Formation"
  Commentator:

SESSION VII: INTENTIONALITY
  Chair: Douglas G. Winblad, Philosophy, Georgia State University
  Speaker: Ron Amundson, Philosophy, University of Hawaii at Hilo
    "Doctor Dennett and Doctor Pangloss"
  Commentator: Justin Leiber, Philosophy, University of Houston
  Speaker: Robert Van Gulick, Philosophy, Syracuse
    "Consciousness, Intrinsic Intentionality, and Self-Understanding Machines"
  Commentator: Nick Georgalis, Philosophy, East Carolina University

4:00-5:30pm  INVITED LECTURE: CONSCIOUSNESS
  Chair:
  Speakers: Daniel Dennett, Philosophy, Tufts University
            Kathleen Akins, Philosophy, Tufts University

BEACH PARTY

--
Stevan Harnad
(609) - 921 7771
{bellcore, psuvax1, seismo, rutgers, packard} !princeton!mind!harnad
harnad%mind@princeton.csnet
harnad@princeton.ARPA
harnad@mind.Princeton.EDU

------------------------------

End of AIList Digest
********************

From in%@vtcs1 Tue Apr 14 15:27:37 1987
Date: Tue, 14 Apr 87 15:27:23 est
From: vtcs1::in%
To: ailist@stripe.sri.com
Subject: AIList Digest V5 #99
Status: R

AIList Digest            Tuesday, 14 Apr 1987      Volume 5 : Issue 99

Today's Topics:
  Queries - Statistical Expert Systems & Prolog for UNIX System V.2 &
    Cog Sci Conference & Cog. Psych. Grad. Schools,
  AI Tools - MS-DOS Expert System Tools,
  Application - AI in Network Protocols,
  Funding - Military Funding,
  Humor - Demon Error & Which Half is Right?,
  Inference - Clyde the Elephant

----------------------------------------------------------------------

Date: Mon 13 Apr 87 16:19:12-PST
From: Ken Laws
Subject: Statistical Expert Systems

I have received a letter from Dr. D.J. Hand, Institute of Psychiatry, De Crespigny Park, Denmark Hill, London, SE5 8AF. He (or she?) is trying to compile a list of researchers working in statistical expert systems, for use by researchers and conference organizers. If you would like to be listed, and to receive a copy, send your name, address, and a brief description of your work.

-- Ken

------------------------------

Date: 13 Apr 87 16:15:39 GMT
From: husc8!edwards@husc6.harvard.edu (Bill Edwards)
Subject: Good Prolog Interpreter/Compiler for UNIX System V.2

Wanted: Good Prolog Interpreter/Compiler for UNIX System V.2

Please respond by email -- thanks.

-- Bill Edwards

Bill Edwards                      edwards@harvard.harvard.edu (ARPA)
UNIX Systems Programmer/Analyst   ...!harvard!edwards (UUCP)
Harvard Science Center            edwards@harvunxu (BITNET)
1 Oxford Street                   hucsc::edwards (DECNET)
Cambridge, MA 02138

------------------------------

Date: Mon 13 Apr 87 16:53:40-EDT
From: John C. Akbari
Subject: cog sci conference

Anyone have an email address to inquire about attending the cognitive science conference in July (just after AAAI-87)?

john c akbari

ARPANET & Internet  akbari@CS.COLUMBIA.EDU
BITnet              akbari%CS.COLUMBIA.EDU@WISCVM.WISC.EDU
uucp & usenet       ...!seismo!columbia!cs!akbari
DECnet              akbari@cs
PaperNet            380 riverside drive, no. 7d
                    new york, new york 10025 usa
SoundNet            212.662.2476

------------------------------

Date: 10 Apr 87 17:19:04 GMT
From: seger@husc4.harvard.edu (carol seger)
Subject: Cog. Psych. Grad. School advice sought.

I am planning to apply to graduate schools in cognitive psychology / cognitive science in the fall. However, I will be teaching high school in Kenya for a year beginning in July, so I have to decide where I want to apply soon so I can visit places before I leave. I am seeking three sorts of advice:

  a. Other schools that might be worthwhile but that I have overlooked.
  b. Any information from current students in any of the programs I am considering.
  c. General advice on the applications process. As far as I am aware, none of the programs I am applying to require interviews. Is there any reason I cannot apply from overseas?

I am interested in high-level perception, natural reasoning, categorization, spatial cognition, cognitive development and cognitive neuropsychology. I am not interested in psycholinguistics (at least at the moment) or low-level perception. I prefer to concentrate on experimental psychology -- while I find computer modeling to be interesting, I don't want to do it myself.

I am currently a senior in the cognitive science option of the Psychology concentration at Harvard University. I wrote my senior thesis on intermodality relations in shape perception, and have worked on a naturalistic study of mental imagery.

So far, in order of preference, I am applying to Stanford, UC Berkeley, UC San Diego, and UCLA / University of Pennsylvania (tie, so far). I prefer to live in California, but, of course, I'll go to the best program I can get into.

If you have any advice, please mail it to me. Many thanks.

Carol Seger
carol@borax.lcs.mit.edu
seger@wjh12.harvard.edu

------------------------------

Date: 5 Apr 87 04:46:06 GMT
From: nbires!isis!csm9a!japplega@ucbvax.Berkeley.EDU (Joe Applegate)
Subject: Re: MS-DOS expert system tools?

> I'm looking for expert-systems tools that can be run on PC-class machines.
>
> So far I have looked at GURU, EXSYS, VP-Expert, and KDS. None of these
> systems comes close to my needs; most of them are question/answer menu-based
> tools best suited for very simple interactive diagnostic or recommendation
> ES's. Some of them allow access to external databases, but none of them
> (as far as I can tell) allow general user routines to be linked in.
>
> I have access to TI's PC-Plus and will be looking at it soon. I have read the
> advertising blurb on Level Five's Insight-2+, and it sounds very interesting.
> But then, the blurbs on some of the other tools sounded good, too.
>
> Has anyone used these systems, or any others that meet my needs? I would
> really appreciate it if you would contact me with any suggestions.

Most PC-based expert system shells are oriented towards database-type queries of their knowledge. Though it is possible to access both ports and the BIOS from TI's Personal Consultant Plus, I doubt if that or any shell will give the response needed for real-time processing. A more feasible method for this type of development is to do it from scratch in an acceptable language... most probably Lisp or Prolog, though C and Pascal can be used in such an environment and have been in the past!

At the risk of getting lynched I would recommend you take a look at Turbo Prolog... if your rule base is not dynamic, Turbo Prolog provides a powerful yet inexpensive development engine with graphic primitives and direct access to DOS and BIOS functions as well as the I/O ports of a PC.

Joe Applegate - Colorado School of Mines Computing Center
{seismo, hplabs}!hao!isis!csm9a!japplega
or SYSOP @ (303) 273-3989 300/1200/2400 8-N-1 Minds of Mines AI BBS
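To make the "do it from scratch" suggestion concrete, here is a minimal sketch of the sort of hand-rolled forward-chaining rule interpreter such a project would start from. It is shown in Python purely for compactness (the posting above suggests Lisp, Prolog, C, or Pascal for a PC); the rule names, facts, and function are invented for illustration and are not taken from any of the shells mentioned.

    # Minimal forward-chaining sketch (illustrative only; names are invented).
    # Each rule is (name, premises, conclusion): when every premise is a known
    # fact, the conclusion is asserted.
    RULES = [
        ("overheat", {"temp_high", "fan_off"},     "overheating"),
        ("shutdown", {"overheating", "load_high"}, "shutdown_required"),
    ]

    def forward_chain(facts, rules):
        """Fire rules whose premises are satisfied until no new facts appear."""
        facts = set(facts)
        changed = True
        while changed:
            changed = False
            for name, premises, conclusion in rules:
                if premises <= facts and conclusion not in facts:
                    facts.add(conclusion)
                    changed = True
        return facts

    print(forward_chain({"temp_high", "fan_off", "load_high"}, RULES))
    # -> the result includes "overheating" and "shutdown_required"

For a real-time monitoring application the static fact set would be replaced by readings polled from the I/O ports, which is exactly where the overhead of a menu-driven shell becomes the bottleneck described above.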
------------------------------

Date: 7 Apr 87 13:21:24 GMT
From: sundc!cos!duc@seismo.CSS.GOV (Duc Kim Nguyen)
Subject: Re: AI in Network Protocols.

I think this is a very interesting topic for discussion. Typically a protocol specification contains some BNF notation for the syntactical definition (e.g., X.400, etc.), while the binding (or usage/meaning) of the components' values is buried in the English text of the spec. The effect is the lack of a more 'complete' and/or formal notation that captures both the syntax and the semantics, which would allow a testing system for the protocol to be automated and would allow one to determine a set of test cases that is 'partially' (or even wholly) complete for testing a set of functionalities of the protocol (so that a result-analysis system could also be automated). Maybe a knowledge-based system will solve this; I would prefer not to fall back on a database-driven approach unless nothing else works.

Duc Kim Nguyen
Corporation for Open Systems

------------------------------

Date: 5 Apr 87 06:28:07 GMT
From: ubc-vision!calgary!vuwcomp!steve@seismo.CSS.GOV (Steve Cassidy)
Reply-to: steve@vuwcomp.UUCP (Steve Cassidy)
Subject: Military Funding

In article <[A.ISI.EDU]31-Mar-87.15:25:11.DAVSMITH> DAVSMITH@A.ISI.EDU writes:
> Without the military applications, who in the commercial sector
> would attempt to put together cooperating expert systems
> in real-time? [ One could broaden the issue and ask
> "Who in their right mind would..?"]

Here we assume that the only *possible* applications of real-time cooperating ES are military ones. What is the major difference between a system which sits in a fighter plane monitoring the pilot's actions and one which sits in some complex manufacturing plant monitoring the processes there?

Too often military research is justified as the only way new ideas can develop; the truth is that military programmes are simply the only research programmes given sufficient funds to develop new ideas. If research groups had the same level of funding available for civil projects, then they would be able to develop real-time cooperative expert systems in domains which may actually be *useful* to mankind.

Steve

ACSnet: steve@vuwcomp.nz
UUCP:   {ubc-vision,alberta}!calgary!vuwcomp!steve

------------------------------

Date: Mon, 13 Apr 87 01:33:15 pst
From: Eugene Miya N.
Subject: Feigenbaum Comment about SDI

Last week the Computer Literacy bookstore completed its kickoff series of opening lectures; some very noted computer scientists spoke over the course of two weeks. I think comments by two of the speakers bear on recent discussions on both Arms-d and AIList. In particular, Dr. Ed Feigenbaum mentioned that the SDIO has dropped funding of AI from its budget. His implication appeared to concern software engineering rather than battle management. I don't know whether they have or not, but I personally do not get the impression that they have, especially for battle management.
From the Rock of Ages Home for Retired Hackers:

--eugene miya
  NASA Ames Research Center
  eugene@ames-aurora.ARPA
  "You trust the `reply' command with all those different mailers out there?"
  "Send mail, avoid follow-ups. If enough, I'll summarize."
  {hplabs,hao,ihnp4,decwrl,allegra,tektronix,menlo70}!ames!aurora!eugene

------------------------------

Date: 6 Apr 87 17:48:27 GMT
From: "Col. G. L. Sicherman"
Subject: Re: AIList Digest V5 #92

In article , MINSKY@OZ.AI.MIT.EDU writes:
> The term "demon" comes from Oliver Selfridge, via the paper,
> "Pandemonium: A Paradigm for Learning", published in Symposium of the
> mechanization of Thought Processes, November 1858.

Then we can certainly concede priority to Selfridge. I wonder how much influence he had on the work of Babbage? (Ken, you should have caught this!)

--
Col. G. L. Sicherman
UU: ...{rocksvax|decvax}!sunybcs!colonel
CS: colonel@buffalo-cs
BI: colonel@sunybcs, csdsiche@ubvms

------------------------------

Date: 13-Apr-1987 0938
From: kevin%bizet.DEC@decwrl.DEC.COM (Now, if it sounds good, you don't worry what it is: you just go and enjoy it.)
Subject: Which half is right?

> [I have faith that only half of what I know is wrong. I'll
> let you know when I find out which half. -- KIL]

Merely determining the halves, let alone figuring out which half is right, would be an astonishing accomplishment!

Kevin LaRue

------------------------------

Date: 5 Apr 87 20:36:23 GMT
From: "Col. G. L. Sicherman"
Subject: Re: Clyde the elephant

The problem of Clyde the elephant brings up one of the biggest controversies in statistics, one which is starting to spill over into A.I. To recapitulate:

  1. 95% of elephants are grey;
  2. 40% of royal elephants are yellow;
  3. Clyde is a royal elephant.

But we know nothing about what percentage of elephants are royal. The distribution could look like this:

          | royal   common
    ------+---------------
    grey  |    15      175
    yellow|    10        0

or like this:

          | royal   common
    ------+---------------
    grey  |     0    95000
    yellow|     2        0
    red   |     3     4995

Can we assign a valid probability to "Clyde is grey" without knowing the likelihood of either distribution (or any other)? One school of thought says no -- the best we can do is follow Boole's suggestion of computing upper and lower bounds for the probability. Other schools, notably that led by A. P. Dempster, say yes. And this topic is surely philosophical enough to be discussed here in mod.ai!

--
Col. G. L. Sicherman
UU: ...{rocksvax|decvax}!sunybcs!colonel
CS: colonel@buffalo-cs
BI: colonel@sunybcs, csdsiche@ubvms

[SRI is a hotbed of Dempster-Shaferism, so I'll take a chance on clarifying this. Tom Garvey or other readers can correct me if I'm off base. The Dempster-Shafer (D-S) approach is to track upper and lower bounds for probability. This is controversial in two ways: Dempster's rule for combining contradictory evidence, and the power/appropriateness/usefulness of the interval approach in general. (Conflicting evidence really doesn't enter into the Clyde problem.) It is the Bayesians who generally assign probabilities, although they don't do it as blindly as their "loyal opposition" would imply -- while underlying uniform or even Gaussian distributions are typically assumed for predictive power under random sampling, Bayesians might choose a "pessimal" a priori distribution to model tricky situations such as this one. They can also do symbolic Bayesian analysis with free parameters in order to derive formulas that are valid for any state of the world.
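To see how far apart legitimate answers can be, here is a small illustrative sketch (in Python; the variable names are invented) that evaluates P(Clyde is grey | Clyde is royal) under the two hypothetical populations in the posting above. Both populations satisfy the stated premises, yet the conditional probability ranges from 0 to 0.6; and since grey and yellow are exclusive colours and 40% of royal elephants are yellow, 0.6 is the largest value any consistent population can give, so [0, 0.6] is the kind of interval Boole-style bounds would report.

    # Illustrative sketch only: compare P(grey | royal) across the two
    # hypothetical elephant populations given in the posting above.
    def p_grey_given_royal(table):
        """table maps (colour, kind) -> count; return P(grey | royal)."""
        royal = sum(n for (colour, kind), n in table.items() if kind == "royal")
        return table.get(("grey", "royal"), 0) / royal

    world_1 = {("grey", "royal"): 15, ("grey", "common"): 175,
               ("yellow", "royal"): 10, ("yellow", "common"): 0}

    world_2 = {("grey", "royal"): 0, ("grey", "common"): 95000,
               ("yellow", "royal"): 2, ("yellow", "common"): 0,
               ("red", "royal"): 3, ("red", "common"): 4995}

    estimates = [p_grey_given_royal(w) for w in (world_1, world_2)]
    print("P(grey | royal) in each world:", estimates)           # [0.6, 0.0]
    print("interval over these worlds:", (min(estimates), max(estimates)))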
Fuzzy logicians use a very similar theory, but are likely to assume that the underlying distributions are typically implied by the manner in which the problem is stated. A fourth group, perhaps led by Tversky and Kahneman, are more interested in the analogy-based reasoning of humans than in optimal decision theory. And others, e.g. Cohen and various expert systems researchers, are willing to consider any type of estimate as long as the justification is given (for use in further reasoning).

Intervals are nice because they make no unwarranted statements. (Disclaimer: the endpoints may themselves be subject to sampling errors. Logic-based methods, including D-S, can be very sensitive to errors in the initial evidence -- as can methods based on tightly constrained a priori distributions.) Upper and lower probabilities are also more informative than single point estimates, and can be interpreted as recording what is unknown as well as what is known.

In cases where a parametric distribution is appropriate, however, the parameters of that distribution (or optimal estimates thereof) are the most powerful estimates of the state of the world. Intervals are not convenient for representing true Gaussian distributions, for instance, since the intervals must be infinite in extent. (One might want to use intervals for the mean and standard deviation, though.) I tend to believe that all sampled data is Gaussian unless there is evidence to the contrary (either a priori or from examination of the data), partly because that leads to point estimates and distributions thereof that are useful. I would not attempt to impose this assumption on Clyde, however, and there are many situations calling for non-Bayesian reasoning. -- KIL]

------------------------------

End of AIList Digest
********************