From comsat@vpics1 Mon Jun 24 21:59:25 1985
Date: Mon, 24 Jun 85 21:59:18 edt
From: comsat@vpics1.VPI
To: fox@opus   (FRANCE,RDJ,JOSLIN,ROACH,FOX)
Subject: From: AIList Moderator Kenneth Laws <AIList-REQUEST@SRI-AI>
Status: R

Received: from sri-ai.arpa by csnet-relay.arpa id a001158; 24 Jun 85 18:04 EDT
Date: Mon 24 Jun 1985 09:14-PDT
Reply-to: AIList@SRI-AI
US-Mail: SRI Int., 333 Ravenswood Ave., Menlo Park, CA  94025
Phone: (415) 859-6467
Subject: AIList Digest   V3 #82
To: AIList@SRI-AI
Received: from rand-relay by vpi; Mon, 24 Jun 85 22:44 EST


AIList Digest            Monday, 24 Jun 1985       Volume 3 : Issue 82

Today's Topics:
  Queries - VAX Lisp & PC Lisps & McDonnell Douglas NL Breakthrough,
  Games - Optimal Scrabble,
  Automata - Predation/Cooperation,
  Psychology - Common Sense,
  Analogy - Bibliography,
  Seminar - Evaluating Expert Forecasts (NASA)

----------------------------------------------------------------------

Date: Mon, 24 Jun 85 07:38:35 EDT
From: cugini@NBS-VMS
Subject: VAX Lisp

Just looking for a little consumer information here: does anyone have
any experience with Digital's VAX LISP?  DEC advertises it as a
full-fledged implementation of Common Lisp.  Any remarks on price,
performance, quality, etc. are appreciated.

John Cugini <Cugini@NBS-VMS>
Institute for Computer Sciences and Technology
National Bureau of Standards
Bldg 225 Room A-265
Gaithersburg, MD 20899
phone: (301) 921-2431

------------------------------

Date: Sun 23 Jun 85 15:09:12-EDT
From: Jonathan Delatizky <DELATZ%MIT-OZ@MIT-MC.ARPA>
Subject: PC Lisps

           [Forwarded from the MIT bboard by SASW@MIT-MC.]

Can some of you out there who have used Lisp implementations on IBM PC
type machines give me some recommendations as to the best PC Lisp?  I
plan to run it on a PC/XT and a PC/AT if possible.  I would also welcome
pointers to any expert-system shells, real or toy-like, that run on the
same machines.

...jon

------------------------------

Date: 22 Jun 1985 13:20-EST
From: George Cross <cross%lsu.csnet@csnet-relay.arpa>
Subject: McDonnell Douglas NL Breakthrough

         The following is the text of a full page color ad on page 49
         in the June 24, 1985 New Yorker.  It has also been run in the
         Wall Street Journal.  Does anyone know what the breakthrough
         is?  This was mentioned on the ailist some time ago but I
         didn't notice a response.  There is a photo of a hand holding
         the chin of smiling boy.

BREAKTHROUGH: A COMPUTER THAT UNDERSTANDS YOU LIKE YOUR MOTHER

Having to learn letter-perfect software languages can be frustrating to the
average person trying to tap the power of a computer.

But practical thinkers at our McDonnell Douglas Computer Systems Company
have created the first computer that accepts you as you are - human.

They emulated the two halves of the brain with two-level software: One level
with a dictionary of facts and a second level to interpret them.  The
resulting Natural Language processor understands everyday conversational
English.  So it knows what you mean, no matter how you express yourself.  It
also learns your idiosyncrasies, forgives your errors, and tells you how to
find out what you're looking for.

Now, virtually anyone who can read and write can use a computer.

We're creating breakthroughs not only in Artificial Intelligence but also in
health care, space manufacturing and aircraft.

We're McDonnell Douglas.

How can I learn more?
Write
        P.O. Box 19501
        Irvine, CA 92713

------------------------------

Date: 22 Jun 1985 13:07-EDT
From: Jon.Webb@CMU-CS-IUS2.ARPA
Subject: Optimal Scrabble

Anyone interested in computer Scrabble should be aware that Guy
Jacobson and Andrew Appel (two of the people who did Rog-O-Matic)
have written a program which in some sense solves the problem.  Using a
clever data structure, their program makes its plays in a few seconds
and always finds the highest-scoring play available.  Its dictionary is
the official Scrabble dictionary.  The program is not completely optimal
because it doesn't take into account how placing its words near premium
squares such as triple-word scores may help the other player, but in
every other sense it makes the best play.  I suppose some simple
strategic techniques could be added using a penalty function, but as the
program almost always wins anyway, this hasn't been done.  It regularly
gets bingos (all seven letters used), makes clever plays that create
three or more words, and so on.  The version they have now runs on
Vax/Unix.  There was some work to port it to the (Fat) Macintosh, but
that is unfinished, mainly for lack of interest.
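[The "clever data structure" is not identified above; published accounts of this program describe a DAWG (directed acyclic word graph).  As an illustration only, here is a minimal Python sketch of a simpler prefix trie, which supports the two queries a move generator needs: "is this string a word?" and "can this string still be extended into a word?"  The class and the toy word list are hypothetical, not taken from the program itself.]

```python
class Trie:
    """A prefix trie over lowercase words (a simplified stand-in for
    the DAWG reportedly used by the Jacobson/Appel program)."""

    def __init__(self):
        self.children = {}     # letter -> child Trie node
        self.terminal = False  # True if a word ends at this node

    def insert(self, word):
        node = self
        for ch in word:
            node = node.children.setdefault(ch, Trie())
        node.terminal = True

    def _walk(self, s):
        node = self
        for ch in s:
            node = node.children.get(ch)
            if node is None:
                return None
        return node

    def is_word(self, s):
        """True iff s is in the dictionary."""
        node = self._walk(s)
        return node is not None and node.terminal

    def is_prefix(self, s):
        """True iff s can still be extended into a dictionary word --
        this is what lets a move generator prune dead-end placements."""
        return self._walk(s) is not None

# toy dictionary standing in for the official Scrabble dictionary
trie = Trie()
for w in ("cat", "cats", "do", "dog"):
    trie.insert(w)
```

A move generator extends partial words tile by tile and abandons any branch for which is_prefix fails, which is how plays can take only seconds even over a full dictionary.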

Jon

------------------------------

Date: Fri, 21 Jun 85 17:17:58 EDT
From: David_West%UMich-MTS.Mailnet@MIT-MULTICS.ARPA
Subject: Predation/Cooperation (AIL v3 #78)

Re: the enquiry of sdmartin@bbng about learning cooperation in predation:
for an extensive investigation of a minimal-domain model (the prisoner's
dilemma), see _The Evolution of Cooperation_ (NY: Basic Books, 1984;
LC 83-45255, ISBN 0-465-02122-0) by Robert Axelrod (of the U of Mich).
He is in the Institute of Public Policy Studies, but one of his more
interesting methods was the use of the genetic algorithms of John
Holland (also of the U of Mich) to breed automata with improved
strategies for playing the prisoner's dilemma.  A one-sentence summary
of his results is that cooperation can displace non-cooperation if
individuals remember each other's behavior and have a high enough
probability of meeting again.  An intermediate-length summary can be
found in Science _211_ (27 Mar 81), 1390-1396.
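[Axelrod's tournament setting can be sketched in a few lines of Python.  The payoff values and round count below are illustrative conventions, not Axelrod's exact parameters; the point is that tit-for-tat scores well against itself while limiting its losses to a defector, so rememberers prosper when repeat meetings are likely.]

```python
# Iterated prisoner's dilemma: "C" = cooperate, "D" = defect.
# Payoffs are (row player, column player); the values are illustrative.
PAYOFF = {("C", "C"): (3, 3), ("C", "D"): (0, 5),
          ("D", "C"): (5, 0), ("D", "D"): (1, 1)}

def tit_for_tat(opp_history):
    """Cooperate first, then echo the opponent's last move."""
    return "C" if not opp_history else opp_history[-1]

def always_defect(opp_history):
    return "D"

def play(strat1, strat2, rounds):
    """Play `rounds` iterations; return the two total scores."""
    h1, h2, s1, s2 = [], [], 0, 0
    for _ in range(rounds):
        m1, m2 = strat1(h2), strat2(h1)  # each sees the other's record
        p1, p2 = PAYOFF[(m1, m2)]
        s1, s2 = s1 + p1, s2 + p2
        h1.append(m1)
        h2.append(m2)
    return s1, s2
```

Over ten rounds, two tit-for-tat players earn 30 points each, while tit-for-tat against a pure defector loses only the first round (9 points to the defector's 14); two defectors get just 10 each.  Cooperation pays once encounters repeat.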

------------------------------

Date: Fri 21 Jun 85 19:23:03-PDT
From: Calton Pu <CALTON@WASHINGTON.ARPA>
Subject: definition of common sense

I had a discussion with a friend on this exact topic just a few weeks
ago.  My conclusions can be phrased as an elaboration of V. Pratt's
two criteria.

   1.   common knowledge basis (all facts depended on must be
        common knowledge)

I think the (abstract) common knowledge basis can be more concretely
described as "cultural background".  Your Formico's Pizza example
shows clearly that anybody not familiar with San Francisco will not
have the "common sense" to go there.  The term "cultural background"
admits many levels of interpretation (national, provincial, etc.),
so most of the REALLY COMMON knowledge base will be encompassed.

   2.   low computational complexity (easy to check the conclusion).

I think the key here is not the checking (NP), but the finding (P) of
the solution.  So here I differ from Vaughan, in that I believe common
sense is something "obvious" to a lot of people, by their own
reasoning power.

There are two factors involved: the first is the amount of reasoning
power; the second is the amount of deductive processing involved.  On
the first factor, unfortunately the usual words for people with
adequate reasoning power, such as "sensible", "reasonable", and
"objective", also carry the connotation of being "emotionless".  Let's
leave out the emotional aspects and use the term "reasonable" to
include everybody who is able to apply elementary logic to normal
situations.  On the second factor, typical words to picture easy
deductive efforts are "obvious", "clear", and "evident".

So my definition of common sense is: that which is obvious to a
reasonable person with an adequate cultural background.

I should point out that the three parameters of common sense, cultural
background, reasoning power, and deductive effort, vary from place to
place and from person to person.  If we agreed more on each other's
common sense, it might be easier to negotiate peace.

------------------------------

Date: Monday, 24 Jun 85 01:38:08 EDT
From: shrager (jeff shrager) @ cmu-psy-a
Subject: Analogy Bibliography

[Someone asked for an analogy bibliography a while back.  This was compiled
about two years (maybe more) ago, so it's partial and somewhat out of date,
but it might serve as a starter for people interested in the topic.  I've
added a couple of things just now in looking it over.  The focus is primarily
psychological, but readers will recognize some of the principal AI work as
well.  I've got annotations for quite a few of these, but the remarks are
quite long and detailed, so I won't burden AIList with them. -- Jeff]

                                    ANALOGY
                           (A partial bibliography)

                           Compiled by Jeff Shrager
                                CMU Psychology
                                 24 June 1985

               (Send recommendations to Shrager@CMU-PSY-A.)

Bobrow,  D.  G.  &  Winograd,  T.    (1977).  An  Overview  of KRL: A Knowledge
     Representation Language.  Cognitive Science, 1, 3-46.

Bott, R.A.  A study of complex learning: Theories and Methodologies.  Univ.  of
     Calif.  at  San  Diego, Center for Human Information Processing report No.
     7901.

Brown, D.  (1977). Use of Analogy to Achieve New Experience.  Technical  Report
     403, MIT AI Laboratory.

Burstein,  M.  H.    (June,  1983). Concept Formation by Incremental Analogical
     Reasoning  and  Debugging.    Proceedings  of  the  International  Machine
     Learning Workshop.  pp. 19-25.

Carbonell,  J.  G.  (August, 1981). A computational model of analogical problem
     solving.  Proceedings of the Seventh  International  Joint  Conference  on
     Artificial Intelligence, Vancouver.  pp. 147-152.

Carbonell,  J.G.    (1983).  Learning  by Analogy: Formulating and Generalizing
     Plans from Past Experience.    In  Michalski,  R.S.,  Carbonell,  J.G.,  &
     Mitchell,  T.M.  (Ed.),  Machine  Learning,  an  Artificial  Intelligence
     Approach  Palo Alto:  Tioga Press.

Carnap, R.  (1963).  Variety,  analogy  and  periodicity  in  inductive  logic.
     Philosophy of Science, 30, 222-227.

Darden,   L.     (June,  1983).  Reasoning  by  Analogy  in  Scientific  Theory
     Construction.  Proceedings of the International Machine Learning Workshop.
     pp. 32-40.

de Kleer, J. & Brown, J.S.  Foundations of Envisioning.  Xerox PARC report.

Douglas,  S. A., & Moran, T. P.  (August, 1983). Learning operator semantics by
     analogy.    Proceedings  of  the   National   Conference   on   Artificial
     Intelligence.

Douglas,  S.  A.,  &  Moran,  T.  P.    (December, 1983b). Learning text editor
     semantics by analogy.  Proceedings of  the  Second  Annual  Conference  on
     Computer Human Interaction.  pp. 207-211.

Duncker, K.  (1945). On Problem Solving.  Psychological Monographs, 58(5).

Evans,  T.  G.    (1968).  A  program  for the solution of a class of geometric
     analogy intelligence test  questions.    In  Minsky,  M.  (Ed.),  Semantic
     Information Processing  Cambridge, Mass.:  MIT Press.  pp. 271-353.

Gentner,  D.    (July,  1980).  The  Structure of Analogical Models in Science.
     Report 4451, Bolt Beranek and Newman.

Gentner, D.  (1981).  Generative Analogies as Mental Models.    Proceedings  of
     the  3rd  National Cognitive Science Conference.  pp. 97-100.

Gentner, D.  (1982). Are Scientific Analogies Metaphors?  In D. S. Miall (Ed.),
     Metaphor:  Problems and Perspectives  New York:  Harvester Press Ltd.  pp.
     106-132.

Gentner, D., & Gentner, D. R.  (1983). Flowing Waters or Teeming Crowds: Mental
     Models  of  Electricity.    In  Gentner, D. & Stevens, A. L. (Ed.), Mental
     Models  Hillsdale, NJ:  Lawrence Erlbaum Associates.  pp. 99-129.

Gick, M. L. & Holyoak, K. J.  (1980).  Analogical  Problem  Solving.  Cognitive
     Psychology, 12, 306-355.

Gick, M. L. & Holyoak, K. J.  (1983). Schema Induction and Analogical Transfer.
     Cognitive Psychology, 15, 1-38.

Halasz, F. & Moran, T. P.  (1982).  Analogy Considered Harmful.  Proceedings of
     the Conference on Human Factors in Computer Systems, New York.

Hesse,  Mary.    (1955).    Science  and  the  Human  Imagination.    New York:
     Philosophical Library.

Hesse, Mary.  (1974).  The Structure of Scientific Inference.   Berkeley: Univ.
     of Calif. Press.

Kling,  R.  E.    (1971).  A  Paradigm  for  Reasoning  by Analogy.  Artificial
     Intelligence, 2, 147-178.

Lenat, D.B. & Greiner, R.D.  (1980).  RLL: A representation language  language.
     Proc. of the first annual meeting.  Stanford.

McDermott,  J.    (December,  1978).  ANA:  An  assimilating and accommodating
     production  system.    Technical  Report  CMU-CS-78-156,   Carnegie-Mellon
     University.

McDermott,  J.    (1979).   Learning to use analogies.  Sixth International Joint
     Conference on Artificial Intelligence.

Medin, D. L. and Schaffer, M. M.   (1978).  Context  Theory  of  Classification
     Learning.  Psychological Review, 85(3), 207-238.

Minsky,  M.   (1975). A Framework for Representing Knowledge.  In Winston, P.H.
     (Ed.), The Psychology of Computer Vision  New York:  McGraw Hill.

Minsky, M.  (July, 1982). Learning Meaning.  Unpublished MIT AI Lab  technical
     report.

Moore, J. & Newell, A.  (1974). How can MERLIN Understand?  In L.W.Gregg (Ed.),
     Knowledge and Cognition  Potomac, Md.:  Erlbaum Associates.

Ortony, A.  (1979). Beyond Literal Similarity.  Psychological Review, 86(3),
     161-179.

Pirolli, P. & Anderson, J.R. (1985) The role of Learning from Examples in the
     Acquisition of Recursive Programming Skills.  Canadian Journal of
     Psychology. Vol. 39, no. 4; pgs. 240-272.

Polya, G.  (1945).  How to solve it.   Princeton, N.J.: Princeton U. Press.

Quine, W. V. O.  (1960).  Word and Object.   Cambridge: MIT Press.

Reed, S. K., Ernst, G. W., & Banerji, R.    (1974).  The  Role  of  Analogy  in
     Transfer  Between  Similar  Problem  States.    Cognitive  Psychology,  6,
     436-450.

Rosch, E., Mervis, C. B., Gray, W. D., Johnson, D.  M.,  &  Boyes-Braem,  P.
     (1976). Basic Objects in Natural Categories.  Cognitive Psychology, 8,
     382-439.

Ross,  B.   (1982). Remindings and Their Effects in Learning a Cognitive Skill.
     PhD thesis, Stanford.

Rumelhart,  D.E.,  &  Norman,  D.A.      (?DATE?).   Accretion,   tuning,   and
     restructuring:  Three  modes  of  learning.    In R.Klatsky and J.W.Cotton
     (Eds.),  Semantic  Factors  in  Cognition    Hillsdale,  N.J.:     Erlbaum
     Associates.

Rumelhart,  D.E. & Norman, D.A.  (1981). Analogical Processes in Learning.  In
     J.R. Anderson (Ed.), Cognitive Skills and Their  Acquisition    Hillsdale,
     N.J.:  Lawrence Erlbaum Associates.  pp. 335-360.

Schustack, M., & Anderson, J. R.  (1979). Effects of analogy to prior knowledge
     on memory for new information.  Journal  of  Verbal  Learning  and  Verbal
     Behavior, 18, 565-583.

Sembugamoorthy,  V.    (August,  1981). Analogy-based acquisition of utterances
     relating to temporal aspects.  Proceedings of the 7th International  Joint
     Conference on Artificial Intelligence.  pp. 106-108.

Shrager,  J.  &  Klahr,  D.    (December,  1983).  A  Model  of Learning in the
     Instructionless Environment.   Proceedings  of  the  Conference  on  Human
     Factors in Computing Systems.  pp. 226-229.

Shrager,  J.  &  Klahr, D.  Instructionless Learning: Hypothesis Generation and
     Experimental Performance.  In preparation.

Sternberg, R.  (1977).  Intelligence, information  processing,  and  analogical
     reasoning:  The  componential  analysis  of  human abilities.   Hillsdale,
     N.J.: Lawrence Erlbaum Associates.

VanLehn, K., & Brown, J. S.    (1978).  Planning  nets:  A  representation  for
     formalizing  analogies and semantic models of procedural skills.  In Snow,
     R. E., Federico, P. A. and Montague, W. E. (Ed.), Aptitude  Learning  and
     Instruction:  Cognitive Process Analyses  Hillsdale, NJ:  Lawrence Erlbaum
     Associates.

Weiner, E. J.  A Computational Approach to Metaphor  Comprehension.    In  the
     Penn Review of Linguistics.

Winston,   P.  H.    (December,  1980).  Learning  and  Reasoning  by  Analogy.
     Communications of the ACM, 23(12), 689-703.

Winston, P. H.  Learning and Reasoning by Analogy: The details.   MIT  AI  Memo
     number 520.

------------------------------

Date: Fri, 21 Jun 85 11:42:26 pdt
From: gabor!amyjo@RIACS.ARPA (Amy Jo Bilson)
Subject: Seminar - Evaluating Expert Forecasts (NASA)

                           NASA

            PERCEPTION AND COGNITION SEMINARS

    Who:        Keith Levi
    From:       University of Michigan
    When:       10 am, Tuesday, June 25, 1985
    Where:      Room 177, Building 239, NASA Ames Research Center
    What:       Evaluating Expert Forecasts

    Abstract:   Probabilistic forecasts, often generated by an expert,
                are critical to many decision aids and expert systems.
                The quality of such inputs has usually been evaluated in
                terms of logical consistency.  However, in terms of
                real-world implications, the external correspondence of
                probabilistic forecasts is usually much more important
                than internal consistency.  I will discuss recently
                developed procedures for evaluating external correspondence
                and present research on the topic.


    Non-citizens (except permanent residents) must have prior approval from
    the Director's Office one week in advance.  Permanent residents must show
    an Alien Registration Card at the time of registration.

    To request approval or obtain further information, call 415-694-6584.

------------------------------

End of AIList Digest
********************

From comsat@vpics1 Thu Jun 27 03:22:01 1985
Date: Thu, 27 Jun 85 03:21:57 edt
From: comsat@vpics1.VPI
To: fox@opus   (FRANCE,RDJ,JOSLIN,ROACH,FOX)
Subject: From: AIList Moderator Kenneth Laws <AIList-REQUEST@SRI-AI>
Status: R

Received: from sri-ai.arpa by csnet-relay.arpa id a003734; 26 Jun 85 2:06 EDT
Date: Tue 25 Jun 1985 22:22-PDT
Reply-to: AIList@SRI-AI
US-Mail: SRI Int., 333 Ravenswood Ave., Menlo Park, CA  94025
Phone: (415) 859-6467
Subject: AIList Digest   V3 #83
To: AIList@SRI-AI
Received: from rand-relay by vpi; Thu, 27 Jun 85 04:43 EST


AIList Digest           Wednesday, 26 Jun 1985     Volume 3 : Issue 83

Today's Topics:
  Queries - Lisps for VAX,
  Book - Logic Programming Text,
  Seminars - A Situational Theory of Analogy (CSLI) &
    Implementing Dempster's Rule (SU),
  Conference - 2nd ACM N.E. Regional Conference

----------------------------------------------------------------------

Date: Tue, 25 Jun 85 06:51:42 EDT
From: cugini@NBS-VMS
Subject: Lisps for VAX

Does anyone have recommendations for incarnations of Lisp to run
under VAX/VMS, especially ones with features for object-oriented
programming?  Is there something called XLISP which fits this
description, and if so, where does it live?  Thanks for any help.

John Cugini <Cugini@NBS-VMS>
Institute for Computer Sciences and Technology
National Bureau of Standards
Bldg 225 Room A-265
Gaithersburg, MD 20899
phone: (301) 921-2431

------------------------------

Date: Tue, 25 Jun 85 09:02:59 mdt
From: cib%f@LANL.ARPA (C.I. Browne)
Subject: Common Lisp on VAX/UNIX (Query)


We would be most grateful for pointers to a source of Common Lisp
for a VAX 11/780 running under UNIX 4.2bsd.

Thank you.

cib
cib@lanl
cib@lanl.arpa

------------------------------

Date: 22 Jun 85  1842 PDT
From: Yoni Malachi <YM@SU-AI.ARPA>
Subject: Logic Programming Text

  [Excerpted from the Prolog Digest by Laws@SRI-AI.  The original
  contained a lengthy abstract for each section of the book; to get a
  copy, FTP file <ailist>logicprog.txt on SRI-AI, or write to me at
  AIList-Request@SRI-AI.ARPA.  -- KIL]


      LOGIC PROGRAMMING: RELATIONS, FUNCTIONS, AND EQUATIONS

                           Doug DeGroot
                          Gary Lindstrom
                             Editors

                       Prentice-Hall, Inc.
                  Publication date:  Summer 1985

                          June 14, 1985


1. Concept

  This book addresses the topical and rapidly developing
areas of logic, functional, and equational programming, with
special emphasis on their relationships and prospects for
fruitful amalgamation.  A distinguished set of researchers
have contributed fourteen articles addressing this field
from a wide variety of perspectives.  The book will be
approximately 500 pages, published in hard cover form, with
foreword by the editors and combined index.

2. Table of Contents

2.1. Setting the Stage

 - Uday Reddy:  On the Relationship between Logic and
   Functional Languages (34 pp.).

 - J. Darlington, A.J. Field, and H. Pull: The Unification
   of Functional and Logic Languages (34 pp.).

2.2. Unification and Functional Programming

 - Harvey Abramson: A Prological Definition of HASL, a
    Purely Functional Language with Unification Based
    Conditional Binding Expressions (57 pp.).

 - M. Sato and T.  Sakurai:  QUTE:  a Functional Language
   Based on Unification (24 pp.).

 - P.A. Subrahmanyam and J.-H.  You:  FUNLOG:  a
   Computational Model Integrating Logic Programming and
   Functional Programming (42 pp.).


2.3. Symmetric Combinations

 - R. Barbuti, M. Bellia, G. Levi, and M.  Martelli:
   LEAF:  a Language which Integrates Logic, Equations and
   Functions (33 pp.).

 - Shimon Cohen: The APPLOG Language (38 pp.).


2.4. Programming with Equality

 - Wm. Kornfeld: Equality for Prolog (15 pp.).

 - Joseph Goguen and Jose Meseguer: EQLOG:  Equality,
   Types, and Generic Modules for Logic Programming (69 pp).

 - Y.  Malachi, Z.  Manna and R. Waldinger: TABLOG: a New
   Approach to Logic Programming (30 pp.).



2.5. Augmented Unification

 - Robert G.  Bandes (deceased):  Constraining-Unification
   and the Programming Language Unicorn (14 pp.).

 - Ken Kahn: Uniform -- A Language Based upon Unification
   which Unifies (much of) Lisp, Prolog, and Act 1 (28 pp.).



2.6. Semantic Foundations

 - Joxan Jaffar, Jean-Louis Lassez and Michael J.  Maher:
   A Logic Programming Language Scheme (27 pp.).

 - Gert Smolka: Fresh: A Higher-Order Language with
   Unification and Multiple Results (56 pp.).

------------------------------

Date: Mon 24 Jun 85 16:03:14-PDT
From: Emma Pease <Emma@SU-CSLI.ARPA>
Reply-to: davies@csli
Subject: Seminar - A Situational Theory of Analogy (CSLI)


                     "A Situational Theory of Analogy"

                               Todd Davies
                       Conference Room, Ventura Hall
                         CSLI, Stanford University
                           Monday, July 1, 1985
                                1:15 p.m.


Analogy in logic is generally given the form:

                P(A)&Q(A)
        and     P(B)       are premises
                ---------
  therefore     Q(B)       can be concluded,

where P is a property or set of properties held by the analogous
situation A in common with the present situation B, and where Q is a
property which is initially held to be true of A.  The question is:
What justifies the conclusion?  Sometimes the conclusion is clearly
bogus, but for other pairs of situations and properties it seems quite
plausible. I will give examples of both intuitively good and
intuitively bad analogies as a way to argue that theories of analogy
hitherto proposed are inadequate, and that the rationale for analogy
which has been assumed for most early work on analogy in AI -- namely,
that the inference is good if and only if the situations being
compared are similar enough -- is based on a mistake.  I will also
point to traditional logic's inadequacies as a formal language for
analogy and develop a theory which incorporates ideas from (and finds
its easiest expression in) the theory of situations of Barwise and
Perry.  The theory suggests a general means by which computers can
infer conclusions about problems which have analogues for which the
solution is known, when failing to inspect the analogue would make
such an inference impossible.
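[The schema above is easy to mechanize, and doing so makes the abstract's point vivid: shared properties alone can license bogus conclusions.  A deliberately naive Python sketch; the property sets are invented for illustration.]

```python
def conjecture(a_props, b_props):
    """Naive analogical inference: if situations A and B share any
    property P, conjecture every further property Q of A for B.
    This implements the bare schema, with no justification step."""
    if a_props & b_props:          # some shared property P exists
        return a_props - b_props   # conjecture A's remaining Q's for B
    return set()

fish = {"lives_in_water", "has_gills"}
whale = {"lives_in_water", "breathes_air"}

# Sharing "lives_in_water" leads the schema to conjecture gills
# for the whale -- an intuitively bad analogy.
bogus = conjecture(fish, whale)
```

The function never asks whether the shared P is relevant to Q, which is exactly the justification question the abstract raises.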

------------------------------

Date: Mon 24 Jun 85 15:16:42-PDT
From: Alison Grant <GRANT@SUMEX-AIM.ARPA>
Subject: Seminar - Implementing Dempster's Rule (SU)

              SPECIAL MEDICAL INFORMATION SCIENCES COLLOQUIUM
                          Tuesday, June 25, 1985
                              3:00 - 4:00 P.M.
               Room M-112, Stanford University Medical Center

Speaker: Professor Glenn Shafer
         University of Kansas

Title:  IMPLEMENTING DEMPSTER'S RULE FOR HIERARCHICAL EVIDENCE

Abstract:    Gordon and Shortliffe have asked whether the computational
complexity of Dempster's rule makes it impossible to combine belief
functions based on evidence for and against hypotheses that can be arranged
in a hierarchical or tree-like structure.  In this talk I show that the
special features of hierarchical evidence make it possible to compute
Dempster's rule in linear time.  The actual computations are quite
straightforward, but they depend on a delicate understanding of the
interactions of evidence.
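[For readers unfamiliar with the rule: Dempster's rule combines two basic mass assignments by intersecting their focal elements and renormalizing away the conflicting mass.  The sketch below is the naive general rule, whose cost grows with the number of focal-element pairs -- the cost the talk's linear-time algorithm for hierarchical evidence avoids.  The frame and the masses are invented examples, not from the talk.]

```python
def combine(m1, m2):
    """Dempster's rule of combination.  Each argument maps a focal
    element (a frozenset of hypotheses) to its mass; masses sum to 1."""
    raw, conflict = {}, 0.0
    for a, pa in m1.items():
        for b, pb in m2.items():
            inter = a & b
            if inter:
                raw[inter] = raw.get(inter, 0.0) + pa * pb
            else:
                conflict += pa * pb  # mass falling on the empty set
    if conflict >= 1.0:
        raise ValueError("evidence is totally conflicting")
    norm = 1.0 - conflict
    return {s: p / norm for s, p in raw.items()}

# Invented example: evidence for "a" combined with evidence for "b".
theta = frozenset({"a", "b", "c"})          # the frame of discernment
m1 = {frozenset({"a"}): 0.6, theta: 0.4}
m2 = {frozenset({"b"}): 0.7, theta: 0.3}
combined = combine(m1, m2)
```

Here the conflicting mass is 0.6 * 0.7 = 0.42, so each surviving product is divided by 0.58; the combined masses still sum to one.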

------------------------------

Date: Mon, 24 Jun 85 10:29:34 edt
From: Alan Gunderson <asg0%gte-labs.csnet@csnet-relay.arpa>
Subject: Call For Papers-2nd ACM N.E. Reg. Conf. -- AI Track


                            CALL FOR PAPERS


            SECOND ANNUAL ACM NORTHEAST REGIONAL CONFERENCE
                 Integrating the Information Workplace:
                        the Key to Productivity

                           28-30 October 1985

                          Sheraton-Tara Hotel
                           Framingham, Mass.
                                  and
                          The Computer Museum
                             Boston, Mass.

The conference sessions  are grouped into tracks corresponding to major
areas of interest in the computer field.   Papers are solicited for the
Conference's Artificial Intelligence Track.   The Track's  program will
emphasize "real world" approaches and applications of A. I.

                    Topics of interest include:

                       - Expert Systems
                       - Natural Language
                       - Man-Machine Interface
                       - Tools/Environments
                       - A. I. Hardware
                       - Robotics and Vision


                  Submit papers by: July 22, 1985

             Please send three copies of your paper to:

                     Dr. David S. Prerau
                     Track Chairman
                     Artificial Intelligence Track
                     ACM Northeast Regional Conference
                     GTE Laboratories Inc.
                     40 Sylvan Road
                     Waltham MA 02254

        For additional information on the Conference, write:

                     ACM Northeast Regional Conference
                     P.O. Box 499
                     Sharon MA 02067

------------------------------

End of AIList Digest
********************

From csvpi@vpics1 Mon Jul  1 04:42:45 1985
Date: Mon, 1 Jul 85 04:42:39 edt
From: csvpi@vpics1.VPI
To: fox@opus   (FRANCE,RDJ,JOSLIN,ROACH,FOX)
Subject: From: AIList Moderator Kenneth Laws <AIList-REQUEST@SRI-AI>
Status: R

Received: from sri-ai.arpa by csnet-relay.arpa id a024546; 1 Jul 85 1:04 EDT
Date: Sun 30 Jun 1985 21:14-PDT
Reply-to: AIList@SRI-AI
US-Mail: SRI Int., 333 Ravenswood Ave., Menlo Park, CA  94025
Phone: (415) 859-6467
Subject: AIList Digest   V3 #84
To: AIList@SRI-AI
Received: from rand-relay by vpi; Mon, 1 Jul 85 04:31 EST


AIList Digest             Monday, 1 Jul 1985       Volume 3 : Issue 84

Today's Topics:
  Queries - Expert System Validation & LISP Productivity,
  Psychology - Predation/Cooperation & Common Sense,
  Business - TI and Sperry Join Forces,
  Games - Chess Programs and Cheating,
  Seminars - Learning in Expert Systems (Rutgers) &
    How to Clear a Block (SRI)

----------------------------------------------------------------------

Date: Sat, 29 Jun 85 01:58:21 edt
From: Walter Maner <maner%bgsu.csnet@csnet-relay.arpa>
Subject: Expert System Validation

I would appreciate pointers to research addressing questions about
expert-system bugs, e.g.,

        How can expert-system advice be validated?
        Are there failure modes specific to expert systems?
        What classes of error can be prevented by consistency enforcers?

I am primarily interested in how these answers would apply to very large
rule-based systems which have evolved under multiple authorship.


                                        Walter Maner

        CSNet           maner@bgsuvax
        UseNet          ...cbosgd!osu-eddie!bgsuvax!maner
        SnailMail       Department of Computer Science
                        Bowling Green State University
                        Bowling Green, OH  43403

------------------------------

Date: Wed, 26 Jun 85 13:02 CDT
From: Patrick_Duff <pduff%ti-eg.csnet@csnet-relay.arpa>
Subject: requested: papers concerning LISP programmer man-hours


   I am trying to locate articles which discuss the differences between LISP
and non-AI languages in terms of the time and effort required to create
prototype systems, to make additions or revisions to a design after much of
the programming is completed, total programming time from start to finish,
etc.  My opinion is that in general, it takes fewer man-hours to create a
LISP program than to create a program to do the same task using languages
such as Ada, Pascal, FORTRAN, or an assembly language.  Note that I am
*not* claiming that the program will also be "better", more efficient, or
faster--just that most relatively large programs will take less time to
write in LISP.  I have been asked to come up with justification for using
LISP based upon the total man-hours required.  Does anyone know of a paper
which would support or undercut my opinion?  Has there been a convincing
demonstration or test of the power of LISP (and its powerful programming
environment) versus more traditional languages?

   regards, Patrick

   Patrick S. Duff, ***CR 5621***          pduff.ti-eg@csnet-relay
   5049 Walker Dr. #91103                  214/480-1659 (work)
   The Colony, TX 75056-1120               214/370-5363 (home)
   (a suburb of Dallas, TX)

------------------------------

Date: Saturday, 29 Jun 1985 22:22-EST
From: munnari!psych.uq.oz!ross@seismo
Subject: Predation/Cooperation (AIL v3 #78)

David West (AIL v3 #82) mentioned the work of Robert Axelrod on the
evolution of cooperation. Another good summary of Axelrod's work can
be found in Douglas Hofstadter's Metamagical Themas column in Scientific
American, May 1983, v248 #5, pp 14-20.

     UUCP:    {decvax,vax135,eagle,pesnta}!mulga!psych.uq.oz!ross
     ARPA:    ross%psych.uq.oz@seismo.arpa
     CSNET:   ross@psych.uq.oz
     ACSnet:  ross@psych.uq.oz

     Mail:    Ross Gayler                       Phone:   +61 7 224 7060
              Division of Research & Planning
              Queensland Department of Health
              GPO Box 48
              Brisbane  4001
              AUSTRALIA

------------------------------

Date: Thu, 27 Jun 85 14:08:05 pdt
From: Evan C. Evans <evans%cod@Nosc>
Subject: Common Sense

        Common sense = conclusions reached through the processes of
natural reasoning (or behaviors resulting from such).  I borrow
heavily from Julian Jaynes, The Origin of Consciousness in the
Breakdown of the Bicameral Mind.  Natural reasoning is neither
conscious nor rigorous in the sense of formal logic.  For instance,
upon observing a piece of wood floating on a given pond, one will
conclude directly that ANOTHER piece of wood will float on ANOTHER
pond.  This is sometimes called reasoning from particulars.  More
simply, it's expectation based on subliminal generalization.  A baby
quickly concludes that objects will fall without being AWARE of that
conclusion.  We're constantly exercising natural reasoning to reach
conclusions about others' feelings or motives based on their
expressions or actions.  Such reasoning was early recognized as
unconscious & called automatic inference or COMMON SENSE; see John
Stuart Mill or James Sully.

        Pu's elaboration on Pratt stands, but it is well to remember
that natural reasoning is usually unconscious & does not necessarily
proceed by logical means.  In fact, automatic inference sometimes
achieves correct conclusions by demonstrably illogical means.

evans@nosc-cc

------------------------------

Date: Thu 27 Jun 85 15:54:15-CDT
From: Werner Uhrig  <CMP.WERNER@UTEXAS-20.ARPA>
Subject: news: TI and SPERRY join forces to sell AI

[ from Austin American Statesman - June 26, 1985 ]

    TI captures computer deal with Sperry
    =====================================

(Kirk Ladendorf - Statesman staff) - TI has landed what it calls its biggest
ever sales contract in the still-infant artificial intelligence industry - a
three-year, $42 million deal to supply computers and related equipment to
Sperry Corp.

Sperry, one of the largest computer makers with $5.7 billion in sales last
year, plans a large-scale campaign to develop specialized, salable uses for the
TI machine, which Sperry will call the Knowledge Workstation.

For TI, the contract lends credibility to the idea that its well-regarded
artificial intelligence system, called Explorer, is more than just an
esoteric product with limited sales potential.

.....

Sperry will combine the TI machine with a software system called the Knowledge
Engineering Environment software developed by Intellicorp of Menlo Park, Calif.
Intellicorp software is regarded as a very sophisticated tool for building
specialized AI-programs.

The new system can be used to create so-called "expert systems" which ...

Such programs have been used on a demonstration basis to perform such tasks as
the running of an electrical power-plant and experimental weather forecasting.

Sperry's 26 specialized applications programs will be aimed at areas that have
been difficult to serve with traditional computers.  Those areas include
software development; testing and debugging; navigation; communications signal
processing; CAD/CAM; and scheduling and resource allocation.

Sperry chose TI over 2 principal competitors in the field, Symbolics and Lisp
Machine Inc, because TI "has the best AI hardware available," a Sperry
spokesman said.

.....

Sales of AI Lisp-machines totaled only about $85 million last year, but Sperry
projects the AI market will mushroom to more than $4 billion by 1990.

.....

TI has announced no major additions to its 3,000-person Austin staff because of
the new contract, but ... it has already begun to build the staff it needs to
support the Explorer and the Business Pro.

TI is already at work developing new features for the Explorer.  They include
developing computer communications links so that the AI machine can interact
with Sperry and other IBM-compatible mainframes.

....

------------------------------

Date: Fri 21 Jun 85 21:44:18-EDT
From: Andrew M. Liao  <WESALUM.A-LIAO-85@KLA.WESLYN>
Reply-to: LIAO%Weslyn.Bitnet@WISCVM.ARPA
Subject: Chess, Programs And Cheating

A Consideration Of "Do Computers Cheat At Chess?"

     I've been giving some thought to the question, "Do computers
actually cheat at chess?"  To start, I'm going to assume that what
is at issue in the first objection is a chess program's use of a
game tree whose nodes are representations of potential
board/piece/move configurations.  I think the objection that
computers cheat because they use "external boards" (albeit
represented internally) can be answered by saying, "No - there is no
cheating involved, because humans 'look ahead' in some way, and
since no physical external boards are allowed, the only way to 'look
ahead' is to represent an 'image' of potential board positions in
one's mind [though in a very limited way].  But isn't this just what
a program does - only better?"  I think that, in some sense, the
argument that programs cheat at chess by virtue of having
"internally represented 'external boards'" is just wrong.  What a
program tries to do, in one respect, is to simulate what is going on
inside a person's mind, and, in a limited sense, this is actually
achieved (albeit by brute-force game trees).

     The second objection concerns the problem of
"moves-made-by-reference".  The objection, if I understand it
correctly, is that (1) one cannot refer to moves that have been
pre-recorded for the player's use during a match, and that (2) such
moves are encoded into a program (we disallow an external database
file of moves since it is, in some way, a set of moves that have
been pre-recorded for future use), and without these encoded moves,
a program does not know what opening move(s)/strategy(ies) is
optimal.  Presumably, the reason for this rule is to force a player
to rely on his experience and no one else's (i.e. no outside help)
and, at the same time, to prevent any player being put at an unfair
disadvantage.  But I think it cannot be denied that encoding any
move into a chess program is tantamount to making the program
dependent upon its author's experience and not its own - a clear
violation of the spirit of the rule.  The question remains - is it
cheating?  I am of the opinion that such a program is cheating, on
the basis that the program cannot decide during the opening of the
game what strategy is optimal for it and hence must rely on outside
help, in the form of stored data, given to it by its author.

     Although I feel the first objection is easily answered, I am
still not happy with my reply to the second, although my intuition
tells me that my reply to the second objection is, at least in
spirit, on the right track.  The motivation for my second reply is
due (in great part) to J.R. Searle's conception of the Background,
which directly relates to the problem of "experience" and the like.

------------------------------

Date: Wed 26 Jun 85 09:58:03-PDT
From: Ken Laws <Laws@SRI-AI.ARPA>
Subject: Re: Chess, Programs And Cheating

Reply to Andrew Liao:

When I open with a king pawn, I am relying on the experience and
knowledge of others -- that doesn't seem to be cheating.  I prefer
an interpretation of the rules as "you run what you brung" -- namely
that you cannot access external help >>during the match<<.  I do
admit that stored book openings seem questionable (although chess
masters certainly memorize such material), but to say that a
computer's superior memory gives it an advantage is no more damning
than to say that its superior speed gives it an advantage.  In just
a few years it will be obvious that computers are inherently better
"chess machines" than people are, and people will stop quibbling
about handicapping the computer in one way or another to make
the contest "fair".

                                        -- Ken Laws

------------------------------

Date: 28 Jun 85 11:07:37 EDT
From: PRASAD@RUTGERS.ARPA
Subject: Seminar - Learning in Expert Systems (Rutgers)


               LEARNING IN SECOND GENERATION EXPERT SYSTEMS


                           Walter Van De Velde
               AI Laboratory, Vrije Universiteit Brussel



      This talk discusses a learning mechanism for second generation expert
systems: rule-learning by progressive refinement. Second generation expert
systems not only use heuristic rules, but also have a model of the domain of
expertise so that deeper reasoning is possible whenever the rules are
deficient. A learning component is described that abstracts new rules out of
the results of deep reasoning. Gradually, the rule set is refined and
restructured so that the expert system can solve more problems in a more
efficient way. The approach is illustrated with concrete implemented
examples.

Date:      Friday, June 28, 1985
Time:      11 AM
Place:     Hill Center, Room 423

------------------------------

Date: Thu 27 Jun 85 12:21:03-PDT
From: LANSKY@SRI-AI.ARPA
Subject: Seminar - How to Clear a Block (SRI)

                         "HOW TO CLEAR A BLOCK"
                                  or
                Unsolved Problems in the Blocks World #17


                   Richard Waldinger -- SRI AI Center
                      11:00 am, WEDNESDAY, July 3
                     Room EJ232, SRI International


ABSTRACT:

Apparently simple problems in the blocks world get more complicated
when we look at them closely. Take the problem of clearing a block.
In general, it requires forming conditionals and loops and even
strengthening the specifications; no planner has solved it.

We consider how such problems might be approached by bending a
theorem prover a little bit.

------------------------------

End of AIList Digest
********************

From comsat@vpics1 Wed Jul  3 10:02:37 1985
Date: Wed, 3 Jul 85 10:02:30 edt
From: comsat@vpics1.VPI
To: fox@opus   (FRANCE,RDJ,JOSLIN,ROACH,FOX)
Subject: From: AIList Moderator Kenneth Laws <AIList-REQUEST@SRI-AI>
Status: R

Received: from sri-ai.arpa by csnet-relay.arpa id aa00221; 2 Jul 85 13:41 EDT
Date: Tue  2 Jul 1985 09:33-PDT
Reply-to: AIList@SRI-AI
US-Mail: SRI Int., 333 Ravenswood Ave., Menlo Park, CA  94025
Phone: (415) 859-6467
Subject: AIList Digest   V3 #85
To: AIList@SRI-AI
Received: from rand-relay by vpi; Wed, 3 Jul 85 09:52 EST


AIList Digest            Tuesday, 2 Jul 1985        Volume 3 : Issue 85

Today's Topics:
  Query - Othello,
  Games - Hitech Chess Performance & Computer Cheating,
  Psychology & AI Techniques - Contextual Reasoning

----------------------------------------------------------------------

Date: 1 Jul 85 17:36:29 EDT
From: Kai-Fu.Lee@CMU-CS-SPEECH2
Subject: Othello (the game)

           [Forwarded from the CMU bboard by Laws@SRI-AI.]

I am leading a PGSS project (for high school students) that will implement
an Othello program in Common Lisp on the IBM PC.  Any program source,
clever tricks, and good evaluation functions that you're willing to share
will be appreciated.

/Kai-Fu

------------------------------

Date: 30 June 1985 2144-EDT
From: Hans Berliner@CMU-CS-A
Subject: Computer Chess (Hitech)

           [Forwarded from the CMU bboard by Laws@SRI-AI.]

CLOSE BUT NO BIG CIGAR is an appropriate summary of the performance
of Hitech in this weekend's Pittsburgh Summer Classic.  In a field
of 24, including three masters and several experts, Hitech won its
first three games against an 1833 (Class A), 1802 (Class A), and
2256 (Master) before losing in the final round to another Master
(2263) who won the tournament.  This was Hitech's first win against
a Master.  Its overall performance in two tournaments is
6 1/2 - 2 1/2; better than 70 percent.  As it was, it finished
2nd in the tournament.  Its provisional rating is now around
2100; middle expert.

We will hold a show and tell on Friday at a time and place to
be announced.

------------------------------

Date: 1 Jul 85 10:29:54 EDT
From: Murray.Campbell@CMU-CS-K
Subject: More on Hitech Result

           [Forwarded from the CMU bboard by Laws@SRI-AI.]

The 3-1 result Hitech achieved in the chess tournament this
past weekend was a performance rating of about 2300, well above
the master standard of 2200.  And it should be noted that the
last round loss was to Kimball Nedved.
After 2 tournaments, Hitech's performance rating is approximately
2210.

------------------------------

Date: Mon, 1 Jul 85 22:37:20 PDT
From: Richard K. Jennings <jennings@AEROSPACE.ARPA>
Subject: Computer Cheating.

        Computer programs are the consequences of people, like novels, as
ripples are to rocks thrown in a smooth pond.  Hence, it is the authors
of these programs who are the cheaters!
        In fact, computer chess is simply another iteration, as is speed
chess, double bug-house, and probably several other versions I have not
heard of.  Consenting adults can do whatever they want as long as they don't
bill the government.  So, questions as to whether computers are cheating
are really questions about whether a programmer who writes a program
and watches it play is on the same footing as the live opponent.  Practically
I think he is, philosophically I think not.
        If the chess program is going to take more responsibility for its
actions, I think it should have to 'learn' as you or I do (are all of you,
if expert systems, *learning* ones?).  Of course, the author of the
program still is partly responsible for how efficiently the program
learns, and in finite time how good his creation will become.
        So, to apply the principles of recursion, let the program have
to learn how to learn.  At this level it is easy to see that skilled
opponents will cause a 2nd-loop learner to learn faster; hence the
product is less a function of its original architecture and more a function
of its environment -- which I guess we can (by default) attribute to the
program (not its creator).  At this point, if we put it in a closet and let
it vegetate, it probably will not be very good at chess.  This is certainly
true in the limit as n (as in nth-loop learner) approaches infinity.
        It is easy to see that man is effectively an n-loop learner
which cannot comprehend an o-loop learner for o>n.  To be precise
I should have said a *single* man.  Groups  of people, similarly and
perhaps even including women, can function at some level p>n.  Hence
it should be possible for teams of people to beat individuals (and
generally is).  I see no reason for p  or q to be bounded (where q
is the class of learning evidenced by machine), and the problem has
been reduced to a point: that is that man is just a transient form
of intelligence which cannot be quantified (by himself anyway), only
*measured*.
        Chess in its various forms does so (well you think when you
win, poorly when you lose): and in its various forms is *fun*.  Just
remember, computer chess wouldn't be around if several smart people
were not whipped at the game through careless errors by people so
dumb that 'even a computer could beat them' or 'except for one
careless mistake...'


RKJ.

------------------------------

Date: Saturday, 29 Jun 1985 22:16-EST
From: munnari!psych.uq.oz!ross@seismo
Subject: Use of context to allow reasoning about time

David Sherman (AIList V3 #71: Suggestions needed for tax rules)
writes:

> I am trying to design a system which will apply the rules of the Income
> Tax Act (Canada) to a set of facts and transactions in the area of
> corporate reorganizations.
> ...
> The passage of time is very important: steps happen in a particular
> sequence, and the "state of the world" at the moment a step is taken is
> crucial to determining the tax effects.
> ...
> The problem I have ... is how to deal with _time_.

The following is just a suggestion. I have not actually tried it and I
am not familiar enough with the literature to even say whether it is an
old idea.  However, it seems plausible to me and might be a useful
approach.

Time is not directly perceptible. It is perceived indirectly by noting
that the environment (physical and cognitive) changes. There is a lot
of survival advantage in believing in causality, so the brain likes to
attribute a cause to every change; when there is nothing obvious
around to attribute causality to, we invoke the concept of time.  As
Dave Sherman pointed out, time is bound up with changes in the "state
of the world", what I just called the environment.  Let's shift into
psychology and call it the context.

Context plays a very important role in psychology.  All the human and
animal decision processes that I know of are context dependent.
Consider a classic and simple memory experiment. The subject is given a
list of nonsense words to memorise and is then given a new list of
words some of which are from the memorised list to judge as old or
new.  This process may be repeated a dozen or more times in a session.
How does the subject restrict his definition of old/new to the list he
has just seen?

It seems that the words are not remembered as individual and isolated
objects but are remembered along with associative links to the context,
where the context contains everything else that happened
simultaneously. So, when memorising words in a list the subject links
the words to each other, any extraneous noises or thoughts, even small
changes in posture and discomfort. It has been shown that recognition
and recall are greatly enhanced by reconstruction of the context in
which memorisation occurred.

Context is also evolutionarily important. It obviously enhances
survival to be able to form associative links between the centre of
attention and possibly anything else.  The nasty thing about many
environments is that you can't tell beforehand what the important
associations are going to be.

Let's look at how context might be applicable to AI. In MYCIN, data are
stored as <object,attribute,value> triples.  This is also a reasonable
way to do things in PROLOG because it allows the data to be treated in
a more uniform fashion than having each attribute (for instance) as a
separate predicate.  The objects in MYCIN are related by a context
tree, but this has nothing to do with the sense in which I am using
"context" so I will continue to call them objects. An object is a more
or less permanent association of a bundle of attributes. That is, there
is some constancy about it, which is why we can recognize it as an
object (although not necessarily a physical one).  By contrast the
context is an amorphous mass of other things which happen to be going
on at the same moment. There is little constancy to the structure of
the context.

The MYCIN triple cannot be related to its context other than through
the values of its object, attribute or value fields. There is no
explicit way of showing that a fact was true only for a certain
interval of time or only when a particular goal was active.  I propose
that the triple be extended to explicitly represent the context so it
becomes <context,object,attribute,value>.  The values of the context
variable would normally be unique identifiers to allow a particular
context to be referred to.  The context does not actually store any
information, but many facts may be tied to that context. A context is a
snapshot of the facts at some stage in the computation.
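The proposed extension is easy to make concrete. The following sketch is
modern Python rather than PROLOG or Lisp, purely for illustration; the
store, the function names, and the sample facts are all hypothetical and
not taken from MYCIN or any actual system:

```python
# A fact base of <context, object, attribute, value> quads, as proposed
# above.  A context is an opaque identifier: it stores no information
# itself, but many facts may be tied to the same context.

facts = set()

def assert_fact(context, obj, attribute, value):
    """Record one quad in the fact base."""
    facts.add((context, obj, attribute, value))

def lookup(context, obj, attribute):
    """Return the values of an attribute as seen within one context."""
    return {v for (c, o, a, v) in facts
            if c == context and o == obj and a == attribute}

# The same object can hold different values in different contexts,
# e.g. before and after a property-for-shares transaction:
assert_fact("ctx-1", "property-1", "owner", "taxpayer")
assert_fact("ctx-2", "property-1", "owner", "corporation")
```

A query is then always made relative to a context, so the before and
after states can coexist in the one fact base.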

Obviously there needs to be a lot of thought put into when to take the
snapshots and the appropriate strategy will vary from application to
application.  The context will contain the facts being reasoned about
at the time of the snapshot (probably when they had been whipped into
consistency) but would also contain other relevant information such as
goal states and clock times.  For Dave Sherman's application there
would probably be a new context snapshot when each transaction occurred
(e.g. transfer of property in exchange for shares).  Two additional
facts within the context would be the earliest and latest clock times
for which the context is valid.  This would allow reasoning about
changes of state and elapsing of time because the before and after
states are simultaneously present in the fact base along with the clock
times for which they were true.

A couple of other uses of contexts suggest themselves.  One is the
possibility of implementing "possible worlds" and answering "what if"
questions.  If the system is capable of manipulating contexts it could
duplicate an existing context (but with a new context ID of course),
modify a few of the facts in the new context and then start reasoning
in that context to see what might have happened if things had been a
little different.  Another possibility is that it might be useful in
"truth maintenance systems".  I have heard of them but not had a chance
to study them.  However, their references to assumption sets and
dependency-directed backtracking sound to me like the idea of tracking
the context, attributing changes in the context to various facts within
the context, and then using that information to intelligently manipulate
the context to implement backtracking in a computation.
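The "possible worlds" manipulation can be sketched the same way. Again
this is hypothetical Python using the quad convention proposed above; the
context-ID scheme and the sample facts are invented for illustration:

```python
# "What if" reasoning by duplicating a context: copy every fact from an
# existing context under a fresh context ID, modify a few facts in the
# copy, and reason there, leaving the original context untouched.

import itertools

_ids = itertools.count(1)

def new_context():
    """Mint a unique context identifier."""
    return f"ctx-{next(_ids)}"

def duplicate_context(facts, old_ctx):
    """Return (facts plus a copy of old_ctx's facts, new context ID)."""
    new_ctx = new_context()
    copied = {(new_ctx, o, a, v) for (c, o, a, v) in facts if c == old_ctx}
    return facts | copied, new_ctx

# Base context: two facts about a piece of property.
base = new_context()
facts = {(base, "property-1", "owner", "taxpayer"),
         (base, "property-1", "class", "capital")}

facts, hypothetical = duplicate_context(facts, base)

# Change one fact in the hypothetical world only.
facts = {(c, o, a, v) for (c, o, a, v) in facts
         if not (c == hypothetical and a == "owner")}
facts.add((hypothetical, "property-1", "owner", "corporation"))
```

After this, the base context still records the taxpayer as owner while
the hypothetical context records the corporation, so reasoning in the new
context answers the "what if" question without disturbing the original.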

     UUCP:    {decvax,vax135,eagle,pesnta}!mulga!psych.uq.oz!ross
     ARPA:    ross%psych.uq.oz@seismo.arpa
     CSNET:   ross@psych.uq.oz
     ACSnet:  ross@psych.uq.oz

     Mail:    Ross Gayler                       Phone:   +61 7 224 7060
              Division of Research & Planning
              Queensland Department of Health
              GPO Box 48
              Brisbane  4001
              AUSTRALIA

------------------------------

End of AIList Digest
********************

From comsat@vpics1 Wed Jul  3 10:15:46 1985
Date: Wed, 3 Jul 85 10:15:35 edt
From: comsat@vpics1.VPI
To: fox@opus   (FRANCE,RDJ,JOSLIN,ROACH,FOX)
Subject: From: AIList Moderator Kenneth Laws <AIList-REQUEST@SRI-AI>
Status: R

Received: from sri-ai.arpa by csnet-relay.arpa id a000336; 3 Jul 85 2:19 EDT
Date: Tue  2 Jul 1985 22:10-PDT
Reply-to: AIList@SRI-AI
US-Mail: SRI Int., 333 Ravenswood Ave., Menlo Park, CA  94025
Phone: (415) 859-6467
Subject: AIList Digest   V3 #86
To: AIList@SRI-AI
Received: from rand-relay by vpi; Wed, 3 Jul 85 09:59 EST


AIList Digest           Wednesday, 3 Jul 1985      Volume 3 : Issue 86

Today's Topics:
  Administrivia - Addresses of Seminar Presenters,
  Queries - Statistics of Syntactic Structures & Spatial Reasoning &
    UNIVAC 1100 LISP & Symbolics's User Interface,
  Expert Systems - Validation,
  AI Tools - Interlisp Comments & C vs. LISP

----------------------------------------------------------------------

Date: Monday, 1 Jul 1985 20:49-EST
From: munnari!psych.uq.oz!ross@seismo.ARPA
Subject: addresses of seminar presenters

Being from out of town I find it a little difficult to get to most of the
seminars advertised in the AIList. However, there are some I would like a
little more information on by contacting the presenter to get a copy of the
talk or, more likely, a related paper or report.

Unfortunately, most of the seminar announcements give no network address for
the presenter and an inadequately specified postal address. Would it be
possible to exhort seminar hosts to put complete addresses in the announcements
or at least make sure that they ask the presenter for an address so that
others may find out from the host?

        -- Ross

------------------------------

Date: Tue, 2 Jul 85 10:26:31 EDT
From: "Ben P. Yuhas" <yuhas@hopkins-eecs-bravo.ARPA>
Subject: Read My Lips

     Here at the Sensory Aids Lab, we are beginning to explore
some of the strategies used by lip readers to decode the visual
speech signal. One of the questions we want to answer is to what
degree syntactic structure influences a sentence's lip readability.

     In developing a data base of test sentences on laserdisk,
we began to wonder whether anyone had ever attempted to find
the statistical distribution of syntactic structures in spoken
English. I realize this distribution might vary greatly from
group to group. If there are any computational linguists with
references or thoughts on this matter I would appreciate hearing
from you.

    yuhas@hopkins-eecs-bravo

    Snailmail: Ben Yuhas
               Dept EECS
               Johns Hopkins University
               Baltimore, MD   21218

------------------------------

Date: 2 Jul 85 08:27:00 EDT
From: "CUGINI, JOHN" <cugini@nbs-vms>
Subject: spatial reasoning


Can anyone suggest a good survey article or textbook that covers
AI for spatial reasoning, especially for 3-D?  I have in mind
things like, "will this refrigerator fit thru that door", etc.
Thanks for any help.

John Cugini <Cugini@NBS-VMS>
Institute for Computer Sciences and Technology
National Bureau of Standards
Bldg 225 Room A-265
Gaithersburg, MD 20899
phone: (301) 921-2431

------------------------------

Date: Mon, 1 Jul 85 11:51:36 EDT
From: Marty Hall <hall@hopkins-eecs-bravo.ARPA>
Subject: UNIVAC 1100 Series LISP

I am trying to find documentation or info on the
Univac 1100 LISP.  We are referring to a system that was written in
that dialect, and converting it to Common LISP.  However, there are two
functions that appear which are found in neither MACLISP nor LISP 1.5
(which this LISP was supposed to be), and we can't find out what they
do.  The functions are "AMB" and "STACK".  They are used as follows:

        (setq <var> (amb (stack <var2>)))

We have called lots of people, including the local Sperry Corp folks,
and no one seems to know.  Any suggestions on where to look or who
to call/send to ?
                                -Marty Hall

                                hall@hopkins
                                aplvax!m2lab!hall@maryland

------------------------------

Date: Tue, 2 Jul 85 11:26 EDT
From: susan watkins <chaowatkins@SCRC-STONY-BROOK.ARPA>
Subject: Symbolics's application user interface

        I'm working as a developer at Symbolics, Inc. in Cambridge, MA.
I would like to get opinions, reactions, problems encountered,
constraints the system s/w imposes, etc. from programmers who have
used the Symbolics s/w to develop a reasonably sized product, e.g. what
problems they have run into trying to use the window system. I'll be at
IJCAI, so I'll be more than happy to meet and talk with anyone who is
interested. My mail-stop is chaowatkins@SCRC-STONY-BROOK.ARPA.

------------------------------

Date: Mon 1 Jul 85 09:54:22-PDT
From: Bruce Buchanan  <BUCHANAN@SUMEX-AIM.ARPA>
Subject: validation of expert systems

Ted Shortliffe & I tried to address the issues surrounding evaluation
of expert systems in chapter 30 of RULE-BASED EXPERT SYSTEMS.  We
did not specifically talk about what to do in the case of very large
knowledge bases built by multiple experts, but chapter 8 does discuss
some knowledge-base editing facilities that should help.

I would like to know of work on these problems.

B.Buchanan

------------------------------

Date: 26 Jun 1985 1215-PDT (Wednesday)
From: Steven Tepper <greep@Camelot>
Subject: Putting comments in Interlisp programs (flame)

     [Forwarded from the Stanford bboard by Laws@SRI-AI.  This is
      part of an exchange on hacking and software engineering.]


I didn't say that comments are impossible in Interlisp -- merely that it's
painful to put them in.

For the edification of those who have not had the privilege of being
subjected to Interlisp's slavish adherence to the principle that it
should constitute an entire programming environment (as opposed to being
just another programming language living on a general purpose computer
system), one of the concomitant requirements of this philosophy is that all
operations, including editing, be done on Lisp objects.  This means that
comments (which are handled by a function called * that does not evaluate
its arguments) are a part of the running program.  Thus, extreme care is
required in the placement of comments.  For example, the following function
fails:

(DEFINEQ (FOO (LAMBDA (X Y)
   (COND ((GREATERP X Y) X)      (* Return the maximum value)
         (T Y]

because the comment is treated as a clause to the COND.  Similarly, a
comment placed as the last form in a function (Interlisp provides an
implicit PROGN in function definitions) will return the first word of the
comment as the value of the function.  In fact, because Lisp is largely a
functional language, there are relatively few safe places to put comments.

A further indication of the low repute in which comments are held in
Interlisp is the fact that the common way of displaying a function at
the top level, PP (pretty-print), replaces all comments with the symbol
**COMMENT**.  To me, this is backwards -- if anything, the comments should
be given prominence over the Lisp code.  Similarly, in the display editor
on Interlisp-D, comments are kept as far away from the executable code
as possible (on the same line) and displayed in a font which is considerably
less readable than that used for non-comments.

This is the basis on which I justify my earlier claim that Interlisp
"discourages" comments, which I consider an undesirable goal.

------------------------------

Date: Thu 27 Jun 85 11:47:41-PDT
From: Liam Peyton <PEYTON@SU-SUSHI.ARPA>
Subject: Interlisp

         [Forwarded from the Stanford bboard by Laws@SRI-AI.]

If Interlisp's approach to comments prevents one from inserting mindless
comments like the following:

(DEFINEQ (FOO (LAMBDA (X Y)
   (COND ((GREATERP X Y) X)      (* Return the maximum value)
         (T Y)))))

then it should be praised, not criticized.  A step-by-step translation
is not helpful commenting and certainly does not give the comments more
prominence than the actual code (if anything, it reduces the relevance
of comments).  A short summary before the COND explaining what the code
is doing is far neater and more useful.

Why would one ever want to have a comment in the last line of a PROG?

(* comment: this is the end of the prog)

This is not to say by any means that Interlisp has the ideal means of
handling comments or that Interlisp doesn't have its problems.  It
does, but they are certainly not a basis for rejecting it as a
programming environment.

Some of the things that result from "Interlisp's slavish adherence to
the principle that it should constitute an entire programming environment"
are incremental execution for debugging purposes, a sophisticated mouse
and window system with interactions between windows, online text processing,
and online graphics.

A general purpose computer is a computer that can do anything painfully.

------------------------------

Date: Mon, 1 Jul 85 21:46:21 PDT
From: Richard K. Jennings <jennings@AEROSPACE.ARPA>
Subject: C vs LISP

        We have continuing debates about that subject all the time,
and I think for us we have come to the conclusion (for now) that
C is better than LISP.
        Currently we have MS-DOS 2.0, XLISP 2.0, and Lattice C
compiler version 2.0.  A new man (fish@aerospace) was given
the simple task of writing a plotter driver.  He did this by
first writing a plot spooler of sorts in assembly language (after
failing miserably in basic).  At this point, he was a virgin
programmer, save some fortran programming on large machines.
        He then started on another program to help people interactively
develop a 'plotter language' file which his plot spooler could plot.
At this point he was manually generating the plotter language, to
produce vu-graphs (as I said he was *the* new man).  Using a copy
of Winston's Lisp text, he set out with XLISP to produce this
translator.  After a month or so, his incentive to write an
interactive translator (to get him out of the vu-graph making loop)
dissipated.
        About a month ago, he went to NARDAC and attended a 1-day
C tutorial.  Then he brought up the C compiler, and now is just
about done.  There is no doubt in my mind that he prefers Lattice
C to XLISP.

        Before all the LISP people flame let me make a few comments.  I
am familiar with both C and XLISP, and have programmed in each.  Both
are pretty basic, but in my opinion I would choose XLISP to write the
basic program, and then recode it in C if it was going to be maintained.
During this effort I acted as the trouble shooter, and let me say that
if I was going to supervise a programming team, THEY WOULD USE C.  In
fact, from the management level, I think Lisp is only marginally better
than assembly language: perhaps.

        By September we should be using PC-AT's with GC-LISP, and the
new Microsoft C compiler.  By December we will probably have our
Symbolics, without a C compiler (although the salesman evidently has
one to sell, which ought to say something).  So shoot me a note around
January and I may have changed my tune.

        To conclude, if you have a one man project, and the lisp environment
you plan to use has a lot of functionality you need in your application
built-in, lisp can probably be justified.  The source libraries now
available in C (or Pascal, real soon now for Ada) will be increasingly
difficult to beat, especially in the context of C interpreters and
incremental compilers, and the fact that C runs on *everything*.  If you
are starting from scratch -- common Lisp -- (even XLISP is extended to
support object oriented programming) ***good luck***!

Richard Jennings
AFSCF/XRP
Sunnyvale       ARPA: jennings@aerospace

->standard disclaimer applies<-

------------------------------

End of AIList Digest
********************

From comsat@vpics1 Mon Jul  8 18:26:40 1985
Date: Mon, 8 Jul 85 18:26:35 edt
From: comsat@vpics1.VPI
To: fox@opus   (FRANCE,RDJ,JOSLIN,ROACH,FOX)
Subject: From: AIList Moderator Kenneth Laws <AIList-REQUEST@SRI-AI>
Status: RO

Received: from sri-ai.arpa by csnet-relay.arpa id a000702; 6 Jul 85 18:24 EDT
Date: Sat  6 Jul 1985 14:47-PDT
Reply-to: AIList@SRI-AI
US-Mail: SRI Int., 333 Ravenswood Ave., Menlo Park, CA  94025
Phone: (415) 859-6467
Subject: AIList Digest   V3 #87
To: AIList@SRI-AI
Received: from rand-relay by vpi; Sat, 6 Jul 85 21:01 EST


AIList Digest            Saturday, 6 Jul 1985      Volume 3 : Issue 87

Today's Topics:
  AI Scholarship - NMSU,
  Seminars - Logic Programming with Functions (BBN) &
    Shape from Function (GE),
  Conferences - Expert Systems Application in Business &
    Intelligent Simulation Environments &
    North American Fuzzy Information Processing Society

----------------------------------------------------------------------

Date: Thu, 4 Jul 85 11:25:17 mdt
From: yorick%nmsu.csnet@csnet-relay.arpa
Subject: AI Scholarship - NMSU

GRADUATE AND UNDERGRADUATE SCHOLARSHIPS:
New Mexico State University, Computing Research Laboratory, invites
applications from excellent graduate and
undergraduate students interested in
Artificial Intelligence, including Expert Systems, Natural Language,
Cognitive Modelling, Intelligent User-Interfaces, Vision and
Robotics, and interdisciplinary projects that integrate these
fundamental aspects of computing science.  The CRL offers
scholarships of up to $12,000 for graduates and $3,000 for
undergraduates per year including tuition and cash.
Successful applicants will additionally be employed for up to
20 hours per week during the academic year, and 40 hours per week
during the summer on CRL-sponsored research programs.
Applications should include a letter indicating your intent to be
considered for one of these scholarships, a statement of your
experiences, a statement of your interests and future goals,
transcripts of all undergraduate work, and names and addresses of
3 references who know your abilities in computing science.
Please send applications by 20 July 1985 to: Dr. Yorick
Wilks, Director, Computing Research Laboratory, Box 3CRL,
New Mexico State University,
Las Cruces, NM 88003.

------------------------------

Date: 25 Jun 1985 16:11-EDT
From: Brad Goodman <BGOODMAN at BBNG>
Subject: Seminar - Logic Programming with Functions (BBN)

           [Forwarded from the MIT bboard by SASW@MIT-MC.]


                    BBN Labs SDP AI Seminar

Speaker:  Uday S. Reddy
          University of Utah

Title:    Logic Programming with Functions

Time:     Friday, July 19th, 10:30 a.m.
Place:   3rd Floor Large Conference Room
         10 Moulton Street, Cambridge

While functional programming has been with us for more than two
decades, logic programming is a relatively new programming
language concept.  A comparison of the two styles shows that
functional programming is done by rewriting expressions to
semantically equivalent ones, while logic
programming is done by solving formulas for values of their free
variables.  Thus, logic programming provides significantly more
expressive power than functional programming.

However, it is possible to perform logic programming in
functional languages.  Whereas Horn-clause logic languages use
resolution as the operational mechanism, functional logic
languages use a mechanism called "narrowing".  Given an
expression with free variables, the narrowing mechanism answers
the question "for what values of the variables does the
expression reduce to a value?".  Narrowing is a generalization of
both rewriting and resolution and so makes it possible to use
both the styles of programming in a unified framework.

------------------------------

Date: Thu, 27 Jun 85 14:14 EST
From: "S. Holland" <holland%gmr.csnet@csnet-relay.arpa>
Subject: Seminar - Shape from Function (GE)


                       SHAPE FROM FUNCTION VIA MOTION ANALYSIS
                     with Application to the Automatic Design of
                    Orienting Devices for Vibratory Part Feeders

                               Dr. Tomas Lozano-Perez
                           MIT Artificial Intelligence Lab
                                Cambridge, MA.  02139

                       Wednesday, August 14, 1985, 11:00 a.m.

                        General Motors Research Laboratories
                           Computer Science Department
                          Warren, Michigan  48090-9057



    This talk explores the premise that the function of many devices can be
    characterized by how they interact with other objects.  I suggest a
    representation of function of these devices in terms of motion constraints.
    These motion constraints are expressed as a diagram in configuration space.
    Combinations of these diagrams serve both in describing a device's function
    and in designing devices with specified behavior.

    This leads to a view of design as an inverse of the motion planning problem
    in robotics.  In both cases we know the shape of the moving part.  In
    motion planning, we are given the obstacles and we must find a legal path
    between the specified origin and destination.  In this view of design,
    however, we are given the desired motion (actually a range of possible
    motions) and are asked to find a legal shape of the obstacle, that is, the
    device.

    I illustrate this approach to design with a case study of mechanical part
    feeders, a class of real devices with an interesting and direct
    relationship between shape and function.

    Dr. Lozano-Perez has authored technical articles in the areas of motion
    planning, robot programming, and model-based object recognition.  He has
    been affiliated with the M.I.T. Artificial Intelligence Laboratory since
    1973.

------------------------------

Date: 07/05/85 15:18:19 MEZ
From: Christian Bader  <BADER%DB0TUI11.BITNET@WISCVM.ARPA>
Subject: Expert systems application in business

A workshop on Expert Systems in business will be held November
26/27 1985 in Berlin (West Germany) as a part of the BIG-TECH fair.
We are interested in hearing about business applications
of Expert Systems both in Germany and elsewhere. Please let me know
if you have an expert systems application that you could present
at the workshop.

Please contact
     Christian Bader
     Technische Universitaet Berlin
     Sekr. FR 6-7
     Franklinstr. 28/29
     D-1000 Berlin 10    West Germany
              phone: (49-30)-314-4903
                  or (49-30)-314-73260 (leave message)
       Network address:   ARPA : BADER%DB0TUI11.BITNET@WISCVM.ARPA
                        BITNET : BADER at DB0TUI11
                         CSNET : BADER%DB0TUI11.BITNET@WISCVM.ARPA

------------------------------

Date: Tue, 2 Jul 85 16:03 CST
From: Adelsberger%tamu.csnet@csnet-relay.arpa
Subject: Conference - Intelligent Simulation Environments

CALL FOR: PAPERS, PANELISTS, SESSION COORDINATORS

INTELLIGENT SIMULATION ENVIRONMENTS,
1986 SCS MULTICONFERENCE, JAN 23 - 25, SAN DIEGO

    The   Society  for  Computer  Simulation  is  sponsoring   a
multiconference  January 23-25,  1986 in San  Diego,  California.
Solicited are papers in the areas of:

    *  User friendly simulation environments.
    *  Knowledge based simulation systems.
    *  Artificial intelligence applied to simulation
       environments.

    Papers  of  special interest might describe models  that  (1)
have many symbolic processes,  (2) use heuristic search, (3) have
a  command  structure separate from knowledge  domain,  (4)  have
expertise  built  into  the model so that decisions by  the  user
would be minimized.

    AI  papers  dealing  with subjects that are  not  necessarily
directly  simulation  related  but  which  have  a  strong   time
dimension or concern would also be welcome.

    We  are  also interested  in panel  discussions  or  sessions
coordination on a particular aspect of the subject.

    Detailed abstracts (300 words) of proposed papers and special
sessions should be sent directly to the program chairman no later
than July 21, 1985.  Camera-ready copies of accepted papers will
be due October 15, 1985.

Heimo H. Adelsberger
Program Chairman

Texas A&M University
Computer Science Department
College Station, TX - 77843

Phone: (409) 845-0298

------------------------------

Date: Mon, 24 Jun 85 15:31:56 cdt
From: Don Kraft <kraft%lsu.csnet@csnet-relay.arpa>
Subject: Call for Papers -- NAFIPS Meeting

                         CALL FOR PAPERS

     North American Fuzzy Information Processing Society (NAFIPS)

                    International Meeting

            Monteleone Hotel    New  Orleans, Louisiana
               (In the Heart of the French Quarter)
                       June 1-4, 1986

     Papers on all fuzzy topics  are  encouraged,  and  wide
     international participation is expected.


     Deadlines
          Notice of intent with a title and abstract     9/1/85
          Completed paper  (3 copies)                   10/15/85
          Notification of acceptance                     1/15/86
          Camera-ready copy due                          3/15/86


     Proceedings  will  be  distributed  during   Conference
     registration.


     Send all abstracts and papers to:

          NAFIPS86
          Department of Computer Science
          Florida State University
          Tallahassee, FL  32306


     Abraham Kandel and Wyllis Bandler, Program  Committee Co-Chairs

     Fred Petry and Donald H. Kraft,  General Meeting Co-Chairs

------------------------------

End of AIList Digest
********************

From csvpi@vpics1 Mon Jul  8 18:26:19 1985
Date: Mon, 8 Jul 85 18:26:13 edt
From: csvpi@vpics1.VPI
To: fox@opus   (FRANCE,RDJ,JOSLIN,ROACH,FOX)
Subject: From: AIList Moderator Kenneth Laws <AIList-REQUEST@SRI-AI>
Status: RO

Received: from sri-ai.arpa by csnet-relay.arpa id a000845; 7 Jul 85 14:56 EDT
Date: Sun  7 Jul 1985 10:51-PDT
Reply-to: AIList@SRI-AI
US-Mail: SRI Int., 333 Ravenswood Ave., Menlo Park, CA  94025
Phone: (415) 859-6467
Subject: AIList Digest   V3 #88
To: AIList@SRI-AI
Received: from rand-relay by vpi; Sun, 7 Jul 85 21:23 EST


AIList Digest             Sunday, 7 Jul 1985       Volume 3 : Issue 88

Today's Topics:
  Queries - Generators in Lisp & Expert Systems for Configuration &
    Natural Language Processing Software,
  Robotics - Spatial Reasoning,
  Games - Learning in Chess Programs,
  Publications - New IEEE AI Journal,
  Review - AI Report Vol 2 No 5 & AI Report Vol 2 No 7

----------------------------------------------------------------------

Date: 3 Jul 1985 14:17-EDT
From: Conal.Elliott@CMU-CS-CAD.ARPA
Subject: Generators in Lisp query

I'd like to implement a simple generator-language (i.e.  functions with
backtracking and able to return more than once) on top of Common Lisp.  It
doesn't need to be anything fancy, as it will be for my own use.  I would
appreciate any hints or pointers.

                                        Conal Elliott
                                        conal@cmu-cs-cad.arpa

------------------------------

Date: Tue 2 Jul 85 22:29:55-PDT
From: Marty Tenenbaum <Tenenbaum@SRI-KL>
Subject: Expert Systems for Configuration

Does anyone know where I might acquire an expert system for solving
configuration problems (a la R1/XCON) on a PC?  I am interested in such
a system, more as a tutorial aid than as a serious application.

Jay M. Tenenbaum, Schlumberger Palo Alto Research.
Please respond to Tenenbaum@SRI-KL, or call me at 415-496-4699.

------------------------------

Date: Wed, 3 Jul 85 12:24:15 cdt
From: Mark Turner <mark%gargoyle.uchicago.csnet@csnet-relay.arpa>
Subject: natural language processing

TIRA at U Chicago is looking for a robust Natural Language
Processing system - actual code - it can obtain and install
on a 4.2BSD Unix system.
To elaborate: Many faculty members from Departments of
Library Science, English, Linguistics, Classics, Romance
Languages, etc. at U Chicago who currently work in searching and
processing natural language text data bases have now formed the
Textual Information Retrieval and Analysis (TIRA) research
center.  The Department of Computer Science at U Chicago is
only a few years old, and although I understand that it would
be interested in hiring an Assistant Professor in AI/NLP,
it has not yet done so.  Consequently, we lack a faculty
member who might focus his energies on installing and tuning
a Natural Language Processing System. Several of us are familiar
with NLP, though, and we have some programmers on staff.
So I am beginning to wonder how we might obtain, for academic
research purposes, the code and documentation for someone
else's NLP system, and install it here with relative ease,
to help us with semantic, grammatical, thematic, and morphological
parsing, in various Indo-European languages, principally
English, French, Greek, Latin, Italian, German, and Spanish.
I would appreciate your responses.
Mark Turner
Department of English
U Chicago 60637
>ihnp4!gargoyle!puck!mark

------------------------------

Date: Wed, 3 Jul 1985  14:14 EDT
From: Juliana Kraft <ROBOT.JULIE%MIT-OZ@MIT-MC.ARPA>
Subject: Spatial Reasoning Query


    From: "CUGINI, JOHN" <cugini at nbs-vms>
    Can anyone suggest a good survey article or textbook that covers
    AI for spatial reasoning, especially for 3-D?  I have in mind
    things like, "will this refrigerator fit thru that door", etc.
    Thanks for any help.

For 3D you must consider 6 degrees of freedom (3 translational and 3
rotational).  I recommend "Motion Planning with Six Degrees of
Freedom," by Bruce Donald, (261 pp), MIT AI-TR 791, available from

Publications Office
MIT AI Laboratory
Room NE43-818
545 Tech Square
Cambridge, MA 02139
(617) 253-6773.

------------------------------

Date: Wed, 3 Jul 85 17:08 pst
From: "furth john%d.mfenet"@LLL-MFE.ARPA
Subject: The Best Chess Program

I would like to add something to Richard Jennings' words on the
responsibility of the author of a chess program for its performance.
The most rigid chess program will play chess only as its author
would at his/her best.  The program that learns has the possibility of
doing better.  Suppose the author wrote his/her program without any
instructions for playing chess but only for learning how to play
chess.  Then the program could learn and execute maneuvers that
the author was unaware of.  Now this program learns only as
its author learns at his/her best.  We may continue this iterative
procedure to some arbitrary degree and declare the author's taint
to be negligible.  In the process, however, we will probably have
accumulated some large overhead.  The time spent passing information
up and down this ladder of learners and the storage required at each
rung of the ladder will make the program unusable.
      To attain an independent and useful intelligence, the learner
must be able to discard significant portions of the means by which
it has arrived at its present level of ability.  The original hub
of its actions must fall away and a new one be generated.
So the adult forgets the involvements of childhood, and the state
the cares of its early days.  With whatever vestiges remain, the
organism must take on a whole new orientation to meet new
needs with a closer approach to the optimum.  It is better to
forget the past than to live there.  The best chess program
will forget most everything its author ever told it to do.

                            John Furth

------------------------------

Date: Sun  7 Jul 1985 11:11-PDT
From: Laws@SRI-AI
Subject: New IEEE AI Journal

>From IEEE Computer, July 1985, p. 101:

IEEE Expert is the newest addition to the Computer Society's list
of publications, which already includes five magazines.  The Computer
Society Board of Governors gave its approval to the new quarterly at
its May 10 meeting ...

David Pessel of Standard Oil of Ohio will serve as acting editor-in-
chief and will have the responsibility of preparing for the initial
publication in the first quarter of 1986.  The magazine is expected
to treat such AI areas as knowledge engineering, natural language
processing, expert systems, and conceptual modeling.  ...

------------------------------

Date: 4 Jul 1985 10:46-EST
From: leff%smu.csnet@csnet-relay.arpa
Subject: AI Report Vol 2 No 5 Summary

Report on Stanford University AI efforts including Knowledge Systems
Laboratory (VLSI design, MOLGEN, interpretation of nuclear magnetic
resonance data on proteins, computer-aided teaching of diagnostic
reasoning, ONCOCIN for administration of medical treatment protocols,
lymph node pathology diagnosis system, robotic manufacturing strategy
development, financial resource planning).  Basic AI research includes
non-monotonic reasoning, robotics, mechanical construction of computer
programs, design, description and interaction with computer systems,
database retrieval research.  RADIX [formerly RX] is a project which will
use computer programs to examine over 50,000 patient years of accumulated
medical data.

Report on ESPRIT and ALVEY, AI efforts of the European Economic
Community and Britain, respectively:

The following are a list of some books mentioned in the report:
Artificial Intelligence Applications for Manufacturing
Artificial Intelligence Applications for Business Management
The 1985 Handbook of Manufacturing Software (all three by SEAI Technical
Publications)
Machine Vision -- A Summary and Forecast (Tech Tran Corporation)
A Practical Guide to Designing Expert Systems by S. Weiss and C.
Culikowski
William Gevarter: Artificial Intelligence, Expert System, Computer
Vision and Natural Language Processing
William Gevarter: Artificial Intelligence and Robotics: Five Overviews

Mitsubishi Research Institute has initiated a multi-client AI research
project

Report of work done by the Knowledge Information Research Institute of
Computer Services Corporation of Japan

Report on AI at Ohio State: medical systems which infer data from broad
data descriptions and concepts including a red cell antibody
identification system, a system for diagnosing fuel problems in
automobile engines, and an air cylinder design system.

Report on Imperial Chemical Industries, which has developed an expert
system shell called Savoir and an agricultural advisor system.

Infologics of Stockholm has announced a PROLOG for IBM-PC costing
295 dollars.

Automata Design Associates has five versions of PROLOG available
for IBM-PC (public domain; educational, $29.95; FS Prolog, $49.95;
virtual memory Prolog, $99.95; and large virtual model Prolog, $300).

TOPSI is selling an OPS-5 for CP/M and MS-DOS for $400.00.

The Automated Reasoning Corporation is selling a fault-diagnosis system.

Odetics (the maker of six legged robots) has announced the
development of an AI center.

Expert Technologies has been formed to sell AI technology to
printers and publishers.

Report on new shareholders of NCC.

Lynn Conway has left DARPA to join the University of Michigan.

------------------------------

Date: 4 Jul 1985 10:18-EST
From: leff%smu.csnet@csnet-relay.arpa
Subject: The AI Report Vol 2 No 7 Summary

The Artificial Intelligence Report July 1985 Volume 2 No 7

Nippon Telephone and Telegraph

Report on Nippon Telephone and Telegraph, the Japanese AT&T, (NTT)
includes general description of company and its computer related R&D
efforts.  In AI, they are working on a Japanese-English translation
effort, medical expert systems, systems to recognize handwritten
Japanese and Chinese characters, robotics, speech recognition and
speech synthesis.  They have also developed a Lisp machine using the
language Tao, which is a blend of LISP, PROLOG and Smalltalk.  It is
40 to 50 times faster than the ZetaLisp interpreter, 3 times faster than
Smalltalk-80 on the Xerox Dolphin and five times faster than the
DEC-10 Prolog interpreter.

Also Computer Services Corporation (CSK) is completing work on a LISP
machine prototype which will run Prolog, LISP, UNIX and process
Japanese natural language input.

The AI profits

discusses interest by new and old companies in AI, based on a Gartner
Group forum.  Reports on Lisp machine vendors Texas Instruments,
Symbolics, Xerox, and Lisp Machine Inc (LMI).  Symbolics revenues are
expected to top 85 million dollars this year and LMI revenues will top
25 million dollars.  They predict that Xerox will introduce a 10,000
dollar low-end AI machine.  The Gartner Group estimate of Lisp machine
sales in 1990 is over one billion dollars.

Also discusses expert systems.  They feel that natural language
understanding will not be as big a seller as expert system tools.

IBM has over 300 researchers pursuing AI objectives.

Reports on new DEC microvax products, management changes at Lisp
Machine Inc, announcement by Radian Corporation of an IBM PC expert
system shell, an Apple Macintosh OPS5 interpreter, R&D expenditures
for various companies, a prediction that a billion transistors will
be packed on a single chip.

The Institut fur Entscheidungstheorie und Unternehmensforschung at the
Universitat Karlsruhe in Germany is conducting an international survey
on Expert Systems in business.

They review the following:

The Fourth Technical Conference of the British Computer Society
Specialist Group on Expert Systems, which has been published as
"Research and Development in Expert Systems"

V. Daniel Hunt's "Smart Robots: A Handbook of Intelligent Robotic
Systems"

Eugene Charniak and Drew McDermott's "Introduction to Artificial
Intelligence"

Also report on the Army AI Center which is doing research on systems to
field new equipment to the army.

The following is a list of some government documents on AI that I found
in this report:

An Overview of Artificial Intelligence and Robotics, NASA-TM-85836
An Overview of Computer Vision PB83-217554, An Overview of
Computer-Based Natural Language Processing PB83-200832, Overview of
Expert Systems PB83-217562 and Flexible Manufacturing System Handbook
ADA-127927.

------------------------------

End of AIList Digest
********************

From csvpi@vpics1 Mon Jul  8 18:25:59 1985
Date: Mon, 8 Jul 85 18:25:52 edt
From: csvpi@vpics1.VPI
To: fox@opus   (FRANCE,RDJ,JOSLIN,ROACH,FOX)
Subject: From: AIList Moderator Kenneth Laws <AIList-REQUEST@SRI-AI>
Status: RO

Received: from sri-ai.arpa by csnet-relay.arpa id aa00309; 7 Jul 85 20:16 EDT
Date: Sun  7 Jul 1985 16:28-PDT
Reply-to: AIList@SRI-AI
US-Mail: SRI Int., 333 Ravenswood Ave., Menlo Park, CA  94025
Phone: (415) 859-6467
Subject: AIList Digest   V3 #89
To: AIList@SRI-AI
Received: from rand-relay by vpi; Sun, 7 Jul 85 21:26 EST


AIList Digest             Monday, 8 Jul 1985       Volume 3 : Issue 89

Today's Topics:
  AI Tools - Lisp vs. C & Interlisp Comments

----------------------------------------------------------------------

Date: Wed, 3 Jul 1985  15:30 EDT
From: "Scott E. Fahlman" <Fahlman@CMU-CS-C.ARPA>
Subject: Lisp vs. C


I won't try to take issue with Richard Jennings's views on Lisp vs. C,
except to note that he is only in a position to compare one dialect of
Lisp (XLISP) to one dialect of C on some sort of MS-DOS machine --
presumably a tiny one -- for one particular kind of task with one
particular virgin programmer who had been trained in a different way
on the two languages.  His observations are probably valid for this
case, but I wouldn't draw any sweeping generalizations from this.

Lisp really requires a full-fledged environment in order to be an
attractive language.  A lot of people got turned off very badly back in
the bad old days when the Lisp environment was primitive and the address
space of most machines was too small to hold the kinds of features that
we see today on the various Lisp machines.  Now we are seeing the same
"turn off" among people whose exposure to Lisp consists only of using
very small Lisps on machines with only, say, 512K bytes of memory.  On
such a machine, a language like C (which evolved to fit the PDP-11, a
machine whose address space is even smaller) probably is superior for
getting real work done.  This phase will pass just as soon as machines
with adequate virtual memory systems become as common as PC's are today.

-- Scott Fahlman

------------------------------

Date: Wed 3 Jul 85 13:55:02-PDT
From: Christopher Schmidt <SCHMIDT@SUMEX-AIM.ARPA>
Subject: ~= re: Interlisp comments

         [Forwarded from the Stanford bboard by Laws@SRI-AI.]


    [...] Unfortunately at least one programming system I know of
    (Interlisp) actively discourages this by making it painful to put
    comments in programs.  For this reason alone Interlisp disqualifies
    itself as a serious programming environment.

Isn't that a little like saying that Boeing, by having tiny restrooms,
disqualifies itself as a serious airplane manufacturer?

    I didn't say that comments are impossible in Interlisp -- merely that
    it's painful to put them in.

Pain is subjective.  For my money, tracking down bogus indentation, missing
semicolons or unbalanced parentheses or misspelled symbols in a
text-editor-oriented language is ever so much more painful than commenting
Interlisp.

All of these problems associated with the syntactic sugar of most
programming languages are non-issues in Interlisp, where the interned
form is edited.  For the non-Interlisper it should be explained that
in DEdit (the Interlisp-D display structure editor) the programmer
selects S-expressions with the mouse and operates on them through a menu.
Note that the user selects S-expressions; not the textual representation
of an S-expression.  E.g., a user can't select a parenthesis because it
is only an artifact of the pretty-printer.  Since parentheses are generated
only by the pretty-printer on redisplaying a transformed S-expression,
they can never be unbalanced.  Since atoms are typically "typed" into a
program by buttoning an existing instance and using a menu command which
copies the interned pointer (not the characters of the PNAME) spelling
errors can't occur.  Similarly, indenting is the job of the pretty printer--
not the programmer--and is done per-window, so editing windows may be of
different sizes.  I am amused to see CADR-sized editor windows on 3600's.

    For the edification of those who have not had the privilege of being
    subjected to Interlisp's slavish adherence to the principle that it
    should constitute an entire programming environment (as opposed to
    being just another programming language living on a general purpose
    computer system), one of the concomitant requirements of this philosophy
    is that all operations, including editing, be done on Lisp objects.

Some of us are willing slaves.

    This means that comments (which are handled by a function called *
    that does not evaluate its arguments) are a part of the running program.
    Thus, extreme care is required in the placement of comments.  [...]

Extreme care must be taken when spelling identifiers in C programs and placing
semicolons and parentheses.  Nobody said programming would be safe!  <:-)

    [...] In fact, because Lisp is largely a functional language, there
    are relatively few safe places to put comments.

I don't think that it is unfair to ask a programmer to learn the semantics
of the programming language he is using.  Given an understanding of PROGN,
COND, AND, OR, and the interpreter, I don't think it is difficult to comment
Interlisp programs at all.  Anyway, the compiler catches instances where
the programmer has used a comment for its value and warns him with a message
like "Warning: value of comment used".  Actually, I haven't seen this message
in a year or two.  When was the last time cc barfed on a semicolon you forgot?

    [...] Similarly, in the display editor on Interlisp-D, comments are
    kept as far away from the executable code as possible (on the same
    line) and displayed in a font which is considerably less readable than
    that used for non-comments.

You have only to rebind COMMENTFONT to a larger/bolder fontclass if you
don't like the default.  What font does EMACS use for comments in C programs?

    This is the basis on which I justify my earlier claim that Interlisp
    "discourages" comments, which I consider an undesirable goal.

As I've said too many ways above, I don't think Interlisp discourages
misplaced comments any more than C discourages misplaced semicolons.

--Christopher Schmidt

------------------------------

Date: Sat 6 Jul 85 00:01:40-PDT
From: Peter Karp <KARP@SUMEX-AIM.ARPA>
Subject: Interlisp comments

         [Forwarded from the Stanford bboard by Laws@SRI-AI.]

The Interlisp manual contains roughly 25 chapters which average
approximately 25 pages each.  These are big pages with closely
packed type.  The Interlisp environment has been evolving for
almost 20 years.  This should suggest to you that it is a rich
and complex entity.

Are we really expected to take seriously the proposition that
we shouldn't use this language because it doesn't let us put
comments anywhere we please?  [...]

------------------------------

Date: Fri 5 Jul 85 16:24:29-PDT
From: Christopher Schmidt <SCHMIDT@SUMEX-AIM.ARPA>
Subject: ~= re: Interlisp comments

         [Forwarded from the Stanford bboard by Laws@SRI-AI.]

[...]


    We're not talking about syntactic sugar or ease of use. We're talking
    about a limitation on the functionality of the programming environment.

My mistake then.  I think that C comments are the epitome of syntactic sugar:
they are discarded by the cc parser as if they had never existed.  And rather than
Interlisp's being a programming environment that "holds comments in contempt"
I would go so far as to claim that it gives them MORE respect than
unix/EMACS/C.  In the unix/EMACS/C environment, let's say that we have an
error.  unix gives us one of its many helpful error messages from the set
{bus error (core dumped) | segmentation fault (core dumped)}.  The user invokes
the break package (cdb) by hand.  Where are the comments now?  Where is
the symbol table?  The hacker gets those only if he is lucky!  In Interlisp,
by contrast, one of >50 error messages is printed, a break package window
opens up, a stack trace opens up (any frame can be selected and inspected
symbolically in its own window), and a menu of break package commands is
available (in addition to the entire programming language).  If one invokes
the editor from the break package (picking the function from the stack trace
with the mouse), the source code is right there; the variables can be
evaluated in the stack context of the break; and the comments are still
there!  I call that esteem, not contempt.  Nor is an Interlisp break merely
a post mortem: within the break package one can change a variable, rewrite
a function if desired, and continue the program with the new definition,
which is now THE definition.  Do fixes made in cdb get incorporated into
the original source automatically?
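
The fix-and-continue workflow just described might look something like the
following illustrative break-package session (prompts and break commands
vary by Interlisp version; HALVE is a made-up example function):

```lisp
(DEFINEQ (HALVE (X) (QUOTIENT X 0)))    (* buggy: divides by zero)
(HALVE 10)                              (* error opens a break window)

(* Inside the break, the entire language is available: redefine the
   offending function and resume with the new definition, which is
   now THE definition.)
:(DEFINEQ (HALVE (X) (QUOTIENT X 2)))
:RETURN (HALVE 10)
```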

[...]

--Christopher

------------------------------

Date: Fri 5 Jul 85 18:01:35-PDT
From: Wade Hennessey <WADE@SU-SUSHI.ARPA>
Subject: interlisp comments

         [Forwarded from the Stanford bboard by Laws@SRI-AI.]

Chris's comments about the ease of debugging in INTERLISP are true of any
Lisp system.  They are not really relevant to greep's dislike of INTERLISP
or liking of UNIX.  For example, ZETALISP uses a commenting style
similar to C's, and yet it provides the same debugging functionality
as INTERLISP.  There are several useful functions in ZETALISP that get
the system to find the source file where a function is defined and then let
you start editing or examining the definition, comments and all.  Thus, you
needn't make comments part of the code to make them easily accessible at
all times.
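
Wade's point can be sketched as follows, assuming a Zetalisp-style listener
(ED is the usual Lisp-machine editor entry point and // its division
function, but exact names and behavior varied by release):

```lisp
;; Zetalisp comments, like C's, are stripped by the reader: they live
;; only in the source file, not in the function's list structure.
(defun halve (x)
  ;; divide by two (// is Zetalisp's division function)
  (// x 2))

;; From the listener, (ed 'halve) asks the system to consult its
;; source-file records, find the file where HALVE is defined, and
;; open that definition in the editor, comments and all.
```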

Wade

------------------------------

End of AIList Digest
********************

