From comsat@vpics1 Thu Oct 10 22:21:10 1985
Date: Thu, 10 Oct 85 22:21:06 edt
From: comsat@vpics1.VPI
To: fox@opus   (FRANCE,JOSLIN,ROACH,FOX)
Subject: From: AIList Moderator Kenneth Laws <AIList-REQUEST%sri-ai.arpa@CSNET-RELAY>
Status: R

Received: from sri-ai.arpa by CSNET-RELAY.ARPA id a006708; 10 Oct 85 1:37 EDT
Date: Wed  9 Oct 1985 21:13-PDT
Reply-to: AIList%sri-ai.arpa@CSNET-RELAY
US-Mail: SRI Int., 333 Ravenswood Ave., Menlo Park, CA  94025
Phone: (415) 859-6467
Subject: AIList Digest   V3 #140
To: AIList%sri-ai.arpa@CSNET-RELAY
Received: from rand-relay by vpi; Thu, 10 Oct 85 22:05 EST


AIList Digest           Thursday, 10 Oct 1985     Volume 3 : Issue 140

Today's Topics:
  Query - Prolog on Macintoshes and IBM-Style PCs,
  AI Tools - Prolog vs. Lisp

----------------------------------------------------------------------

Date: Wed, 9 Oct 85 17:38:09 mdt
From: ted%nmsu.csnet@CSNET-RELAY.ARPA
Subject: prolog's on macintoshes and ibm-style pc's


Currently I know about arity prolog and prolog-2.  These are
reasonably-featured prolog interpreter/compiler packages that
run on the ibm pc.  Does anyone know of other packages which
are available on the pc or the macintosh???

------------------------------

Date: 05 Sep 85 18:29:55 PDT (Thu)
From: Sanjai Narain <Narain@rand-unix.ARPA>
Subject: Response to Hewitt

          [Forwarded from the Prolog Digest by Laws@SRI-AI.]

>Prolog (like APL before it) will  fail  as  the  foundation  for
>Artificial   Intelligence  because  of  competition  with  Lisp.
>There are commercially  viable  Prolog  implementations  written
>in Lisp but not conversely.

For the same reason, Lisp should have failed as a foundation for
computing because  of  competition  with  assembly language.
There  are commercially viable implementations of Lisp in assembly
language  but not conversely.

>LOGIC as a PROGRAMMING Language will fail as the foundation
>for AI because:

>1.  Logical inference cannot be used to infer the decisions
>    that need to be taken in open systems because the decisions
>    are not determined by system inputs.

>>2.  Logic does not cope well with the contradictory knowledge
>>    bases inherent in open systems.  It leaves out
>>    counterarguments and debate.

>>3.  Taking action does not fit within the logic paradigm.

1.  Hewitt clearly states in his  recent  BYTE  article  that
traditional notions  of  computation  as  defined,  for example,
by Turing machines or recursive functions cannot model the behavior
of open systems.  Hence even Lisp  is  inadequate for such modeling
(by his reasoning).

2.  The notion of contradiction (i.e. inconsistency) is well
understood in logic.

3.  The statement is too vague for debate.  What do the words
"action" and "fit"  mean?   Certainly,  if  action  can  be  modeled
by  an  effective procedure, it can be modeled by logic, cf. 1.

-- Sanjai Narain

------------------------------

Date: Thu, 5-Sep-85 13:40:43 PDT
From: (Tom Khabaza) mcvax!ukc!warwick!cvaxa!tomk@Seismo
Subject: On Hewitt's "Prolog and logic programming will fail"

          [Forwarded from the Prolog Digest by Laws@SRI-AI.]

I have read with interest the discussion following Carl Hewitt's
"Prolog will fail as the foundation for AI and so will Logic
Programming". I particularly enjoyed Vijay Saraswat's reply, most
of which I agree with. However, I would like to add a few comments:

In some ways I was surprised by the original message; I should
have thought that if AI has taught us anything, it is that to solve
a given problem, we need a good representation language.  Why anyone
might think that logic is the BEST representation language for every
problem is beyond me.  (No Kowalskiist flames please, I know the
arguments, and I don't regard the case as proven.)

On the other hand, we don't yet know what the limits of logic
programming are; researchers in the field are constantly coming up
with new techniques.  There is convincing evidence that logic
programming is better than conventional programming for some kinds
of task, at least with regard to ease and clarity (though probably
not yet efficiency).

But I think the basis of the original comment goes deeper than the
virtues and vices of logic programming.  As I understand it (and I
wasn't around at the time) some earlier AI programming languages,
such as perhaps micro-Planner and its successors, WERE expected to
become a "foundation" for AI.  Perhaps this was because people still
had hopes for the notion of some "ultimate" representation language,
or family of languages.

AI is older and perhaps more cynical now; I don't think we expect
a single foundation for the field in the form of a representation
language.  Logic programming may be very useful for some parts of
AI; for example some kinds of rule based systems, but I don't expect
it to be the best tool for all kinds of AI programming.  In fact my
personal opinion is that logic programming will find its forte in
more conventional Computer Science, where formal specification is a
more practical proposition than in the relatively exploratory
activity of AI programming.

But I will say this in its favour: logic programming is IMPORTANT.

Logic programming is as different from conventional programming as
programming is from not programming at all.  I have met people who
have given up on Prolog because it was difficult for them and they
(rightfully) considered themselves competent programmers - and so
thought it must be Prolog's fault!  (I don't mean to imply that
anyone who has posted in this discussion is such a person.) But
logic programming is different in fundamental ways; it's worth
persevering to get to the bottom of it, and as logic programming
languages improve, it will become even more so.

So for all you computer people out there, USE Prolog, and study
how other people have used it.  It really is worth it.

------------------------------

Date: Fri, 13 Sep 85 10:41:27 bst
From: William Clocksin <wfc%computer-lab.cambridge.ac.uk@ucl-cs>
Subject: Lisp/Prolog

          [Forwarded from the Prolog Digest by Laws@SRI-AI.]

I received issues 36 and 37 late owing to a net problem somewhere
between Stanford and Cambridge.  I am puzzled by this Lisp/Prolog
debate started by Carl Hewitt.  I use both Prolog and Lisp, and
have never felt the need to use one exclusively.  I suppose they
are like screwdrivers and chisels;  both are roughly the same,
but for slightly different purposes; to a person unfamiliar with
one of them, the other one might seem redundant.  I am also
puzzled about the question of a "foundation" for AI.  How can a
language be a "foundation" for anything?  Was Latin a "foundation"
of Western civilisation?  Seen any fundamental native speakers
lately? Besides, does AI deserve to have foundations attributed
to it anyway?

Another problem is this question about logic.  Prolog is a
programming language.  It was inspired by logic, but it is not
programming in logic.  Proponents of using logic do have a problem
matching impedance with the real world.  But Prolog is to logic
as Lisp is to lambda calculus.  Those who advocate programming in
lambda calculus have the same problem as those who advocate
programming in pure logic.  If Prolog can be said to have any
connection with logic, it is as the FORTRAN of logic programming.
Prolog is useful because you can grow data structures that have
actual variables in them, and because it is easy to define
nondeterministic methods.  I know how Prolog searches for a solution
just as I know how flow of control happens in Lisp, say.  I am not
disappointed with Prolog's strict strategy just as I am not
disappointed with Lisp's inability to run programs backwards, say.
I take it as it comes, and it is useful for some things.  Talking
hypothetically about the "ideal" language is another topic entirely,
and it only muddies the water to bring Prolog and Lisp into it.

------------------------------

Date: Wed 25 Sep 85 11:22:32-PDT
From: Fernando Pereira <PEREIRA%sri-candide@sri-marvin>
Subject: Prolog vs. Lisp (again!)

          [Forwarded from the Prolog Digest by Laws@SRI-AI.]

I humbly confess my incompetence at the debating style Carl
Hewitt is using in the Prolog vs. Lisp discussions, which
seems to consist in ignoring the FACTUAL points of other
contributions and just continuing to repeat the same OPINIONS.

It is a FACT that no practical Prolog system is written entirely
in Lisp: Common, Inter or any other. Fast Prolog systems have
been written for Lisp machines (Symbolics, Xerox, LMI) but their
performance depends crucially on major microcode support (so
much so that the Symbolics implementation, for example, requires
additional microstore hardware to run Prolog). The reason for
this is simple: No Lisp (nor C, for that matter...) provides the
low-level tagged-pointer and stack operations that are critical
to Prolog performance.

The fact that Lisp is not good enough as an implementation
language for Prolog should not be considered as a weakness of Lisp,
BECAUSE Lisp was not designed for such low-level operations in the
first place. In fact, NO ``high-level'' programming language that I
know of provides those kinds of operations, and for the very simple
reason that, being high-level languages, they have no business in
exploring the recesses of particular machine architectures. ALL
fast Prolog systems that I have seen (some of which I helped
implement) rely on a careful exploitation of the underlying machine
architecture.

By the same argument, the fact that Lisp cannot be efficiently
implemented in Prolog cannot be the basis of a valid criticism of
Prolog. Prolog is not a systems programming language, and in any
case a good Lisp implementation must be carefully coupled to the
underlying machine architecture -- so much so that the fastest Lisps
rely on specialized architectures!

It seems clear to me that no single existing programming language
can be said to provide a ``foundation'' for AI. In fact, the very
notion of a programming language providing a foundation for a
scientific subject seems to me rather misguided. Does Fortran
provide a ``foundation'' for physics?  The relation between AI
problems, formal descriptions and programming concepts is far too
subtle for us to expect a ``foundation'' for AI in a mere
programming language.

The crusading tone of Hewitt's comments is also rather
unsettling.  AI researchers will use whatever language they
feel most comfortable with for the problem they are working
on, without need for any guidance from on high as to the
ultimate suitability of that language. If more researchers use
Prolog, is that a threat to Lisp users? If I do a piece of AI
research using Prolog, will it not be judged according to its
content, independently of programming language?

That kind of battle might be very important for AI software
companies, but surely we should not let marketing hype get in
the way of our research. I am sitting at a Sun workstation typing
this, with a Prolog window just to the right. Will my research be
useless to someone who sits at a Xerox 1109? If I walk down the
corridor and write a Lisp program on a Symbolics machine (as I
have done and surely will continue to do), will THAT work have
a different value? If I decide to use Hoare's CSP for the
navigation component of our robot, will I be then outside
AI, because I am not using an ``official'' AI language?

With respect to Rick McGeer's points: there are some elegant ways
of compiling Prolog into Lisp, particularly into the statically-scoped
varieties (or into other statically-scoped languages such as
Algol-68...). I have reason to believe that a compiler along these
lines would produce code considerably faster than the 100 LIPS he
reports, even though still much slower than what is attainable with
a lower-level implementation.  [...]

-- Fernando Pereira

------------------------------

Date: Thu, 26 Sep 85 10:29 EDT
From: Tim Finin <Tim%upenn.csnet@CSNET-RELAY.ARPA>
Subject: Lisp and Prolog

          [Forwarded from the Prolog Digest by Laws@SRI-AI.]


                           PROLOG/LISP = LISP/C

I'd like to amplify a point of view Clocksin put forth in V3
#39 in the great LISP vs. PROLOG debate.  Prolog (and Logic
Programming in general) and Lisp are both tools which are suited
for different tasks. Luckily, none of us is being forced to use
one to the exclusion of the other.

I've had to give more than my share of introductory AI talks over
the years. When the discussion gets around to Lisp I usually point
out that Lisp is especially attractive for experimental programming
situations, i.e. where you know what you want to accomplish but do
not yet have all of the details as to algorithms, data structures,
etc. worked out.  Once you've worked out the last detail, you can
re-implement your system in C, if you like, and gain the benefits
of a faster and smaller system.

Along these lines, I think that the slogan "Prolog is to Lisp as
Lisp is to C" is not too inaccurate.  I think that Prolog is even
better suited to initial, experimental and exploratory attempts to
attack a problem computationally.  I find that it is a very
convenient paradigm in which to get started.  Once I have a better
idea of how to represent a problem and how to manipulate the
representation, I can re-implement it in Lisp and gain a faster,
more streamlined system.

-- Tim

------------------------------

Date: 9 Oct 85 12:56 PDT
From: Kahn.pa@Xerox.ARPA
Subject: Prolog vs Lisp

Apropos the long debate started by Carl Hewitt:
As co-author of LM-Prolog, the best performing Prolog implemented in
Lisp,  I thought some performance numbers might be useful.   Naive
reverse runs at 10K LIPS on a CADR Lisp Machine with special micro-code
support, primarily for unification and trailing.  Without any micro-code
support it runs 2 to 3 times slower.  While there are much faster
Prologs available, I would argue that LM-Prolog is commercially viable
without the micro-code.  There have been sales of the 3600 version of
LM-Prolog despite the fact that it is not supported by micro-code.
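
For readers unfamiliar with the benchmark: naive reverse is the
standard LIPS (logical inferences per second) test program, and
reversing an n-element list with it takes (n+1)(n+2)/2 inferences --
496 for the customary 30-element list.  The arithmetic behind figures
like the 10K LIPS above is thus simple; a minimal sketch in Python
(the helper name is mine):

```python
# Inference count for the naive-reverse (nrev) LIPS benchmark.
# nrev of an n-element list performs (n + 1)(n + 2) / 2 logical
# inferences; the customary benchmark list has n = 30.

def nrev_inferences(n):
    return (n + 1) * (n + 2) // 2

inferences = nrev_inferences(30)
print(inferences)  # 496

# At 10K LIPS (the CADR figure quoted above), one nrev of a
# 30-element list takes 496 / 10000 seconds, about 50 ms.
print(round(inferences / 10_000 * 1000, 1))  # 49.6 (milliseconds)
```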

But part of Fernando's counter to Carl was that a serious Prolog
implementation needs sub-primitives that Lisp does not provide.  It is
true that LM-Prolog even without micro-code relies on Zeta-Lisp's
sub-primitives to manipulate pointers and create invisible pointers.
This makes de-referencing variables very fast.   While I don't have any
figures I don't think that is so important, at least for the naive
reverse benchmark.  In other words I believe a pure Common Lisp
implementation of Prolog on say a 3600 would run 3 or 4 times slower
than Symbolics Prolog (which is fully micro-coded).  Depending upon how
important a factor of 3 or 4 is, one evaluates differently Carl's claim
that Lisp is good for implementing Prolog (and not vice versa).

I think part of this whole debate is confused with the larger debate of
single paradigm vs multi-paradigm languages.  My feeling is that while
a single-paradigm system is elegant, too often it doesn't fit the
problem well and awkward cliches are used.  For example, it is widely
believed that for some kinds of problems object-oriented programming is
most appropriate because it encapsulates state and behavior so well.

Concurrent Prolog advocates in such situations program objects in a
complex cliche of tail recursive predicates where one argument is a
stream of messages.  No serious object-oriented language requires that
each method list all the instance variables in the head and their new
values again at the end of each "method" (the tail recursive call).  I
am not happy with the argument that goes -- well some problems are best
programmed with Lisp, others with Prolog, others with SmallTalk, and
still others with Ops5.  Any significantly large problem is going to
have sub-problems that are best handled by different paradigms.

The debate should not be Lisp vs. Prolog but how can we combine Lisp and
Prolog (and Smalltalk and ...) in a coherent well-integrated fashion.
It's not easy.  LM-Prolog was one attempt at doing this, as well as
ICOT's ESP,  Prolog-KR and LogLisp.  I tried to integrate Prolog with
Loops.  None of these integrations are perfect but I think this is the
direction to go for BUILDING TOOLS for BUILDING REAL APPLICATIONS.  The
CommonLoops effort at Xerox represents to me the best effort to date to
build a tight integration of two paradigms (object and
procedure-oriented).

In contrast to what I just said, I think the single paradigm approach
can be a great research strategy.  Much of the Logic Programming
community is caught up in the game of finding out how far one can go
with logic programming.  Can one write simulators, text editors,
graphics, operating systems, embedded languages, and so on in Prolog
or a language like it?  It is rightfully considered cheating to
"escape to Lisp" or jump into some object-oriented subsystem.  Their
purpose is to explore the paradigm itself -- its uses, its
limitations, to stretch it and pull it in new directions not to build
real applications.  When building real applications the question is
not whether this or that can be done in Prolog (we all know that
everything can be written in Prolog) but which language gives the
best support for building the application in the most fitting way.

------------------------------

End of AIList Digest
********************

From LAWS@KL.SRI.COM Fri Dec 18 03:46:52 1987
Mail-From: LAWS created at  9-Oct-85 21:30:41
Date: Wed  9 Oct 1985 21:20-PDT
From: AIList Moderator Kenneth Laws <AIList-REQUEST@SRI-AI>
Reply-To: AIList@SRI-AI
Us-Mail: SRI Int., 333 Ravenswood Ave., Menlo Park, CA  94025
Phone: (415) 859-6467
Subject: AIList Digest   V3 #141
To: AIList@SRI-AI
Resent-Date: Fri 18 Dec 87 00:08:32-PST
Resent-From: Ken Laws <Laws@KL.SRI.COM>
Resent-To: isr@vtopus.CS.VT.EDU
Resent-Message-Id: <12359392102.20.LAWS@KL.SRI.COM>
Status: R


AIList Digest           Thursday, 10 Oct 1985     Volume 3 : Issue 141

Today's Topics:
  Psychology & Logic - Counterexample to Modus Ponens,
  AI Tools - DO you really need an AI machine?

----------------------------------------------------------------------

Date: Mon, 7 Oct 85 14:07 EDT
From: Stephen G. Rowley <SGR@SCRC-PEGASUS.ARPA>
Subject: Formal Logic

    Date: Wed, 2 Oct 85 14:11:34 edt
    From: John McLean <mclean@nrl-css.ARPA>
    Subject: Re: A Counterexample to Modus Ponens

       (1) If a Republican wins the election then if the winner is not Ronald
           Reagan, then the winner will be John Anderson.

       (1a) If a Republican wins the election and the winner is not Ronald
            Reagan, then the winner will be John Anderson.

    I would like to see some further discussion of this since I'm afraid that
    I don't see the difference between (1) and (1a) either.  Certainly there
    is no difference with respect to inferential power as far as classical
    logic is concerned.


There's no difference.  (At least, not logically.  Interpreting people's
judgements about probabilities from natural language statements is an
extremely subtle art, as I believe Chris said in a reply.)

[Notation: -> means "implies", ~ means "not", & means "and", | means
           "inclusive or".]

Let R  = "a Republican wins the election"
    RR = "Ronald Reagan wins the election"
    JA = "John Anderson wins the election"

[1] R -> [ ~RR -> JA ]

[1a] [ R & ~RR ] -> JA

Of course, p -> q is the same as ~p | q.  Then [1] and [1a] both
transform to

        ~R | RR | JA
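
The equivalence is easy to check mechanically.  A minimal sketch in
Python (variable and helper names are mine), sweeping all eight truth
assignments:

```python
from itertools import product

def implies(p, q):
    # Material implication: p -> q is ~p | q.
    return (not p) or q

# Check [1] and [1a] against ~R | RR | JA on all eight assignments.
for r, rr, ja in product([False, True], repeat=3):
    f1 = implies(r, implies(not rr, ja))   # [1]  R -> (~RR -> JA)
    f1a = implies(r and not rr, ja)        # [1a] (R & ~RR) -> JA
    clause = (not r) or rr or ja           # ~R | RR | JA
    assert f1 == f1a == clause
print("equivalent on all 8 assignments")
```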

------------------------------

Date: Mon, 7 Oct 85 17:39:03 edt
From: pugh@GVAX.CS.CORNELL.EDU (William Pugh)
Subject: Re: A Counterexample to Modus Ponens


>>  In a recent issue of AIList, John McLean cited an article about
>>  inconsistencies in  public opinion that apparently said:
>>
>>    Before the 1980 presidential election, many held the two beliefs
>>     below:
>>
>>     (1) If a Republican wins the election then if the winner is not
>>         Ronald Reagan, then the winner will be John Anderson.
>>
>>     (2) A Republican will win the election.
>>
>>    Yet few if any of these people believed the conclusion:
>>
>>     (3) If the winner is not Reagan then the winner will be Anderson.
>>
        Just to throw my two bits in:

        First, from probability, some background:

                P(x) is the probability of x

                and P(x|y) is the probability of x given that y is true.

        And by Bayes' theorem,

                         P(x) P(y|x)
                P(x|y) = -----------
                            P(y)

The beliefs stated above can be rephrased as follows:

        Let RW stand for "A Republican wins"
        Let RR stand for "Ronald Reagan wins"
        Let JA stand for "John Anderson wins"

        (1) P( ~RR => JA | RW) = 1
        (2) P(RW) = 0.96 (for the sake of argument - high at any rate)

        and we wish to find

        (3) P(JA|~RR)

        well, from Bayes' theorem,

                    P(JA) P(~RR|JA)       P(JA)
        P(JA|~RR) = ---------------   =  -------
                        P(~RR)            P(~RR)

        For example, if P(JA) = 0.01 and P(RR) = 0.95,
        then P(JA|~RR) = 0.2

Note that we have not used 1 at all, and made the (obvious)
assumption (similar to 1) that if Anderson wins, Reagan can not win.
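
The arithmetic of the example can be checked directly.  A minimal
sketch in Python, assuming as above that an Anderson win excludes a
Reagan win, so P(~RR|JA) = 1:

```python
# Numbers from the example above: P(JA) = 0.01, P(RR) = 0.95.
p_ja, p_rr = 0.01, 0.95

# Bayes' theorem: P(JA|~RR) = P(JA) * P(~RR|JA) / P(~RR).
# With P(~RR|JA) = 1 this reduces to P(JA) / P(~RR).
p_ja_given_not_rr = p_ja / (1 - p_rr)
print(round(p_ja_given_not_rr, 2))  # 0.2
```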

Now, to try to figure out what went wrong in the original example,
consider:

        Given P(A|B) and P(B),
        P(A&B) = P(A|B)P(B)

        HOWEVER, P(A&B) <= P(A)
        since P(A) = P(A&B)/P(B|A)

        THEREFORE, if
                A=>B with high probability
                and A with high probability
                THEN, B with high probability

        Now, going back to our example, we see that the original
        conclusion:

>>     (3) If the winner is not Reagan then the winner will be Anderson.

        is almost certainly true.  Although non-obvious, this is because
        (3) is true if Reagan wins.

        The problem, therefore, is that people do not use the
        "standard" definition of implication.  By "if A then B"
        people tend to think "given that A is true, B is true" - if
        A is false, the validity of the statement is not
        verified one way or the other.


        You can find more on Bayesian Inference in "Introduction to
        Artificial Intelligence" by Charniak and McDermott, or in many
        other sources.

        I have not yet figured out how to make Bayesian Inference work
        with this style of implication, but it is obvious that it
        requires some form of special treatment.  I'll let you know
        if I figure anything out.


Bill Pugh
Cornell University
..{uw-beaver|vax135}!cornell!pugh
607-256-4934,ext5

------------------------------

Date: Tue, 8 Oct 85 13:03 EDT
From: Carole D Hafner <HAFNER%northeastern.csnet@CSNET-RELAY.ARPA>
Subject: More on Modus Ponens

Several ideas have been proposed to explain the fact that many people
in 1980 would agree to the following:

  1. If a Republican wins the election then if it is not RR then it will be JA.
  2. A Republican will win the election.

But they would not agree to the apparent logical consequence:

  3. If RR does not win the election then JA will win.

Does this mean that modus ponens is not a rule of common sense reasoning? NO.

The problem is due to the fact that "a republican" in the
first sentence has an "intensional" meaning, while in the second it
has (to most people) an extensional meaning.

In other words, people believed:

 (there-exists X) [win(X,election) & party(X,republican)]

                     and not

   (forall X) [win(X,election) --> party(X,republican)]

It is the second interpretation of the indefinite noun phrase that
gives rise to the conclusion in (3).

Carole Hafner
hafner@northeastern

------------------------------

Date: Tue 8 Oct 85 10:26:00-PDT
From: EDWARDS@SRI-AI.ARPA
Subject: Equivocation (?) in "failure" of modus ponens


As far as I have been able to determine, in conversation with Todd
Davies and Marcel Schoppers, the apparent failure of modus ponens rests
on a subtle point about the understanding of "if-then".

If all conditionals are taken as truth-functional, then most people
would have believed the premises *and* the conclusion:

(1) If a Republican wins then if Reagan doesn't win then Anderson will
win.

(2) A Republican will win.

Therefore,

(3) if Reagan doesn't win then Anderson will win.


For (3), when read truth-functionally, is equivalent to:

(4) Either Reagan will win or Anderson will win

which in turn follows truth-functionally from:

(5) Reagan will win

which most people believed.
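
Read truth-functionally, the chain from (5) to (4) to (3) can be
verified by enumeration; a minimal sketch in Python (names mine):

```python
def implies(p, q):
    # Material implication: p -> q is ~p | q.
    return (not p) or q

for rr in (False, True):
    for ja in (False, True):
        s3 = implies(not rr, ja)  # (3): if Reagan doesn't win, Anderson wins
        s4 = rr or ja             # (4): either Reagan or Anderson wins
        assert s3 == s4           # (3) and (4) are truth-functionally equivalent
        if rr:                    # (5): Reagan wins ...
            assert s4             # ... entails (4), hence (3)
print("checked all four assignments")
```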

The problem is due to the fact that (3) is not normally understood
truth-functionally; it is understood as a counterfactual, setting up a
possible situation (or "mental space"--Fauconnier, *Mental Spaces*) in
which Reagan doesn't win and asking what assumption will make that
situation most like the actual one.  The assumption most people would
make, given such a situation, is that the Democrats will win; so they
believed (6), not (3):

(6) If Reagan doesn't win, then the Democrats will win.

The really interesting question here is whether (1) and (2) are
understood truth-functionally.  If they were, then the alleged failure
of modus ponens would rest on a simple equivocation.  But I don't
think they are.  (1) is a counterfactual just like (3) and (6).  The
problem is that (3) is understood quite differently when it appears as
the consequent of (1) than when it appears alone.  The antecedent of
(1) sets up a mental space in which a Republican wins, by hypothesis.
This affects the understanding of (3) in a way in which a mere factual
belief that a Republican will win (such as is expressed in (2)) does
not.  It rules out the consideration of a Democratic victory even in a
counterfactual situation.  So when the antecedent of (3) sets up
another mental space inside the first--where it is presupposed that
Reagan doesn't win--the consequent of (6) is ruled out.  Inside the
second mental space, *by hypothesis*, a Republican wins but Reagan
doesn't win.  Thus, Anderson's victory is the only conclusion
available.

Note that a factual belief with 100% certainty, that a Republican
will win, would have much the same effect as the antecedent of a
counterfactual.  If (2) were believed, not merely as very likely but
with absolutely unshakable confidence, then (3) should follow, even if
(1) is only believed with moderate confidence.  Thus those who
attributed the problem to difficulties about probability were in a way
right, though this misses the point about understanding of the
conditionals.

This does pose a problem for classical treatments such as David
Lewis's *Counterfactuals*.  According to Lewis, modus ponens applied
to a counterfactual conditional is valid.  The argument attributed to
McGee seems to refute this.

P.S.: I write the above without having read Vann McGee's article, on
the basis of conversations and reading AIList.  I intend to get to
McGee's article in the near future.

------------------------------

Date: 9 Oct 85 10:55:00 EDT
From: "CUGINI, JOHN" <cugini@nbs-vms.ARPA>
Reply-to: "CUGINI, JOHN" <cugini@nbs-vms.ARPA>
Subject: DO you really need an AI machine?


> Date: Sun, 06 Oct 85 15:20:34 EDT
> From: "Srinivasan Krishnamurthy" <1438@NJIT-EIES.MAILNET>
> Subject: Do I really need a AI Machine?
>
>  Dear readers,
>  I work at COMSAT LABS, Maryland. We are getting into AI in a big way
> and would like comments, suggestions and answers to the
> following questions,to justify a big investment on a AI machine.
>
>   * What are the specific gains of using a AI Machine (like symbolics)
>     over developing AI products/packages on general purpose
>     machines like - VAX-11/750-UNIX(4.2BSD), 68000/UNIX etc.
>
      ... [more questions follow]

Well, I'm not qualified to give detailed answers, but let me
rave mildly over some personal experience and indulge in a
little heresy in hopes of provoking some discussion:

So we're gonna do an expert systems project. So we got a Symbolics 3600,
so that me and one other guy can develop an expert system.

problem no. 1 - Geez, what a learning curve, when coming off
VAX/VMS - I mean, sure, ya gotta learn expert system techniques,
ya gotta learn Common Lisp, but I also gotta learn another file
system and operating system (which is somehow never quite
referred to as such), I gotta learn an editor with more features
than I could use in a million years, I gotta learn window
navigation -- after 3 months,  I have 12 dense pages of cheat
sheets, and am just getting to the point of adequacy (operationally
defined as: not getting caught inside the inspector and being unable
to figure out how to get out short of a boot-up).

problem no. 2 - We've got one screen and one keyboard - even with
only two people, it's surprising how often our schedules don't mesh.
Cugini's tentative hypothesis of hardware-sharing: the number of
people wishing to use a shared single-user system at any time is even.

problem no. 3 - So now we've got an AI lab, and whenever I wanna do
something I actually have to STAND UP AND WALK to the machine
(which may or may not be occupied anyway) - so I find myself
conversing with VMS, whose terminal sits conveniently to my right -
and login takes maybe 20 seconds, as opposed to a 3-minute boot-up.

problem no. 4 - and of course we don't have the Symbolics netted
to the VAX yet (or ever?), and so you can kiss data-sharing good-by.
Where should the (English) documentation for our expert system reside?
On the VAX, where I've got editors, formatters (eg runoff), and laser
printers already, or on the Symbolics, where the code lives, but which
at this point has no hard-copy output? What if I get some good Lisp
code over our beloved ailist? Do I key it in at the Symbolics?

Well, you get the idea - there's more to doing expert systems than
telling the forklift where to place your AI machine.  Let me forestall
some rebuttals by saying, sure I know some things we're doing wrong,
and yeah, we should net the Symbolics to the VAX and yeah, we should
buy a 3640 or whatever for additional users.  But point 1 is there's
a larger-than-I(-and-maybe-you)-suspected investment in
hardware, software, and time to be able to exploit these "power
tools" of AI.

Point 2 is it's not so clear when you need all that performance
that an AI workstation is giving you.  If PC's are seen as an
adequate delivery vehicle for an expert system, the assumption
would seem to be that you need performance during development -
but that doesn't seem right either - how fast does an editor
have to be? When you're doing logical testing (as opposed to
performance testing), wouldn't you be dealing with *smaller*
amounts of data, than in operational mode? Why would it not make
more sense to develop code on a relatively small system and
then use the performance of a 3600 for large-scale logic-crunching,
just as you might develop FORTRAN code on a PC, and then run it
on a CYBER?  I'm willing to believe I'm wrong about this, but I
don't understand why.

Point 3 is that the reason a lot of CompScis deride PL/I and
embrace, say, Pascal, is that PL/I is big, complicated, clumsy,
gives you everything you ever wanted and several that you didn't,
etc etc, whereas Pascal is small, elegant, well-designed, etc etc.
Any analogies here?

I should say that the Symbolics itself *is* fast, reliable,
and well-documented (but complicated) so I'm not complaining
about Symbolics per se. This is a generic complaint.

So now I'm using VAX LISP (which - shame! - does not yet have
complex numbers, but is otherwise pretty good), and I find that
the less-sophisticated editor is powerful enough for me, there
are reasonable tools for tracing, debugging, pretty-printing,
and I haven't yet been slowed down by performance problems.

What am I missing here?  Why am I happier on (sneer) a VAX than
on a glamorous Symbolics? (Replies implying coarse sensitivity
on the part of the writer will, of course, be given the most
serious consideration and then dismissed).

Needless to say, these are my own utterly idiosyncratic views,
and in no way reflect the policy, de jure or de facto, of the
National Bureau of Standards, the Department of Commerce, or the
entire Federal Government.

John Cugini <Cugini@NBS-VMS>
Institute for Computer Sciences and Technology
National Bureau of Standards
Bldg 225 Room A-265
Gaithersburg, MD 20899
phone: (301) 921-2431

------------------------------

End of AIList Digest
********************

From csvpi@vpics1 Thu Oct 10 22:34:29 1985
Date: Thu, 10 Oct 85 22:34:25 edt
From: csvpi@vpics1.VPI
To: fox@opus   (FRANCE,JOSLIN,ROACH,FOX)
Subject: From: AIList Moderator Kenneth Laws <AIList-REQUEST%sri-ai.arpa@CSNET-RELAY>
Status: R

Received: from sri-ai.arpa by CSNET-RELAY.ARPA id a011626; 10 Oct 85 13:35 EDT
Date: Thu 10 Oct 1985 09:26-PDT
Reply-to: AIList%sri-ai.arpa@CSNET-RELAY
US-Mail: SRI Int., 333 Ravenswood Ave., Menlo Park, CA  94025
Phone: (415) 859-6467
Subject: AIList Digest   V3 #142
To: AIList%sri-ai.arpa@CSNET-RELAY
Received: from rand-relay by vpi; Thu, 10 Oct 85 22:10 EST


AIList Digest           Thursday, 10 Oct 1985     Volume 3 : Issue 142

Today's Topics:
  Seminars - AI Meets Natural Stupidity (CSLI) &
    Learning Expert Knowledge (UT) &
    Interactive Modularity (UCB),
  Seminar Series - Commonsense and Nonmonotonic Reasoning (CSLI),
  Conference - Logic in Computer Science

----------------------------------------------------------------------

Date: Wed 9 Oct 85 16:51:08-PDT
From: Emma Pease <Emma@SU-CSLI.ARPA>
Subject: Seminar - AI Meets Natural Stupidity (CSLI)

         [Excerpted from the CSLI Newsletter by Laws@SRI-AI.]


          CSLI ACTIVITIES FOR *THIS* THURSDAY, October 10, 1985

   12 noon              TINLunch
     Ventura Hall       ``Artificial Intelligence Meets Natural Stupidity''
     Conference Room    by Drew McDermott
                        Discussion led by Roland Hausser, U. of Munich


   McDermott discusses three `mistakes', or rather bad habits, which are
   frequent in A.I. work.  He speaks from his own experience and cites
   several illuminating and amusing examples from the literature. In this
   TINLunch I will be discussing his thoughts on treating reference in
   A.I., which are discussed in the section entitled `unnatural
   language'.                                           --Roland Hausser

------------------------------

Date: Tue, 8 Oct 85 16:01:56 cdt
From: rajive@sally.UTEXAS.EDU (Rajive Bagrodia)
Subject: Seminar - Learning Expert Knowledge (UT)

                          Machine Learning for
                       Acquiring Expert Knowledge

                                  by
                             Bruce  Porter

                       noon, Friday 11th, Pai 3.38

An important  effort in Artificial Intelligence is the construction of
Expert Systems, but this effort is stymied by the problem of acquiring
knowledge  to guide  problem solving and reasoning.  This talk reviews
efforts  in  Machine  Learning  to  automate knowledge acquisition and
describes our current approach to the problem.

------------------------------

Date: Wed, 9 Oct 85 16:48:25 PDT
From: admin@ucbcogsci.Berkeley.EDU (Cognitive Science Program)
Subject: Seminar - Interactive Modularity (UCB)

                      BERKELEY COGNITIVE SCIENCE PROGRAM
                                  Fall 1985
                    Cognitive Science Seminar -- IDS 237A

        TIME:             Tuesday, October 15, 11:00 - 12:30
        PLACE:            240 Bechtel Engineering Center
        DISCUSSION:       12:30 - 1:30 in 200 Building T-4

        SPEAKER:          Ronald M. Kaplan,
                          Xerox Palo Alto Research Center  and  Center
                          for  the  Study of Language and Information,
                          Stanford University

        TITLE:            ``Interactive Modularity''

        Comprehensible  scientific  explanations  for   most   complex
        natural  phenomena  are  modular  in character.  Phenomena are
        explained in terms of the operation of separate  and  indepen-
        dent  components, with relatively minor interactions.  Modular
        accounts of complex cognitive phenomena, such as language pro-
        cessing,  have  also  been proposed, with distinctions between
        phonological, syntactic, semantic, and pragmatic modules,  for
        example,  and  with  distinctions  among  various rules within
        modules.  But these modular accounts  seem  incompatible  with
        the   commonplace  observations  of  substantial  interactions
        across component boundaries: semantic and  pragmatic  factors,
        for  instance,  can  be shown to operate even before the first
        couple of phonemes in an utterance have been identified.

             In this talk I consider several  methods  of  reconciling
        modular descriptions in service of scientific explanation with
        the apparent  interactivity  of  on-line  behavior.   Run-time
        methods  utilize  interpreters that allow on-line interleaving
        of operations from different modules, perhaps including  addi-
        tional  "scheduling"  components  for  controlling  the cross-
        module flow of information.  But depending on their mathemati-
        cal properties, modular specifications may also be transformed
        by off-line, compile-time operations into  new  specifications
        that  directly  represent  all  possible cross-module interac-
        tions.  Such compilation techniques allow for run-time  elimi-
        nation  of  module  boundaries  and  of intermediate levels of
        representation.  I will illustrate these techniques with exam-
        ples  involving  certain  classes of phonological rule systems
        and structural correspondences in Lexical-Functional Grammar.

------------------------------

Date: Wed 9 Oct 85 16:51:08-PDT
From: Emma Pease <Emma@SU-CSLI.ARPA>
Subject: Seminar Series - Commonsense and Nonmonotonic Reasoning (CSLI)

         [Excerpted from the CSLI Newsletter by Laws@SRI-AI.]


            COMMON SENSE AND NON-MONOTONIC REASONING SEMINARS
            Organized by John McCarthy and Vladimir Lifschitz
               Computer Science Dept., Stanford University

      A series of seminars on Common Sense and Non-monotonic reasoning
   will explore the problem of formalizing commonsense knowledge and
   reasoning, with the emphasis on their non-monotonic aspects.
      It is important to be able to formalize reasoning about physical
   objects and mental attitudes, about events and actions on the basis of
   predicate logic, as can be done with reasoning about numbers,
   figures, sets and probabilities.  Such formalizations may lead to the
   creation of AI systems which can use logic to operate with general
   facts, which can deduce consequences from what they know and what they
   are told and determine in this way what actions should be taken.
      Attempts to formalize commonsense knowledge have been so far only
   partially successful. One major difficulty is that commonsense
   reasoning often appears to be non-monotonic, in the sense that getting
   additional information may force us to retract some of the conclusions
   made before.  This is in sharp contrast to what happens in
   mathematics, where adding new axioms to a theory can only make the set
   of theorems bigger.
      Circumscription, a transformation of logical formulas proposed by
   John McCarthy, makes it possible to formalize non-monotonic reasoning
   in classical predicate logic. A circumscriptive theory involves, in
   addition to an axiom set, the description of a circumscription to be
   applied to the axioms. Our goal is to investigate how commonsense
   knowledge can be represented in the form of circumscriptive theories.
      John McCarthy will begin the seminar by discussing some of the
   problems that have arisen in using abnormality to formalize common
   sense knowledge about the effects of actions using circumscription.
   His paper Applications of Circumscription to Formalizing Common Sense
   Knowledge is available from Rutie Adler 358MJH.  This paper was given
   in the Non-monotonic Workshop, and the present version, which is to be
   published in Artificial Intelligence, is not greatly different. The
   problems in question relate to trying to use the formalism of that
   paper.
      The seminar will replace the circumscription seminar we had last
   year.  If you were on the mailing list for that seminar then you will
   be automatically included in the new mailing list. If you would like
   to be added to the mailing list (or removed from it) send a message to
   Vladimir Lifschitz (VAL@SAIL).

      The first meeting is in 252MJH on Wednesday, October 30, at 2pm.

------------------------------

Date: Wed 9 Oct 85 16:51:08-PDT
From: Emma Pease <Emma@SU-CSLI.ARPA>
Subject: LICS Conference

         [Excerpted from the CSLI Newsletter by Laws@SRI-AI.]


                             LICS CONFERENCE

      A new conference, LICS, (an acronym for ``Logic in Computer
   Science'') will meet in Cambridge, Mass, June 16-18, 1986.  The topics
   to be covered include abstract data types, computer theorem proving
   and verification, concurrency, constructive proofs as programs, data
   base theory, foundations of logic programming, logic-based programming
   languages, logics of programs, knowledge and belief, semantics of
   programs, software specifications, type theory, etc.  For a local copy
   of the full call for papers, contact Jon Barwise (Barwise@CSLI) or
   Joseph Goguen (Goguen@SRI-AI), members of the LICS Organizing
   Committee.

------------------------------

End of AIList Digest
********************

From csvpi@vpics1 Mon Oct 14 05:33:59 1985
Date: Mon, 14 Oct 85 05:33:55 edt
From: csvpi@vpics1.VPI
To: fox@opus   (FRANCE,JOSLIN,ROACH,FOX)
Subject: From: AIList Moderator Kenneth Laws <AIList-REQUEST%sri-ai.arpa@CSNET-RELAY>
Status: R

Received: from sri-ai.arpa by CSNET-RELAY.ARPA id a000736; 14 Oct 85 0:47 EDT
Date: Sun 13 Oct 1985 20:43-PDT
Reply-to: AIList%sri-ai.arpa@CSNET-RELAY
US-Mail: SRI Int., 333 Ravenswood Ave., Menlo Park, CA  94025
Phone: (415) 859-6467
Subject: AIList Digest   V3 #143
To: AIList%sri-ai.arpa@CSNET-RELAY
Received: from rand-relay by vpi; Mon, 14 Oct 85 05:17 EST


AIList Digest            Monday, 14 Oct 1985      Volume 3 : Issue 143

Today's Topics:
  Queries - Autonomous Vehicles & YAPS & Prolog vs. OPS5 & ES Tools &
    Franz Lisp Behavior,
  Bindings - Symbolics Lisp Machine Mailing List,
  Corrections - Verification Peer Review & TIMM Expert System,
  AI Tools - Workstations

----------------------------------------------------------------------

From: C. Ian Connolly <Connolly@GE-CRD>
Subject: Autonomous Vehicles

Apropos a recent AILIST entry:  Does anyone have more information
on the DARPA Autonomous Vehicle demo that Waxman, et al (I think -
correct me if I'm wrong) gave earlier this year?
I'd *love* a review, if anyone was there that can send one out on
this list.  What speeds was it capable of, what methods were used,
etc...?

------------------------------

Date: Wed, 09 Oct 85 18:30:49 EDT
From: "Srinivasan Krishnamurthy" <1438@NJIT-EIES.MAILNET>
Subject: YAPS - Commercial Version Info.

Dear Folks!
 Does anybody have information on the Commercial Availability
of a Production System called "YAPS"? This was developed at the
University of Maryland and funded by the Goddard Space Flight
Center. I can't seem to find the right people to talk to at
Maryland, regarding this system. Any leads will be greatly
appreciated...

   Thanks in Advance.
                                       .....Vasu

------------------------------

Date: 10 Oct 85 10:02:58 PDT (Thursday)
From: Cornish.EIS@Xerox.ARPA
Subject: Prolog vs. OPS5


Can anyone provide me with a compare-and-contrast discussion of Prolog
vs. OPS5?  To use an analogy from this list, are they both screwdrivers
or both chisels?


Jan

------------------------------

Date: Thu 10 Oct 85 08:53:24-PDT
From: Mark Richer <RICHER@SUMEX-AIM.ARPA>
Subject: es tools query


My list of major commercial AI software tools includes:
(1) S.1., (2) KEE, (3) ART, and (4) Knowledge Craft

There is also DUCK which seems to be more like an enhanced logic
programming language than the kind of tools (1)-(4) represent.

There is TIMM which was blasted on this list recently.

One other candidate appears to be KES or KES2.  Do you have any comments
on this system?  Is it a strong competitor to the other major tools?

mark

------------------------------

Date: 10 Oct 1985 14:37-CST
From: leff%smu.csnet@CSNET-RELAY.ARPA
Subject: franz lisp ?

The following demonstrates that enabling the trace facility causes
return values from lisp functions consisting of prog bodies to be set to
nil.  Is this supposed to happen and what does one do about it?

% cat blah1.l
(defun Blah (x y z a)
(prog (v)
(setq y (add 3 5))
(return 1)
))
% lisp
Franz Lisp, Opus 38.79
-> (load "blah1.l")
[load blah1.l]
t
-> (setq A (Blah 3 5 7 9))
1
-> (step e)
[autoload /usr/lib/lisp/step]
[load /usr/lib/lisp/step.l]
t
-> (setq B (Blah 3 5 7 9))
(setq B (Blah 3 5 7 9))
  (Blah 3 5 7 9)
    3
    5
    7
    9
    (prog (v) (setq y (add 3 5)) (return 1))
      (setq y (add 3 5))
        (add 3 5)
          3
          5
        8
      8
      (return 1)
        1
    nil
  nil
nil
->
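
  [The behavior the poster expects, namely that enabling a stepper should
  leave a function's return value unchanged, can be sketched by analogy in
  a modern language.  The Python below is a hypothetical analogue, not
  Franz Lisp: `blah` mirrors the prog body (a local binding, an
  assignment, an early return of a constant), and `sys.settrace` stands in
  for loading `step`.  A well-behaved tracer observes evaluation without
  altering results, which suggests the nils above come from the step
  package rather than from intended prog semantics.]

```python
import sys

def blah(x, y, z, a):
    # Analogue of the Franz Lisp prog body: bind a local,
    # assign to a parameter, then return a constant early.
    v = None
    y = 3 + 5
    return 1

untraced = blah(3, 5, 7, 9)   # no tracing in effect

# Install a trace hook (the analogue of loading `step`) and call again.
seen = []
def hook(frame, event, arg):
    if event == "call":
        seen.append(frame.f_code.co_name)
    return hook

sys.settrace(hook)
traced = blah(3, 5, 7, 9)
sys.settrace(None)

# Tracing observed the call but did not disturb the return value.
print(untraced, traced, "blah" in seen)   # -> 1 1 True
```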

------------------------------

Date: Fri 11 Oct 85 12:43:55-PDT
From: Richard Acuff <Acuff@SUMEX-AIM.ARPA>
Subject: Re: Symbolics lisp machine mailing list ??

         [Forwarded from the Stanford bboard by Laws@SRI-AI.]

   The Symbolics Lisp Users' Group has a mailing list SLUG@R20.UTexas.ARPA.
This can be read from the BBoard SYMBOLICS-LISP-USERS on Sumex.  Send
mail to SLUG-Request@R20.UTexas.ARPA to get on the list.

   There is also BUG-LISPM@MIT-MC which may be of some interest (also a
BBoard on Sumex), as well as {Bug,Info}-TI-Explorer@Sumex for TI Explorers.
Again, use the -Request convention for getting added.

        -- Rich

------------------------------

Date: Wed, 9 Oct 85 21:38:01 PDT
From: Dick Kemmerer <dick@LOCUS.UCLA.EDU>
Subject: Peer Review


        A colleague forwarded a copy of AIList Digest #133 to me. In this
issue Michael Melliar-Smith's response to John Nagle's SIFT verification
message contains a reference to a verification study that is being
sponsored by the DoD Computer Security Center. I am the PI for this
study and would like to comment that the reference is somewhat
misleading. In particular, the study group did not look at the SIFT work.
Also, the suite of tools that we reviewed was the enhanced HDM tools (most of
which have been developed since the SIFT work).

------------------------------

Date: Thu, 10 Oct 85 12:22:01 edt
From: decvax!linus!raybed2!gxm@UCB-VAX.Berkeley.EDU (GERARD MAYER)
Subject: TIMM Correction


I'd like to point out the only error I found in Touretzky's article about TIMM.
According to the General Research technical and sales people at the IJCAI-85
booth, TIMM CANNOT call any external functions, Fortran or otherwise. I found
this amazing and asked several of their people the same question. I have sent
a synopsis of my review of TIMM, KEE, ART, etc. to Cowan directly.

                                        Gerard Mayer
                                        Raytheon Research Division

                                        uucp  ..linus!raybed2!gxm

------------------------------

Date: 10 Oct 85 11:01:17 PDT (Thu)
From: Jeff Peck <peck@sri-spam>
Subject: RE: do you REALLY need a AI machine?

  Funny how when you say a "lisp machine" some people just assume you
mean "Symbolics".  My experience has been that you can avoid most of
the problems John Cugini complained about by choosing an alternate
vendor.  For instance, with a $100K investment (equivalent to your
first 3600) you can buy 3 or 4 Xerox lisp machines.  So, now you can
put one in each programmer's office (and not have to walk down the hall
to an occupied machine), no standing in line. The Xerox machines
network to VAX/VMS or Unix file servers, so sharing is easy.  The
Xerox user interface is so transparent that you can learn it in a day
or two (the editor in about 5 minutes).  And, although the Xerox may
be 1/3 the speed of a 3600, as John points out, "who cares" when you
are just building and exploring?  If you get into serious production
of large systems, then move it to a Symbolics, or a Cray.  If you are
doing Research on AI technology, you may also want the faster system,
but most industrial labs seem to be more into applications development.
(Also, the Xerox machines support (or soon will) both Quintus Prolog,
and CommonLisp, integrated with the InterLisp-D environment).
 As for a VAX <how fast does an editor need to be?> it's never fast
enough.  If all you do is editing, maybe; but where are the graphics?
And timesharing LISP is a serious development problem: "Is my program
stuck, or is someone else just compiling something?" For $250K, you
can do a lot better with personal workstations.
  This is not intended to be a plug for Xerox, and of course, these
are just my personal observations.

j. peck

------------------------------

Date: Wed, 9 Oct 85 20:56:49 PDT
From: Richard K. Jennings <jennings>
Subject: Big Lisp Machines

        We, as are you, are involved in applying AI to our jobs.  We
have 5 PC-AT's and a Symbolics (which we got from another organization).
The PC's are much more useful, and a VAX would be ideal.  The basic
reason is that PC-AT's and VAXEN are not that much slower than the big
machines, cost less, and are compatible with a lot more applications
programs and peripherals.  Most applications need a big data base to
work off, and the AI portion is relatively small.
        If you already have a great deal of LISP code written,
and you already have VAXEN and ethernet, then a big machine might be OK.
My advice is to purchase AT's and micro-vax II's with your bucks, and
I sincerely doubt you will regret it.

        Jed Marti (ARPA: marti@rand-unix.arpa ATT:213 393-0411) published
a benchmark of many machines (including a Symbolics 3600, VAX 11/780, 750)
running REDUCE problems (a symbolic math package).  I think the 780 was
a tad faster than the 3600 and the 750 a tad slower (~10%).

        If you are working on the part of COMSAT which flies satellites,
I would like to find out what you are doing -- that's what we are using
AI (planning to use AI) for.

        Hope these comments help,

Richard Jennings,
Air Force Satellite Control Facility
Sunnyvale CA 95051
ATT: 408 744-6427
ARPA: jennings@aerospace

------------------------------

Date: Thu, 10 Oct 85 16:38 EST
From: "Christopher A. Welty" <weltyc%rpicie.csnet@CSNET-RELAY.ARPA>
Subject: AI Workstations


        I don't intend to offend anyone, but after reading  John
Cugini's response to Srinivasan Krishnamurthy's query about AI machines
I was a little taken aback.
        The reason one gets an AI machine is because it is the most powerful
AI/ES development tool around.  It provides the developer with facilities
that make the entire system development process a hundred times easier
(this number may vary depending on what system you are coming from).
Of course you have to learn how to use it.  Some people look forward to
the chance to learn something new, others prefer to know only one or
two systems and hail them as the ultimate.  Those who prefer to learn
know that each different system has a use that makes it helpful for
certain applications, and a hindrance in others.
        If you really make an effort to learn how to use an AI workstation,
you will find (especially if you've had to do development on other systems)
that you will be far more productive, and you will be doing things more in
the way they should be done.  We all know that it is often easier to cheat
than to do things the right way, and oftentimes cheating makes later
development more difficult.  With the extensive support environments for
AI/ES that these workstations provide, doing things the right way is made
easier (almost easier than cheating).
        From Mr. Cugini's statement, his objections to the workstations seemed
no more than laziness...and that seems no reason to dissuade others from
getting them.  If you are really getting into AI "in a really big way," an
AI workstation is a must.  You won't know what you're missing if you never
get one, and if you do (and take the time to learn it) you won't know how
you did without it.

                                        -Christopher A. Welty
                                         RPI/CIE Systems Mgr

------------------------------

End of AIList Digest
********************

From comsat@vpics1 Tue Oct 15 00:01:05 1985
Date: Tue, 15 Oct 85 00:01:01 edt
From: comsat@vpics1.VPI
To: fox@opus   (MILLER,FRANCE,JOSLIN,ROACH,FOX)
Subject: From: AIList Moderator Kenneth Laws <AIList-REQUEST%sri-ai.arpa@CSNET-RELAY>
Status: RO

Received: from sri-ai.arpa by CSNET-RELAY.ARPA id a001216; 14 Oct 85 2:10 EDT
Date: Sun 13 Oct 1985 20:58-PDT
Reply-to: AIList%sri-ai.arpa@CSNET-RELAY
US-Mail: SRI Int., 333 Ravenswood Ave., Menlo Park, CA  94025
Phone: (415) 859-6467
Subject: AIList Digest   V3 #144
To: AIList%sri-ai.arpa@CSNET-RELAY
Received: from rand-relay by vpi; Mon, 14 Oct 85 23:05 EST


AIList Digest            Monday, 14 Oct 1985      Volume 3 : Issue 144

Today's Topics:
  News - Grace Murray Hopper Award,
  Intelligence & Learning - An Appreciation of Our Own Make-up,
  Archive Services - BITNIC Server for Recent Issues &
   VPI Full Archive (Micro LISP Search)

----------------------------------------------------------------------

Date: Thu 10 Oct 85 22:49:02-PDT
From: Ken Laws <Laws@SRI-AI.ARPA>
Subject: Grace Murray Hopper Award

>From the October issue of CACM:

Cordell Green of Kestrel Institute was chosen "for establishing a
theoretical basis for logic programming and providing a resolution
theorem prover to carry out a programming task by constructing the
result which the computer is to compute.  For proving a constructive
technique correct and for presenting an effective method for
constructing the answer; these contributions providing an early
theoretical foundation for Prolog and logic."

------------------------------

Date: Fri, 11 Oct 85 11:42:41 GMT
From: gcj%qmc-ori.uucp@ucl-cs.arpa
Subject: ``An Appreciation of Our Own Make-up''

>From the Guardian newspaper, 10 October 1985:-
   ``What he (Alan  Kay) would  like  to see AI people trying to
build is "not superhumans or humans,  but mammals," contrivances
that can explore  and learn  but do  not have to use language or
learn differential calculus." Ultimately, he asserts, "the basic
end of AI research is an appreciation of our own make-up." ''

I find the idea  that these "contrivances" do not  have the need
to make use of language rather strange, since there must be some
form of communication required between us and them and that will
be termed language.
The more important message of the short  quote is the repetition
of the  idea that intelligence is learning, starting from square
one  and building a  model of the world  that  is in one's view,
and the view from the chip must be through the language of data.

Gordon Joly
gcj%qmc-ori@ucl-cs.arpa

------------------------------

Date: Thu, 10 Oct 85 12:01 EDT
From: Henry Nussbacher  <HJNCU%CUNYVM.BITNET@WISCVM.ARPA>
Subject: Database service available for back issues of Ai-List

This is to announce that some new services have been added to the
inter-network server running at Bitnic.  Certain selected Arpanet
digests are now being loaded into a Spires database and are therefore
searchable from anywhere as long as you can send RFC822 mail.

If you are interested in using this service, send a piece of mail to:
DATABASE%BITNIC.BITNET@WISCVM.ARPA
or
DATABASE%BITNIC.BITNET@WISCVM.WISC.EDU

and have as the first 3 lines of your file (case does not matter):
help
help arpanet
help design

The server will send back to you 3 help files describing how to use the
internet server, how to search Arpanet digests and how the whole thing
was designed.  Read over the section on "Signup" carefully before making
further use of the Database server.

Presently, the following 5 Arpanet forums are being loaded into the
Database:

Name           Retention Period
=============  ================
Ai-List        2 months
Info-Ibmpc     2 months
Info-Mac       2 months
Info-Graphics  3 months
Info-Nets      3 months

The retention period is set for a short duration in order to see if
Bitnic can handle the volume of data that needs to be stored in Spires.
This service was initialized on October 4th, 1985 so currently there
are just a few items available in the Database.

Example of search command:
FIND TEXT UNIX (IN INFO-IBMPC TABLE
would find all entries in Info-Ibmpc that contain the word UNIX.
An entry is just the section within a "digested" digest that makes
reference to the word UNIX.  For further details read over the
help files.

Henry Nussbacher (Hank@Bitnic.Bitnet)
Bitnet Development and Operations Center

------------------------------

Date: Wed, 9 Oct 85 18:39 EST
From: Ed Fox <fox%vpi.csnet@CSNET-RELAY.ARPA>
Subject: reply to query on micros and LISP

From: france (Robert France)
Date: Mon, 7 Oct 85 14:35:52 edt
To: fox, sharan
Subject: Re:  chance to do a useful search

Ken: Robert Blum on 3 Oct asked for information on "currently
marketed LISPs for micros" including pointers to review articles.
While our new system that will classify and work with components of
messages is not yet ready, our adaptation of the SMART system is
running and ready for queries just like this.  Robert France did
a search with the following results.  Feel free to publish in
AIList or to send directly to Blum.  Send other queries along too!
I need them for experimentation, and only request that the
author of the query be willing to tell me which messages are
relevant to the question. Thanks, Ed Fox

  [At the risk of having this message show up in all future searches,
  I've decided to pass it along.  I deleted one false hit (a query
  from Rene Bach), and I remember at least one other very lengthy
  Lisp review that was not found.  -- KIL]
_______

.I 418
.W Tuesday, 20 Sep 1983
.V Volume 1
.U Issue 59
.D Mon, 19 Sep 1983  11:41 EDT
.N

.A WELD%MIT-OZ@MIT-MC
.S Micro LISPs
.B

 For a survey of micro LISPs see the August and Sept issues of
 Microsystems magazine. The Aug issue reviews muLISP, Supersoft LISP
 and The Stiff Upper Lisp. I believe that the Sept issue will continue
 the survey with some more reviews.

 Dan
_______

.I 1113
.W Thursday, 8 Mar 1984
.V Volume 2
.U Issue 27
.D Tue 6 Mar 84 15:48:55-PST
.N Sam Hahn
.A SHahn@SUMEX-AIM.ARPA
.S IQLISP Source
.B

 The source for IQLisp is:

         Integral Quality, Inc.
         P.O. Box 31970
         Seattle, WA  98103
         (206) 527-2918

 Claims to be similar to UCI Lisp, except function def's are stored in cells
 within identifiers, not on property lists; arg. handling is specified in the
 syntax of the expression defining the function, I/O functions take an explicit
 file argument, which defaults to the console; doesn't support FUNARGS.

 IQLisp does provide:
         32kb character strings,
         77000 digit long integers,
         IEEE format floating point,
         point and line graphics,
         ifc to assembly coded functions,
         31 dimensions to arrays,

 Costs $175 for program and manual, PCDOS only.

 I've taken the liberty to include some of their sales info for those who may
 not have heard of IQLisp.  It's fairly new, and they claim to soon make a
 generic MSDOS version (though probably without graphics support).
_______

.I 1264
.W Thursday, 12 Apr 1984
.V Volume 2
.U Issue 45
.D 11 Apr 1984 0206 PST
.N Reply-to: LARRY@JPL-VLSI.ARPA
.A Larry Carroll <LARRY@JPL-VLSI.ARPA>
.S micro LISP review
.B

 There's a good article in the April issue of PC Tech Journal
 about three micro versions of LISP: IQ LISP, muLISP-82, and
 TLC LISP.  It gives a fair amount of implementation detail,
 contrasts them, and compares them to their mini and mainframe
 cousins.  The author is Bill Wong, who's working on his PhD in
 computer science at Rutgers.  [...]

                                 Larry Carroll
                                 Jet Propulsion Lab.
                                    larry@jpl-vlsi
_______

.I 1317
.W Sunday, 22 Apr 1984
.V Volume 2
.U Issue 51
.D 20 Apr 84 22:22:44 EST  (Fri)
.N Wayne Stoffel
.A wes%umcp-cs.csnet@csnet-relay.arpa
.S Review of LISP Implementations
.B

 Re: Bill Wong's article on three LISP implementations

 He also wrote a series on AI languages that appeared in Microsystems.  All
 were 8-bit CP/M implementations.

 August 1983, muLisp-80, SuperSoft Lisp, and Stiff Upper Lisp.

 December 1983, XLISP, LISP/80, and TLC Lisp.

 January 1984, micro-Prolog.

                                 W.E. Stoffel
_______

.I 1326
.W Wednesday, 25 Apr 1984
.V Volume 2
.U Issue 52
.D Sun 22 Apr 84 22:11:14-PST
.N Sam Hahn (Samuel@Score)
.A Reply-to: SHahn@SUMEX-AIM.ARPA
.S Another microcomputer Lisp
.B

 In line with the previous mentions of microcomputer implementations of Lisp,
 how about this pointer:

 I saw in the current (May) issue of Microsystems an advertisement for
 Waltz Lisp, from ProCode International.  "Waltz Lisp is not a toy.  It is the
 most complete microcomputer Lisp, including features previously available only
 in large Lisp systems.  In fact, Waltz is substantially compatible with Franz
 ... and is similar to MacLisp and Lisp Machine Lisp."

 Does anyone know anything about Waltz?  How about a review?

 [further claims:        functions of type lambda, nlambda, lexpr, macro
                         built-in prettyprinting and formatting
                         user control over all aspects of the interpreter
                         complete set of error handling and debugging functions
                         over 250 functions in total                     ]

 They're at POBox 7301, Charlottesville, VA  22906.
_______

.I 1753
.W Wednesday, 1 Aug 1984
.V Volume 2
.U Issue 98
.D 28 Jul 1984 2132-CDT
.N

.A Usadacs at STL-HOST1.ARPA
.S LISP in Aztec C, Public Domain
.B

   Ref: AI Digest, V2 #90 "LISP in Aztec C", is available from
 SIMTEL20 via FTP. MICRO:<SIGM.VOL118>

 A.C. McIntosh, USADACS@STL-HOST1.
_______

.I 2512
.W Sunday, 20 Jan 1985
.V Volume 3
.U Issue 5
.D Thu 17 Jan 85 00:33:35-PST
.N Sam Hahn
.A SHahn@SUMEX-AIM.ARPA
.S Lisp for PC
.B

 If you're using PC's and looking for a Lisp, I'd suggest
 TLC-Lisp, from The Lisp Company.  I myself have not used GCLisp,
 but have been quite impressed with TLC-Lisp, which has a compiler,
 an object-class system, packages, auto-load entities,
 and costs less than half what GCLisp costs.

 TLC is John Allen's (The Anatomy of Lisp) company, located in
 Redwood Estates, CA.  I have no connection with TLC except as
 a customer.

                                 -- sam hahn
_______

.I 2773
.W Friday, 8 Mar 1985
.V Volume 3
.U Issue 31
.D Thu 7 Mar 85 08:44:14-PST
.N Ken Laws
.A Laws@SRI-AI.ARPA
.S The Artificial Intelligence Report
.B

 Ted Markowitz recently asked about newsletters.  [...]
 The following are the topics covered in back issues of The
 Artificial Intelligence Report.  I'm told that back issues
 are still available, but I don't know the price.  [...]

 Vol. 1,  No. 3,  March, 1984
 AI and the Personal Computer: Expert systems, natural
 language,  LISP;  [...]

 Vol. 2, No. 3, March, 1985
 LISP on the PC: TLC LISP, GCLISP; [...]

 [...]

 This newsletter was the first one mentioned in AIList.  Since
 that time, it has moved from Los Altos to:

     Artificial Intelligence Publications
     Suite Three
     3600 West Bayshore Road
     Palo Alto, CA  94303-4229
     U. S. A.
    (415) 424-1447

                                         -- Ken Laws
_______

You might try using other keywords (names of micros?) or going further.


                                        -- Robert

------------------------------

End of AIList Digest
********************

From csvpi@vpics1 Mon Oct 14 05:33:52 1985
Date: Mon, 14 Oct 85 05:33:48 edt
From: csvpi@vpics1.VPI
To: fox@opus   (FRANCE,JOSLIN,ROACH,FOX)
Subject: From: AIList Moderator Kenneth Laws <AIList-REQUEST%sri-ai.arpa@CSNET-RELAY>
Status: R

Received: from sri-ai.arpa by CSNET-RELAY.ARPA id a001506; 14 Oct 85 3:42 EDT
Date: Sun 13 Oct 1985 21:08-PDT
Reply-to: AIList%sri-ai.arpa@CSNET-RELAY
US-Mail: SRI Int., 333 Ravenswood Ave., Menlo Park, CA  94025
Phone: (415) 859-6467
Subject: AIList Digest   V3 #145
To: AIList%sri-ai.arpa@CSNET-RELAY
Received: from rand-relay by vpi; Mon, 14 Oct 85 05:19 EST


AIList Digest            Monday, 14 Oct 1985      Volume 3 : Issue 145

Today's Topics:
  Opinion - Military Support & AI Hype & CS and AI Definitions,
  AI Tools - Workstations & Lisp vs Prolog Implementation Facts,
  Call for Papers - IEEE Systems, Man and Cybernetics

----------------------------------------------------------------------

Date: Thu 10 Oct 85 12:07:50-PDT
From: Rich Alderson <ALDERSON@SU-SCORE.ARPA>
Subject: re: military flame

Like it or not, the military CAN take credit for the interstate highway system,
and the "civilian highway system in place before the interstates" as well--at
least, I ASSUME that you are referring to the U. S. highway system.  Both were
built under the mandate of the Constitution of the United States (Philadelphia,
1787), which did not view "internal defence [as] a standard rationale for
better internal transportation" but rather saw good internal transportation as
vital to both the internal and external defense of a nation.

Please note that the above is not a matter of opinion.  My own opinions are
just that, and I've had enough grief in my life for expressing them publicly.
Obviously, I make NO claim to represent anyone else's opinions if I refuse to
state my own.

                                                                Rich Alderson

------------------------------

Date: Thu 10 Oct 85 10:12:48-PDT
From: WYLAND@SRI-KL.ARPA
Subject: AI Hype is unavoidable

Part of the problem of AI hype is unavoidable: it is a result of
the definition of the field, which assumes that, once the problem
is defined, its solution is trivial.

Artificial Intelligence and Computer Science represent two
complementary approaches to the use of computers.  AI is problem
oriented, and CS is solution oriented.  In CS, one assumes that
the problem is (or is capable of being) well understood, and the
task is to design a good (clean, efficient, fast, etc.)  solution
to the problem.  In AI, the underlying assumption is that the
solution will be trivial once the problem is understood, and that
the task is to understand the problem.

These two approaches are reflected in the names of the field.
Computer Science is the science of using computers, a clear,
hard, objective definition about the use of a tool to solve
problems.  Artificial Intelligence is concerned with the
simulation of intelligence, a fuzzy, open, subjective definition
about the study of the problem of simulating a human behavior
called intelligence for which there is no clear, generally
accepted definition.

Each field encounters problems when its assumption is violated.
In CS, programming disasters can result if you start coding
before you have defined the problem.  Therefore, CS has developed
structured programming, programming specifications, and similar tools
to ensure that the problem *is* well defined before the solution is
attempted.

In AI, trouble starts when, after the problem is understood, the
solution is *not* trivial in terms of computer performance, such
as execution time and memory space.  Then, you start hacking the
algorithms in order to get results in a finite amount of time and
memory, hoping that you are not betraying your understanding of
the problem.  Therefore, AI has developed extensive editors,
debuggers, windows, etc. in an attempt to ensure that the
implementation of the solution remains trivial.

I believe that much of the AI hype problem stems from the
unstated assumption of trivial solutions.  Profound and
impressive insights expressed as toy problems typically do not
"scale up" to the real world.  An AI researcher involved in this
situation is embarrassed but not humiliated: the original
research on the problem is still valid; there is just a
"temporary problem in creating a practical solution."  This
obviously creates frustration for the customers who thought they
were buying a solution of the problem rather than an
understanding of it.

If the above is true, the problem of AI hype will not go away
until the field develops enough solid understanding of the
problem of intelligence to change its name from AI to a solution
oriented name, like Machine Learning, etc.  We are probably in
the same position now that physical science was before Galileo
and Newton when it was called Natural Philosophy - full of
metaphysics, passionate argument, and conflicting data (i.e.,
where the action is).


Dave Wyland

------------------------------

From: CONNOLLY CHRISTOPHER IAN      <CONNOLLY@ge-crd.arpa>
Subject: AI Definition and Tools

1) I can't help but think that a cause of the recent arguments on AI
hype rests in the question "What is AI?".  Note that I say >>**"A"**<<
cause.  Definitions, anyone?


2) AI Machines - My observation is that the startup time on a 3600 without
help is quite long.  There are a few people here who have taken a Symbolics
Lisp course and seem to have picked up on the stuff much more quickly (2 or
3 months?).  Once you know how to use the machine though, I think it's a
far better programming environment than VAX/VMS.  I've seen nothing on a
VAX that parallels the Window Debugger (wherein the entire stack can be
dissected), the Inspector (wherein your data structures can be dissected),
and the Flavor Examiner (wherein your data types can be inspected).  The
latter two are also a great help when you have no source.  I think it speeds
up my programming by a factor of 5, at least.  Anyway, that's yet another
opinion...

------------------------------

Date: Sat, 12 Oct 85 12:28:27 EDT
From: George J. Carrette <GJC@MIT-MC.ARPA>
Subject: Lisp vs prolog, implementation facts

Actually, all three of the Symbolics, LMI, and TI lispmachines give
the lisp-level system programmer access to extremely low level data
type and stack operations, there is no need to go to microcode for
that sort of thing. The LM-PROLOG that I maintain from time to time in
the Symbolics and LMI environment creates its own datatypes by the
usual lisp punning techniques, fooling with locatives, forwarding
pointers and the internals of CDR-CODED lists without needing
microcode support, and amazingly keeping a good deal of
transportability. Such is the ubiquity of certain hacks of lispmachine
implementation. Microcode is used, optionally, only for hand-coding
functions that are also written in lisp.  The LM-PROLOG technique is
in the class of CONTINUATION PASSING techniques of prolog
implementations, as described for example in a chapter of Abelson and
Sussman's "Structure and Interpretation ..." book. This kind of
technique has more overhead associated with creation of real function
evaluation frames and such, (at least on a simple stack-machine) and
observably gets about 1/2 or 1/3 the BENCHMARK performance of a lower-level
machine-model implementation such as described by Warren. Overall
performance of a practical prolog "expert-system" may depend more
on virtual memory performance considerations of course rather than
what happens in the register-usage-mostly situation of some benchmarks.
Also consider that commercial systems put into production often have
assembly language coding of important subroutines, so high-level/low-level
language interface issues are important. The continuation passing
technique provides a more natural and efficient interface to
"assembly language" (i.e. LISP on a LISPMACHINE) than other models.
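
The continuation-passing technique can be sketched in a few lines; the
following is my own toy illustration in Python, not LM-PROLOG code. A goal
is a function that takes a success continuation and calls it once per
solution, so backtracking is just an ordinary return:

```python
# Toy sketch of the continuation-passing technique (illustrative only;
# LM-PROLOG itself is Lisp code and handles unification, cut, etc.).

def member(xs):
    """Prolog-style member/1: succeed once for each element of xs."""
    def goal(sk):
        for x in xs:
            sk(x)          # succeed with x; backtrack by falling through
    return goal

# Conjunction is continuation nesting: X in [1,2,3] AND X in [2,3,4].
solutions = []
member([1, 2, 3])(lambda x:
    member([2, 3, 4])(lambda y:
        solutions.append(x) if x == y else None))
print(solutions)           # [2, 3]
```

Each nested call here allocates a real function frame, which is exactly the
overhead GJC contrasts with a Warren-style machine-model implementation.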

When talking about a commercial lispmachine it is important to think in
terms of LISP as a COMPUTER ENGINEERING technique rather than as having
anything to do with AI programming in particular.

------------------------------

Date: 12 Oct 1985 11:04-EDT
From: milne@wpafb-afita
Subject: call for papers - IEEE SMC

                        CALL FOR PAPERS

        IEEE TRANSACTIONS ON SYSTEMS, MAN AND CYBERNETICS.
                        SPECIAL ISSUE ON
      Causal and Strategic Aspects of Diagnostic Reasoning


Papers are solicited for a special issue of IEEE Transactions on Systems,
Man and Cybernetics that will be devoted to the topic, "Causal and Strategic
Aspects of Diagnostic Reasoning."  Dr. Robert Milne, Army Artificial
Intelligence Center will be the guest editor of the special issue.

While it is expected that the research to be reported will be typically
backed up by concrete analyses or system building for real-world diagnostic
problems, the intent is to collect the most sophisticated ideas for
diagnostic reasoning viewed as a generic collection of strategies.  Articles
should attempt to describe the strategies in a domain-independent manner as
much as possible.  Articles that merely describe a successful diagnostic
expert system in a domain by using well-known languages or strategies will
typically not be appropriate.  Papers reporting on psychological studies,
epistemic analyses of the diagnostic process, elucidations of the strategies
of first-generation expert systems, descriptions of specific diagnostic
systems that incorporate new ideas for diagnostic reasoning, and learning
systems for diagnosis are some examples that will be appropriate.  It is
expected that
most articles will typically concentrate on some version or part of the
diagnostic problem, so it is important that the paper state clearly the
problem that is being solved independent of the implementation approaches
adopted.

Papers will be reviewed carefully by referees selected by the Transactions
Editorial Board.  Five copies of the manuscript should be submitted to Dr.
Robert Milne at the following address by January 15.  It will be helpful if
people who intend to submit manuscripts for consideration can let the guest
editor know of their intent as soon as possible via arpanet or telephone.

Submit papers by January 15th, 1986 to:
Dr. Robert Milne
US Army AI Center
HQDA DAIM-DO
Washington,  D.C. 20310-0700

phone:(202)-694-6913
arpa: milne@wpafb-afita

Author's Timeline:
15 January 1986         Papers Due
15 April 1986           Notification of acceptance/rejection
1  June 1986            Final Manuscripts due
   November 1986        Publication

------------------------------

End of AIList Digest
********************

From comsat@vpics1 Tue Oct 15 22:09:47 1985
Date: Tue, 15 Oct 85 22:09:42 edt
From: comsat@vpics1.VPI
To: fox@opus   (MILLER,FRANCE,JOSLIN,ROACH,FOX)
Subject: From: AIList Moderator Kenneth Laws <AIList-REQUEST%sri-ai.arpa@CSNET-RELAY>
Status: RO

Received: from sri-ai.arpa by CSNET-RELAY.ARPA id a009005; 15 Oct 85 1:05 EDT
Date: Mon 14 Oct 1985 21:20-PDT
Reply-to: AIList%sri-ai.arpa@CSNET-RELAY
US-Mail: SRI Int., 333 Ravenswood Ave., Menlo Park, CA  94025
Phone: (415) 859-6467
Subject: AIList Digest   V3 #146
To: AIList%sri-ai.arpa@CSNET-RELAY
Received: from rand-relay by vpi; Tue, 15 Oct 85 21:59 EST


AIList Digest            Tuesday, 15 Oct 1985     Volume 3 : Issue 146

Today's Topics:
  AI Tools - YAPS Info,
  Bindings - Scheme Mailing List,
  Opinion - Scaling Up,
  Psychology & Logic - Modus Ponens

----------------------------------------------------------------------

Date: 14 Oct 85 15:58:27 EDT (Mon)
From: Liz Allen <liz@tove.umd.edu>
Subject: YAPS info

In response to the question about obtaining YAPS:  YAPS is available
from the Univ of Maryland along with some other packages like a
flavors package.  They run under Franz Lisp (we supply a slightly
hacked version of Opus 38.91) and on Vaxes running Berkeley Unix.

For more information, send mail to me.

                                -Liz

liz@tove.umd.edu or liz@maryland.arpa

------------------------------

Date: Mon, 14 Oct 85 20:37:24 EDT
From: Hal Abelson <HAL@MIT-MC.ARPA>
Subject: Scheme mailing list


Scheme@MIT-MC.ARPA is a network-wide mailing list for discussions
concerning the Scheme dialect of Lisp -- both as a vehicle for
investigating language development and as a vehicle for teaching
about computer science.  To be added to the list, please send mail to
Scheme-Request@MIT-MC.ARPA.  Remote sites with many entries are
encouraged to set up local distribution lists.

------------------------------

Date: Mon 14 Oct 85 07:50:00-PDT
From: Gary Martins <GARY@SRI-CSL.ARPA>
Subject: Scaling Up


In a recent issue of AIList [#145], Dave Wyland expresses a rather
common epistemological error, in his attempt to defend "AI" as we know
and love it today -- hype and all !

Mr. Wyland seems to think that finding problem solutions which "scale up"
is a matter of manufacturing convenience, or something like that.  What
he seems to overlook is that the property of scaling up (to realistic
performance and behavior) is normally OUR ONLY GUARANTEE THAT THE
"SOLUTION" DOES IN FACT EMBODY A CORRECT SET OF PRINCIPLES.

To put it more simply, if the solution doesn't scale up, then it just
plain isn't a solution, even if the inventor feels he has made a lot
of "profound and impressive insights".  The point is, these insights
aren't very profound or impressive (except perhaps to IJCAI referees)
if they fail the scaling test.

This principle is generally understood by persons with real engineering
backgrounds, but seems to come as news to "AI" folks.

G.R. Martins

------------------------------

Date: Wed, 9 Oct 85 16:43 EST
From: Mukhop <mukhop%gmr.csnet@CSNET-RELAY.ARPA>
Subject: Non-contradictory set of beliefs and dependencies


> Date: Thu, 3 Oct 85 18:28:03 PDT
> From: albert@UCB-VAX.Berkeley.EDU (Anthony Albert)
> Subject: Re: Counterexample to modus ponens
>
> As far as beliefs, a non-contradictory set could be:
>
> 1) If a Republican wins then Reagan will win.
> 2) If Reagan doesn't win then a Republican won't win.
> 3) Unless a Democrat wins, a Republican will win.
>


Beliefs 1) and 2) of this non-contradictory set are one and the same as far
as inferential power is concerned:
  P => Q     <=>     ~Q => ~P
Also, this set of non-contradictory beliefs does not include the notion
that Reagan (or a Republican) will probably win. It merely states that
Reagan's chances are better than those of his Republican opponent.
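
That equivalence (contraposition) can be checked exhaustively; here is a
short truth-table sweep of my own in Python:

```python
# Truth-table check that P => Q and ~Q => ~P are the same connective.
def implies(p, q):
    return (not p) or q            # material implication

for P in (False, True):
    for Q in (False, True):
        assert implies(P, Q) == implies(not Q, not P)
print("contraposition verified on all four truth assignments")
```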

   Getting back to the original set of statements:

>     (1) If a Republican wins the election then if the winner is not Ronald
>        Reagan, then the winner will be John Anderson.
>
>     (2) A Republican will win the election.
>
>         Yet few if any of these people believed the conclusion:
>
>     (3) If the winner is not Reagan then the winner will be Anderson.
>


My perception of the commonly held beliefs at that time is:

a) There are three contestants in the election.
b) Ronald Reagan has the highest probability of winning.
c) John Anderson has the lowest probability of winning.
d) John Anderson and Ronald Reagan are the only Republicans contesting
   the elections.


The second statement in the original set ("A Republican will win the
election") was a commonly held belief, arrived at from:
 -   Ronald Reagan has the highest probability of winning, and
 -   Ronald Reagan is a Republican.
(Of course, John Anderson increased the odds in favor of a Republican.)
If the assumption is now made that Ronald Reagan will not win the election,
then one can no longer make the assumption that a Republican will win.
The conclusion,
" If the winner is not Reagan then the winner will be Anderson,"
can no longer be made.
  It seems that the dependencies of the belief structures need to be
taken into account in order to avoid contradictions. If it was
reasonable to believe that a Republican would win, irrespective of
the chances of Ronald Reagan, then it would be reasonable to believe:
"If the winner is not Reagan then the winner will be Anderson."

------------------------------

Date: Sat, 12 Oct 1985  21:09 EDT
From: MINSKY%MIT-OZ@MIT-MC.ARPA
Subject: Republicans and Probability

About Republicans and probability.

That paradox comes from all those causes -- ambiguities, shifts in
meanings from "intension" to "extension," and so forth.  In my view,
adult psychology is too complicated to treat in terms of such simple
mathematical models as deduction and probability.  You've heard my
complaints about logic too often to repeat, and surely everyone has
read the critiques of Kahnemann and Tversky about applications of
probabilistic models to human beliefs and reasoning.

I suggest another, simple example to examine.  If you tell a "typical"
person that "Most A's are B's" and that "Most B's are C's," then the
most common inference is that "Most A's are therefore C's."  More
sophisticated people may say, "No, not most, but at least 26 percent
of A's are C's," because they'll notice that "most" might mean 51
percent.  Very few people will recognize that it is possible that no
A's are C's, or be able to construct a counterexample.
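
Such a counterexample is easy to exhibit with small finite sets; the sets
below are my own illustration, not Minsky's:

```python
# "Most A's are B's" and "most B's are C's", yet no A is a C.
A = {1, 2, 3}
B = {1, 2, 4, 5, 6}
C = {4, 5, 6}

assert len(A & B) / len(A) > 0.5   # 2 of 3 A's are B's
assert len(B & C) / len(B) > 0.5   # 3 of 5 B's are C's
assert len(A & C) == 0             # yet no A is a C
```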

------------------------------

Date: Sun, 13 Oct 85 13:39:09 edt
From: pugh@GVAX.CS.CORNELL.EDU (William Pugh)
Subject: Bayesian inference or nested assumptions


        Continuing on the subject of the Modus Ponens example, I have
worked out some results on using Bayesian Inference for nested
assumptions:

        Notation:

        A|B means: assuming B is true, A is true;
                        if B is false, the statement is neither
                        true nor false, it is untested.

        P(X) - the probability of X

        e.g. P(A|B) = the probability of A being true,
        given that B is true

        For the original example:

>>     (1) If a Republican wins the election then if the winner is not
>>         Ronald Reagan, then the winner will be John Anderson.
>>
>>     (2) A Republican will win the election.
>>
>>    Yet few if any of these people believed the conclusion:
>>
>>     (3) If the winner is not Reagan then the winner will be Anderson.
>>
        Let RW stand for "A Republican wins"
        Let RR stand for "Ronald Reagan wins"
        Let JA stand for "John Anderson wins"

        We have:

        1) P((JA|~RR)|RW) = 1
        2) P(RW) = very high
        3) P(JA|~RR) = ??

        Well, nested assumptions don't really work with the normal
        Bayesian calculations, so we first have to convert (1) to
        a normal form.

        To convert to a normal form, P((A|B)|C) = P(A|(B&C))
        You can show this formally, but informally in English:

                Assuming that C, then assuming that B, then A
                is the same as
                Assuming that C and B, then A

        Alright, so now we have P(JA|(~RR&RW)) = 1.  What can we do
        with this??

                         P(JA)P((~RR&RW)|JA)
        P(JA|(~RR&RW)) = -------------------
                              P(~RR&RW)

                         P(JA)P(RW|JA)P(~RR|(RW&JA))
                       = ---------------------------
                                P(RW)P(~RR|RW)

                                 P(JA)P(RW|JA)P(~RR|(RW&JA))
        so P(JA|(~RR&RW))P(RW) = ---------------------------
                                         P(~RR|RW)

                                                  P(JA)
        (since there can be only one winner) =  ---------
                                                P(~RR|RW)

        Which, by other real world knowledge, you can reduce to P(RW).
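
        The reduction above can be checked by brute force on a tiny
        model of the election.  The win probabilities below are purely
        illustrative numbers of my own, not anything from the digest:

```python
from fractions import Fraction

# Single-winner sample space; probabilities are illustrative only.
p = {"RR": Fraction(60, 100),   # Reagan (Republican)
     "JC": Fraction(35, 100),   # Carter (Democrat)
     "JA": Fraction(5, 100)}    # Anderson (treated as Republican here)
republicans = {"RR", "JA"}

def prob(event):
    """P(event) by summing over the possible winners."""
    return sum(q for w, q in p.items() if event(w))

def cond(a, b):
    """P(a | b) = P(a and b) / P(b)."""
    return prob(lambda w: a(w) and b(w)) / prob(b)

def RW(w): return w in republicans   # a Republican wins
def RR(w): return w == "RR"
def JA(w): return w == "JA"

# Premise (1) in normal form: P(JA | ~RR & RW) = 1.
assert cond(JA, lambda w: not RR(w) and RW(w)) == 1

# Conclusion (3): P(JA | ~RR) is small, not 1 -- once Reagan is out,
# the Democrat dominates.  Here: (5/100) / (40/100) = 1/8.
print(cond(JA, lambda w: not RR(w)))
```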


        Side note: I was explaining this problem to a friend who
        does not have a background in logic. When I told her that
        A => B is true when A is false, she said "That's stupid... No
        wonder logicians have trouble with the real world."   :-)

        One moral of this story: Be careful of "if" in English - it
        often means something other than the standard logical meaning.


Bill Pugh
Cornell University
..{uw-beaver|vax135}!cornell!pugh
607-257-6994

------------------------------

Date: Sun, 13 Oct 85 20:02:19 -0200
From: Eyal mozes  <eyal%wisdom.bitnet@WISCVM.ARPA>
Subject: Re: Counter-example to Modus Ponens

>>    Before the 1980 presidential election, many held the two beliefs
>>    below:
>>     (1) If a Republican wins the election then if the winner is not
>>         Ronald Reagan, then the winner will be John Anderson.
>>     (2) A Republican will win the election.
>>    Yet few if any of these people believed the conclusion:
>>     (3) If the winner is not Reagan then the winner will be Anderson.

> I would say the problem with this analysis is that people believed,
> instead of statement 1, the following similar statement:
>         (1a) If a Republican wins the election and the winner is not
>                 Ronald Reagan, then the winner will be John Anderson.

In classical philosophical logic, a conditional proposition (i.e., a
proposition of the form 'if A then B') asserts the necessity of a
certain sequence between two statements; the truth of 'if A then B'
does not depend on the truth of A and of B, but on the connection
between them - whether B's truth FOLLOWS NECESSARILY FROM A's truth.
For example, the statement 'if you are alive, then you are now reading
an ARPANET message' is FALSE; the condition and the consequent are both
true, but the consequent does not follow necessarily from the condition
(you could be alive and still be doing something else now).

Now, let us look at the meaning of (1a), (2) and (3).

(1a) asserts that if the winner is a Republican but not Reagan, it
follows necessarily that it will be Anderson. Given that there were
only two Republican candidates, (1a) is true, and everyone knew it was
true. The actual results of the elections determined the truth of the
two components (both turned out to be false), but made no difference
about the necessary connection between them.

(2) is a simple categorical proposition; before the elections, many
people believed it was true, others believed it was false; it turned out
to be true.

(3) asserts that if the winner is not Reagan, it FOLLOWS NECESSARILY
that it will be Anderson. This is obviously false, and I doubt if
anyone ever believed it. Again, the actual results of the elections
determined the truth of the two components (both turned out to be
false), but made no difference about the lack of any necessary
connection between them.

Given the classical logical interpretation, then, (3) does not follow
from (1a) and (2). People who believed (1a) and (2) but not (3) were
perfectly consistent (and they were also correct).

This example demonstrates one of the serious weaknesses of predicate
calculus. Predicate calculus has no way to express necessary
connections (mainly because its originators, Russell and Whitehead,
held a philosophy which denies the existence of such connections); the
result is the truth-functional interpretation of conditional
propositions, which leads to such anti-common-sense results as making
(3) follow from (1a) and (2).
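
For contrast, the truth-functional reading really does force (3) from (1a)
and (2); a small Python check of my own over the three possible winners:

```python
def implies(p, q):
    return (not p) or q   # material (truth-functional) conditional

republicans = {"Reagan", "Anderson"}
for winner in ("Reagan", "Anderson", "Carter"):
    s1a = implies(winner in republicans and winner != "Reagan",
                  winner == "Anderson")
    s2 = winner in republicans
    s3 = implies(winner != "Reagan", winner == "Anderson")
    # Truth-functionally, (1a) and (2) together entail (3):
    assert implies(s1a and s2, s3)
print("(1a) & (2) truth-functionally entail (3) in every case")
```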

As for (1), I'm not sure about its meaning, but it certainly doesn't
mean exactly the same as (1a). As far as I know, in all discussions, by
classical logicians, of conditional propositions or of Modus Ponens,
they only dealt with the case in which both components of the
conditional are simple categorical propositions. It is an interesting
question whether Modus Ponens remains valid in other cases as well (and
this, of course, depends on what exactly such 'multiple-conditional'
propositions mean).

        Eyal Mozes

        BITNET:                         eyal@wisdom
        CSNET and ARPA:                 eyal%wisdom.bitnet@wiscvm.ARPA
        UUCP:                           ..!decvax!humus!wisdom!eyal

------------------------------

End of AIList Digest
********************

From csvpi@vpics1 Tue Oct 15 06:48:42 1985
Date: Tue, 15 Oct 85 06:48:39 edt
From: csvpi@vpics1.VPI
To: fox@opus   (MILLER,FRANCE,JOSLIN,ROACH,FOX)
Subject: From: AIList Moderator Kenneth Laws <AIList-REQUEST%sri-ai.arpa@CSNET-RELAY>
Status: RO

Received: from sri-ai.arpa by CSNET-RELAY.ARPA id a009480; 15 Oct 85 2:34 EDT
Date: Mon 14 Oct 1985 21:28-PDT
Reply-to: AIList%sri-ai.arpa@CSNET-RELAY
US-Mail: SRI Int., 333 Ravenswood Ave., Menlo Park, CA  94025
Phone: (415) 859-6467
Subject: AIList Digest   V3 #147
To: AIList%sri-ai.arpa@CSNET-RELAY
Received: from rand-relay by vpi; Tue, 15 Oct 85 06:35 EST


AIList Digest            Tuesday, 15 Oct 1985     Volume 3 : Issue 147

Today's Topics:
  Reviews - Canadian Artificial Intelligence 4 and 5 &
    Spatial Data Handling and Graphics Interface Conferences,
  Seminars - Intelligent Electronic Mail (MIT) &
    Connectionist Learning (GTE) &
    Connectionist Learning (SU) &
    Probabilistic Interpretation of Certainty Factors (SU)

----------------------------------------------------------------------

Date: 12 Oct 1985 10:59-CST
From: leff%smu.csnet@CSNET-RELAY.ARPA
Subject: Canadian Artificial Intelligence 4

Summary of Canadian Artificial Intelligence 4, June 1985

Should CSCSI/SCEIO Attempt to Influence National Policy
(discusses whether their organization should work to influence national
policy on AI within the Canadian government.)

AI Research Spending and politics

Discusses Canadian research spending and talent shortage.  The Canadian
AI group has been active in opposition to Star Wars.

Discussion of an episode of Magnum P. I. which featured an AI researcher
who developed some formula that would tip the balance in favor of
whoever had it.  The formula was "3 bracket prompt semicolon."

The Canadian National Research Council is starting an inventory of
Canadian robotics research

Announcement of Babbage and Lovelace's BASIC package to expand other
BASIC programs to do natural language parsing.  Runs on an IBM PC
with 64 to 96 K.

French Article on AI and Cognitive Science at the University of Montreal

Canadian Companies
Applied AI Systems, Kanata Ontario doing consulting, marketing of
software, custom software.

Review of LOGICware which sells MPROLOG.

Book reviews of "The Commercial Application of Expert System Technology",
"Artificial Intelligence: Bibliographic Summaries of the Select
Literature Volume I", "L'Intelligence Artificielle: promesses et
Realities"

Humorous Article: A Brief Review of Ignorance Engineering

Simon Fraser University AI tech report list

------------------------------

Date: 12 Oct 1985 11:44-CST
From: leff%smu.csnet@CSNET-RELAY.ARPA
Subject: Canadian Artificial Intelligence 5

Summary of Canadian Artificial Intelligence 5, September 1985

"Canada Prominent in IJCAI Awards" Reports on AIers prominent at
IJCAI: Levesque (Computers and Thought), Best Paper to Fagin and
Halpern and Professor Randy Goebel who stumped the band in
a taping of Mr. Carson's television show, The Tonight Show.

"NSERC Proposes Major Increase in Research Funding" NSERC
is the Canadian science funding organization

Michel Pilot has started his own AI consulting service

Coast Mountain Intelligence specializes in Resource
Applications.  They completed an expert system on the choice
of statistical packages for geophysical data.  They are working
on expert systems for forest management and interpretation of
snow profiles for avalanche predictions

Xerox Announces Low Cost Workstations

Workshop Report: Theoretical Approaches to Natural Language
         Understanding

Workshop Report: Workshop on the Foundations of Adaptive Information
         Processing


Canada Conquers Los Angeles:  mentions Canadians prominent at IJCAI-85

Directory of Canadian AI businesses

Reviews of:

Human Foundations of Advanced Computing Technology:
The Guide to the Select Literature from the Report Store

Readings in Knowledge Representation

Introduction to Artificial Intelligence by Eugene Charniak and Drew McDermott

Artificial Intelligence Applications for Business Management
Artificial Intelligence Applications for Manufacturing

Obituaries for Jeffrey Robert Sampson, Daniel Louis Shalom Berlin
         Donald Grant Kuehner and David Julian Meredith Davies

Tech Report Lists from
University of Calgary, University of Montreal, University of Toronto,
McGill University, University of Alberta, Simon Fraser University

------------------------------

Date: 14 Oct 1985 12:32-CST
From: leff%smu.csnet@CSNET-RELAY.ARPA
Subject: AI at conferences

International Symposium on Spatial Data Handling at the University
of Zurich, August 1984

Order from Symposium Secretariat, Department of Geography,
University of Zurich, Winterthurerstrasse 190, CH-8057 Zurich,
Switzerland.  Price 30 dollars.


Data Structures for a Knowledge-Based Geographic Information System
D. J. Peuquet

Symbolic Feature Analysis and Expert Systems
B. Palmer

Autonap -- An Expert System for Automatic Map Name Placement
H. Freeman, J. Ahn

Knowledge Based Control of Search and Learning in a Large Scale GIS
T. R. Smith M. Pazner


____________________________________________________________________________

Graphics Interface 85, 11th Conference of
Canadian Man-Computer Society  Montreal May 27-31

Robotics and CAD/CAM Section

Non-rigid Motion
A. R. Dill and M. D. Levine
McGill University

Electronic Assembly by Robots
C. Michaud, A. S. Malowany, M. D. Levine
McGill University

Low Cost Geometric Modelling System for CAM
W. G. Ngai and Y. K. Chan
Chinese University of Hong Kong

Panel- Computer Graphics in Environmental Design Artificial Intelligence

Generative Design in Architecture Using an Expert System
E. Chang University of Victoria

Knowledge Engineering Application in Image Processing
K. Mikame, N. Sueda, A. Hoshi, S. Ohoniden Toshiba, Japan

Visual Perception
L. Scholl
Laura Scholl and Associates USA

------------------------------

Date: Fri, 11 Oct 85 13:49 EDT
From: Kahin@MIT-MULTICS.ARPA
Subject: Seminar - Intelligent Electronic Mail (MIT)

           [Forwarded from the MIT bboard by SASW@MIT-MC.]

              Massachusetts Institute of Technology
                      Communications Forum


             Making Electronic Mail More Intelligent

                        October 31, 1985

                       Thomas Malone, MIT
          Kenneth Mayers, Digital Equipment Corporation


     Electronic messaging has become a familiar feature of the
office environment and a key element in office automation
strategy for many organizations.  As these systems spread, many
issues must be dealt with, such as accommodating evolving user
requirements, responding to rapid expansion, controlling junk
mail, and incorporating alternative technologies.  One of the
central challenges is how to enhance messaging features so that
users are not swamped by information overload.
     This forum will present the experience of Digital Equipment
Corporation, one of the pioneering users of electronic mail, and
will describe some recent innovative research at MIT which uses
artificial intelligence technology to improve the user's ability
to sort incoming messages by relevance and urgency and to route
outgoing communications to the most appropriate people within the
organizations.


                           4:00 - 6:00
               Bartos Theater for the Moving Image
                      The Wiesner Building
             (Center for Arts and Media Technology)
                   (Building E15 Lower Level)
                         20 Ames Street
              Massachusetts Institute of Technology
                    Cambridge, Massachusetts

           For further information call 617-253-3144.

------------------------------

Date: Mon, 14 Oct 85 11:31:01 EDT
From: Bernard Silver <SILVER@MIT-MC.ARPA>
Subject: Seminar - Connectionist Learning (GTE)


                        GTE LABORATORIES INC
                        MACHINE LEARNING SEMINAR

TITLE:          Learning by Statistical Cooperation
                      in Connectionist Networks


SPEAKER:                Prof. Andrew G. Barto
                University of Massachusetts at Amherst

TIME:                   2pm, Wednesday, October 23

PLACE:                  GTE Laboratories Inc
                        40 Sylvan Rd
                        Waltham MA 02254


Since the usual approaches to cooperative computation in networks of
neuron-like computing elements do not assume that network components
have any ``preferences," they do not make substantive contact with game
theoretic concepts, despite their use of some of the same terminology.
In the approach I describe, however, each network component, or adaptive
element, is a self-interested agent that prefers some inputs over others
and ``works" toward obtaining the most highly preferred inputs.  I
describe some of our work with an adaptive element that is robust enough
to learn to cooperate with other elements like itself in order to
further its self-interests.  It is argued that some of the long-standing
problems concerning adaptation and learning by networks might be
solvable by this form of cooperativity, and computer simulation
experiments are described that show how networks of self-interested
components that are sufficiently robust can solve rather difficult
learning problems.  A secondary aim of this talk is to suggest that
beyond what I explicitly illustrate, there is a wealth of ideas from
game theory and allied disciplines such as mathematical economics that
can be of use in thinking about cooperative computation in both nervous
systems and man-made systems.
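Barto's adaptive elements descend from stochastic learning automata. Purely as an illustration of a ``self-interested" component (this is not the speaker's algorithm, and the reward probabilities are invented), a linear reward-inaction automaton shifts its action probabilities toward whatever action the environment last rewarded:

```python
import random

def lri_automaton(reward_probs, trials=2000, lr=0.05, seed=1):
    """Linear reward-inaction (L_R-I) learning automaton.

    The element picks actions stochastically; when the environment
    rewards an action, probability mass shifts toward that action,
    and on failure nothing changes (hence "reward-inaction")."""
    rng = random.Random(seed)
    n = len(reward_probs)
    p = [1.0 / n] * n                       # start indifferent
    for _ in range(trials):
        a = rng.choices(range(n), weights=p)[0]
        if rng.random() < reward_probs[a]:  # environment says "success"
            p = [q + lr * (1.0 - q) if i == a else q * (1.0 - lr)
                 for i, q in enumerate(p)]
    return p
```

With several such elements sharing one global reward signal, each element's greedy updates can still drive the team toward a jointly preferred configuration, which is the cooperativity the talk examines.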

For more information contact Bernard Silver (617) 466-2663

------------------------------

Date: Mon, 14 Oct 85 05:45:36 pdt
From: gluck@SU-PSYCH (Mark Gluck)
Subject: Seminar - Connectionist Learning (SU)

       THE COMPUTATION, COGNITION, & NEUROSCIENCE JOURNAL CLUB
                   AND SPORADIC SEMINAR SERIES

                           presents:


       "From Classical Conditioning to Cognitive Computations"

                         Richard S. Sutton
                GTE Fundamental Research Laboratory

Date: Mon, Oct. 28       Time: 12:00-1:15      Place: Room 100, Jordan Hall


One attractive aspect of connectionist models is their ability to make
contact with a wide range of fields from neuroscience to cognitive
science and AI.  In this talk I will review the status of the
"connectionist connection" between such fields and present some of my
own work as an example of a case in which pursuing it has been fruitful.
I will present a sequence of three closely-related connectionist
learning models.  The first was presented by Sutton and Barto in 1981 as
a real-time model of classical conditioning consistent with many
behavioral phenomena including blocking, conditioned inhibition, the ISI
dependency, higher-order conditioning, and serial-compound effects.  The
second model is the result of changes made to the first in trying to use
it in an AI learning problem.  Remarkably, the modified model is not
only very effective on the AI problem, but is also a better match to the
classical conditioning data than the first model.  I am currently
working on a third model that is able to reproduce the
animal learning phenomena of latent learning and sensory
preconditioning.  The AI goal for this model is to build a system that
can learn about and then reason about its environment.  The model shows
promise of being simultaneously very successful as all of 1) a model of
classical conditioning, 2) an aid to AI machine learning systems, and 3)
a model of simple forms of inference and planning.
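The 1981 model generalizes the trial-level Rescorla-Wagner rule to real time; the blocking phenomenon mentioned above already falls out of the simpler trial-level rule, in which every stimulus present on a trial moves toward the reinforcement asymptote in proportion to a shared prediction error. A trial-level sketch (parameter values are mine, for illustration only):

```python
def rescorla_wagner(trials, alpha=0.5, beta=0.1, lam=1.0):
    """Trial-level Rescorla-Wagner rule: each reinforced trial presents
    a set of stimuli, and every present stimulus absorbs a share of the
    common prediction error (lam minus the summed prediction)."""
    V = {}
    for stimuli in trials:
        error = lam - sum(V.get(s, 0.0) for s in stimuli)
        for s in stimuli:
            V[s] = V.get(s, 0.0) + alpha * beta * error
    return V

# Blocking: pretraining on A alone leaves almost no error for B to
# absorb when the AB compound is later reinforced.
blocked = rescorla_wagner([{'A'}] * 100 + [{'A', 'B'}] * 100)
control = rescorla_wagner([{'B'}] * 100)
```

In the blocked condition B acquires almost no associative strength, while the same amount of training on B alone drives it near asymptote.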


*****************************************************************************

The CCNJCS^3 was formed in response to a growing interest among
members of the Psychology, Computer Science, and Neuroscience
departments at Stanford in learning about recent advances in the
study of computational approaches to modelling the relationship
between cognition and neuroscience. In addition to organizing
seminars, we also arrange journal club meetings in which graduate
students and post-docs meet to read and discuss current research
articles dealing with: The neural substrates of learning and memory,
computational models of neuronal processes, and the neural bases
of cognitive behavior.

For more information, contact Mark Gluck (gluck%su-psych@sumex-aim).

------------------------------

Date: Mon 14 Oct 85 07:52:23-PDT
From: Ana Haunga <HAUNGA@SU-SCORE.ARPA>
Subject: Seminar - Probabilistic Interpretation of Certainty Factors (SU)


   SIGLUNCH will be held at the Chemistry Gazebo, 12:05-1:00 p.m.


     Probabilistic Interpretations for MYCIN's Certainty Factors

                           David Heckerman

I will show that the original definition of certainty factors (CF's)
is inconsistent with the "defining desiderata" of the CF combination
functions.  I will then show that if this inconsistency is removed by
redefining CF's in terms of the desiderata then CF's have
probabilistic interpretations.  In other words, I will show that
certainty factors are nothing more than transformed probabilistic
quantities.  The construction of the interpretation provides insights
into the assumptions made when propagating CF's through an inference
net.  For example, it can be shown that all evidence which bears
directly on a hypothesis must be conditionally independent given the
hypothesis and its negation.  After presenting the interpretations,
I will discuss several ramifications of the correspondence between
CF's and probabilities.
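For readers without the MYCIN background, the combination function at issue merges two certainty factors in [-1, 1] bearing on the same hypothesis. A sketch of the revised parallel-combination rule as usually given in the MYCIN literature (my transcription, for illustration):

```python
def cf_combine(x, y):
    """Combine two certainty factors in [-1, 1] that bear on the same
    hypothesis, using the revised MYCIN combining function."""
    if x >= 0 and y >= 0:
        return x + y * (1 - x)          # two pieces of confirming evidence
    if x <= 0 and y <= 0:
        return x + y * (1 + x)          # two pieces of disconfirming evidence
    return (x + y) / (1 - min(abs(x), abs(y)))  # mixed evidence
```

Heckerman's argument, roughly, is that this function is consistent with its own desiderata only if CF's are reinterpreted as monotone transforms of probabilistic quantities rather than as the originally defined difference of belief measures.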

------------------------------

End of AIList Digest
********************

From comsat@vpics1 Sat Oct 19 06:18:07 1985
Date: Sat, 19 Oct 85 06:18:03 edt
From: comsat@vpics1.VPI
To: fox@opus   (MILLER,FRANCE,JOSLIN,ROACH,FOX)
Subject: From: AIList Moderator Kenneth Laws <AIList-REQUEST%sri-ai.arpa@CSNET-RELAY>
Status: RO

Received: from sri-ai.arpa by CSNET-RELAY.ARPA id a002637; 18 Oct 85 18:39 EDT
Date: Fri 18 Oct 1985 14:05-PDT
Reply-to: AIList%sri-ai.arpa@CSNET-RELAY
US-Mail: SRI Int., 333 Ravenswood Ave., Menlo Park, CA  94025
Phone: (415) 859-6467
Subject: AIList Digest   V3 #148
To: AIList%sri-ai.arpa@CSNET-RELAY
Received: from rand-relay by vpi; Sat, 19 Oct 85 06:01 EST


AIList Digest            Friday, 18 Oct 1985      Volume 3 : Issue 148

Today's Topics:
  Queries - Go & VMS vs. UNIX for AI & KR Languages for Semantic Nets &
    Common Lisp Compiler/Interpreter for VAX/750(ULTRIX),
  Literature - Foreign Language Abstracting,
  Applications - ALV Demo,
  AI Tools - YAPS & AI Machines

----------------------------------------------------------------------

Date: 15-Oct-85   22:02-EDT
From: David Nicol   <cscboic%BOSTONU.bitnet@WISCVM.ARPA>
Subject: the ancient oriental game of Go

   I am in the process of writing a program to mediate a Go game, and I hope
to be able to write an algorithm or two for playing.
   If anyone has done any thinking towards algorithms to play Go, or maybe
has one already, I would appreciate very much hearing from you.
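For what it's worth, the core bookkeeping in a Go mediator is capture detection: a move captures every adjacent enemy group whose last liberty it fills. A minimal liberty-counting sketch (the board representation and names are my own choices, purely illustrative):

```python
def liberties(board, size, start):
    """Return (group, liberty_set) for the stone at `start`.

    board: dict mapping (row, col) -> 'B' or 'W'; empty points absent.
    Flood-fills the connected group of like-colored stones and collects
    the empty points adjacent to it (its liberties)."""
    color = board[start]
    group, libs, frontier = {start}, set(), [start]
    while frontier:
        r, c = frontier.pop()
        for p in ((r - 1, c), (r + 1, c), (r, c - 1), (r, c + 1)):
            if not (0 <= p[0] < size and 0 <= p[1] < size):
                continue                      # off the board
            if p not in board:
                libs.add(p)                   # empty point: a liberty
            elif board[p] == color and p not in group:
                group.add(p)
                frontier.append(p)
    return group, libs
```

After each move, any adjacent enemy group whose liberty set comes back empty is removed from the board; a legality check for suicide follows the same pattern.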

      Cscboic@bostonu

------------------------------

Date: Wed, 16 Oct 85 11:50 EST
From: "Christopher A. Welty" <weltyc%rpicie.csnet@CSNET-RELAY.ARPA>
Subject: VMS vs UNIX for AI development

        I know this may set people at each other's throats, but it is a
legitimate concern of mine, so here goes:

        What experience has there been out there with AI (mainly ES)
development on VMS?  We use both UNIX and VMS here at RPI, and I have
found in my experience that VMS makes it more difficult to do work,
and UNIX makes it easier.  But there seem to be a number of people
(who don't even work for DEC) who swear by VMS.  There must be some rational
reason for this.  I don't really want to see a discussion of the Operating
Systems themselves (as that is another Newslist), just what support
they offer for development of Expert Systems (mainly LISP, but feel free
to add other languages).  I know what UNIX offers, let me hear (see) what
VMS offers.

                                        -Christopher Welty
                                         RPI / CIE Systems Mgr.

------------------------------

Date: 16 Sep 85 1615 WEZ
From: U02F%CBEBDA3T.BITNET@WISCVM.ARPA  (Franklin A. Davis)
Subject: Query: Languages for knowledge rep using semantic nets?

We are interested in knowledge representation using semantic nets
and frames, and we would like to know who has experience with
special languages for this purpose.  Furthermore, are there
distributors for such software packages?  Thanks in advance.

Regards,   Franklin Davis  <U02F@CBEBDA3T.BITNET>
Institut fuer Informatik und angewandte Mathematik
Universitaet Bern
Laenggassstrasse 51
CH-3012  Bern
Switzerland

------------------------------

Date: Thu, 17 Oct 85  9:24:26 EDT
From: "Srinivasan Krishnamurthy" <1438@NJIT-EIES.MAILNET>
Subject: Common Lisp Compiler/Interpreter for VAX/750(ULTRIX)

Dear Readers,
 Can somebody give me details about a Common Lisp compiler
and interpreter for a VAX/750 running ULTRIX?  I have heard that
KEE and Knowledge Craft (KC) work only on VMS, and I want to port
them to the above environment.  Any ideas or leads are welcome.
 Please message me directly at the following net addresses:

  Mailnet:  srini@NJIT-EIES.MAILNET
  Arpanet:  srini%NJIT-EIES.MAILNET@MIT-MULTICS.ARPA
  USMAIL:   S Krishnamurthy
            COMSAT LABS, NTD.
            22300 Comsat Drive.
            Clarksburg, MD-20871
            (301)428-4531

 Thanks in advance.
 Srini.

------------------------------

Date: 17 Oct 1985 0744-PDT (Thursday)
From: eugene@AMES-NAS.ARPA (Eugene Miya)
Subject: Last call for assistance: helping with foreign language abstracting


I would like to thank all of the people who responded to my first
call for people to help in the translation/abstraction of foreign
language documents.  I have been travelling quite a bit during the
past five weeks, so next week, I will have a chance to lay the
groundwork for determining what journals to monitor and where to
post information.

For those of you who missed this earlier posting: I am seeking people
interested in monitoring foreign language technical documents with
an eye to post significant new articles to various bulletin boards.
This would be prior to translation, and would hopefully speed translation
of potentially significant papers in: AI, graphics, and so forth.
The most critical languages are East Asian ones, Japanese and
Chinese, and perhaps French and other Western European languages.  We have
a few people for each, but it would help to spread the load out.

If you are interested, or want to hear more, send me mail to a UUCPnet/ARPAnet
gateway listed below.

--eugene miya
  NASA Ames Research Center     [Rock of Ages Home for ...]
  eugene@ames-nas.ARPA
  UUCP: {ihnp4,hao,hplabs,nsc,cray,research,decwrl}!ames!amelia!eugene

------------------------------

Date: 8 Oct 1985 1205-PDT
From: LAWS at SRI-AI.ARPA
Subject: Ogling Overseas Technology

>From the EE's Tools & Toys column,
IEEE Spectrum, Volume 22, No. 10, 10/85, p. 85

The latest research results being published outside the United States
may not be as difficult to monitor as one might think.  The U.S. Dept.
of Commerce publishes a weekly newsletter called Foreign Technology
that abstracts new reports and papers from outside the U.S. that are
available through the National Technical Information Service.  Each
abstract lists the report's title, author(s), date, and NTIS publication
number, along with a brief synopsis.

Topics that the Commerce Department tracks in the newsletter are:
biomedical technology; civil, construction, structural, and building
engineering; communications; computer technology; electro and optical
technology; energy; manufacturing and industrial engineering; materials
sciences; physical sciences; transportation technology; and mining and
minerals technology.

An annual subscription to the weekly newsletter, which is indexed
every January, costs $90 in North America.  To subscribe or to request
subscription prices for other areas, write to U.S. Dept. of Commerce,
National Technical Information Service, 5285 Port Royal Rd., Springfield,
VA  22161; telephone (703) 487-4630.

------------------------------

Date: 17 Oct 1985 20:51:51 EDT
From: Spacely's Space Sprockets <MMDA@USC-ISI.ARPA>
Subject: ALV DEMO


In response to the recent post regarding the Autonomous Vehicle demo:

The Autonomous Land Vehicle project, sponsored by DARPA through the
Army ETL, is part of DARPA's Strategic Computing Program, sort of
the US's answer to Japan's Fifth Generation effort.  Martin Marietta
Denver Aerospace ( the Advanced Automation Technology Section ) is
the prime contractor of the project -- we are actually developing the
system and performing the demos.  The project started in late '84
(actually early '85 for most of us), and the first demo was in May '85.
We have another demonstration set for next month, and will have about
one per year for the next four years, I believe, each demo being more
ambitious.

The May demo was a preliminary road-following demonstration, with
the main point being that we actually got the vehicle going, hardware mounted,
some communications figured out, and made it follow a road autonomously.
It traversed a 1 km stretch of road at a speed of about 3 km/h (yep, pretty
slow).  The vision system (based on a Vicom image processing computer)
produced scene models about every 3 seconds for the navigator/pilot
to interpret and control the vehicle.  The scene model is basically 3D
road centerpoints.

In November, the vehicle will travel about 5 times as far at speeds up to
10 km/h and handle things such as shadows on the road, intersecting roads,
and sharp curves.  We will also be using an ERIM laser range scanner as well
as the camera we used in May to provide road images.  In later demos we will
avoid obstacles, go over cross-country terrain, and other neato tricks.

Martin Marietta is officially the integrator of this project, and other
universities and companies also have research contracts -- University of
Maryland, Carnegie-Mellon University, SRI, AI&DS, Hughes, Honeywell,
and maybe some others that I'm not aware of.  So far, most of the work
contributing to the demos has been done here at Martin Marietta.  These other
folks will be contributing a lot in the future.

A paper describing the project and the May configuration will come out soon
in the proceedings of the SPIE Conference on Intelligent Robotics and Computer
Vision (Sept. '85) [1].

                        Matthew Turk
                        MMDA@USC-ISI.arpa

[1] Lowrie, Thomas, Gremban, Turk
    The Autonomous Land Vehicle Preliminary Road-following Demonstration

------------------------------

Date: 18 Oct 1985 09:29-EDT
From: Hans.Tallis@ML.RI.CMU.EDU
Subject: YAPS

Srinivasan,
I'm working at Mitre for the summer (tallis@mitre) and we have a
version of YAPS which is source-code runnable under Franz, Lambda
Zetalisp, and Symbolics Zetalisp.  Since YAPS is practically public
domain, Liz Allen at Maryland probably wouldn't mind our giving you
a copy.  Send mail if you're interested.
--Hans

------------------------------

Date: 14 Oct 1985 23:48:39-BST
From: Aaron Sloman <aarons%svga@ucl-cs.arpa>
Subject: DO you really need an AI machine?

Mon Oct 14 23:47:48 BST 1985

To John and Srinivasan,
At Sussex University we have been involved in AI teaching and
research for many years. Being British we have a relatively
small budget, and for this reason we have resisted going for
machines like Symbolics, since a wonderful tool is not much
use if you have to spend most of your time queueing up to use
it. Instead we have mostly been using VAXen for a range of AI
projects.

But we did not like the AI software available, so we developed
our own: POPLOG.  We've found that with suitable software 10
to 14 AI MSc students can be kept happy most of the time on a
4 Mbyte VAX 750 running Berkeley Unix or VMS. For more
advanced researchers the number drops, as it does if you get
someone doing image or speech processing. We can support this
number because most of the time most people are editing, not
running their programs. Of course, we are then stuck with a
terrible human-machine interface: a 24 by 80 VDU. So we are
now trying to shift as much as possible of our research onto
SUN workstations - cheaper than Symbolics. At least SUNs run
Unix, unlike most purpose-built AI workstations, and for us
that's a big advantage. We use POPLOG, but there's also
Quintus Prolog, Common Lisp, and other AI tools available on
the SUN. Of course it will be a little while before these
machines have the mature interfaces available on Lisp
machines.
Aaron Sloman, Cognitive Studies Programme,
University of Sussex, Brighton, England.

------------------------------

Date: Tue, 15 Oct 85 21:22:55 PDT
From: Richard K. Jennings <jennings@AEROSPACE.ARPA>
Subject: AI Hype & Big AI Machines.

        I think Martin (#146) missed the point of Wyland's (#145) salient
observation that AI researchers focus on *problems* while disciples of
other forms of CS focus on *solutions*.  For those of you near good
college libraries let me suggest that you look up the "Collected Works
of John von Neumann" and read what he has to say about computers.  In
short, he advocates (see Vol 5) that computers be used to obtain
insights into problems, which are then presumably solved in closed
form.
        It makes sense, then, to use AI to develop an understanding
of problems which are really too difficult to deal with without AI
techniques, and then close in on a CS *solution*, and finally with
insights so obtained on a closed form, mathematically verifiable
true solution.
        In my area of interest we have to deal with lots of nasty
solutions to differential equations.  At last, I have sold my
bosses on getting Macsyma to help us beat these monsters into a form
which we can implement in reasonable time on our mainframes.
[Macsyma is a super duper symbolic algebra package which costs
5K for a Symbolics 3670 from Symbolics -- it was developed over
the last 20 years at MIT -- and yes, the price just dropped].

        With regard to messages from Peck (#144) suggesting small
Xerox AI machines and Connolly (#145) and Welty (#144) singing the
praises of large AI machines -- there is no doubt that if you have
experienced AI investigators and a network of solid general-purpose
processing, the large AI machines (and small ones) are worth their
cost.
        The original question was, however: where does one start?
Cugini was correct in pointing out the risks of jumping in to
the AI culture too quickly for the reasons he stated and two others:
1) *solutions* are often dependent upon CS techniques, 2) if
you don't know that AI is part of your solution (e.g. you are
not part of an AI research group) why commit yourself prematurely?
        Coupled with the availability of good learning systems (and
adequate but perhaps not best) development systems on the VAX and
PC-AT there is really little need to invest initially in a
dedicated AI machine of any size (although Xerox's 6085 sure looks
nice).  Slow as Golden Hill's Common Lisp is, it is the Lisp system
of choice for beginning Lispers at our organization (we also have
a Symbolics 3670 as I implied above).  In fact, rolling out of
bed in the morning does not qualify one to appreciate the Symbolics
development environment.  {if it did it probably would not be
worth using}

[flame off]

Richard Jennings
Arpa: jennings@aerospace

I don't work for them, just use their arpanet port.

------------------------------

Date: Wed, 16 Oct 85 10:21 CDT
From: Joseph_Tatem <tatem%ti-eg.csnet@CSNET-RELAY.ARPA>
Subject: AI machines

I have been reading with interest the discussion about AI (LISP)
machines and their usefulness. Since I have been thinking about this
myself, I will take this opportunity to throw my two cents in.

>From what I hear, the average time that it takes to come up to speed
on one of these LISP machines is about 3 months. This corresponds
roughly to my own experience. Of course, some vendors provide online
services that can aid you to various degrees. However, I have found
that John Cugini's sentiments are common and not altogether
ill-founded. Have you ever done any work with the Window System on
one of these beasts?? If you have you know that it is a mess and is
not well-documented. You find flavors like STREAM-MIXIN-WITH-HACKS.
When you look these up in the manual, you will likely-as-not read
something like, "This function does not work reliably, don't use it."

On the other hand, once you have learned your way around a little
bit, you find that you are using a very powerful machine with a nice
development environment. I can get at lots of information in the
debugger and I can incrementally develop my systems, etc, etc. I
find that I have become spoiled. Things that I once did fine without
(or with) now seem essential (unnecessary): I don't know how I lived
without (with) them before.

I believe that the problem is a design issue. Most of these machines
are based on software that was developed at (and licensed by) MIT.
It seems to me that the system was never really designed (at least
the user-interface), but that it was a concatenation of Master's
Theses (and other grad. student type work). This is not to say that
it is not a good system. There has been a lot of good work put into
these machines. It is just that there needs to be some consistency.

I see a need to redesign at two levels. First, I  would like to see
a consistent set of functions (and flavors, etc). Secondly, I would
like to see a well-designed user interface. A mouse and a few
windows do not make a good interface just by being there. By now, we
should know the kinds of things that make computers easy to use. There
certainly is no dearth of ideas in the literature. At the very least,
I would like for a brand-new user to be able to sit down at one of
these beasts and figure out which mouse button to
click or which function to enter to get himself started.

So whaddya think??              Joe Tatem
                                tatem%ti-eg@csnet-relay

Note: The opinions expressed herein are strictly my own and in no
way reflect those of my employer.

------------------------------

End of AIList Digest
********************

From comsat@vpics1 Sat Oct 19 06:16:29 1985
Date: Sat, 19 Oct 85 06:16:21 edt
From: comsat@vpics1.VPI
To: fox@opus   (MILLER,FRANCE,JOSLIN,ROACH,FOX)
Subject: From: AIList Moderator Kenneth Laws <AIList-REQUEST%sri-ai.arpa@CSNET-RELAY>
Status: RO

Received: from sri-ai.arpa by CSNET-RELAY.ARPA id a003564; 18 Oct 85 21:06 EDT
Date: Fri 18 Oct 1985 14:16-PDT
Reply-to: AIList%sri-ai.arpa@CSNET-RELAY
US-Mail: SRI Int., 333 Ravenswood Ave., Menlo Park, CA  94025
Phone: (415) 859-6467
Subject: AIList Digest   V3 #149
To: AIList%sri-ai.arpa@CSNET-RELAY
Received: from rand-relay by vpi; Sat, 19 Oct 85 06:03 EST


AIList Digest            Friday, 18 Oct 1985      Volume 3 : Issue 149

Today's Topics:
  Projects - University of Aberdeen & CSLI,
  Literature - New Complexity Journal,
  AI Tools - Lisp vs. Prolog,
  Opinion - AI Hype & Scaling Up,
  Cognition & Logic - Modus Ponens,
  Humor - Dognition

----------------------------------------------------------------------

Date: Thu 17 Oct 85 12:44:41-PDT
From: Derek Sleeman <SLEEMAN@SUMEX-AIM.ARPA>
Subject: University of Aberdeen Program

                UNIVERSITY of ABERDEEN
                Department of Computing Science


The University of Aberdeen is now making a sizeable commitment to
build a research group in Intelligent Systems/Cognitive Science.
Following the early work of Ted Elcock and his co-workers, the
research work of the Department has been effectively restricted to
databases. However, with the recent appointment of Derek Sleeman
to the faculty from summer 1986, it is anticipated that a sizeable
activity will be (re)established in AI.

        In particular we are anxious to have a number of visitors at
any time - and funds have been set aside for this. So we would be
particularly interested to hear from people wishing to spend Sabbaticals,
short-term Research fellowships etc.

        Please contact Derek Sleeman at 415 497 3257 or SLEEMAN@SUMEX
for further details.

------------------------------

Date: Wed 16 Oct 85 17:12:46-PDT
From: Emma Pease <Emma@SU-CSLI.ARPA>
Subject: CSLI Projects

         [Excerpted from the CSLI Newsletter by Laws@SRI-AI.]


                              CSLI PROJECTS

   The following is a list of CSLI projects and their coordinators.

    AFT Lexical Representation Theory.          Julius Moravcsik
        (AFT stands for Aitiuational Frame Theory)
    Computational Models of Spoken Language.    Meg Withgott
    Discourse, Intention, and Action.           Phil Cohen.
    Embedded Computation Group.                 Brian Smith (3 sub groups)
        sub 1: Research on Situated Automata.   Stan Rosenschein
        sub 2: Semantically Rational
               Computer Languages.              Curtis Abbott
        sub 3: Representation and Reasoning.    Brian Smith
    Finite State Morphology.                    Lauri Karttunen
    Foundations of Document Preparation.        David Levy.
    Foundations of Grammar.                     Lauri Karttunen
    Grammatical Theory and Discourse
        Structures.                             Joan Bresnan
    Head-driven Phrase Structure Grammar.       Ivan Sag and Thomas Wasow
    Lexical Project.                            Annie Zaenen
    Linguistic Approaches to Computer
        Languages.                              Hans Uszkoreit
    Phonology and Phonetics.                    Paul Kiparsky
    Rational Agency.                            Michael Bratman
    Semantics of Computer Language.             Terry Winograd
    Situation Theory and Situation
        Semantics (STASS).                      Jon Barwise
    Visual Communication.                       Sandy Pentland

   In addition, there are some interproject working groups.  These
   include:

    Situated Engine Company.                    Jon Barwise and Brian Smith
    Representation and Modelling.               Brian Smith and Terry Winograd

------------------------------

Date: Wed 16 Oct 85 09:56:32-EDT
From: Susan A. Maser <MASER@COLUMBIA-20.ARPA>
Subject: NEW JOURNAL


                        JOURNAL OF COMPLEXITY

                            Academic Press

               Editor: J.F. Traub, Columbia University


                       FOUNDING EDITORIAL BOARD

                    K. Arrow, Stanford University
            G. Debreu, University of California, Berkeley
                    Z. Galil, Columbia University
                 L. Hurwicz, University of Minnesota
                J. Kadane, Carnegie-Mellon University
             R. Karp, University of California, Berkeley
                        S. Kirkpatrick, I.B.M.
                H.T. Kung, Carnegie-Mellon University
          M. Rabin, Harvard University and Hebrew University
             S. Smale, University of California, Berkeley
                         S. Winograd, I.B.M.
               S. Wolfram, Institute for Advanced Study
    H. Wozniakowski, Columbia University and University of Warsaw


 YOU ARE INVITED TO SUBMIT YOUR MAJOR RESEARCH PAPERS TO THE JOURNAL.
                  See below for further information.

Publication Information and Rates:
Volume 1 (1985), 2 issues, annual institutional subscription rates:
In the US and Canada: $60
All other countries: $68
Volume 2 (1986), 4 issues, annual institutional subscription rates:
In the US and Canada: $80
All other countries: $93

Send your subscription orders to:   Academic Press, Inc.
                                    1250 Sixth Avenue
                                    San Diego, CA 92101
                                    (619) 230-1840


Contents of Volume 1, Issue 1:

"A 71/60 Theorem for Bin Packing" by Michael R. Garey & David S. Johnson

"Monte-Carlo Algorithms for the Planar Multiterminal Network
 Reliability Problem" by Richard M. Karp & Michael Luby

"Memory Requirements for Balanced Computer Architectures" by H.T. Kung

"Optimal Algorithms for Image Understanding: Current Status and
 Future Plans" by D. Lee

"Approximation in a Continuous Model of Computing" by K. Mount & S. Reiter

"Quasi-GCD Computations" by Arnold Schonhage

"Complexity of Approximately Solved Problems" by J.F. Traub

"Average Case Optimality" by G.W. Wasilkowski

"A Survey of Information-Based Complexity" by H. Wozniakowski



                         SUBMISSION OF PAPERS

        The JOURNAL OF COMPLEXITY is a multidisciplinary journal which
covers complexity as broadly conceived and which publishes research
papers containing substantial mathematical results.

        In the area of computational complexity the focus is on
problems which are approximately solved and for which optimal
algorithms or lower bound results are available.  Papers which provide
major new algorithms or make important progress on upper bounds are
also welcome.  Papers which present average case or probabilistic
analyses are especially solicited.  Of particular interest are papers
involving distributed systems or parallel computers for which only
approximate solutions are available.

        The following is a partial list of topics for which
computational complexity results are of interest: applied mathematics,
approximate solution of hard problems, approximation theory, control
theory, decision theory, design of experiments, distributed computation,
image understanding, information theory, mathematical economics,
numerical analysis, parallel computation, prediction and estimation,
remote sensing, seismology, statistics, stochastic scheduling.

        In addition to computational complexity the following are
among the other complexity topics of interest: physical limits of
computation; chaotic behavior and strange attractors; complexity in
biological, physical, or artificial systems.

        Although the emphasis is on research papers, surveys or
bibliographies of special merit may also be published.

To receive a more complete set of authors' instructions (with format
specifications), or to submit a manuscript (four copies please),
write to:
                          J.F. Traub, Editor
                        JOURNAL OF COMPLEXITY
                    Department of Computer Science
                    450  Computer Science Building
                         Columbia University
                       New York, New York 10027

------------------------------

Date: Tue, 15 Oct 85 22:15 EDT
From: Hewitt@MIT-MC.ARPA
Subject: Lisp vs. Prolog (reply to Pereira)

I would like to reply to Fernando Pereira's message in which he wrote:

    It is a FACT that no practical Prolog system is written entirely
    in Lisp: Common, Inter or any other. Fast Prolog systems have
    been written for Lisp machines (Symbolics, Xerox, LMI) but their
    performance depends crucially on major microcode support (so
    much so that the Symbolics implementation, for example, requires
    additional microstore hardware to run Prolog). The reason for
    this is simple: No Lisp (nor C, for that matter...) provides the
    low-level tagged-pointer and stack operations that are critical
    to Prolog performance.

It seems to me that the above argument about Prolog not REALLY being
implemented in Lisp is just a quibble.  Lisp implementations from the
beginning have provided primitive procedures to manipulate the likes
of pointers, parts of pointers, invisible pointers, structures, and
stack frames.  Such primitive procedures are entirely within the spirit
and practice of Lisp.  Thus it is not surprising to see primitive
procedures in the Lisp implementations of interpreters and compilers
for Lisp, Micro-Planner, Pascal, Fortran, and Prolog.  Before now no
one wanted to claim that the interpreters and compilers for these
other languages were not written in "Lisp".  What changed?

On the other hand primitive procedures to manipulate pointers, parts
of pointers, invisible pointers, structures, and stack frames are
certainly NOT part of Prolog!  In FACT no one in the Prolog community
even professes to believe that they could EVER construct a
commercially viable (i.e. useful for applications) Common Lisp in
Prolog.

I certainly realize that interesting research has been done using
Planner-like and Prolog-like languages.  For example Terry Winograd
implemented a robot world simulation with limited natural language
interaction using Micro-Planner (the implementation by Sussman,
Winograd, and Charniak of the design that I published in IJCAI-69).
Subsequently Fernando did some interesting natural language research
using Prolog.

My chief concern is that some AILIST readers might be misled by
the recent spate of publicity about the "triumph" of Prolog over Lisp.
I simply want to point out that the emperor has no clothes.

------------------------------

Date: Thu, 10 Oct 85 11:03:00 GMT
From: gcj%qmc-ori.uucp@ucl-cs.arpa
Subject: AI hype

A comment from Vol 3 # 128:-
``Since AI, by definition, seeks to replicate areas of human cognitive
  competence...''
This should perhaps be read in the context of the general discussion which
has been taking place about `hype'. But it  is still slightly off the mark
in my opinion.
I suppose this all rests on what one means by human cognitive competence.
The thought processes  which make  us human are far  removed from the cold
logic of algorithms which are the basis for *all* computer software, AI or
otherwise.  There is  an element  in all human  cognitive  processes which
derives from the emotional part of our psyche. We reach decisions not only
because we `know' that they are right, but also because  we `feel' them to
be correct. I think really  that AI must be seen as an important extension
to the thinking process, as a way of augmenting an expert's scope.

Gordon Joly     (now gcj%qmc-ori@ucl-cs.arpa)
                (formerly gcj%edxa@ucl-cs.arpa)

------------------------------

Date: Fri 18 Oct 85 10:13:10-PDT
From: WYLAND@SRI-KL.ARPA
Subject: Scaling up AI solutions


>From: Gary Martins <GARY@SRI-CSL.ARPA>
>Subject: Scaling Up

>Mr. Wyland seems to think that finding problem solutions which "scale up"
>is a matter of manufacturing convenience, or something like that.  What
>he seems to overlook is that the property of scaling up (to realistic
>performance and behavior) is normally OUR ONLY GUARANTEE THAT THE
>"SOLUTION" DOES IN FACT EMBODY A CORRECT SET OF PRINCIPLES.  [...]


The problem of "scaling up" is not that our solutions do not work
in the real world, but that we do not have general, universal
solutions applicable to all AI problems.  This is because we only
understand *parts* of the problem at present.  We can design
solutions for the parts we understand, but cannot design the
universal solution until we understand *all* of the problem.
Binary vision modules provide sufficient power to be useful in
many robot assembly applications, and simple word recognizers
provide enough power to be useful in many speech control
applications.  These are useful, real-world solutions but are not
*universal* solutions: they do not "scale up" as universal
solutions to all problems of robot assembly or understanding
speech, respectively.

I agree with you that scientific theories are proven in the lab
(or on-the-job) with real world data.  The proof of the
engineering is in the working.  It is just that we have not
reached the same level of understanding of intelligence that
Newton's Laws provided for mechanics.

Dave Wyland

------------------------------

Date: Tue 15 Oct 85 13:48:28-PDT
From: Mike Dante <DANTE@EDWARDS-2060.ARPA>
Subject: modus ponens

Seems to me that McGee is the one guilty of faulty logic.  Consider the
following example:

    Suppose a class consists of three people, a 6 ft boy (Tom), a 5 ft girl
(Jane), and a 4 ft boy (John).  Do you believe the following statements?

    (1) If the tallest person in the class is a boy, then if the tallest is
        not Tom, then the tallest will be John.
    (2) A boy is the tallest person in the class.
    (3) If the tallest person in the class is not Tom then the tallest
        person in the class will be John.

 How many readers believe (1) and (2) imply the truth of (3)?
                                                 -  Mike
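
As an aside (this sketch is not part of the original message), the puzzle can be made concrete by reading "if ... then" as the classical material conditional. On the facts given, all three statements come out true, and modus ponens from (1) and (2) to (3) is classically valid; the intuitive failure of (3) only shows up when one asks who would actually be tallest if Tom were excluded. A minimal illustration in Python (the helper names are my own):

```python
# The class: (name, height in feet, is_boy)
people = [("Tom", 6, True), ("Jane", 5, False), ("John", 4, True)]

def tallest(group):
    """Name of the tallest person in the group."""
    return max(group, key=lambda p: p[1])[0]

def implies(p, q):
    """Material conditional: p -> q is false only when p is true and q is false."""
    return (not p) or q

# The actual facts in the full class:
tallest_is_boy = True                          # Tom, a boy, is tallest
not_tom = tallest(people) != "Tom"             # False
tallest_is_john = tallest(people) == "John"    # False

# (1) If the tallest is a boy, then if the tallest is not Tom, it is John.
s1 = implies(tallest_is_boy, implies(not_tom, tallest_is_john))
# (2) A boy is the tallest person in the class.
s2 = tallest_is_boy
# (3) If the tallest is not Tom, then it is John.
s3 = implies(not_tom, tallest_is_john)

print(s1, s2, s3)  # True True True -- (3) holds vacuously, since Tom IS tallest

# Why (3) nonetheless feels wrong: remove Tom and the tallest is Jane, not John.
print(tallest([p for p in people if p[0] != "Tom"]))  # Jane
```

Under the material reading (3) is vacuously true, so no counterexample to modus ponens arises; the disagreement is over whether the indicative conditional in English behaves like the material conditional at all.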

------------------------------

Date: Thu, 17 Oct 85 21:22:26 pdt
From: cottrell@nprdc.arpa (Gary Cottrell)
Subject: Seminar - Parallel Dog Processing


                                       SEMINAR

                              Parallel Dog Processing:
                   Explorations in the Nanostructure of Dognition

                                Garrison W. Cottrell
                              Department of Dog Science
                Condominium Community College of Southern California


               Recent advances in neural network modelling have led to  its
          application  to  increasingly  more trivial domains.  A prominent
          example of this line of research has  been  the  creation  of  an
          entirely new discipline, Dognitive Science[1], bringing  together
          the  insights  of  the  previously  disparate fields of obedience
          training, letter carrying, and vivisection on such questions  as,
          "Why  are  dogs  so  dense?"  or,  "How many dogs does it take to
          change a lightbulb?"[2]

               This talk will focus on the first question.   Early  results
          suggest   that  the  answer  lies  in  the  fact  that  most  dog
          information processing occurs in their brains.   Converging  data
          from various fields (see, for example, "A vivisectionist approach
          to dog sense manipulation", Seligman, 1985) have shown that  this
          "wetware"  is  composed  of  a  massive  number  of  slow,  noisy
          switching elements, that are  too  highly  connected  to  form  a
          proper  circuit.  Further, they appear to be all trying to go off
          at the same time like  popcorn,  rather  than  proceeding  in  an
          orderly fashion.  Thus it is no surprise to science that they are
          dumb beasts.

               Further  impedance  to   intelligent   behavior   has   been
          discovered  by  learning  researchers.   They have found that the
          connections between the elements have  little  weights  on  them,
          slowing   them   down  even  more  and  interfering  with  normal
          processing. Indeed, as the dog grows, so do these weights,  until
          the processing elements are overloaded.  Thus it is now clear why
          you can't teach an old dog new  tricks,  and  also  explains  why
          elderly  dogs  tend  to  hang their heads.  Experience with young
          dogs appears to bear this out.  They seem  to  have  very  little
          weight  in  their  brains,  and  their behavior is thus much more
          laissez faire than older dogs.

               We have  applied  these  constraints  to  a  neural  network
          learning  model  of  the dog brain.  To model the noisy signal of
          the actual dog neurons, the units of the model are restricted  to
          communicate by barking to one another.  As these barks are passed
          from one unit to another, the weights on the units are  increased
          by an amount proportional to the loudness of the bark.  Hence  we
          term this learning mechanism bark propagation.  Since the weights
          only  increase,  just  as  in  the  normal  dog, at asymptote the
          network has only one stable state, which we  term  the  dead  dog
          state.   Our model is validated by the fact that many dogs appear
          to achieve this state while still breathing.  We will demonstrate
          a live simulation of our model at the talk.

          ____________________
             [1]A flood of researchers finding Cognitive Science  too  hard
          are switching to this exciting new area.  It appears that trivial
          results in this unknown field will beget journal papers and  TR's
          for several years before funding agencies and reviewers catch on.
             [2]Questions from the Philosophy of dognitive science (dogmat-
          ics),  such  as  "If a dog barks in the condo complex and I'm not
          there to hear it, why do the neighbors claim it makes  a  sound?"
          are beyond the scope of this talk.

------------------------------

End of AIList Digest
********************
