1-Jul-87 23:16:56-PDT,9496;000000000001
Mail-From: LAWS created at  1-Jul-87 22:51:25
Date: Wed  1 Jul 1987 22:49-PDT
From: AIList Moderator Kenneth Laws <AIList-REQUEST@STRIPE.SRI.COM>
Reply-to: AIList@STRIPE.SRI.COM
US-Mail: SRI Int., 333 Ravenswood Ave., Menlo Park, CA  94025
Phone: (415) 859-6467
Subject: AIList Digest   V5 #165
To: AIList@STRIPE.SRI.COM


AIList Digest            Thursday, 2 Jul 1987     Volume 5 : Issue 165

Today's Topics:
  Queries - Expert Systems in Marketing & KCL on ISI's,
  Psychology - $6M Man & Methodology

----------------------------------------------------------------------

Date: 1 Jul 87 19:28:15 GMT
From: shire.dec.com!morand@decwrl.dec.com
Subject: EXPERT SYSTEMS IN MARKETING

I'm working on the definition of a DSS for product pricing and positioning,
and I'm considering expert systems as a potential answer to my problem.

Does anybody have experience with, or know of, an application of expert
systems to marketing?

I would like to take into consideration:

        - Product life cycle
        - Price elasticity parameters
        - Internal competition
        - External competition

Thanks in advance,

Jean-claude MORAND      DTN 7 821 4782 or (41 22) 87 47 82
DEC Europe
decvax!decwrl!rhea!shire!morand

------------------------------

Date: Wed, 1 Jul 87 10:54:49 edt
From: Connie Ramsey <ramsey@nrl-aic.ARPA>
Subject: KCL on ISI's


Has anybody tried to install the latest (documentation dated July 1986)
version of KCL on an ISI?  We tried, but found that some code was missing
when machine=ISI.  If anybody knows anything about this problem, we would
appreciate a response.

                                        Thank you,
                                        Connie Ramsey
                                        ramsey@nrl-aic.arpa

------------------------------

Date: Tue, 30 Jun 87 15:57:23 MDT
From: Raul Machuca  STEWS-ID-T 678-4686 <rmachuca@wsmr06.ARPA>
Subject: 6Mil man


        The six-million dollar man effect has an explanation which is
biological rather than psychological.
        The center on/off receptors of the eye are arranged in a
discrete matrix.  An edge gives the greatest signal when it passes
through the center of a cell; when there is not enough of a signal,
the edge cannot be seen.  An object moving at a fast rate of speed
will be seen by the mind as a sequence of snapshots.  These
snapshots take place when the edge is lined up with the centers of
a group of receptors.  If an object is moving at a fast rate of
speed, the neurons will not recover in time to take another
snapshot until the object has moved a considerable distance.

        The slow-motion still-frame technique simulates exactly this
process on film.  The brain reacts in the same way as if we were
seeing a quickly moving object, and thus the neurons generate the
same signals as those caused by actually looking at something moving
at a fast rate of speed.
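The sampling story above can be sketched numerically.  Here is a
minimal, purely illustrative simulation; the receptor spacing,
alignment tolerance, and recovery (refractory) time are invented
parameters, not physiological values.

```python
# A minimal numerical sketch of the snapshot account (illustrative
# only: the receptor spacing, alignment tolerance, and recovery time
# are invented parameters, not physiological values).

def snapshot_positions(speed, steps=200, dt=0.1, spacing=1.0,
                       tolerance=0.05, refractory=2.0):
    """Positions at which a moving edge is 'seen'.

    An edge moving at `speed` units/time crosses a row of receptor
    centers placed every `spacing` units.  A snapshot fires when the
    edge lies within `tolerance` of a center AND at least `refractory`
    time units have passed since the previous snapshot (the recovery
    time of the neurons).
    """
    seen, last_fire = [], -refractory
    for step in range(steps):
        t = step * dt
        x = speed * t
        # distance from the nearest receptor center
        offset = min(x % spacing, spacing - x % spacing)
        if offset <= tolerance and t - last_fire >= refractory:
            seen.append(round(x, 2))
            last_fire = t
    return seen

slow = snapshot_positions(speed=0.5)
fast = snapshot_positions(speed=5.0)
print(slow[:3], fast[:3])   # → [0.0, 1.0, 2.0] [0.0, 10.0, 20.0]
```

The faster edge is "seen" at positions ten times farther apart, which
is the widely spaced snapshot sequence the posting describes.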

------------------------------

Date: 1 Jul 87 06:36:44 GMT
From: umix!itivax!chinet!lee@RUTGERS.EDU (Lee Morehead)
Reply-to: umix!itivax!chinet!lee@RUTGERS.EDU (Lee Morehead)
Subject: Re: Why did $6M man run so slowly?


It is interesting to note that in the recent sequel movie to the $6M man,
his son could run at speeds measured in the hundreds of mph.  While Steve
and Jamie retained the slow-motion special effect, the son was given a
video-blur special effect to indicate a speed several times greater than
his father's.  Interesting.
--

                                        Lee Morehead
                                        ...!ihnp4!chinet!lee

"One size fits all."
Just who is this "all" person anyway,
and why is he wearing my clothes?

------------------------------

Date: Tue, 30 Jun 87 07:18:40 pdt
From: norman%ics@sdcsvax.ucsd.edu (Donald A. Norman)
Subject: On how AI answers psychological issues


A comment on sin in AI, or "Why did the $6M man run so slowly?"

AI researchers seem to like the sin of armchair reasoning.  It's a
pleasant sin: comfortable, fun, stimulating.  And nobody can ever be
proven right or wrong.  Most scientists, on the other hand, believe
that real answers are generated through the collection of data,
interpreted by validated theories.

The question "why did the $6M man run so slowly" is a case in point,
but my answer is also stimulated by the conference on "Foundations of
AI" that I just attended (held at MIT, arguing about the several
theoretical approaches to the representation and simulation of
intelligence).  In AIList, many folks have let forth their theories.
Some are clever, some are interesting.  Some are probably right, some
are probably wrong.  How would one ever know which?  Letting forth
with opinions is no way to answer a scientific question.

At the conference, many of AI's most famed luminaries let forth with
their opinions.  Psychological phenomena were made up and explained
faster than the speed of thought.  The same observation applies.  The
only thing worse is when a researcher (in any discipline) becomes a
parent.  Then the theories spin wildly and take the form: my child did
the following thing; therefore, all children do it; and therefore here
is how the mind works.

Same for why the $6M man ran so slowly.  If you really want to know
why slow motion was used, ASK THE FILM MAKER!  (producer, cameraman,
editor, director).  The film maker selected this method for one of
several possible reasons, and armchair reasoning about it will get you
nowhere.  It might have been to stretch out the film, for budgetary
reasons, because they didn't know what else to do, because they
accidentally hit the slow-motion switch once and, having started in
this direction, all future films had to be consistent, etc.  One
suspects that the filmmakers did not go through the long, elaborate
reasoning that some of the respondents assumed.  Whatever the reason,
the best (and perhaps only) way to find out is to ask the people who
made the decision.  Of course, they themselves may not know, given
that many of our actions are not consciously known to us and do not
necessarily follow from neat declarative rules stored in some nice,
simple memory format (which is why expert systems methodology is
fundamentally flawed, but that is another story), but at least the
verbally described reasons can give you a starting point.

Note that the discussion has confounded several different questions.
One question is "why did the film makers choose to use slow motion?"
A second question is, given that they made that choice, "why does the
slow-motion presentation of speeded motion produce a reasonable
effect on the viewer?"  Here the answer can only come about through
experimentation.  However, for this question, the armchair
explanations make more sense and can start out as a plausible set of
hypotheses to be examined.

A third question has been raised in the discussion, which is "during
times of stress, or incipient danger, or when doing a rapid task while
very well skilled, does subjective time pass more slowly?"  This is an
oft-reported finding.  Damn-near impossible to test.  (Possible,
though: subjective time, for example, changes with body temperature,
going faster when body temperature is raised, slower when lowered, and
since it is possible to determine that fact experimentally, you should
be able to determine the other.)  The nature of subjective time is
most complex, but the evidence would have it that filled time passes
quite differently than unfilled time, and the expert, or the person
intensely focussed upon events, is apt to attend to details not
normally visible, hence filling the time interval with many more
activities and events, hence changing the perception of time.

But before you all bombard the net with lots of anecdotes about what
it felt like during your auto accident, or skiing incident, or ..., let
me remind you that the experience you have DURING the event itself is
quite different from your memory of that experience.  The
experimental research on time perception shows that subjective
durations can reverse.  (Events that may be boring to experience --
time passes ever so slowly -- may be judged to have taken almost no
time at all in future retrospections -- no remembered events.  Events
with numerous things happening -- so quickly that you didn't have time
to respond to most of them -- in retrospect may seem to have taken
forever.)

The moral is that understanding the human (or animal) mind is most
difficult; it is apt to come about only through a combination of
experimental study, theoretical modeling, and simulation, and armchair
thinking, while fun, is pretty irrelevant to the endeavor.
Psychology, the field, can be frustrating to the non-participant.
Many tedious experiments.  Dumb experiments.  An insistence on
methodology that borders on the insane.  And an apparent inability to
answer even the simplest questions.  Guilty.  But for a reason.
Thinking about "how the mind works" is fun, but it is not science, and
not the way to get to the correct answer.

don norman


Donald A. Norman
Institute for Cognitive Science C-015
University of California, San Diego
La Jolla, California 92093
norman@nprdc.arpa       {decvax,ucbvax,ihnp4}!sdcsvax!ics!norman
norman@sdics.ucsd.edu   norman%sdics.ucsd.edu@RELAY.CS.NET

------------------------------

End of AIList Digest
********************
 2-Jul-87 00:13:54-PDT,14084;000000000000
Mail-From: LAWS created at  1-Jul-87 22:56:02
Date: Wed  1 Jul 1987 22:54-PDT
From: AIList Moderator Kenneth Laws <AIList-REQUEST@STRIPE.SRI.COM>
Reply-to: AIList@STRIPE.SRI.COM
US-Mail: SRI Int., 333 Ravenswood Ave., Menlo Park, CA  94025
Phone: (415) 859-6467
Subject: AIList Digest   V5 #166
To: AIList@STRIPE.SRI.COM


AIList Digest            Thursday, 2 Jul 1987     Volume 5 : Issue 166

Today's Topics:
  Theory - Perception,
  Policy - Quoting

----------------------------------------------------------------------

Date: 29 Jun 87 22:46:31 GMT
From: trwrb!aero!venera.isi.edu!smoliar@ucbvax.Berkeley.EDU  (Stephen
      Smoliar)
Subject: Re: The symbol grounding problem....

In article <1194@houdi.UUCP> marty1@houdi.UUCP (M.BRILLIANT) writes:
>
>I was just looking at a kitchen chair, a brown wooden kitchen
>chair against a yellow wall, in side light from a window.  Let's
>let a machine train its camera on that object.  Now either it
>has a mechanical array of receptors and processors, like the
>layers of cells in a retina, or it does a functionally
>equivalent thing with sequential processing.  What it has to do
>is compare the brightness of neighboring points to find places
>where there is contrast, find contrast in contiguous places so
>as to form an outline, and find closed outlines to form objects.
>There are some subtleties needed to find partly hidden objects,
>but I'll just assume they're solved.  There may also be an
>interpretation of shadow gradations to perceive roundness.
>
I have been trying to keep my distance from this debate, but I would like
to insert a few observations regarding this scenario.  In many ways, this
paragraph represents the "obvious" approach to perception, assuming that
one is dealing with a symbol manipulation system.  However, other approaches
have been hypothesized.  While their viability remains to be demonstrated,
it would be fair to say that, in the broad scope of perception in the real
world, the same may be said of symbol manipulation systems.

Consider the holographic model posed by Karl Pribram in LANGUAGES OF THE
BRAIN.  As I understand it, this model postulates that memory is a collection
of holographic transforms of experienced images.  As new images are
experienced, the brain is capable of retrieving "best fits" from this
memory to form associations.  Thus, the chair you see in the above
paragraph is recognized as a chair by virtue of the fact that it "fits"
other images of chairs you have seen in the past.

I'm not sure I buy this, but I'm at least willing to acknowledge it as
an alternative to your symbol manipulation scenario.  The biggest problem
I have has to do with retrieval.  As far as I understand, present holographic
retrieval works fine as long as you don't have to worry about little things
like change of scale, translation, or rotation.  If this model is going to
work, then the retrieval process is going to have to be more powerful than
the current technology allows.

The other problem relates to concept acquisition, as was postulated in
Brilliant's continuation of the scenario:
>
>Now the machine has a form.  If the form is still unfamiliar,
>let it ask, "What's that, Daddy?"  Daddy says, "That's a chair."
>The machine files that information away.  Next time it sees a
>similar form it says "Chair, Daddy, chair!"  It still has to
>learn about upholstered chairs, but give it time.
>
The difficulty seems to be in what it means to file something away if
one's memory is simply one of experiences.  Does the memory trace of the
chair experience include Daddy's voice saying "chair?"  While I'm willing
to acknowledge a multi-media memory trace, this seems a bit pat.  It
reminds me of Skinner's VERBAL BEHAVIOR, in which he claimed that one
learned the concept "beautiful" from stimuli of observing people saying
"beautiful" in front of beautiful objects.  This conjures up a vision
of people wandering around the Metropolitan Museum of Art muttering
"beautiful" as they wander from gallery to gallery.

Perhaps the difficulty is that the mind really doesn't want to assign a
symbol to every experience immediately.  Rather, following the model of
Holland et al., it is first necessary to build up some degree of
reinforcement which assures that a particular memory trace is actually
going to be retrieved relatively frequently (whatever that means).
In such a case, then, a symbol becomes a fast-access mechanism for
retrieval of that trace (or a collection of common traces).  However,
this gives rise to at least three questions for which I have no answer:

        1.  What are the criteria by which it is decided that such a
                symbol is required for fast-access?

        2.  Where does the symbol's name come from?

        3.  How is the symbol actually "bound" to what it retrieves?

These would seem to be the sort of questions which might help to tie
this debate down to more concrete matters.

Brilliant continues:
>That brings me to a question: do you really want this machine
>to be so Totally Turing that it grows like a human, learns like
>a human, and not only learns new objects, but, like a human born
>at age zero, learns how to perceive objects?  How much of its
>abilities do you want to have wired in, and how much learned?
>
This would appear to be one of the directions in which connectionism is
leading.  In a recent talk, Sejnowski talked about "training" networks
for text-to-speech and backgammon . . . not programming them.  On the
other hand, at the current level of his experiments, designing the network
is as important as training it;  training can't begin until one has a
suitable architecture of nodes and connections.  The big unanswered
question would appear to be:  will all of this scale upward?  That
is, is there ultimately some all-embracing architecture which includes
all the mini-architectures examined by connectionist experiments and
enough more to accommodate the methodological epiphenomenalism of real
life?

------------------------------

Date: 1 Jul 87 16:14:41 GMT
From: diamond.bbn.com!aweinste@husc6.harvard.edu  (Anders Weinstein)
Subject: Re: The symbol grounding problem: Against Rosch &
         Wittgenstein

In article <949@mind.UUCP> harnad@mind.UUCP (Stevan Harnad) writes:
>
>>      There is no reliable, consensual all-or-none categorization performance
>>      without a set of underlying features?  That sounds like a restatement of
>>      the categorization theorist's credo rather than a thing that is so.
>
>If not, what is the objective basis for the performance? And how would
>you get a device to do it given the same inputs?

I think there's some confusion as to whether Harnad's claim is just an empty
tautology or a significant empirical claim. To wit: it's clear that we can
reliably recognize chairs from sensory input, and we don't do this by magic.
Hence, we can perhaps take it as trivially true that there are some
"features" of the input that are being detected. If we are taking this line
however, we have to remember that it doesn't really say *anything* about the
operation of the mechanism -- it's just a fancy way of saying we can
recognize chairs.

On the other hand, it might be taken as a significant claim about the nature
of the chair-recognition device, viz., that we can understand its workings as
a process of actually parsing the input into a set of features and actually
comparing these against what is essentially some logical formula in
featurese.  This *is* an empirical claim, and it is certainly dubitable:
there could be pattern recognition devices (holograms are one speculative
suggestion) which cannot be interestingly broken down into feature-detecting
parts.
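The empirical reading can be made concrete with a toy recognizer (the
feature names and the formula below are invented for illustration; no
claim is made about real perception):

```python
# A toy recognizer of the "parse into features, test a formula" kind
# described above (feature names and the formula are invented for
# illustration; nothing here is claimed about real perception).

def detect_features(obj):
    """Stand-in for feature detectors running over the input."""
    return {name for name, present in obj.items() if present}

def is_chair(obj):
    """An explicitly logical formula in 'featurese'."""
    f = detect_features(obj)
    return "seat" in f and "legs" in f and "back" in f

print(is_chair({"seat": True, "legs": True, "back": True}))    # → True
print(is_chair({"seat": True, "legs": False, "back": True}))   # → False
```

The empirical claim at issue is precisely whether the real mechanism
decomposes into parts playing the roles of `detect_features` and the
formula in `is_chair`; a holographic matcher, by hypothesis, would not.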

Anders Weinstein
BBN Labs

------------------------------

Date: 1 Jul 87 22:33:50 GMT
From: teknowledge-vaxc!dgordon@unix.sri.com  (Dan Gordon)
Subject: Re: The symbol grounding problem: Against Rosch &
         Wittgenstein

In article <949@mind.UUCP> harnad@mind.UUCP (Stevan Harnad) writes:
>
>
>dgordon@teknowledge-vaxc.ARPA (Dan Gordon)
>of Teknowledge, Inc., Palo Alto CA writes:
>
>>      There is no reliable, consensual all-or-none categorization performance
>>      without a set of underlying features?  That sounds like a restatement of
>>      the categorization theorist's credo rather than a thing that is so.
>
>If not, what is the objective basis for the performance? And how would
>you get a device to do it given the same inputs?

Not a riposte, but some observations:

1) finding an objective basis for a performance and getting a device to
do it given the same inputs are two different things.  We may be able
to find an objective basis for a performance but be unable (for merely
contingent reasons, like engineering problems, etc., or for more funda-
mental reasons) to get a device to exhibit the same performance.  And,
I suppose, the converse is true: we may be able to get a device to mimic
a performance without understanding the objective basis for the model
(chess programs seem to me to fall into this class).

2) There may in fact be categorization performances that a) do not use
a set of underlying features; b) have an objective basis which is not
feature-driven; and c) can only be simulated (in the strong sense) by
a device which likewise does not use features.  This is one of the
central prongs of Wittgenstein's attack on the positivist approach to
language, and although I am not completely convinced by his criticisms,
I haven't run across any very convincing rejoinder.

Maybe more later, Dan Gordon

------------------------------

Date: 1 Jul 87 14:02:28 GMT
From: harwood@cvl.umd.edu  (David Harwood)
Subject: Re: The symbol grounding problem - please start your own
         newsgroup

In article <950@mind.UUCP> harnad@mind.UUCP (Stevan Harnad) writes:

[...replying to M.B. about something...]

>................................................ I do not see this
>intimate interrelationship -- between names and, on the one hand, the
>nonsymbolic representations that pick out the objects they refer to
>and, on the other hand, the higher-level symbolic descriptions into
>which they enter -- as being perspicuously described as a link between
>a pair of autonomous nonsymbolic and symbolic modules. The relationship is
>bottom-up and hybrid through and through, with the symbolic component
>derivative from, inextricably interdigitated with, and parasitic on the
>nonsymbolic.

        Uh - let me get this straight.  This is the conclusion of your
most recent posting on "the symbol grounding problem."  In the first
poorly written sentence you criticize your bogeyman, saying he ain't
"perspicuous."  Small wonder - you invented him for purposes of
obscurantist controversy; no one else even believes in him so far as I
can tell.
        But wait - there is more.  You say your bogeyman - he ain't
"perspicuous" (as if you aren't responsible for this).  Then you go on
with what you consider, apparently, to be a "perspicuous" account of
the meaning of "names."  So far as I can tell, this sentence is the most
full and "perspicuous" accounting yet, confirmed by everything you've
written on this subject (which I shall not need to quote, since it is
fresh on everyone's mind).  You say, with inestimable "perspicuity,"
concerning your own superior speculations about the meaning of names
(which I quote, since we have all day, day after day, for this): "The
relationship is bottom-up and hybrid through and through, with the
symbolic component derivative from, inextricably interdigitated with,
and parasitic on the nonsymbolic."  A mouthful all right.
Interdigitated with something all right.
        Could you please consider creating your own newsgroup, Mr. Harnad?
I don't know what your purpose is, except for self-aggrandizement, but
I'm fairly sure your purpose has nothing to do with computer science. There's
no discussion of algorithms, computing systems, not even any logical
formality in all this bullshit. And if we have to hear about the meaning of
names - why couldn't we hear from Saul Kripke, instead of you? Then we
might learn something.
        Why not create your own soapbox? I will never listen or bother.
I wouldn't even bother to read BBS, which you apparently edit - with
considerable help no doubt, except that you don't write all the articles
(as you do here).

-David Harwood

------------------------------

Date: Wed, 1 Jul 1987  13:28 EDT
From: MINSKY%OZ.AI.MIT.EDU@XX.LCS.MIT.EDU
Subject: AIList Digest   V5 #163

Too much, already.  This "symbol grounding" has gotten out of hand.
This is a network, not a private journal.

------------------------------

Date: Wed 1 Jul 87 22:02:55-PDT
From: Ken Laws <Laws@Stripe.SRI.Com>
Reply-to: AIList-Request@STRIPE.SRI.COM
Subject: Policy on Quoting

Perhaps the discussion of philosophy/theory/perception would be
more palatable -- or even concise and understandable -- if we
refrained from quoting each other in the style of the old
Phil-Sci list.  Quotations are often necessary, of course, but
the average reader can follow a discussion without each participant
echoing his predecessors.  Those few who are really interested
in exact wordings can save the relevant back issues; I'll even
send copies on request.

On the whole, I think that this interchange has been conducted
admirably.  My hope in making this suggestion is that participants
will spend less bandwidth attacking each other's semantics and more of
it constructing and presenting their own coherent positions.  (It's OK
if we don't completely agree on terms such as "analog", as long as
each contributor builds a consistent world view that includes his own
Humpty-Dumpty variants.)

                                        -- Ken

------------------------------

End of AIList Digest
********************
 6-Jul-87 01:01:14-PDT,12836;000000000000
Mail-From: LAWS created at  6-Jul-87 00:47:32
Date: Mon  6 Jul 1987 00:46-PDT
From: AIList Moderator Kenneth Laws <AIList-REQUEST@STRIPE.SRI.COM>
Reply-to: AIList@STRIPE.SRI.COM
US-Mail: SRI Int., 333 Ravenswood Ave., Menlo Park, CA  94025
Phone: (415) 859-6467
Subject: AIList Digest   V5 #167
To: AIList@STRIPE.SRI.COM


AIList Digest             Monday, 6 Jul 1987      Volume 5 : Issue 167

Today's Topics:
  Seminars - Planning Actions with Context-Dependent Effects (SRI) &
    Automated Process Planning using Abstraction (CMU),
  Conference - SLUG '87 Reminder &
    Simulation and AI &
    Expert Systems in the ADP Environment
----------------------------------------------------------------------

Date: Tue, 30 Jun 87 11:52:11 PDT
From: Amy Lansky <lansky@venice.ai.sri.com>
Subject: Seminar - Planning Actions with Context-Dependent Effects (SRI)


                 SYNTHESIZING PLANS THAT CONTAIN ACTIONS
                     WITH CONTEXT-DEPENDENT EFFECTS

          Edwin P.D. Pednault (VAX135!EPDP@UCBVAX.BERKELEY.EDU)

                  Knowledge Systems Research Department
                         AT&T Bell Laboratories
                          Crawfords Corner Road
                            Holmdel, NJ 07733

                       11:00 AM, MONDAY, July 6
              SRI International, Building E, Room EJ228

In this talk, I will present an approach to solving planning problems
that involve actions whose effects depend on the state of the world at
the time the actions are performed.  To solve such problems, the idea
of a secondary precondition is introduced.  A secondary precondition
for an action is a condition that must be true at the time the action
is performed for the action to have its desired effect.  By imposing
the appropriate secondary precondition as an additional precondition
to an action, we can coerce that action to preserve a desired
condition or to cause a desired condition to become true.  I will
demonstrate the use of secondary preconditions and show how they can
be derived from the specification of a planning problem in a
completely general and domain-independent fashion.
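As a rough illustration of the idea (the domain, predicates, and
action names below are invented for this sketch; this is the concept,
not Pednault's actual formalism): moving a briefcase moves whatever is
inside it, so its effect on an object's location depends on context,
and a secondary precondition can coerce the action to preserve a
desired condition.

```python
# Toy illustration of a secondary precondition (domain and names are
# invented; this sketches the concept, not Pednault's formalism).
# Moving the briefcase also moves whatever is inside it, so the
# action's effect on the book's location depends on context.

def move_briefcase(state, dest):
    new = dict(state)
    new["briefcase_at"] = dest
    if new["book_in_briefcase"]:
        new["book_at"] = dest              # context-dependent effect
    return new

def take_out_book(state):
    new = dict(state)
    new["book_in_briefcase"] = False
    return new

# Secondary precondition for move_briefcase to PRESERVE "book at home":
# the book must not be in the briefcase when the action is performed.
def preserves_book_at_home(state):
    return not state["book_in_briefcase"]

state = {"briefcase_at": "home", "book_at": "home",
         "book_in_briefcase": True}

# A planner wanting the briefcase at the office AND the book at home
# imposes the secondary precondition, forcing take_out_book first:
if not preserves_book_at_home(state):
    state = take_out_book(state)
state = move_briefcase(state, "office")

print(state["briefcase_at"], state["book_at"])   # → office home
```

Without the imposed precondition, the same move action would have
carried the book along and violated the goal.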

VISITORS:  Please arrive 5 minutes early so that you can be escorted up
from the E-building receptionist's desk.  Thanks!

------------------------------

Date: 30 Jun 87 12:21:51 EDT
From: Marcella.Zaragoza@isl1.ri.cmu.edu
Subject: Seminar - Automated Process Planning using Abstraction (CMU)

                             SPECIAL SEMINAR


TOPIC:  AUTOMATED PROCESS PLANNING USING HIERARCHICAL ABSTRACTION *

WHO:    Dana S. Nau
        Computer Science Department and Institute for
        Advanced Computer Studies, University of Maryland, and
        Factory Automation Systems Division, National Bureau of Standards

WHEN:   Monday, July 6, 10:00-11:30 a.m.

WHERE:  WeH 4623

                                ABSTRACT

     SIPS is a system which uses AI techniques to decide what machining
operations to use in the creation of metal parts.  SIPS generates its
plans completely from scratch, using the specification of the part to be
produced and knowledge about the intrinsic capabilities of each
manufacturing operation.

     Rather than using a rule-based approach to knowledge representation,
SIPS uses a hierarchical abstraction technique called hierarchical knowledge
clustering.  Problem-solving knowledge is organized in a taxonomic hierarchy
using frames, and problem solving is done using an adaptation of Branch and
Bound.
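Branch-and-bound search over a taxonomic hierarchy can be sketched as
follows; the operation names, bounds, and costs below are invented for
illustration and are not the actual SIPS knowledge base.

```python
# Minimal branch-and-bound over a taxonomic hierarchy of operations
# (names, bounds, and costs invented; a sketch of the general
# technique, not the actual SIPS knowledge base).

# Abstract classes carry an optimistic (lower-bound) cost; leaves are
# concrete operations with actual costs.
hierarchy = {
    "machining":    {"bound": 1, "children": ["hole-making", "milling"]},
    "hole-making":  {"bound": 2, "children": ["drilling", "boring"]},
    "milling":      {"bound": 4, "children": ["face-milling"]},
    "drilling":     {"cost": 3},
    "boring":       {"cost": 6},
    "face-milling": {"cost": 5},
}

def best_operation(node="machining", best=(float("inf"), None)):
    info = hierarchy[node]
    if "cost" in info:                  # leaf: a concrete operation
        return min(best, (info["cost"], node))
    if info["bound"] >= best[0]:        # prune: bound cannot beat best
        return best
    for child in info["children"]:
        best = best_operation(child, best)
    return best

print(best_operation())   # → (3, 'drilling')
```

Here the entire "milling" subtree is pruned without expansion, because
its lower bound (4) already exceeds the best concrete cost found (3).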

     The development of SIPS was done with two long-term goals in mind:
the use of AI techniques to develop a practical generative process planning
system, and the investigation of fundamental AI issues in representing and
reasoning about three-dimensional objects.  SIPS represents an important
step toward these goals, and a number of extensions and enhancements to SIPS
are either underway or planned.  SIPS is currently being integrated into the
Automated Manufacturing Research Facility (AMRF) project at the National
Bureau of Standards.


* This work has been supported in part by the following sources:  an NSF
Presidential Young Investigator Award to Dana Nau, NSF Grant NSFD
CDR-85-00108 to the University of Maryland Systems Research Center, IBM
Research, General Motors Research Laboratories, and Martin Marietta
Laboratories.

------------------------------

Date: Sat, 27 Jun 1987  14:13 CDT
From: CS.PURVIS@R20.UTEXAS.EDU
Subject: Conference - SLUG '87 Reminder

This is a reminder that the national meeting of the

                      Symbolics Lisp Users Group

will be held in Seattle, July 6-10th.  You may register in advance by
calling the University of Washington at (206) 543-2300.

The conference schedule is listed below.  Note particularly the  panel
discussions  on  Thursday  and  Friday  that  will  examine  available
alternatives   to   the   Symbolics   Lisp   development   environment
architecture and consider what trade-offs are involved.

This is THE Lisp machine conference.  Don't miss it!

                          SLUG '87 Schedule

                  July 6-10, 1987 - Seattle, Washington

  MONDAY -- (tutorials)

      8:00

           Registration desk opens

      9:00 to 12:30

          * AI Program Design
          * Overview of Site Administration
          * Color Graphics I

      2:00 to 5:30

          * AI Program Design (cont'd)
          * Overview of Site Administration (cont'd)
          * Color Graphics II
          * Color Graphics III

  TUESDAY -- (tutorials)

      8:00

           Registration desk opens

      9:00 to 12:30

          * Programming Productivity I
          * Introduction to ART
          * Building Knowledge System Interfaces

      2:00 to 5:30

          * Programming Productivity II
          * Introduction to ART (cont'd)

      7:00 - 9:00

           Reception

  WEDNESDAY -- (conference sessions)

      8:00

           Registration desk opens

      9:00 to 12:30

          * Welcome & Opening remarks
          * State of SLUG
          * Symbolics Corporate Status Report
          * Software & Hardware Support
          * Technical Status Report
          * New Product Announcements
          * General and Reverse Q & A

      2:00 to 6:00

          * Software Engineering on LISP Machines
          * Symbolic Computing for New Users
          * General Technical Q & A

      Evening --  BOAF (Birds Of A Feather)

          * Critique of the Symbolics User Interface -- GNU EMACS and HP's
              NMODE both present a novel way of interacting with LISP.
              Is the LISP machine paradigm better?  This meeting will drive
              tomorrow afternoon's session.
          * New user training: Sharing insights, techniques, and introductory
            materials for new users.
          * Symbolics maintenance issues.

    THURSDAY -- (conference sessions)

      9:00 to 12:30

          * Common LISP -- What is the status of Common LISP the Language?
              Classes?  Common Windows?  Error handling?
          * SLUG Library -- What's new and available?
          * Networks -- VMS, UNIX, DECNET, IP-TCP, Namespaces,
              Domain Resolution, etc.
          * Non-LISP Language Support -- PROLOG, ADA, FORTRAN, PASCAL, C, etc.

      2:00 to 5:30

          * LISPM pearls -- An informal presentation of useful but little
              known LISP machine features and capabilities.
          * Critique of the Symbolics User Interface -- See yesterday's BOAF.
          * Technical Q & A

    FRIDAY -- (conference sessions)

      9:00 to 12:30

          * Trade-offs in LISP (development) environments -- This is a panel
              discussion of the differences between developing LISP software
              on different workstation architectures.
          * Conference Summary & Feedback
          * SLUG Business Meeting

      2:00 to 3:30

          * Expert Systems Session

------------------------------

Date: Thu, 25 Jun 87 10:50:35 edt
From: Paul Fishwick <fishwick%bikini.cis.ufl.edu@RELAY.CS.NET>
Subject: Conference - SIMULATION AND AI


                     ANNOUNCEMENT AND CALL FOR PAPERS

              SIMULATION AND ARTIFICIAL INTELLIGENCE CONFERENCE
                   Part of the 1988 SCS MultiConference
                       San Diego, CA Feb 3-5, 1988


Paper and Special Session Proposals should be sent to SCS (Society for
Computer Simulation) by July 15, 1987 [note: the deadline has been extended].
Some suggested topics are listed below:

Relation between AI and Simulation
Intelligent Simulation Environments
Knowledge-Based Simulation
Decision Support Systems
Qualitative Simulation (there will be a panel discussion on this topic)
Simulation in AI
Ada and AI and Simulation
Aerospace Applications
Biomedical Applications
Expert Systems in Emergency Planning
Automatic Model Generation
Expert Systems
Learning Systems
Natural Language Processing
Robotics
Speech Recognition
Vision
AI Hardware/Workstations
AI Programming Languages
AI/ES Software Tools

A paper proposal should be submitted (approx. 300 words) to:

SCS
P.O. Box 17900
San Diego, CA 92117-7900
------------------------------------------------------------------------------

People attending the AI and Simulation workshop at AAAI and others interested
in AI and Simulation are strongly encouraged to attend!

Paul Fishwick
University of Florida
CSNET: fishwick@ufl.edu

------------------------------

Date: 24 Jun 87 15:35:00 EST
From: "LFA" <lfa@ornl-stc10.arpa>
Reply-to: "LFA" <lfa@ornl-stc10.arpa>
Subject: Conference - Expert Systems in the ADP Environment

                           CALL FOR PAPERS
                           ==== === ======


                 NARDAC Washington/ORNL/DSRD Conference
                                  on
             Expert Systems Technology in the ADP Environment
                          to be held in
                         Washington, D.C.
                        November 2-3, 1987

THE CONFERENCE
=== ==========

     The Naval Regional Data Automation Center in Washington, D.C., the Oak
Ridge National Laboratory, and the Data Systems Research and Development
Program, Martin Marietta Energy Systems, Inc., are sponsoring a conference whose
primary focus is on the use of Artificial Intelligence in traditional computing
domains and its potential for further exploitation.  Both invited talks and
contributed papers will be given at the conference.


INVITED SPEAKERS
======= ========

        Several individuals have tentatively accepted invitations to speak at
this conference on the various aspects of Artificial Intelligence as it
pertains to traditional computing problems.  Scheduled speakers and their
topic areas include:

        Prof. James Slagle (Minnesota)       -  Keynote speaker
        Prof. Brian Gaines (Calgary)         -  Intelligent Interfaces for
                                                Knowledge-Based Systems
        Prof. Larry Henschen (Northwestern)  -  Logic and Databases
        Dr. Sukhumay Kundu (Louisiana State) -  AI in Software Engineering


CONTRIBUTED PAPERS
=========== ======

        In addition to the invited talks, papers are being solicited from
researchers in academia, government and industry in the following areas:

        ADP Project and Systems Management,
        Knowledge-Based Simulation and Modeling,
        Intelligent Man-Machine Interfaces,
        Intelligent Databases,
        AI in Software Engineering,
        AI as a Tool for Decision-Making, and
        Innovative Applications in MIS or Scientific Computing.


SUBMISSION DETAILS
========== =======

        Authors are asked to submit five (5) copies of their paper, which is to
be single-spaced and between five and seven pages in length.  Both finished and
ongoing research will be considered by the program committee and referees.
Authors should adhere to the following submission schedule:

        August  1, 1987    -   Submission Deadline
        August 15, 1987    -   Notification of acceptance
        September 15, 1987 -   Camera-ready copies due


Send papers, requests for additional information, and all other correspondence
to

        Lloyd F. Arrowood
        Program Chairman
        Oak Ridge National Laboratory
        Building 4500-North, Mail Stop 207
        Oak Ridge, TN 37831
                or
        BITNET:  LFA@ORNLSTC

------------------------------

End of AIList Digest
********************
 6-Jul-87 01:02:25-PDT,19407;000000000000
Mail-From: LAWS created at  6-Jul-87 00:51:49
Date: Mon  6 Jul 1987 00:50-PDT
From: AIList Moderator Kenneth Laws <AIList-REQUEST@STRIPE.SRI.COM>
Reply-to: AIList@STRIPE.SRI.COM
US-Mail: SRI Int., 333 Ravenswood Ave., Menlo Park, CA  94025
Phone: (415) 859-6467
Subject: AIList Digest   V5 #168
To: AIList@STRIPE.SRI.COM


AIList Digest             Monday, 6 Jul 1987      Volume 5 : Issue 168

Today's Topics:
  Policy - Hard Limit on Quotations,
  Theory - The Symbol Grounding Problem & Against Rosch and Wittgenstein

----------------------------------------------------------------------

Date: Thu 2 Jul 87 09:41:54-PDT
From: Ken Laws <Laws@Stripe.SRI.Com>
Subject: Hard Limit on Quotations

The "quotation problem" has become so prevalent across all of the
Usenet newsgroups that the gateway now rejects any message with more
quoted text than new text.  If a message is rejected for this reason,
I am unlikely to clean it up and resend.

As I indicated last week, I think we could get along just fine
with more "I say ..." and less "You said ...".  Paraphrases are
fine and even beneficial, but trying to reestablish the exact
context of each comment is not worth the hassle to the general
readership.  Perhaps some of the hair splitting could be carried
on through private mail, with occasional reports to the list on
points of agreement and disagreement.  Discussions of perception
and categorization are appropriate for AIList, but we cannot give
unlimited time and attention to any one topic.

I've engaged in "interpolated debate" myself, and have enjoyed
this characteristic mode of net discussions.  I won't object to
occasional use, but I do get very tired of seeing the same text
quoted in message after message.  I used to edit such repetitions
out of the digest, but I can't manage it with this traffic volume.
Please keep in mind that this is a broadcast channel and that many
readers have slow terminals or have to pay intercontinental
transmission fees.  Words are money.

It seems that a consistent philosophy cannot be put forth in less
than a full book, or at least a BBS article, and that meaningful
rebuttals require similar length.  We have been trying to cram this
through a linear channel, with swirls of debate spinning off from each
paragraph [yes, I know that's a contradiction], and there is no
evidence of convergence.  Let's try to slow down for a while.

I would also recommend that messages be kept to a single topic,
even if that means (initially) that a full response to a previous
message must be split into parts.  Separate discussion of grounding,
categorization, perception, etc., would be more palatable than the
current indivisible stream.  I would like to sort the discussions,
if only for ease of meaningful retrieval, but can't do so if they
all carry the same subject line and mix of topics.

                                        -- Ken

------------------------------

Date: Thu, 2 Jul 87 09:37:21 EDT
From: Alex Kass <kass-alex@YALE.ARPA>
Subject: AIList Digest   V5 #163


Can't we bag this damn symbol grounding discussion already?

If it *must* continue, how about instituting a symbol grounding news
group, and freeing the majority of us poor AILIST readers from the
burden of flipping past the symbol grounding stuff every morning.


                                 -Alex

ARPA:    Kass@yale
UUCP:    {decvax,linus,seismo}!yale!kass
BITNET:  kass@yalecs
US:      Alex Kass
         Yale University Computer Science Department
         New Haven, CT 06520

------------------------------

Date: 2 Jul 87 05:19:05 GMT
From: mind!harnad@princeton.edu  (Stevan Harnad)
Subject: Re: The symbol grounding problem


smoliar@vaxa.isi.edu (Stephen Smoliar)
Information Sciences Institute writes:

>       Consider the holographic model proposed by Karl Pribram in LANGUAGES
>       OF THE BRAIN... as an alternative to [M.B. Brilliant's] symbol
>       manipulation scenario.

Besides being unimplemented and hence untested in what they can and can't
do, holographic representations seem to inherit the same handicap as
all iconic representations: Being unique to each input and blending
continuously into one another, how can holograms generate
categorization rather than merely similarity gradients (in the hard
cases, where obvious natural gaps in the input variation don't solve
the problem for you a priori)? What seems necessary is active
feature-selection, based on feedback from success and failure in attempts
to learn to sort and label correctly, not merely passive filtering
based on natural similarities in the input.
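[Moderator's aside: the "active feature-selection, based on feedback from
success and failure" that Harnad describes can be given a minimal modern
sketch. The code below is purely illustrative and mine, not Harnad's or
Smoliar's: a perceptron-style learner that starts with several candidate
features and, driven only by right/wrong feedback on labeling trials,
shifts its weights onto the features that actually support correct
sorting. The data and feature names are invented for the example.]

```python
# Hypothetical sketch: active feature-selection from sorting feedback.
# The learner is given candidate binary features; only feedback on whether
# each labeling attempt succeeded or failed shapes which features it uses.

def learn_features(samples, labels, n_features, epochs=50, rate=0.1):
    """Perceptron-style weighting of candidate features.
    samples: list of 0/1 feature vectors; labels: 0/1 category names."""
    weights = [0.0] * n_features
    bias = 0.0
    for _ in range(epochs):
        for x, label in zip(samples, labels):
            guess = 1 if sum(w * xi for w, xi in zip(weights, x)) + bias > 0 else 0
            error = label - guess          # feedback: 0 when the sort was correct
            for i in range(n_features):
                weights[i] += rate * error * x[i]
            bias += rate * error
    return weights, bias

# Membership depends only on feature 0; features 1 and 2 are noise that the
# learner must discover, from feedback alone, are irrelevant.
samples = [[1,0,1],[1,1,0],[0,0,1],[0,1,0],[1,1,1],[0,0,0]]
labels  = [1,1,0,0,1,0]
w, b = learn_features(samples, labels, 3)
```

After training, nearly all the weight sits on feature 0 — passive filtering
of input similarities never enters into it; only the success/failure signal does.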

>       [A] difficulty seems to be in what it means to file something away if
>       one's memory is simply one of experiences.

Episodic memory -- rote memory for input experiences -- has the same
liability as any purely iconic approach: It can't generate category
boundaries where there is significant interconfusability among
categories of episodes.

>       Perhaps the difficulty is that the mind really doesn't want to
>       assign a symbol to every experience immediately.

That's right. Maybe it's *categories* of experience that must first be
selectively assigned names, not each raw episode.

>       Where does the symbol's name come from? How is the symbol actually
>       "bound" to what it retrieves?

That's the categorization problem.

>       The big unanswered question...[with respect to connectionism]
>       would appear to be:  will [it] all... scale upward?

Connectionism is one of the candidates for the feature-learning
mechanism. That it's (i) nonsymbolic, that it (ii) learns, and that it
(iii) uses the same general statistical algorithm across problem-types
(i.e., that it has generality rather than being ad hoc, like pure
symbolic AI) are connectionism's pluses. (That it's brainlike is not,
nor is it true, on current evidence, nor even relevant at this stage.)
But the real question is indeed: How much can it really do (i.e., will it
scale up)?
--

Stevan Harnad                                  (609) - 921 7771
{bellcore, psuvax1, seismo, rutgers, packard}  !princeton!mind!harnad
harnad%mind@princeton.csnet       harnad@mind.Princeton.EDU

------------------------------

Date: 2 Jul 87 04:36:37 GMT
From: mind!harnad@princeton.edu  (Stevan Harnad)
Subject: Re: The symbol grounding problem: Against Rosch &
         Wittgenstein


dgordon@teknowledge-vaxc.ARPA (Dan Gordon)
of Teknowledge, Inc., Palo Alto CA writes:

>       finding an objective basis for a performance and getting a device to
>       do it given the same inputs are two different things. We may be able
>       to find an objective basis for a performance but be unable...to get a
>       device to exhibit the same performance. And, I suppose, the converse
>       is true: we may be able to get a device to mimic a performance without
>       understanding the objective basis for the model

I agree with part of this. J.J. Gibson argued that the objective basis of much
of our sensorimotor performance is in stimulus invariants, but this
does not explain how we get a device (like ourselves) to find and use
those invariants and thereby generate the performance. I also agree that a
device (e.g., a connectionist network) may generate a performance
without our understanding quite how it does it (apart from the general
statistical algorithm it's using, in the case of nets). But the
point I am making is neither of these. It concerns whether performance
(correct all-or-none categorization) can be generated without an
objective basis (in the form of "defining" features) (a) existing and
(b) being used by any device that successfully generates the
performance. Whether or not we know what the objective basis is
and how it's used is another matter.

>       There may in fact be categorization performances that a) do not use
>       a set of underlying features; b) have an objective basis which is not
>       feature-driven; and c) can only be simulated (in the strong sense) by
>       a device which likewise does not use features.  This is one of the
>       central prongs of Wittgenstein's attack on the positivist approach to
>       language, and although I am not completely convinced by his criticisms,
>       I haven't run across any very convincing rejoinder.

Let's say I'm trying to provide the requisite rejoinder (in the special case of
all-or-none categorization, which is not unrelated to the problems of
language: naming and description). Wittgenstein's arguments were not governed
by a thoroughly modern constraint that has arisen from the possibility of
computer simulation and cognitive modeling. He was introspecting on
what the features defining, say, "games" might be, and he failed to
find a necessary and sufficient set, so he said there wasn't one. If
he had instead asked: "How, in principle, could a device categorize
"games" and "nongames" successfully in every instance?" he would have had
to conclude that the inputs must provide an objective basis
which the device must find and use. Whether or not the device can
introspect and report what the objective basis is is another matter.

Another red herring in Wittgenstein's "family resemblance" metaphor was
the issue of negative and disjunctive features. Not-F is a perfectly good
feature. So is Not-F & Not-G. Which quite naturally yields the
disjunctive feature F-or-G. None of this is tautologous. It just shows
up a certain arbitrary myopia there has been about what a "feature" is.
There's absolutely no reason to restrict "features" to monadic,
conjunctive features that subjects can report by introspection. The
problem in principle is whether there are any logical (and nonmagical)
alternatives to a feature-set sufficient to sort the confusable
alternatives correctly. I would argue that -- apart from contrived,
gerrymandered cases that no one would want to argue formed the real
basis of our ability to categorize -- there are none.

Finally, in the special case of categorization, the criterion of "defining"
features also turns out to be a red herring. According to my own model,
categorization is always provisional and context-dependent (it depends on
what's needed to successfully sort the confusable alternatives sampled to date).
Hence an exhaustive "definition," good till doomsday and formulated from the
God's-eye viewpoint is not at issue, only an approximation that works now, and
can be revised and tightened if the context is ever widened by further
confusable alternatives that the current feature set would not be able to
sort correctly. The conflation of (1) features sufficient to generate the
current provisional (but successful) approximation and (2) some nebulous
"eternal," ontologically exact "defining" set (which I agree does not exist,
and may not even make sense, since categorization is always a relative,
"compared-to-what?" matter) has led to a multitude of spurious
misunderstandings -- foremost among them being the misconception that
our categories are all graded or fuzzy.
--

Stevan Harnad                                  (609) - 921 7771
{bellcore, psuvax1, seismo, rutgers, packard}  !princeton!mind!harnad
harnad%mind@princeton.csnet       harnad@mind.Princeton.EDU

------------------------------

Date: 2 Jul 87 15:51:40 GMT
From: mind!harnad@princeton.edu  (Stevan Harnad)
Subject: Re: The symbol grounding problem


On ailist cugini@icst-ecf.arpa writes:

>       why say that icons, but not categorical representations or symbols
>       are/must be invertible? Isn't it just a vacuous tautology to claim
>       that icons are invertible wrt to the information they preserve, but
>       not wrt the information they lose?... there's information loss (many
>       to one mapping) at each stage of the game: 1. distal object...
>       2. sensory projection... 3. icons... 4. categorical representation...
>       5. symbols... do you still claim that the transition between 2
>       and 3 is invertible in some strong sense which would not be true of,
>       say, [1 to 2] or [3 to 4], and if so, what is that sense?... Perhaps
>       you just want to say that the transition between 2 and 3 is usually
>       more invertible than the other transitions [i.e., invertibility as a
>       graded category]?

[In keeping with Ken Laws' recommendation about minimizing quotation, I have
compressed this query as much as I could to make my reply intelligible.]

Iconic representations (IRs) must perform a very different function from
categorical representations (CRs) or symbolic representations (SRs).
In my model, IRs only subserve relative discrimination, similarity
judgment and sensory-sensory and sensory-motor matching. For all of
these kinds of task, traces of the sensory projection are needed for
purposes of relative comparison and matching. An analog of the sensory
projection *in the properties that are discriminable to the organism*
is my candidate for the kind of representation that will do the job
(i.e., generate the performance). There is no question of preserving
in the IR properties that are *not* discriminable to the organism.

As has been discussed before, there are two ways that IRs could in
principle be invertible (with the discriminable properties of the
sensory projection): by remaining structurally 1:1 with it or by going
into symbols via A/D and an encryption and decryption transformation in a
dedicated  (hard-wired) system. I hypothesize that structural copies are
much more economical than dedicated symbols for generating discrimination
performance (and there is evidence that they are what the nervous system
actually uses). But in principle, you can get invertibility and generate
successful discrimination performance either way.

CRs need not -- indeed cannot -- be invertible with the sensory
projection because they must selectively discard all features except
those that are sufficient to guide successful categorization
performance (i.e., sorting and labeling, identification). Categorical
feature-detectors must discard most of the discriminable properties preserved
in IRs and selectively preserve only the invariant properties shared
by all members of a category that reliably distinguish them from
nonmembers. I have indicated, though, that this representation is
still nonsymbolic; the IR to CR transformation is many-to-few, but it
continues to be invertible in the invariant properties, hence it is
really "micro-iconic." It does not invert from the representation to
the sensory projection, but from the representation to invariant features of
the category. (You can call this invertibility a matter of degree if
you like, but I don't think it's very informative. The important
difference is functional: What it takes to generate discrimination
performance and what it takes to generate categorization
performance.)
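[Moderator's aside: the IR/CR distinction above lends itself to a toy
illustration. The sketch below is my own, not Harnad's: the iconic
representation is a structural copy of every discriminable property, hence
invertible back to the sensory projection, while the categorical
representation is a many-to-few projection onto the invariant features
only, hence "micro-iconic" — invertible in those invariants and nothing
else. The features and exemplars are invented for the example.]

```python
# Hypothetical illustration of IR (invertible copy) vs. CR (many-to-few).

INVARIANT = ("has_feathers", "has_beak")   # assumed invariants for "bird"

def to_icon(projection):
    """IR: structural copy of the discriminable properties (invertible)."""
    return dict(projection)

def to_categorical(projection):
    """CR: discard all but the invariant features (many-to-few)."""
    return {k: projection[k] for k in INVARIANT}

sparrow = {"has_feathers": 1, "has_beak": 1, "size": 3, "color": "brown"}
penguin = {"has_feathers": 1, "has_beak": 1, "size": 9, "color": "black"}

# The IR inverts to the full projection, so the two birds stay discriminable.
assert to_icon(sparrow) == sparrow and to_icon(sparrow) != to_icon(penguin)

# The CR maps both onto one representation: invertible in the invariant
# properties, while the rest of the sensory projection is lost.
assert to_categorical(sparrow) == to_categorical(penguin)
```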

Finally, whatever invertibility SRs have is entirely parasitic on the
IRs and CRs in which they are grounded, because the elementary SRs out
of which the composite ones are put together are simply the names of
the categories that the CRs pick out. That's the whole point of this
grounding proposal.

I hope this explains what is invertible and why. (I do not understand your
question about the "invertibility" of the sensory projection to the distal
object, since the locus of that transformation is outside the head and hence
cannot be part of the internal representation that cognitive modeling is
concerned with.)

--

Stevan Harnad                                  (609) - 921 7771
{bellcore, psuvax1, seismo, rutgers, packard}  !princeton!mind!harnad
harnad%mind@princeton.csnet       harnad@mind.Princeton.EDU

------------------------------

Date: 2 Jul 87 01:19:35 GMT
From: ctnews!pyramid!prls!philabs!pwa-b!mmintl!franka@unix.sri.com 
      (Frank Adams)
Subject: Re: The symbol grounding problem: Correction re.
         Approximationism

In article <923@mind.UUCP> harnad@mind.UUCP (Stevan Harnad) writes:
|In responding to Cugini and Brilliant I misinterpreted a point that
|the former had made and the latter reiterated. It's a point that's
|come up before: What if the iconic representation -- the one that's
|supposed to be invertible -- fails to preserve some objective property
|of the sensory projection? For example, what if yellow and blue at the
|receptor go into green at the icon? The reply is that an analog
|representation is only analog in what it preserves, not in what it fails
|to preserve.

I'm afraid when I parse this, using the definitions Harnad uses, it comes
out as tautologically true of *all* representations.

"Analog" means "invertible".  The invertible properties of a representation
are those properties which it preserves.  Is there some strange meaning of
"preserve" being used here?  Otherwise, I don't see how this statement has
any meaning.
--

Frank Adams                           ihnp4!philabs!pwa-b!mmintl!franka
Ashton-Tate          52 Oakland Ave North         E. Hartford, CT 06108

------------------------------

Date: 2 Jul 87 01:07:00 GMT
From: ctnews!pyramid!prls!philabs!pwa-b!mmintl!franka@unix.sri.com 
      (Frank Adams)
Subject: Re: The symbol grounding problem

In article <917@mind.UUCP> harnad@mind.UUCP (Stevan Harnad) writes:
|Finally, and perhaps most important: In bypassing the problem of
|categorization capacity itself -- i.e., the problem of how devices
|manage to categorize as correctly and successfully as they do, given
|the inputs they have encountered -- in favor of its fine tuning, this
|line of research has unhelpfully blurred the distinction between the
|following: (a) the many all-or-none categories that are the real burden
|for an explanatory theory of categorization (a penguin, after all, be it
|ever so atypical a bird, and be it ever so time-consuming for us to judge
|that it is indeed a bird, is, after all, indeed a bird, and we know
|it, and can say so, with 100% accuracy every time, irrespective of
|whether we can successfully introspect what features we are using to
|say so) and (b) true "graded" categories such as "big," "intelligent,"
|etc. Let's face the all-or-none problem before we get fancy...

I don't believe there are any truly "all-or-none" categories.  There are
always, at least potentially, ambiguous cases.  There is no "100% accuracy
every time", and trying to theorize as though there were is likely to lead
to problems.

Second, and perhaps more to the point, how do you know that "graded"
categories are less fundamental than the other kind?  Maybe it's the other
way around.  Maybe we should try to understand graded
categories first, before we get fancy with the other kind.  I'm not saying
this is the case; but until we actually have an accepted theory of
categorization, we won't know what the simplest route is to get there.
--

Frank Adams                           ihnp4!philabs!pwa-b!mmintl!franka
Ashton-Tate          52 Oakland Ave North         E. Hartford, CT 06108

------------------------------

End of AIList Digest
********************
 6-Jul-87 01:03:32-PDT,17610;000000000000
Mail-From: LAWS created at  6-Jul-87 01:00:15
Date: Mon  6 Jul 1987 00:59-PDT
From: AIList Moderator Kenneth Laws <AIList-REQUEST@STRIPE.SRI.COM>
Reply-to: AIList@STRIPE.SRI.COM
US-Mail: SRI Int., 333 Ravenswood Ave., Menlo Park, CA  94025
Phone: (415) 859-6467
Subject: AIList Digest   V5 #169
To: AIList@STRIPE.SRI.COM


AIList Digest             Monday, 6 Jul 1987      Volume 5 : Issue 169

Today's Topics:
  Theory - "Fuzzy" Categories?

----------------------------------------------------------------------

Date: 2 Jul 87 01:44:00 GMT
From: ctnews!pyramid!prls!philabs!pwa-b!mmintl!franka@unix.sri.com 
      (Frank Adams)
Subject: Re: The symbol grounding problem: "Fuzzy" categories?

In article <936@mind.UUCP> harnad@mind.UUCP (Stevan Harnad) writes:
|The question was: Do all-or-none categories (such as "bird") have "defining"
|features that can be used to sort members from nonmembers at the level of
|accuracy (~100%) with which we sort? However they are coded, I claim that
|those features MUST exist in the inputs and must be detected and used by the
|categorizer. A penguin is not a bird as a matter of degree, and the features
|that reliably assign it to "bird" are not graded.

I don't see how this follows.  It is quite possible to make all-or-none
judgements based on graded features.  Thermostats, for example, do it all
the time.  People do, too.  The examples which come to mind as being
obviously in this category are all judgements of actions to take based on
such features, not of categorization.  But then, we don't understand how we
categorize.

But to take an example of categorizing based on a graded feature.  Consider
a typical, unadorned, wooden kitchen chair.  We have no problem categorizing
this as a "chair".  Consider the same object, with no back.  This is
clearly categorized as a "stool", and not a "chair".  Now vary the size of
the back.  With a one inch back, the object is clearly still a "stool"; with
a ten inch back, it is clearly a "chair"; somewhere in between is an
ambiguous point.

I would assert that we *do*, in fact, make "all-or-none" type distinctions
based precisely on graded distinctions.  We have arbitrary (though vague)
cut off points where we make the distinction; and those cut off points are
chosen in such a way that ambiguous cases are rare to non-existent in our
experience[1].

In short, I see nothing about "all-or-none" categories which is not
explainable by arbitrary cutoffs of graded sensory data.

---------------
[1] There are some categories where this strategy does not work.  Colors are
a good example of this -- they vary over all of their range, with no very
rare points in it.  In this case, we use instead the strategy of large
overlapping ranges -- two people may disagree on whether a color should be
described as "blue" or "green", but both will accept "blue-green" as a
description.  The same underlying strategy applies: avoid borderline
situations.
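[Moderator's aside: both strategies Adams describes — an arbitrary cutoff
on a graded feature, and large overlapping ranges for continua like color —
can be sketched briefly. The code below is purely illustrative and mine;
the cutoff values and hue ranges are invented, not claims about actual
perceptual boundaries.]

```python
# Hypothetical sketch: all-or-none verdicts from graded features.

def stool_or_chair(back_height_inches, cutoff=5.0):
    """An arbitrary cutoff turns the graded feature "back height"
    into the all-or-none pair stool/chair."""
    return "chair" if back_height_inches >= cutoff else "stool"

def color_labels(hue_degrees):
    """Overlapping ranges: a borderline hue earns more than one label,
    so both disputants can accept "blue-green"."""
    ranges = {"green": (90, 190), "blue-green": (160, 220), "blue": (190, 270)}
    return [name for name, (lo, hi) in ranges.items() if lo <= hue_degrees <= hi]

assert stool_or_chair(1) == "stool" and stool_or_chair(10) == "chair"
assert color_labels(120) == ["green"]                  # clear case
assert color_labels(175) == ["green", "blue-green"]    # borderline case
```

The one-inch and ten-inch backs fall cleanly on either side of the cutoff;
only hues near a range boundary pick up the hedged compound label.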
--

Frank Adams                           ihnp4!philabs!pwa-b!mmintl!franka
Ashton-Tate          52 Oakland Ave North         E. Hartford, CT 06108

------------------------------

Date: 3 Jul 87 12:43:39 GMT
From: ihnp4!homxb!houdi!marty1@ucbvax.Berkeley.EDU  (M.BRILLIANT)
Subject: Re: The symbol grounding problem

In article <958@mind.UUCP>, harnad@mind.UUCP (Stevan Harnad) writes:
> On ailist cugini@icst-ecf.arpa writes:
> >     why say that icons, but not categorical representations or symbols
> >     are/must be invertible? Isn't it just a vacuous tautology to claim
> >     that icons are invertible wrt to the information they preserve, but
> >     not wrt the information they lose?... there's information loss (many
> >     to one mapping) at each stage of the game ...

In Harnad's response he does not answer the question "why?"  He
only repeats the statement with reference to his own model.

Harnad probably has either a real problem or a contribution to
the solution of one.  But when he writes about it, the verbal
problems conceal it, because he insists on using symbols that
are neither grounded nor consensual.  We make no progress unless
we learn what his terms mean, and either use them or avoid them.

M. B. Brilliant                                 Marty
AT&T-BL HO 3D-520       (201)-949-1858
Holmdel, NJ 07733       ihnp4!houdi!marty1

------------------------------

Date: 3 Jul 87 19:26:40 GMT
From: mind!harnad@princeton.edu  (Stevan Harnad)
Subject: Re: The symbol grounding problem: "Fuzzy" categories?


In Article 176-8 of comp.cog-eng: franka@mmintl.UUCP (Frank Adams)
of Multimate International, E. Hartford, CT, writes:

>       I don't believe there are any truly "all-or-none" categories. There are
>       always, at least potentially, ambiguous cases... no "100% accuracy
>       every time"...  how do you know that "graded" categories are less
>       fundamental than the other kind?

On the face of it, this sounds self-contradictory, since you state
that you don't believe "the other kind" exists. But let's talk common sense.
Most of our object categories are indeed all-or-none, not graded. A
penguin is not a bird as a matter of degree. It's a bird, period. And
if we're capable of making that judgment reliably and categorically,
then there must be something about our transactions with penguins that
allows us to do so. In the case of sensory categories, I'm claiming
that a sufficient set of sensory features is what allows us to make
reliable all-or-none judgments; and in the case of higher-order
categories, I claim they are grounded in the sensory ones (and their
features).

I don't deny that graded categories exist too (e.g., "big," "smart"), but
those are not the ones under consideration here. And, yes, I
hypothesize that all-or-none categories are more fundamental in the
problem of categorization and its underlying mechanisms than graded
categories. I also do not deny that regions of uncertainty (and even
arbitrariness) -- natural and contrived -- exist, but I do not think that
those regions are representative of the mechanisms underlying successful
categorization.

The book under discussion ("Categorical Perception: The Groundwork of
Cognition") is concerned with the problem of how graded sensory continua
become segmented into bounded all-or-none categories (e.g., colors,
semitones). This is accomplished by establishing upper and lower
thresholds for regions of the continuum. These thresholds, I must
point out, are FEATURES, and they are detected by feature-detectors.
The rest is a matter of grain: If you are speaking at the level of
resolution of our sensory acuity (the "jnd" or just-noticeable-difference),
then there is always a region of uncertainty at the border of a category,
dependent on the accuracy and sensitivity of the threshold-detector.

But discrimination grain is not the right level of analysis for
questions about higher-order sensory categories, and all-or-none
categorization in general. The case for the putative "gradedness" of
"penguin"'s membership in the category "bird" is surely not being
based on the limits of sensory acuity. If it is, I'll concede at once,
and add that that sort of gradedness is trivial; the categorization
problem is concerned with identification grain, not discrimination grain.
All categories will of course be fuzzy at the limits of our sensory
resolution capacity. My own grounding hypothesis BEGINS with
bounded sensory categories (modulo threshold uncertainty) and attempts
to ground the rest of our category hierarchy bottom-up on those.

Finally, as I've stressed in responses to others, there's one other
form of category uncertainty I'm quite prepared to concede, but that
likewise fails to imply that category membership is a matter of
degree: All categories -- true graded ones as well as all-or-none ones
-- are provisional and approximate, relative to the context of
interconfusable members and nonmembers that have been sampled to date. If
the sample ever turns out to have been nonrepresentative, the feature-set that
was sufficient to generate successful sorting in the old context must
be revised and updated to handle the new, wider context. Anomalies and
ambiguities that had never occurred before must now be handled. But what
happens next (if all-or-none sorting performance can be successfully
re-attained at all) is just the same as with the initial category learning
in the old context: A set of features must be found that is sufficient to
subserve correct performance in the extended context. The approximation
must be tightened. This open-endedness of all of our categories, however, is
really just a symptom of inductive risk rather than of graded representations.

>       "Analog" means "invertible". The invertible properties of a
>       representation are those properties which it preserves...[This
>       sounds] tautologically true of *all* representations.

For the reply to this, see my response to Cugini, whose criticism you
cite. Sensory icons need only be invertible with the discriminable properties
of the sensory projection. There is no circularity in this. And in a dedicated
system invertibility at various stages may well be a matter of degree, but
this has nothing to do with the issue of graded/nongraded category membership,
which is much more concerned with selective NONinvertibility.

>       It is quite possible to make all-or-none judgements based on graded
>       features [e.g., thermostats]

Apart from (1) thresholds (which are features, and which I discussed
earlier), (2) probabilistic features so robust as to be effectively
all-or-none, and (3) gerrymandered examples (usually playing on the
finiteness of the cases sampled, and the underdetermination of the
winning feature set), can you give examples?

>       "chair"... with no back... [is a] "stool"... Now vary the size
>       of the back

The linguist Labov, with examples such as cup/bowl, specialized in
finding graded regions for seemingly all-or-none categories.
Categorization is always a context-dependent, "compared-to-what"
task. Features must reliably sort the members from the nonmembers
they can be confused with. Sometimes nature cooperates and gives us
natural discontinuities (horses could have graded continuously into
zebras). Where she does not, we have only one recourse left: an
all-or-none sensory threshold at some point in the continuum. One can
always generate a real or hypothetical continuum that would foil our
current feature-detectors and necessitate a threshold-detector. Such
cases are only interesting if they are representative of the actual
context of confusable alternatives that our category representation
must resolve. Otherwise they are not informative about our actual
current (provisional) feature-set.

>       I see nothing about "all-or-none" categories which is not explainable
>       by arbitrary cutoffs of graded sensory data... [and] avoid[ing]
>       borderline situations.

Neither do I. (Most feature-detection problems, by the way, do not
arise from the need to place thresholds along true continua, but from
the problem of underdetermination: there are so many features that it
is hard to find a set that will reliably sort the confusable
alternatives into their proper all-or-none categories.)
--

Stevan Harnad                                  (609) - 921 7771
{bellcore, psuvax1, seismo, rutgers, packard}  !princeton!mind!harnad
harnad%mind@princeton.csnet       harnad@mind.Princeton.EDU

------------------------------

Date: 5 Jul 87 00:51:01 GMT
From: sher@cs.rochester.edu  (David Sher)
Subject: Re: The symbol grounding problem: "Fuzzy" categories?

In article <967@mind.UUCP> harnad@mind.UUCP (Stevan Harnad) writes:
>
>Most of our object categories are indeed all-or-none, not graded. A
>penguin is not a bird as a matter of degree. It's a bird, period.

Just for the record, is this an offhand statement, or are you speaking
as an expert when you say most of our categories are all-or-none?  Do
you have some psychology experiments that measure the size of human
category spaces and, using a metric on them, show that most categories
are of this form?  Can I quote you on this?  Personally I have trouble
imagining how to test such a claim, but psychologists are clever
fellows.
--
-David Sher
sher@rochester
{ seismo , allegra }!rochester!sher

------------------------------

Date: 5 Jul 87 04:52:30 GMT
From: mind!harnad@princeton.edu  (Stevan Harnad)
Subject: Re: The symbol grounding problem: "Fuzzy" categories?


In Article 185 of comp.cog-eng sher@rochester.arpa (David Sher) of U of
Rochester, CS Dept, Rochester, NY responded as follows to my claim that
"Most of our object categories are indeed all-or-none, not graded. A penguin
is not a bird as a matter of degree. It's a bird, period." --

>       Personally I have trouble imagining how to test such a claim...

Try sampling concrete nouns in a dictionary.
--

Stevan Harnad                                  (609) - 921 7771
{bellcore, psuvax1, seismo, rutgers, packard}  !princeton!mind!harnad
harnad%mind@princeton.csnet       harnad@mind.Princeton.EDU

------------------------------

Date: 5 Jul 87 05:29:02 GMT
From: mind!harnad@princeton.edu  (Stevan Harnad)
Subject: Re: The symbol grounding problem


In Article 184 of comp.cog-eng: adam@gec-mi-at.co.uk (Adam Quantrill)
of Marconi Instruments Ltd., St. Albans, UK writes:

>       It seems to me that the Symbol Grounding problem is a red herring.
>       If I took a partially self-learning program and data (P & D) that had
>       learnt from a computer with 'sense organs', and ran it on a computer
>       without, would the program's output become symbolically ungrounded?...
>       [or] if I myself wrote P & D without running it on a computer at all?

This begs two of the central questions that have been raised in
this discussion: (1) Can one speak of grounding in a toy device (i.e.,
a device with performance capacities less than those needed to pass
the Total Turing Test)? (2) Could the TTT be passed by just a symbol
manipulating module connected to transducers and effectors? If a
device that could pass the TTT were cut off from its transducers, it
would be like the philosophers' "brain in a vat" -- which is not
obviously a digital computer running programs.
--

Stevan Harnad                                  (609) - 921 7771
{bellcore, psuvax1, seismo, rutgers, packard}  !princeton!mind!harnad
harnad%mind@princeton.csnet       harnad@mind.Princeton.EDU

------------------------------

Date: 5 Jul 87 02:47:25 GMT
From: ihnp4!twitch!homxb!houdi!marty1@ucbvax.Berkeley.EDU 
      (M.BRILLIANT)
Subject: Re: The symbol grounding problem

In article <605@gec-mi-at.co.uk>, adam@gec-mi-at.co.uk (Adam Quantrill) writes:
> It seems to me that the Symbol Grounding problem is  a   red   herring.

As one who was drawn into a problem that is not my own, let me
try answering that disinterestedly.  To begin with, a "red
herring" is something drawn across the trail that distracts the
pursuer from the real goal.  Would Adam tell us what his real
goal is?

Actually, my own real goal, from which I was distracted by the
symbol grounding problem, was an expert system that would (like
Adam's last example) ground its symbols only in terminal I/O.
But that's a red herring in the symbol grounding problem.

  .....  If I took  a  partially self-learning program and data (P & D)
  that had learnt from a computer with 'sense organs',  and  ran it  on a
  computer  without,  would  the program's output become symbolically
  ungrounded?

No, because the symbolic data was (were?) learned from sensory
data to begin with - like a sighted person who became blind.

  Similarly, if I myself wrote P & D without running it on a  computer
  at  all, [and came] up with identical P & D by analysis.  Does  that
  make the  original  P  & D running on  the  computer with
  'sense organs' symbolically ungrounded?

No, as long as the original program learned its symbolic data
from its own sensory data, not by having them defined by a
person in terms of his or her sensory data.

  A computer can  always  interact  via  the  keyboard  &  terminal
  screen,   (if those   are  the only 'sense organs'), grounding its
  internal symbols via people who react to the output, and  provide
  further stimulus.

That's less challenging and less useful than true symbol
grounding.  One problem that requires symbol grounding (more
useful and less ambitious than the Total Turing Test) is a
seeing-eye robot: a machine with artificial vision that could
guide a blind person by giving and taking verbal instructions.
It might use a Braille keyboard instead of speech, but the
"terminal I/O" must be "grounded" in visual data from, and
constructive interaction with, the tangible world.  The robot
could learn words for its visual data by talking to people who
could see, but it would still have to relate the verbal symbols
to visual data, and give meaning to the symbols in terms of its
ultimate goal (keeping the blind person out of trouble).

M. B. Brilliant                                 Marty
AT&T-BL HO 3D-520       (201)-949-1858
Holmdel, NJ 07733       ihnp4!houdi!marty1

------------------------------

End of AIList Digest
********************
 6-Jul-87 01:05:39-PDT,14968;000000000000
Mail-From: LAWS created at  6-Jul-87 01:04:24
Date: Mon  6 Jul 1987 01:02-PDT
From: AIList Moderator Kenneth Laws <AIList-REQUEST@STRIPE.SRI.COM>
Reply-to: AIList@STRIPE.SRI.COM
US-Mail: SRI Int., 333 Ravenswood Ave., Menlo Park, CA  94025
Phone: (415) 859-6467
Subject: AIList Digest   V5 #170
To: AIList@STRIPE.SRI.COM


AIList Digest             Monday, 6 Jul 1987      Volume 5 : Issue 170

Today's Topics:
  Theory - Symbol Grounding Metadiscussion

----------------------------------------------------------------------

Date: 3 Jul 87 01:02:48 GMT
From: mnetor!utzoo!utgpu!water!watmath!watcgl!ksbooth@seismo.css.gov
Subject: Re: The symbol grounding problem - please start your own
         newsgroup

Hooray for David Harwood.

------------------------------

Date: 5 Jul 87 05:39:38 GMT
From: mind!harnad@princeton.edu  (Stevan Harnad)
Subject: Re: The symbol grounding problem - please start your own
         newsgroup


In Article 186 of comp.cog-eng, ksbooth@watcgl.waterloo.edu (Kelly Booth)
of U. of Waterloo, Ontario writes:

>       Hooray for David Harwood.

David Harwood has made two very rude requests that I stop the symbol grounding
discussion, which I ignored. But perhaps it's time to take a poll. Please send
me e-mail indicating whether or not you find the discussion useful and worth
continuing. I promise to post and abide by the results.
--

Stevan Harnad                                  (609) - 921 7771
{bellcore, psuvax1, seismo, rutgers, packard}  !princeton!mind!harnad
harnad%mind@princeton.csnet       harnad@mind.Princeton.EDU

------------------------------

Date: 5 Jul 87 05:05:53 GMT
From: mind!harnad@princeton.edu  (Stevan Harnad)
Subject: Re: The symbol grounding problem: 3 routes to grounding
         needed?


In Article 181 of comp.cog-eng berleant@ut-sally.UUCP (Dan Berleant)
of U. Texas CS Dept., Austin, Texas writes:

>       may not be much difference between a classical view augmented to...
>       *arbitrary* boolean expressions of features...and a probabilistic view

I agree that such a probabilistic representation is possible. Now the question
is, will it work, is it economical (and is it right)? Note, though, that even
graded (probabilistic) individual features must yield an all-or-none feature
SET. So even this would not be evidence of graded membership. (I don't think
you'd disagree.)

>       need to...explain...typicality and reaction time results...interpreted
>       as supporting probabilistic and exemplar-based category representations

Yes, but it seems only appropriate that we should account for the
categorization performance capacity itself before we worry about its
fine tuning. (Experimental psychology has a long history of bypassing
the difficult but real problems underlying our behavioral capacities
and fixating instead on fine-tuning.)

>       may [be] 2 representations for categories: a 'core' of defining features
>       and a heuristic categorizer... 2 pathways [grounding] categories

You may be right. It's an empirical question whether the heuristic component
will be necessary to generate successful performance. If it is, it is still not
obvious that the need for it would be directly related to the grounding problem.

>       [Re:] Anders Weinstein [on] the semantic meaning of...thunder/...`angry
>       gods nearby'...: The terms in the definition presumably are grounded
>       via the 2 routes discussed above... [now] Consider a sentence with 2
>       variables, e.g. FISH SWIM... Obviously, many bindings would satisfy
>       the sentence. [But]...by adding many more true sentences, the possible
>       bindings of the variables become much more constrained.

I accepted this argument the first time you made it. I think it's
right; I've made similar degrees-of-freedom arguments against Quine myself,
and I've cross-referenced your point in my response to Weinstein. I
don't believe, though, that this reduction of the degrees of freedom
of the interpretation (even to zero) is sufficient to ground a symbol
system. Even if there's only one way to interpret an entire language,
the decryption must be performed; and it's not enough that the mapping
should be into a natural language (that's still a symbol/symbol
relation, leaving the entire edifice hanging by a skyhook of derived
rather than intrinsic meaning). The mapping must be into the world.

But, in any case, you seem to rescind your degrees-of-freedom
argument immediately after you make it:

>       On the other hand... Maybe a Martian [or] your neighbor... could
>       figure out [an alternative] way to do it consistently... but as long
>       as you both agree on the truthfulness of all the sentences you are
>       mutually aware of, there is no way to tell! Shades of the Turing test...

This is standard Quinean indeterminacy again! So you don't believe
your degrees-of-freedom argument! Well I do. And it's partly because
of degrees-of-freedom and convergence considerations that I am so
sanguine about the TTT. (I called this the "convergence" argument in
"Minds, Machines and Searle": There may be many arbitrary ways to
successfully model a toy performance, but as you move toward the TTT,
the degrees of freedom shrink.)

>       would this method of 'grounding' the semantics of categories be
>       sufficient to do the job? Only in theory? Potentially in practice? ...

I think it would not (although it may simplify the task of grounding
somewhat). Even if only one interpretation is possible, it must be
intrinsic, not derivative.

>       Are you assuming a representation of episodes (more generally,
>       exemplars) that is iconic rather than symbolic?

Yes, I am assuming that episodic representations would be iconic. This is
related to the distinction in the human memory literature concerning
"episodic" vs. "semantic" memory. The former involves qualitative
recall for when something happened (e.g., Kennedy's assassination) and
the particulars of the experience; the latter involves only the
*product* of past learning (e.g., knowing how to ride a bicycle, do
calculus or speak English). It's much harder to imagine how the former
could be symbolic (although, of course, there are "constructive" memory
theories such as Bartlett's that suggest that what we remember as an
episode may be based on reconstruction and logical inference...).

>       *no* category representation method can generate category boundaries
>       when there is significant interconfusability among categories!

I would be very interested to know your basis for this assertion
(particularly as "significant interconfusability" is not exactly a
quantitative predicate). If I had said "complete indeterminacy," or even
"radical underdetermination" (say, features that would require
exponential search to find), I could understand why you would say this
-- but significant interconfusability... Can you remember first
looking at cellular structures under a microscope? Have you seen Inuit snow
taxonomies? Have you ever tried serious mushroom-picking? Or chicken
sexing? Or tumor identification? Art classification? Or, to pick some
more abstract examples: paleolinguistic taxonomy? ideological
typologizing? or problems at the creative frontiers of pure mathematics?
--

Stevan Harnad                                  (609) - 921 7771
{bellcore, psuvax1, seismo, rutgers, packard}  !princeton!mind!harnad
harnad%mind@princeton.csnet       harnad@mind.Princeton.EDU

------------------------------

Date: 5 Jul 87 18:34:37 GMT
From: bloom-beacon!bolasov@husc6.harvard.edu  (Benjamin I Olasov)
Subject: Re: The symbol grounding problem - please start your own
         newsgroup

I personally don't feel that it's Harwood's place to make a
recommendation such as the one he made (rude or otherwise).  If the
discussion is germane to the stated purpose(s) of the newsgroup
(which it is), and is carried on in an intellectually responsible
manner (which it certainly has been), why should it not be allowed to
continue?

Isn't the solution for those who don't find the topic interesting to
simply not read the messages bearing that topic on the subject line?
After all, any number of discussions can be carried on concurrently.

------------------------------

Date: 5 Jul 87 17:31:15 GMT
From: harwood@cvl.umd.edu  (David Harwood)
Subject: Re: The symbol grounding problem - please start your own
         newsgroup

In article <977@mind.UUCP> harnad@mind.UUCP (Stevan Harnad) writes:
>
>David Harwood has made two very rude requests that I stop the symbol grounding
>discussion, which I ignored. But perhaps it's time to take a poll. Please send
>me e-mail indicating whether or not you find the discussion useful and worth
>continuing. I promise to post and abide by the results.

        As I have told others, I don't really want you to quit posting
altogether to this or other newsgroups. And I would be glad for you to
form your own group for your "dialogues," such as they are. But I have
to complain about your insufferable postings on two grounds: (i) they
have nearly nothing to do with computer science, nevertheless preoccupy
comp.ai with your various and sundry self-referential, just vaguely
intelligible musings; (ii) your postings, in my opinion, are the height,
width, and breadth of unresponsive, presumptuous, and condescending
twaddle. Worse than anything which I've read which was contributed as
an original article to BBS, for example. (Of course, as my colleagues
advise, BBS does not publish my research - and is unlikely to in the
near distant future. Such are the wages of public sin.)
        Yes, my two replies to you were sarcastic (more than "very rude,"
I think; I never received any serious complaint about either, perhaps
because others knew what I meant, even if they did not quite agree with
me.)
        Let me give you back an illustration of how you talk. You just
a moment ago replied to D.S., who questioned what psychological evidence you
have that perceptual categorization is usually "all-or-none." He seemed to
question your expertise as a perceptual psychologist. (I might add that
you have tried to impress us with generally slighting remarks about
psychologists as well as computer scientists, but this may be a "policy
of controversy," perhaps used to secure competitive funding - who knows. ;-)
        Anyway, your one-line reply did not answer the question, but was
more of a silly riposte, something like, "Check the concrete nouns in
your dictionary." He asks you something, and you ignore this. Or, taking
you seriously, you tell him to go supply his own evidence for your claims.
(I suppose that if he were your research assistant, that you would sagely
explain that a "concrete" noun is one admitting "all-or-none" categorization.)
        I have no prejudice concerning your views - to be sure, I rarely
can make sense of them. But I wish you would simply take your own advice,
"Check the concrete nouns of your dictionary," and use them sometimes to
good effect in your postings. Define your abstractions. Cite evidence for
your speculations. Do not cite your own damn article like a parrot. If you
prefer, post the damn thing, which has got to be more intelligible than
your recent stuff, and we will be done with this particular "symbol grounding
problem."
        Then I will look forward to your new occasional postings, even
in this newsgroup.

David Harwood

------------------------------

Date: 5 Jul 87 21:48:28 GMT
From: harwood@cvl.umd.edu  (David Harwood)
Subject: Re: The symbol grounding problem - please start your own
         newsgroup


        Letter sent by email to Stevan Harnad (with postscript added)
re his postings to comp.ai about "the symbol grounding problem."

        I don't want you to quit posting altogether - I would just like
you to realize that you are hogging comp.ai with what seems, to me at least,
to be mostly pompous and unintelligible postings, that have very little to
do with computer science. I heard from a student colleague, who is not opposed
to a "cognitive science" viewpoint (if this means anything to you), that the
first thing you did to explain your views at a recent colloquium was make
reference to your net discussions.
        My oh my, either you are a modest comedian, or these dialogues of
yours - why - if even they be blarney and posing of feathers - why they be
verily verily immortal.
        You have made your views, whatever these are, resoundingly renowned
- by, I suppose, half or more of the recent volume of comp.ai. I simply wish
you'd pipe down for awhile, especially about your "symbol grounding problem."
        I will be especially verily verily glad to see you post the source
code which implements your theoretical improvements; this should keep us off
the streets for awhile; and I will try to be first to applaud your success.

        David Harwood

        Computer Vision Laboratory
        Center for Automation Research
        University of Maryland

My views are simply my own. Please note all typos and mistakes, as I prepare
to publish an edition (with permission which is surely forthcoming) of
_Recent Contributions to the Dialogue de Problem Profundo Symbo-Grundo:
New Foundations and New Vocations in Computer Science_.
[This postscript added to my letter emailed S.H.]

------------------------------

Date: 6 Jul 87 02:19:01 GMT
From: bloom-beacon!bolasov@husc6.harvard.edu  (Benjamin I Olasov)
Subject: Re: The symbol grounding problem - please start your own
         newsgroup

In article <2328@cvl.umd.edu> harwood@cvl.UUCP (David Harwood) writes:
>       I don't want you to quit posting altogether - I would just like
>you to realize that you are hogging comp.ai with what seems, to me at least,
>to be mostly pompous and unintelligible postings, that have very little to
>do with computer science.
         ^^^^^^^^ ^^^^^^^

This point should not need to be made, but this newsgroup doesn't deal
exclusively with computer science issues per se.  Many important
contributions to AI, after all, have come from outside the field of CS,
as conventionally understood - much of Marvin Minsky's research, for example,
is not restricted to CS, and yet has significant implications for AI.

Some of the most challenging and interesting problems of AI are philosophical
in nature.  I frankly don't see why this fact should disturb anyone.

Perhaps if more of us pursued our theoretical models with comparable rigor
to that with which Mr. Harnad pursues his, the balance of topics represented
on comp.ai might shift .....

------------------------------

End of AIList Digest
********************
 6-Jul-87 01:11:40-PDT,14518;000000000001
Mail-From: LAWS created at  6-Jul-87 01:09:35
Date: Mon  6 Jul 1987 01:07-PDT
From: AIList Moderator Kenneth Laws <AIList-REQUEST@STRIPE.SRI.COM>
Reply-to: AIList@STRIPE.SRI.COM
US-Mail: SRI Int., 333 Ravenswood Ave., Menlo Park, CA  94025
Phone: (415) 859-6467
Subject: AIList Digest   V5 #171
To: AIList@STRIPE.SRI.COM


AIList Digest             Monday, 6 Jul 1987      Volume 5 : Issue 171

Today's Topics:
  Queries - AI Expert Source for Hopfield Nets &
    Liability of Expert System Developers,
  Programming - Software Reuse,
  Scientific Method - Psychology vs. AI & Why AI is not a Science

----------------------------------------------------------------------

Date: 2 Jul 87 20:45:24 GMT
From: ucsdhub!dcdwest!benson@sdcsvax.ucsd.edu  (Peter Benson)
Subject: AI Expert source for Hopfield Nets

I am looking for the source mentioned in Bill Thompson's
article on Hopfield Nets in the July, 1987 issue of
AI Expert magazine.  At one time, someone was posting all the
sources, but has, apparently, stopped.  Could that person,
or some like-minded citizen, post the source for this
Travelling Salesman solution?

Thanks in advance !!

--
Peter Benson                    | ITT Defense Communications Division
(619)578-3080                   | 10060 Carroll Canyon Road
ucbvax!sdcsvax!dcdwest!benson   | San Diego, CA 92131
dcdwest!benson@SDCSVAX.EDU      |

------------------------------

Date: 5 Jul 87 22:00:58 GMT
From: bloom-beacon!bolasov@husc6.harvard.edu  (Benjamin I Olasov)
Subject: Liability of Expert System Developers

I'm told that a hearing is now underway which would set a legal precedent
for determining the extent of liability to be borne by software developers
for the performance of expert systems authored by them. Does anyone have
details on this?

------------------------------

Date: 4 Jul 87 21:19:48 GMT
From: jbn@glacier.stanford.edu  (John B. Nagle)
Subject: Re: Software Reuse  --  do we really know what it is ? (long)


     The trouble with this idea is that we have no good way to express
algorithms "abstractly".  Much effort was put into attempting to do so
in the late 1970s, when it looked as if program verification was going to
work.  We know now that algebraic specifications (of the Parnas/SRI type)
are only marginally shorter than the programs they specify, and much
less readable.  Mechanical verification that programs match formal
specifications turned out not to be particularly useful for this reason.
(It is, however, quite possible; a few working systems have been
constructed, including one by myself and several others described in
ACM POPL 83).

     We will have an acceptable notation for algorithms when each algorithm
in Knuth's "Art of Computer Programming" is available in machineable form
and can be used without manual modification for most applications for which
the algorithm is applicable.  As an exercise for the reader, try writing
a few of Knuth's algorithms as Ada generics and make them available to
others, and find out if they can use them without modifying the source
text of the generics.

     In practice, there now is a modest industry in reusable software
components; see the ads in any issue of Computer Language.  Worth noting
is that most of these components are in C.

                                        John Nagle

------------------------------

Date: 02 Jul 87 09:55:35 EDT (Thu)
From: sas@bfly-vax.bbn.com
Subject: Don Norman's comments on time perception and AI
         philosophizing


Actually, many studies have been done on time perception. One rather
interesting one reported some years back in Science showed that time
and size scale together.  Smaller models (mannikins in a model office
setting) move faster.  It was kind of a neat paper to read.

I agree that AI suffers from a decidedly non-scientific approach.
Even when theoretical physicists flame about liberated quarks and the
anthropic principle, they usually have some experiments in mind. In
the AI world we get thousands of bytes on the "symbol grounding
problem" and very little evidence that symbols have anything to do
with intelligence and thought. (How's that for Drano[tm] on troubled
waters?)

There have been a lot of neat papers on animal (and human) learning
coming out lately.  Maybe the biological brain hackers will get us
somewhere - at least they look for evidence.

                                        Probably overstating my case,
                                                Seth

------------------------------

Date: Thu 2 Jul 87 12:10:08-PDT
From: PAT <HAYES@SPAR-20.ARPA>
Subject: Re: AIList Digest   V5 #165

HEY, DON!!! RIGHT ON!

Pat Hayes


  [Donald Norman, I presume.  -- KIL]

------------------------------

Date: 3 Jul 87 18:01:33 GMT
From: nosc!humu!uhccux!stampe@sdcsvax.ucsd.edu (David Stampe)
Subject: Re: On how AI answers psychological issues

norman%ics@SDCSVAX.UCSD.EDU (Donald A. Norman) writes:
> Thinking about "how the mind works" is fun, but not science, not
> the way to get to the correct answer.

In fact it's the ONLY way to get the correct answer.  Experiments
don't design themselves, and they don't interpret their own results.

We don't see with outward eyes or hear with outward ears alone.  The
outward perception or behavior does not exist without the inward one.
If you practice your remembered violin in your imagination, while your
actual violin is being repaired, you, as well as the violin, may sound
much better when the repairs are finished.

I am a linguist.  I write a tongue twister on the board that my
students haven't heard before: 'Unique New York Unique New York Unique New
York....'  They watch silently, but when I ask them what errors
this tongue twister induces, they immediately name the very errors I
discovered before class, when I tried to pronounce it aloud.  You
didn't have to say it aloud, either, did you?

It is not introspection that is AI's trouble.  It is that an expert
system, for example, isn't likely to model expertise correctly until
it is designed by someone who is himself the expert, or who knows how
to discover the nature of the expert's typically unconscious wisdom.
Linguistics has struggled for over a century to develop tools for
learning how human beings acquire and use language.  It seems likely
that a comparable struggle will be required to learn how the expert
diagnostician, welder, draftsman, or reference librarian does what he
or she does.

I often feel that when a good student of language takes a job building
a natural language interface for some AI project, in her work --
though it may be viewed by others in the project as marginal, if not
menial -- she is more likely to turn up something of scientific import
than are those working on the core of the project.  This is just
because she has spent years learning to learn how experts -- in this
case language users -- do what they do.  On the other hand, she is not
likely to believe that programs can realistically model much of the
human linguistic faculty, at least in the imaginable future.  For
example, computer parsers presuppose grammars.  But it is not clear
whether children, the only devices so far known to have mastered any
natural language, come equipped with any analogous utilities.

David Stampe, Linguistics, Univ. of Hawaii

------------------------------

Date: Thu, 2 Jul 87 22:36:05 edt
From: amsler@flash.bellcore.com (Robert Amsler)
Subject: Re: thinking about thinking not being science


I think Don Norman's argument is true for cognitive psychologists,
but may not be true for AI researchers. The reason is that the two
groups seek different answers. If AI were only the task of finding
out how people work, then it would be valid to regard armchair
reasoning as an invalid form of speculation. One can study
people directly (this is the old ``stop arguing over the number of
teeth in a horse's mouth and go outside and count them'' argument).
However, some AI researchers are really engineers at heart. The
question then is not how do people work, but how could processes
providing comparable performance quality to those of humans be made
to work in technological implementations.  `Could' is important.
Airplanes are clearly not very good imitations of birds.  They are
too big, for one thing. They have wheels instead of feet, and the
list goes on and on (no feathers!).  Speculating about flight might
lead to building other types of aircraft (as certainly those now
humorous old films of early aviation experiments show), but it would
certainly be a bad procedure to follow to understand birds and how
they fly. Speculating about why the $6M man appears as he does
while running is a tad off the beaten path for AILIST, but that
process of speculation is hardly worthless for arriving at novel
means of representing memory or perception FOR COMPUTER SYSTEMS.

Let's not squabble over the wrong issue.  The problem is that the
imagery of the $6M man's running is just too weak as a springboard for
much directed thought and the messages (including my own earlier
reply) are just rambling off in directions more appropriate to
SF-Lovers than AILIST. I do agree that the CURRENT discussion isn't
likely to lead anywhere--but not that the method of armchair
speculation is invalid in AI.

------------------------------

Date: Fri, 3 Jul 87 07:29:41 pdt
From: norman%ics@sdcsvax.ucsd.edu (Donald A. Norman)
Subject: Why AI is not a science


A private message to me in response to my recent AI List posting,
coupled with general observations, led me to realize why so many of us
otherwise friendly folks in the sciences that neighbor AI can be so
frustrated with AI's casual attitude toward theory: AI is not a science
and its practitioners are woefully untutored in scientific method.

At the recent MIT conference on Foundations of AI, Nils Nilsson stated
that AI was not a science, that it had no empirical content, nor
claims to empirical content, that it said nothing of any empirical
value.  AI, stated Nilsson, was engineering.  No more, no less.  (And
with that statement he left to catch an airplane, stopping further
discussion.)  I objected to the statement, but now that I consider it
more deeply, I believe it to be correct and to reflect the
dissatisfaction people like me (i.e., "real scientists") feel with AI.
The problem is that most folks in AI think they are scientists and
think they have the competence to pronounce scientific theories about
almost any topic, but especially about psychology, neuroscience, or
language.  Note that perfectly sensible disciplines such as
mathematics and philosophy are also not sciences, at least not in the
normal interpretation of that word.  It is no crime not to be a
science.  The crime is to think you are one when you aren't.

AI worries a lot about methods and techniques, with many books and
articles devoted to these issues.  But by methods and techniques I
mean such topics as the representation of knowledge, logic,
programming, control structures, etc.  None of this method includes
anything about content.  And there is the flaw: nobody in the field of
Artificial Intelligence speaks of what it means to study intelligence,
of what scientific methods are appropriate, what empirical methods are
relevant, what theories mean, and how they are to be tested.  All the
other sciences worry a lot about these issues, about methodology,
about the meaning of theory and what the appropriate data collection
methods might be.  AI is not a science in this sense of the word.
        Read any standard text on AI: Nilsson or Winston or Rich or
        even the multi-volume handbook.  Nothing on what it means to
        test a theory, to compare it with others, nothing on what
        constitutes evidence, or on how to conduct experiments.
        Look at any science and you will find lots of books on
        experimental method, on the evaluation of theory.  That is why
        statistics are so important in psychology or biology or
        physics, or why counterexamples are so important in
        linguistics.  Not a word on these issues in AI.
The result is that practitioners of AI have no experience in the
complexity of experimental data, no understanding of scientific
method.  They feel content to argue their points through rhetoric,
example, and the demonstration of programs that mimic behavior thought
to be relevant.  Formal proof methods are used to describe the formal
power of systems, but this rigor in the mathematical analysis is not
matched by any similar rigor of theoretical analysis and evaluation
for the content.

This is why other sciences think that folks in AI are off-the-wall,
uneducated in scientific methodology (the truth is that they are), and
completely incompetent at the doing of science, no matter how
brilliant at the development of mathematics of representation or
formal programming methods.  AI will contribute to the A, but will
not contribute to the I unless and until it becomes a science and
develops an appreciation for the experimental methods of science.  AI
might very well develop its own methods -- I am not trying to argue
that existing methods of existing sciences are necessarily appropriate
-- but at the moment, there is only clever argumentation and proof
through made-up example (the technical expression for this is "thought
experiment" or "Gedanken experiment").  Gedanken experiments are not
accepted methods in science: they are simply suggestive for a source
of ideas, not evidence at the end.

don norman

Donald A. Norman
Institute for Cognitive Science C-015
University of California, San Diego
La Jolla, California 92093
norman@nprdc.arpa       {decvax,ucbvax,ihnp4}!sdcsvax!ics!norman
norman@sdics.ucsd.edu   norman%sdics.ucsd.edu@RELAY.CS.NET

------------------------------

End of AIList Digest
********************
Mail-From: LAWS created at  8-Jul-87 17:29:53
Date: Wed  8 Jul 1987 17:22-PDT
From: AIList Moderator Kenneth Laws <AIList-REQUEST@STRIPE.SRI.COM>
Reply-to: AIList@STRIPE.SRI.COM
US-Mail: SRI Int., 333 Ravenswood Ave., Menlo Park, CA  94025
Phone: (415) 859-6467
Subject: AIList Digest   V5 #172
To: AIList@STRIPE.SRI.COM


AIList Digest            Thursday, 9 Jul 1987     Volume 5 : Issue 172

Today's Topics:
  Query - Xlisp,
  Programming - Software Reuse & Abstract Specifications,
  Scientific Method - Is AI a Science?

----------------------------------------------------------------------

Date: Tue 7 Jul 87 09:18:25-PDT
From: BEASLEY@EDWARDS-2060.ARPA
Subject: xlisp

If anyone has any information or has heard any information about using
XLISP (eXperimental LISP) on the PC, please send me that information at
beasley@edwards-2060.ARPA. Thank you.

------------------------------

Date: 6 Jul 87 05:28:23 GMT
From: vrdxhq!verdix!ogcvax!dinucci@seismo.css.gov  (David C. DiNucci)
Subject: Re: Software Reuse  --  do we really know what it is ? (long)

In article <titan.668> ijd@camcon.co.uk (Ian Dickinson) writes:
>> Xref: camcon comp.lang.ada:166 comp.lang.misc:164
>Hence a solution:  we somehow encode _abstractions_ of the ideas and place
>these in the library - in a form which also supplies some knowledge about the
>way that they should be used.  The corollary of this is that we need more
>sophisticated methods for using the specifications in the library.
>(Semi)-automated transformations seem to be the answer to me.
>
>Thus we start out with a correct (or so assumed) specification,  apply
>correctness-preserving transformation operators,  and so end up with a correct
>implementation in our native tongue (Ada, Prolog etc, as you will).  The
>transformations can be interactively guided to fit the precise circumstance.
>[Credit]  I originally got this idea from my supervisor: Dr Colin Runciman
>@ University of York.

In his Ph.D. thesis defense here at Oregon Graduate Center, Dennis
Volpano presented his package that did basically this.  Though certainly
not of production quality, the system was able to take an abstraction
of a stack and, as a separate module, a description of a language and
data types within the language (in this case integer array and file,
if I remember correctly), and produce code which was an instantiation
of the abstraction - a stack implemented as an array or as a file.
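The flavor of such a system can be sketched in modern terms: an abstract stack written against a few representation primitives, then instantiated over either an in-memory array or a file. This is only an illustrative toy (the class and method names are my own invention, not Volpano's code):

```python
import os
import tempfile

class ArrayRep:
    """Target representation 1: an in-memory array (Python list)."""
    def __init__(self):
        self.cells = []
    def store(self, item):
        self.cells.append(item)
    def retrieve(self):
        return self.cells.pop()
    def size(self):
        return len(self.cells)

class FileRep:
    """Target representation 2: one item per line in a scratch file."""
    def __init__(self):
        fd, self.path = tempfile.mkstemp()
        os.close(fd)
    def store(self, item):
        with open(self.path, "a") as f:
            f.write(repr(item) + "\n")
    def retrieve(self):
        with open(self.path) as f:
            lines = f.read().splitlines()
        item, rest = lines[-1], lines[:-1]
        with open(self.path, "w") as f:
            f.write("".join(line + "\n" for line in rest))
        return eval(item)
    def size(self):
        with open(self.path) as f:
            return len(f.read().splitlines())

class Stack:
    """The abstraction: defined only via store/retrieve/size primitives."""
    def __init__(self, rep):
        self.rep = rep
    def push(self, x):
        self.rep.store(x)
    def pop(self):
        if self.rep.size() == 0:
            raise IndexError("pop from empty stack")
        return self.rep.retrieve()

# The same abstract stack, instantiated two ways:
for rep in (ArrayRep(), FileRep()):
    s = Stack(rep)
    s.push(10); s.push(20)
    print(s.pop(), s.pop())   # 20 10, under either representation
```

The point is that the `Stack` definition never mentions arrays or files; only the chosen representation module does.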

I haven't actually read Dennis' thesis, so I don't know what the
limitations or constraints on his approach are.  I believe he is
currently employed in Texas at MCC.
---
Dave DiNucci        dinucci@Oregon-Grad

------------------------------

Date: 7 Jul 87 02:21:06 GMT
From: vrdxhq!verdix!ogcvax!pase@seismo.css.gov  (Douglas M. Pase)
Subject: Re: Software Reuse (short title)

In article <glacier.17113> jbn@glacier.UUCP (John B. Nagle) writes:
>
>     The trouble with this idea is that we have no good way to express
>algorithms "abstractly".  [...]

Well, I'm not sure just where the limits are, but polymorphic types can go
a long way towards what you have been describing.  It seems that a uniform
notation for operators + the ability to define additional operators +
polymorphically typed structures are about all you need.  Several functional
languages already provide an adequate basis for these features.  One such
language is called LML, or Lazy ML.  Current language definitions tend to
concentrate on the novel features rather than attempt to make LML a full-blown
"production" language, and therefore may be missing some of your favorite
features.  However, my point is that we may well be closer to your objective
than some of us realize.
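A rough modern analogue of the point (Python generic type hints standing in for ML-style polymorphism; LML itself is an ML-family language, so this is illustration, not LML syntax): one stack definition, written once, reused at any element type.

```python
from typing import Generic, List, TypeVar

T = TypeVar("T")

class Stack(Generic[T]):
    """A single polymorphic definition; T is filled in per use site."""
    def __init__(self) -> None:
        self._items: List[T] = []
    def push(self, item: T) -> None:
        self._items.append(item)
    def pop(self) -> T:
        return self._items.pop()

ints: Stack[int] = Stack()     # instantiated at int
ints.push(3)
names: Stack[str] = Stack()    # the same definition, at str
names.push("abc")
print(ints.pop(), names.pop())   # 3 abc
```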

I apologize for the brevity of this article -- if I have been too vague,
send me e-mail and I will be more specific.
--
Doug Pase   --   ...ucbvax!tektronix!ogcvax!pase  or  pase@Oregon-Grad.csnet

------------------------------

Date: 7 Jul 87 15:18:32 GMT
From: debray@arizona.edu  (Saumya Debray)
Subject: Automatic implementation of abstract specifications

In article <1337@ogcvax.UUCP>, dinucci@ogcvax.UUCP (David C. DiNucci) writes:
> In his Phd thesis defense here at Oregon Graduate Center, Dennis
> Volpano presented his package that did basically this.  Though certainly
> not of production quality, the system was able to take an abstraction
> of a stack and, as a separate module, a description of a language and
> data types within the language (in this case integer array and file,
> if I remember correctly), and produce code which was an instantiation
> of the abstraction - a stack implemented as an array or as a file.

I believe there was quite a bit of work on this sort of stuff at MIT
earlier in the decade.  E.g. there was a PhD thesis [ca. 1983] by
M. K. Srivas titled "Automatic Implementation of Abstract Data Types"
(or something close to it).  The idea, if I remember correctly, was to
take sets of equations specifying the "source" ADT (e.g. stack) and the
"target" ADT (e.g. array), and map the source into the target.
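The equational flavor can be sketched as follows (the thesis details are assumed, not quoted): the stack ADT is given by equations over its operations, and an implementation in terms of the target ADT (an array, here a Python list) can be checked against those equations on sample values.

```python
# Stack implemented over the target ADT "array":
def empty():
    return []                # the empty stack is the empty array

def push(s, x):
    return s + [x]           # non-destructive, to match equational style

def top(s):
    return s[-1]

def pop(s):
    return s[:-1]

# Defining equations of the stack ADT, checked on sample values:
s = push(push(empty(), 1), 2)
assert top(push(s, 9)) == 9          # top(push(s, x)) = x
assert pop(push(s, 9)) == s          # pop(push(s, x)) = s
print("stack equations hold for the array implementation")
```

An automatic implementation system would, in effect, derive the function bodies from the equations rather than check them after the fact.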
--
Saumya Debray           CS Department, University of Arizona, Tucson

     internet:   debray@arizona.edu
     uucp:       {allegra, cmcl2, ihnp4} !arizona!debray

------------------------------

Date: Mon, 6 Jul 87 10:06:05 MDT
From: shebs%orion@cs.utah.edu (Stanley T. Shebs)
Subject: AI vs Scientific Method

I can understand Don Norman's unhappiness about the lack of scientific method
in AI - from a practical point of view, the lack of well-understood criteria
for validity means that refereeing of publications is unlikely to be very
objective... :-(

The scientific method is a two-edged sword, however.  Not only does it define
what is interesting, but what is uninteresting - if you can't devise a
controlled experiment varying just a single parameter, you can't say anything
about a phenomenon.  A good scientist will perhaps be able to come up with
a different experiment, but if stymied enough times, he/she is likely to move
on to something else (at about the same time the grant money runs out :-) ).
Established sciences like chemistry have an advantage in that the parameters
most likely to be of interest are already known; for instance temperature,
percentages of compounds, types of catalysts, and so forth.  What do we have
for studying intelligence?  Hardly anything!  Yes, I know psychologists have
plenty of experimental techniques, but the quality is pretty low compared to
the "hard sciences".  A truly accurate psychology experiment would involve
raising cloned children in a computer-controlled environment for 18 years.
Even then, you're getting minute amounts of data about incredibly complex
systems, with no way to know if the parameters you're varying are even
relevant.

There's some consolation to be gained from the history of science/technology.
The established fields did not spring full-blown from some genius' head;
each started out as a confused mix of engineering, science, and speculation.
Most stayed that way until the late 19th or early 20th century.  If you don't
believe me, look at an 18th or early 19th century scientific journal (most
libraries have a few).  Quite amusing, in fact very similar to contemporary
AI work.  For instance, an article on electric eels from about 1780 featured
the observations that a slave grabbing the eel got a stronger shock on the
second grab, and that the shock could be felt through a wooden container.
No tables or charts or voltmeter readings :-).

My suggestion is to not get too worked up about scientific methods in AI.
It's worth thinking about, but people in other fields have spent centuries
establishing their methods, and there's no reason to suppose it will take any
less for AI.

                                                        stan shebs
                                                        shebs@cs.utah.edu

------------------------------

Date: Mon, 6 Jul 1987  16:29 EDT
From: MINSKY%OZ.AI.MIT.EDU@XX.LCS.MIT.EDU
Subject: AIList Digest   V5 #170


I would like to see that discussion of "symbol grounding" reduced to
much smaller proportions because I think it is not very relevant to
AI, CS, or psychology.  To understand my reason, you'd have to read
"Society of Mind", which argues that this approach is obsolete because
it recapitulates the "single agent" concept of mind that dominates
traditional philosophy.  For example, the idea of "categorizing"
perceptions is, I think, mainly an artifact of language; different
parts of the brain deal with inputs in different ways, in parallel.
In SOM I suggest many alternative ways to think about thinking and, in
several sections, I also suggest reasons why the single agent idea has
such a powerful grip on us.  I realize that it might seem self-serving
for me to advocate discussing Society of Mind instead.  I would have
presented my arguments in reply to Harnad, but they would have been
too long-winded and the book is readily available.

------------------------------

Date: Mon, 6 Jul 87 18:25:51 EDT
From: Jim Hendler <hendler@brillig.umd.edu>
Subject: Re:  AIList Digest   V5 #171

While I have some quibbles with Don N.'s long statement on AI viz (or
vs.) science, I think he gets close to what I have felt a key point
for a long time -- that the move towards formalism in AI, while important
in the change of AI from a pre-science (alchemy was Drew McDermott's
term) to a science, is not enough.  For a field to make the transition
an experimental methodology is needed.  In AI we have the potential
to decide what counts as experimentation (with implementation being
an important consideration) but have not really made any serious
strides in that direction.  When I publish work on planning and
claim ``my system makes better choices than <name of favorite
planning program's>'' I cannot verify this other than by showing
some examples that my system handles that <other>'s can't.  But of
course, there is no way of establishing that <other> couldn't do
examples mine can't, and so on.  Instead we can end up forming camps of
beliefs (the standard proof methodology in AI) and arguing -- sometimes
for the better, sometimes for the worse.
 While I have no solution for this, I think it is an important issue
for consideration, and I thank Don for provoking this discussion.

 -Jim Hendler

------------------------------

Date: Tue, 7 Jul 1987  01:11 EDT
From: MINSKY%OZ.AI.MIT.EDU@XX.LCS.MIT.EDU
Subject: AIList Digest   V5 #171


At the end of that long and angry flame, I think D.Norman unwittingly
hit upon what made him so mad:

>  Gedanken experiments are not accepted methods in science: they are
>  simply suggestive for a source of ideas, not evidence at the end.

And that's just what AI has provided these last thirty years - a
source of ideas that were missing from psychology in the century
before.  Representation theories, planning procedures, heuristic
methods, hundreds of such.  The history of previous psychology is rife
with "proved" hypotheses, few of which were worth a damn, and many of
which were refuted by Norman himself.  Now "cognitive psychology" -
which I claim and Norman will predictably deny (see there: a testable
hypothesis!) is largely based on AI theories and experiments - is
taking over at last - as a result of those suggestions for ideas.

------------------------------

Date: Tue, 7 Jul 87 01:28 MST
From: "Paul B. Rauschelbach" <Rauschelbach@HIS-PHOENIX-MULTICS.ARPA>
Subject: What is science

I normally only observe this discussion, but Don Norman's pomposity
struck a nerve.  The first objection I have is to his statement that
mathematics and philosophy are not sciences "in the normal
interpretation of the word." The Webster's definition (a fairly normal
interpretation) is:  "accumulated knowledge systematized and formulated
with reference to the discovery of general truths or the operation of
general laws." This certainly applies to both.

The next problem is his statement that AI people think they're
scientists.  He seemed to believe that it was a science until Nils Nilsson
told him the obvious. AI, as its name implies, is a product, not a
phenomenon, not an occurrence of nature to be described. The problem is the
creation of a product, an engineering problem. The preservation of theory is
far from an engineer's mind. The engineer uses theory to describe possible
solutions.  If an engineer comes across a possible solution that has not been
addressed by theory, s/he may get his or her hands a little dirty before the
"scientists" take control of it. It seems to me that much of the talk in this
discussion is of a hypothetical nature, one of the elements of THE SCIENTIFIC
METHOD he was defending.  This is a good place for that portion of the
method, as well as statement of the problem. The experimentation is left to
the psychologists, neurologists, etc.  I see no one but scientists claiming
to be scientists, and I hear AI people shouting, "Yeah, but how do you code
it?" or "What doohickey will do that?"  Implementation of theory. I have also
read discussion of the testing of implementation. Come to think of it,
engineering also fits the definition of science.

Both things, implementation and theory have been and should be discussed here.
If they intermingle, this can only be healthy, even if somewhat confusing. I
hope we can both get down off our respective high horses now.

Paul Rauschelbach
Honeywell Bull
P.O. Box 8000, M/S K55, Phoenix, AZ  85006
(602) 862-3650
pbr%pco@BCO-MULTICS.ARPA

Disclaimer: The opinions expressed above are mine, and not endorsed by
Honeywell Bull.

------------------------------

Date: 7 Jul 87 08:41:33 edt
From: Walter Hamscher <hamscher@ht.ai.mit.edu>
Subject: Why AI is not a science

   Date: Fri, 3 Jul 87 07:29:41 pdt
   From: norman%ics@sdcsvax.ucsd.edu (Donald A. Norman)

I started out writing a message that said this message was
97% true, but that there was an arguable 3%, namely:

   The problem is that most folks in AI think they are scientists * * *

I was going to pick a nit with the word "most".

Then, I remembered that the AAAI-86 Proceedings were
split into a "Science" track and an "Engineering" track,
the former being about half again as thick as the latter...

------------------------------

Date: 8 Jul 87 01:37:17 GMT
From: munnari!goanna.oz!jlc@uunet.UU.NET (J.L Cybulski)
Subject: Re: Why AI is not a science


Don Norman says that AI is not a Science!
Is Mathematics a science or is it not?
No experiments, no comparisons, thus they are not Sciences!
Perhaps both AI and Maths are Arts, i.e., creative disciplines.
Both adhere to their own rigour and methods.
Both talk about hypothetical worlds.
Both are used by researchers from other disciplines as tools,
Maths is used to formally describe natural phenomena,
AI is used to construct computable models of these phenomena.

So, where is the problem?
Hmmm, I think some of the AI researchers wander into the
areas of their incompetence and they impose their quasi-theories
on the specialists from other scientific domains. Some of those
quasi-theories are later reworked and adopted by the same specialists.

Is it, then, good or bad? It seems that a lack of scientific constraints
may be helpful in advancing knowledge about the principles of science;
the greatest breakthroughs in Science often come from those
who were regarded as unorthodox in their methods.

Maybe AI is such an unorthodox Science, or perhaps an Art.
Let us keep AI this way!

Jacob L. Cybulski

------------------------------

Date: 07-Jul-1987 0829
From: billmers%aiag.DEC@decwrl.dec.com  (Meyer Billmers, AI
      Applications Group)
Subject: Re: AIList Digest   V5 #171


Don Norman writes that "AI will contribute to the A, but will not
contribute to the I unless and until it becomes a science...".

Alas, since physics is a science and mathematics is not one, I guess the
latter cannot help contribute to the former unless and until mathematicians
develop an appreciation for the experimental methods of science. Ironic
that throughout history mathematics has been called the queen of sciences
(except, of course, by Prof. Norman).

Indeed, physics is a case in point. There are experimental physicists, but
there are also theoretical ones who formulate, postulate and hypothesize
about things they cannot measure or observe. Are these men not scientists?
And there are those who observe and measure that which has no theoretical
foundation (astrologers hypothesize about people's fortunes; would any
amount of experimentation turn astrology into a science?). I believe the
mix between theoretical underpinnings and scientific method makes for
science. The line is not hard and fast.

By my definition, AI has the right attributes to make it a science. There
are theoretical underpinnings in several domains (cognitive science,
theory of computation, information theory, neurobiology...) and yes, even an
experimental nature. Researchers postulate theories (of representation, of
implementation) but virtually every Ph.D. thesis also builds a working
program to test the theory.

If AI researchers seem to be weak in the disciplines of the scientific
method, I submit it is because the phenomena they are trying to understand
are far more complex and elusive of definition than those of most sciences.
This is not a reason to deny AI the title of science, but rather a reason
to increase our efforts to understand the field. With this understanding
will come an increasingly visible scientific discipline.

------------------------------

Date: Mon, 6 Jul 87 17:19:38 PDT
From: cottrell%ics@sdcsvax.ucsd.edu (Gary Cottrell)
Subject: Re: thinking about thinking not being science

In article <8707030236.AA29872@flash.bellcore.com>
amsler@FLASH.BELLCORE.COM (Robert Amsler) writes:
>I think Don Norman's argument is true for cognitive psychologists,
>but may not be true for AI researchers. The reason is that the two
>groups seek different answers. [....] Speculating about flight might
>lead to building other types of aircraft (as certainly those now
>humorous old films of early aviation experiments show), but it would
>certainly be a bad procedure to follow to understand birds and how
>they fly.

In fact, the Wright Brothers spent quite a bit of time studying how
birds fly, and as a recent Scientific American notes, we may still have
a lot to learn from natural systems. A piece of Dennis Conner's boat was
based on a whale's tailfin.

I think Don's point was that many times AI researchers spend a lot of time
theorizing about how humans work, and then use that as justification for
their designs for AI systems, without ever consulting the facts.

It is certainly true that Cognitive Scientists and AI researchers are at
different ends of a spectrum (from NI (Natural Intelligence) to AI), but it
would be foolish for AI researchers not to take hints from the best example
of an intelligent being we have. On the other hand, it is not appropriate
for a medical expert system to make the same mistakes doctors do, even though
making those mistakes is sometimes a criterion for a "good" cognitive model.

gary cottrell				
Institute for Cognitive Science C-015
UCSD, 
La Jolla, Ca. 92093
cottrell@nprdc.arpa (ARPA) (or perhaps cottrell%ics@cs.ucsd.edu)
{ucbvax,decvax,akgua,dcdwest}!sdcsvax!sdics!cottrell (USENET)
**********************************************************************
THE FUTURE'S SO BRIGHT I GOTTA WEAR SHADES - Timbuk 3
**********************************************************************

------------------------------

End of AIList Digest
********************
Mail-From: LAWS created at  8-Jul-87 17:37:45
Date: Wed  8 Jul 1987 17:35-PDT
From: AIList Moderator Kenneth Laws <AIList-REQUEST@STRIPE.SRI.COM>
Reply-to: AIList@STRIPE.SRI.COM
US-Mail: SRI Int., 333 Ravenswood Ave., Menlo Park, CA  94025
Phone: (415) 859-6467
Subject: AIList Digest   V5 #173
To: AIList@STRIPE.SRI.COM


AIList Digest            Thursday, 9 Jul 1987     Volume 5 : Issue 173

Today's Topics:
  Humor - Symbol Grounding References,
  Theory - Fuzzy Categories,
  Policy - Symbol-Grounding Metadiscussion

----------------------------------------------------------------------

Date: 7-JUL-1987 15:50:42
From: UBACW59%cu.bbk.ac.uk@Cs.Ucl.AC.UK
Subject: References Required.

Does anyone have any pointers to the "symbol grounding problem" or some
such area? Searches in the literature have proved fruitless.

The Joka.

------------------------------

Date: 7 Jul 1987 11:00-EDT
From: Spencer.Star@h.cs.cmu.edu
Subject: Re: AIList Digest   V5 #169

> ...a penguin is not a bird of degree...

The point of view that a bird IS a bird, and a rose IS a rose, has
limited usefulness.  If the question that we are trying to answer is
seen as how a person will classify a penguin after having seen one for
the first time, I think the answer is clear.  A large number of people
would not classify a penguin as a bird.  A program would likely be more
successful at imitating a human response if it based its response on
the features of the human answering the query as well as the features
of the concept it was trying to recognize.  Whether a penguin is a bird
then becomes quite dependent on context as well as a simple relation
between features and classes.
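The suggestion amounts to making class membership a function of both the object's features and the respondent's criterion. A toy sketch (the cue weights and thresholds below are invented purely for illustration):

```python
def bird_typicality(features):
    """Weighted sum of invented 'bird cues' present in a feature set."""
    cues = {"flies": 0.4, "feathers": 0.3, "lays_eggs": 0.2, "sings": 0.1}
    return sum(w for cue, w in cues.items() if features.get(cue))

penguin = {"flies": False, "feathers": True, "lays_eggs": True, "sings": False}
robin   = {"flies": True,  "feathers": True, "lays_eggs": True, "sings": True}

def classify(features, threshold):
    """Membership depends on the respondent's threshold, not just the object."""
    return bird_typicality(features) >= threshold

# A respondent using the technical criterion (low threshold) accepts the
# penguin; one relying on typicality (high threshold) rejects it:
print(classify(penguin, 0.4))  # True
print(classify(penguin, 0.7))  # False
print(classify(robin, 0.7))    # True
```

Here the same penguin is and is not "a bird," depending on who is asked, which is exactly the context-dependence being claimed.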

------------------------------

Date: 8 Jul 87 16:08:27 GMT
From: sunybcs!dmark@ames.arpa  (David M. Mark)
Subject: Re: The symbol grounding problem: "Fuzzy" categories?

In article <974@mind.UUCP> harnad@mind.UUCP (Stevan Harnad) writes:
>
>
>In Article 185 of comp.cog-eng sher@rochester.arpa (David Sher) of U of
>Rochester, CS Dept, Rochester, NY responded as follows to my claim that
>"Most of our object categories are indeed all-or-none, not graded. A penguin
>is not a bird as a matter of degree. It's a bird, period." --
>
>>      Personally I have trouble imagining how to test such a claim...
>
>Try sampling concrete nouns in a dictionary.

Well, a dictionary may not always be a good authority for this sort of
thing.  Last semester I led a graduate Geography seminar on the topic:
"What is a map?"   If you check out dictionaries, the definitions seem
unambiguous, non-fuzzy, concrete.  Even the question may seem foolish, since
"map" probably is a "basic-level" object/concept.  However, we conducted
a number of experiments and found many ambiguous stimuli near the boundary
of the concept "map".  Air photos and satellite images are an excellent
example: they fit the dictionary definition, and some people feel very
strongly that they *are* maps, others sharply reject that claim, etc.
Museum floor plans, topographic cross-profiles, digital cartographic
data files on tape, verbal driving directions for navigation, etc., are
just some examples of the ambiguous ("fuzzy"?) boundary of the concept
to which the English word "map" correctly applies.  I strongly suspect
that "map" is not unique in this regard!

------------------------------

Date: Mon 6 Jul 87 16:18:12-PDT
From: PAT <HAYES@SPAR-20.ARPA>
Subject: Re: AIList Digest   V5 #170

Talk about walking into a minefield, but here goes. Concerning the Harnad
grounding problem.  This is lovely stuff, and I save every word for later
reading, but it does seem recently to have gone from interesting discussions
and arguments to a rather repetitive grinding over the main points again and
again.  The result is that Stevan is reduced to repeating himself and
reiterating his points in the face of what must seem to him to be increasing
stubbornness.  I seem to be seeing more and more phrases like '..as I have
emphasised earlier..'.  All of us who teach are familiar with the syndrome
where the 35th occurrence of the same error makes us more exasperated than the
first one did.

Let me suggest that perhaps nothing much new is being said
in these discussions any more, and certainly no-one is saying anything which
is going to cause Stevan to change any of his positions.  Perhaps the right
thing to do is for people to send their comments directly to Harnad, and for
him to send us the selections which HE considers worth public airing, together
with his responses.  That way we will be spared reading all this stuff which
is, apparently, of such low intellectual caliber, and Laws will have an easier
time, and public feelings will not get to the point which produces letters
like David Harwood's.

Just an idea.

Pat Hayes

------------------------------

Date: Mon, 6 Jul 87 16:56:06 PDT
From: cottrell%ics@sdcsvax.ucsd.edu (Gary Cottrell)
Subject: Automatic newsgroup creation to reduce aggravation


How about some software to automatically create newsgroups after a certain
amount of traffic with the same subject line? And an appropriate expiration
of the newsgroup after traffic dies down? Then people could decide to add
the newsgroup or not. E.g., comp.ai.symbol.grounding.. It doesn't even sound
hard enough to be called AI! I am a net.news.software.innocent, however.
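The creation half of the rule really is simple; a sketch (the threshold and naming scheme are made up, and expiration after traffic dies down is omitted):

```python
from collections import Counter

CREATE_AFTER = 3   # invented threshold: messages on one subject before splitting

def auto_groups(subjects):
    """Return the subgroups a simple threshold rule would create."""
    counts = Counter()
    created = set()
    for subject in subjects:
        counts[subject] += 1
        if counts[subject] >= CREATE_AFTER:
            created.add("comp.ai." + subject.lower().replace(" ", "."))
    return created

msgs = ["Symbol Grounding"] * 4 + ["XLISP query"]
print(auto_groups(msgs))   # {'comp.ai.symbol.grounding'}
```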

gary cottrell

------------------------------

Date: Mon, 06 Jul 87 22:03:51 EST
From: Tim Daciuk <ACAD8023%RYERSON.BITNET@wiscvm.wisc.edu>
Subject: Symbol Grounding Problem

Having read the recent "discussion" regarding the Symbol Grounding Problem,
I would have to suggest that I tend to agree with Mr. Harwood.  Though the
discussion which has taken place on this subject was interesting, it has
become, at least to me, tedious and boring.  In addition, I think that
anyone joining AI-List at this point would find this topic almost impossible
to follow, due to the number of references to previous editions of the
journal, and due to the highly interactive mode which this discussion has
assumed.  I do not think that a separate discussion should be started,
however, I would suggest that future Symbol Grounding Problem entries be
sorted to the bottom of the list.  This would allow the list to continue
in publishing this important part of AI, and would allow those of us who
no longer have the stamina to ponder the implications of blue, green,
blue-green, etc., to quit at an appropriate time.

Would sorting the list with Symbol Grounding coming at the bottom be very
difficult Ken?

Tim Daciuk
Ryerson Polytechnical Institute
Toronto Ontario
Canada


  [That's essentially what I've been doing, although lengthy conference
  announcements sometimes get sorted even lower.  I was holding all of
  the symbol grounding discussion for the weekend, although that did
  create some synchronization problems between messages sent to my
  Arpanet mailbox and replies that went directly to Usenet.  I have
  usually published symbol grounding issues in separate digest issues,
  making them easier to skip (or save).  Usenet readers don't get the
  benefit of that sorting, of course (but make up for it by eliminating
  the digesting delay).  Sorting to the true "bottom" of an infinite
  discussion stream would seem a little extreme.  -- KIL]

------------------------------

Date: 6 Jul 87 21:38:29 GMT
From: harwood@cvl.umd.edu  (David Harwood)
Subject: An apology for being overly sarcastic


        I want to apologize for being overly sarcastic with Mr. Harnad.
Although I consider my complaint about his postings to be justified, I am
sorry about my overly-sarcastic manner. For the record, this apology was
my own idea, not involving discussion with others. I simply felt fairly
guilty about my irritable responses. (Actually, it is only recently that
I've had a chance to read this newsgroup; it has been suggested that I
read the moderated newsgroup instead - without posting of course!)


\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\
        Briefly responding to a posted reply by B.I. Olasov, also to
correspondence by email from D. Stampe (for different reasons):

In article <1071@bloom-beacon.MIT.EDU> bolasov@aphrodite.UUCP
(Benjamin I Olasov) writes:
[...]
>Some of the most challenging and interesting problems of AI are philosophical
>in nature.  I frankly don't see why this fact should disturb anyone.
>
>Perhaps if more of us pursued our theoretical models with comparable rigor
>to that with which Mr. Harnad pursues his, the balance of topics represented
>on comp.ai might shift .....

        As I tried to make clear - supplying fairly clear examples of his
posting style - it is definitely not Mr. Harnad's particular philosophy or
theoretical proclivity which irritated me - it was his manner of discussion
I was complaining about. (Just as others complained about my sarcasm, more
than the content of my complaint.) Among other things, for example, I
scarcely consider his arguments to be what you call "rigorous." Some of the
discussants themselves have complained, albeit politely, about his somewhat
idiosyncratic usage of terminology (among other things).
        So you are mistaken in your suggestion about my complaint. Rigorous
use of very complex and abstract concepts is commonplace in many branches
of computer science, eg. semantic specification of languages executed by
parallel systems. The level of abstraction and rigor is not at all less
than in any area of inquiry, including philosophy or cognitive psychology.
On the other hand, I fully agree that both philosophy and psychology have
very important and relevant contributions to what is called "artificial
intelligence," although it seems to me that too much of the purported
interdisciplinary discussion is polemical and political rather than really
constructive. And I would add that much, even most, of AI's recent "advance"
has been nonsensical propaganda for funding, and devoid of theoretical
foundation.
        Also, I would add that Mr. Harnad - as is clear from his
postings - is perhaps only superficially familiar with what are real
advances in symbolical "AI", eg development of very powerful systems for
automatic deduction, which have practical importance for all of "AI"
as well as have rigorous foundations. These surely are not entirely
founded on theories of human psychology or on speculative philosophy,
and probably should not be, since we would like to consider computing
machines which do some things according to specification, and better
than we do.
        I realize very well that some areas of AI are very much
harder than others - computer vision comes to mind ;-) and it is
obvious to everyone concerned that we need both numerical and symbolical
algorithms and representations. (I will not get involved in discussing
what S.H. might mean by "symbolical", "analog", "invertible", and so
forth - I don't really know.)
        I think it is also apparent that we might have yet
to consider some "connectionist" architectures and algorithms, which
perhaps do not admit any simple formal specification of input/output
relations. This would invite some philosophical speculation about
the adequacy of purely logical specification for development of
artificial intelligence. Conversely, we may already have a sufficient
theoretical basis for 'creating' human-like artificial intelligence,
by functional simulation of neurons, although we do not yet have the
technology (and, I hope, retain the moral sense not to use it) to
reverse-engineer a human brain. If this ever happens, it will be in the
distant future, and will depend only on our technology, not on major
improvements in our theoretical understanding of neurons. The
situation might well be that we can recreate a human intelligence which
we still largely cannot comprehend by formal specification. In part,
this means that psychology, theoretical "AI", even S.H.'s "Total
Turing Test" are loose ends as much as interdependent.
        (As a religious person, I wonder about what this might mean -
I recall that an ancient interpretation of the Genesis story said that
when mankind ate of the fruit of the knowledge of good and evil - just
as the serpent claimed - mankind became endowed with a power like that
of God - that is, having the power to create and to destroy worlds. In
our times, our technology has surpassed our moral sensibility - which
many computer scientists say does not exist anyway. Of course, other
Jewish tradition has it that many worlds have already been destroyed
before this one. I'm not even sure that pursuit of "AI" technology
is such a good thing, if it contributes to our destruction or loss of
dignity. But who knows, except for God?)



Response to a reply by email:
\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\


Message-Id: <8707061548.AA11938@uhmanoa.ICS.HAWAII.EDU>
Date: Mon, 6 Jul 87 05:36:04-1000
From: seismo!scubed!sdcsvax!uhccux.UHCC.HAWAII.EDU!nosc!humu!stampe
(David Stampe)
To: harwood@cvl.umd.edu (David Harwood)
In-Reply-To: harwood@cvl.umd.edu's message of 5 Jul 87 21:48:28 GMT
Subject: Re: The symbol grounding problem - please start your own newsgroup
Status: R

You have now posted four messages to comp.ai containing nothing
but rude complaints about another's postings on symbol grounding.
They are not required reading, and they don't prevent you from
reading or posting on other topics.  What you MAY NOT do is
disturb the newsgroup with irrelevant and loutish postings like
your last four.  There are people who care about how University of
Maryland employees behave in public.

If I were you, I'd consider a public apology.

David Stampe, Univ. of Hawaii.

\\\\\\\\\\\\
        I don't have any desire to prevent S.H. from posting,
as I have made clear. You are right that I should apologize for
being overly sarcastic. He deserved some of it, but I overdid
it.
        I don't like your mention of my employment here - which
might be considered a threat, either to my employment or to my
freedom to post things which you dislike, even sarcastic complaints.
If you did intend such a threat, you have misjudged me, and also
misjudged what my reaction would be.
        In any case, you are right about the apology being due.

-David Harwood

------------------------------

Date: Tue,  7 Jul 87 07:33:49 edt
From: dg1v+@andrew.cmu.edu (David Greene)
Subject: handling the S.G.P issue


While some of the discussion has proven interesting, it has become burdensome
to sort through, and rather recursive, as arguments start focusing on what
prior arguments meant...

Perhaps a separate bboard would be more appropriate.  At the very least, Ken
Laws' suggestion that the arguments (and subject lines) be broken into
discrete categories seems to go a long way toward making this discussion
palatable if not worthwhile.

Mr. Harnad might want to consider proposing a subject taxonomy prefaced with
"SGP".


David Greene
Carnegie Mellon

------------------------------

Date: Wed, 8 Jul 87 11:02:53 GMT
From: Caroline Knight <cdfk%hplb.csnet@RELAY.CS.NET>
Subject: Debating

As a so-far passive reader of the grounding problem debate via
AIList Digest I have at last been spurred to action:

For the proponents of a theory to be able to understand and discuss
the positive, the negative and the interesting aspects of it is a sign
of strength. For them to resort to personal name calling is not.

However I do have sympathy with those who have now started to put the
boot in. Especially with those who are tired of the language which
is frequently unclear and suspiciously polysyllabic.

A thought for those who honestly believe that an idea is wrong and the
holder of it would be better off without it:-

1. It is much easier to change one's mind and throw away useless ideas
   if one has NOT been pushed to defend them tooth and nail.

2. Few ideas (or accepted theories) are completely correct. One can
   gain more by simply acknowledging that an idea has flaws than by
   trying to stretch it until it rips. Of course these anomalies might
   trigger new ideas.


Caroline Knight

------------------------------

End of AIList Digest
********************
11-Jul-87 22:48:25-PDT,22462;000000000000
Mail-From: LAWS created at 11-Jul-87 22:33:13
Date: Sat 11 Jul 1987 22:31-PDT
From: AIList Moderator Kenneth Laws <AIList-REQUEST@STRIPE.SRI.COM>
Reply-to: AIList@STRIPE.SRI.COM
US-Mail: SRI Int., 333 Ravenswood Ave., Menlo Park, CA  94025
Phone: (415) 859-6467
Subject: AIList Digest   V5 #174
To: AIList@STRIPE.SRI.COM


AIList Digest            Sunday, 12 Jul 1987      Volume 5 : Issue 174

Today's Topics:
  Seminars - Object-Oriented Databases (IBM) &
    A Model for Distributed Planning (SU) &
    Pengi: A Theory of Activity (UCB) &
    Learning Conjunctive Concepts (SU),
  Conferences - Concurrent Logic Programming, Open Systems Programming &
    NEXUS meeting at AAAI &
    Fifth International Machine Learning Conference &
    Second International Conference on AI in Engineering

----------------------------------------------------------------------

Date: Wed, 01 Jul 87 13:54:22 PDT
From: IBM Almaden Research Center Calendar <CALENDAR@ibm.com>
Subject: Seminars - Object-Oriented Databases (IBM)

                     IBM Almaden Research Center
                           650 Harry Road
                       San Jose, CA 95120-6099

                            Excerpts from
                          RESEARCH CALENDAR
                          July 6 - 10, 1987

EFFICIENT SUPPORT FOR DERIVED OBJECTS IN RELATIONAL DATABASE SYSTEMS
E. Hanson, University of California at Berkeley

Comp. Sci. Sem.    Tues., July 7    10:00 A.M.    Room:  B3-247

Recently, an incremental algorithm known as Algebraic View Maintenance
(AVM) was proposed for maintaining materialized copies of views.
Another incremental view maintenance algorithm called Rete View
Maintenance (RVM) is presented in this talk.  RVM is based on the Rete
network, a type of discrimination network used to support efficient
forward chaining rule interpreters in expert systems shells.  RVM is
known as a statically optimized view maintenance algorithm because the
execution plan for maintaining views is compiled in advance into the
Rete network.  In contrast, AVM is dynamically optimized since an
execution plan for maintaining a view is found after each base
relation update that affects the view.  A statically optimized
variation of AVM is also presented.  Using algorithms for view
maintenance as a starting point, a collection of methods is proposed
to allow other kinds of derived objects to be maintained.  These
include aggregates, database procedures, and views and procedures
containing aggregates.
Host:  S. Finkelstein
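
  [The shared idea behind both AVM and RVM - maintaining a materialized
  view by propagating only the *delta* of a base-relation update, rather
  than recomputing the view - can be illustrated with a small sketch.
  This is the editor's own toy example, assuming a natural join on a
  shared key; it is not Hanson's algorithm, and the class and function
  names are hypothetical.

  ```python
  # Toy incremental view maintenance: a materialized join view is
  # updated by joining only the inserted tuple against the other base
  # relation, instead of recomputing the whole join on every update.

  def join(r, s):
      """Natural join on the first column (a shared key)."""
      return {(k, a, b) for (k, a) in r for (k2, b) in s if k == k2}

  class MaterializedJoin:
      def __init__(self, r, s):
          self.r, self.s = set(r), set(s)
          self.view = join(self.r, self.s)   # computed once, then maintained

      def insert_r(self, tup):
          self.r.add(tup)
          self.view |= join({tup}, self.s)   # delta join: only the new tuple

      def insert_s(self, tup):
          self.s.add(tup)
          self.view |= join(self.r, {tup})
  ```

  The delta join touches one tuple instead of |R| tuples per update,
  which is the payoff both algorithms in the abstract are after; they
  differ in whether that delta plan is compiled in advance (RVM) or
  chosen per update (AVM).  -- Ed.]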


EPIDEMIC ALGORITHMS FOR REPLICATED DATABASE MAINTENANCE
A. Demers, Xerox Palo Alto Research Center

Bay Area Syst. Symp.  Fri., July 10 11:15 A.M. Room: Front Aud.

When a database is replicated at many sites, maintaining mutual
consistency among the sites in the face of updates is a significant
problem.  This paper describes several randomized algorithms for
distributing updates and driving the replicas toward consistency.  The
algorithms are very simple and require few guarantees from the
underlying communication system, yet they ensure that the effect of
every update is eventually reflected in all replicas.  The cost and
performance of the algorithms are tuned by choosing appropriate
distributions in the randomization step.  The algorithms are closely
analogous to epidemics, and the epidemiology literature aids in
understanding their behavior.  One of the algorithms has been
implemented in the Clearinghouse servers of the Xerox Corporate
Internet, solving long-standing problems of high traffic and database
inconsistency.
Host:  L.-F. Cabrera
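
  [The flavor of the "epidemic" update distribution described above can
  be conveyed with a short simulation.  This is the editor's own toy
  sketch of push-pull anti-entropy gossip, not the Clearinghouse
  implementation; the function name and update labels are hypothetical.

  ```python
  import random

  # Anti-entropy gossip: each round, every site merges state with one
  # randomly chosen partner.  An update entered at a single site thus
  # spreads to all replicas with no central coordinator, like an
  # epidemic, requiring almost nothing from the communication layer.

  def anti_entropy_rounds(n_sites, seed=0):
      """Return how many gossip rounds until all replicas agree."""
      rng = random.Random(seed)
      replicas = [set() for _ in range(n_sites)]
      replicas[0].add("update-1")            # the update arrives at site 0
      rounds = 0
      while any(rep != replicas[0] for rep in replicas):
          for i in range(n_sites):
              j = rng.randrange(n_sites)     # choose a random partner
              merged = replicas[i] | replicas[j]
              replicas[i], replicas[j] = set(merged), set(merged)  # push-pull
          rounds += 1
      return rounds
  ```

  Tuning the partner-selection distribution, as the abstract notes, is
  what trades traffic against convergence time.  -- Ed.]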


OBJECT-ORIENTED INTERFACES TO RELATIONAL DATABASES
R. Cattell, SUN Microsystems

Bay Area Syst. Symp.    Fri., July 10    1:30 P.M.    Room:  Front Aud.

Users of engineering workstations have requirements that traditional
database systems often do not address.  A number of research projects
have recently examined addressing engineering requirements with
additional semantics that an "object-oriented" database system can
provide over a relational database system.  My talk will focus on two
topics that have received relatively little attention: (1)
object-oriented end-user interfaces to databases exploiting the
capabilities of an engineering workstation, and (2) the *performance*
that these tools and engineering applications require from a database
system, without which additional semantics are useless.  Examples will
be provided from some of our own database and user interface work
combining features of object-oriented and relational database models.
Users may graphically view and edit a database schema, view database
objects that span multiple relations, browse through databases by
pointing with a mouse, and display specialized objects such as
documents and images stored in a database.  To quantify performance,
we have proposed a set of benchmarks that measure the simple
object-oriented operations that we believe engineering applications
most typically execute.  I will discuss the results of performing
these benchmarks on several relational database systems, and the
implications for database system architecture for engineering
applications.
Host:  L.-F. Cabrera

For further information on individual talks, please contact the host
listed above.

------------------------------

Date: Fri 3 Jul 87 14:17:35-PDT
From: Charlie Koo <KOO@Sushi.Stanford.EDU>
Subject: Seminar - A Model for Distributed Planning (SU)


                A Model for Distributed Performance --

            Synchronizing Plans among Intelligent Agents
                         via Communication

                          Charles C. Koo

                        July 8, Wednesday
                         9:00am - 10:00am
                            Room 352
                        Margaret Jacks Hall


     In a society where a group of agents cooperate to achieve certain goals,
agents perform their tasks based on certain plans.  Some tasks may interact
with tasks done by other agents.  One way to coordinate those tasks is to let
a master planner generate a plan to begin with, and distribute tasks to
individual agents accordingly.  However, there are two difficulties
with this approach, given that agents are resource-limited.  First, the
master planner needs to know all the expertise that each agent has.  The
amount of knowledge sharply increases with the number of specialties.
Second, the centralized planning process takes a longer turn-around time than
if each agent plans for itself, which leaves much of the available computing
capacity unutilized.  Thus, distributed planning is desirable.

    In this presentation, I will describe a model for synchronizing and
monitoring plans autonomously made by intelligent agents via communication.
The model suggests a planning algorithm that allows agents to plan in
parallel and then synchronize their plans via a commitment-based
communication vehicle.   Representation as well as reasoning issues in the
distributed environment will be addressed.

Communication plays an integral role in planning for synchronization
purposes.  The communication vehicle includes a minimal set of protocols
that enables the synchronization, a set  of communication operators and a
set of commitment tracking operators.  The  tracking operators provide means
to monitor the progress of plan execution, to prevent delays, and to modify
plans with less effort when delays happen.  A deadlock detection scheme will
also be described.

------------------------------

Date: Mon, 6 Jul 87 08:48:17 PDT
From: teresa@ernie.berkeley.edu (teresa diaz)
Subject: Seminar - Pengi: A Theory of Activity (UCB)


                      Special Seminar
                         Phil Agre

             Artificial Intelligence Laboratory
            Massachusetts Institute of Technology
                 Pengi:  An Implementation
                            of a
                     Theory of Activity
                      2:00 - 4:00 p.m.
                   Friday, July 10, 1987
                      1011 Evans Hall

     AI has typically sought  to  understand  the  organized
nature  of  human activity in terms of the making and execu-
tion of plans.  There can be no doubt that people use plans.
But  before  and beneath any plan-use is a continual process
of moment-to-moment  improvisation.   An  improvising  agent
might  use  a plan as one of its resources, just as it might
use a map, the materials on a kitchen counter, or  a  string
tied round its finger.  David Chapman and I have been study-
ing the organization of the most common  sort  of  activity,
the   everyday,   ordinary,  routine,  familiar,  practiced,
unproblematic activity typified by  activities  like  making
breakfast,  driving  to  work,  and stuffing envelopes.  Our
theory describes the central role of improvisation  and  the
inherent  orderliness,  coherence,  and  laws  of  change of
improvised activity.  The organization of  everyday  routine
activity  makes strong suggestions about the organization of
the human cognitive architecture.  In  particular,  one  can
get  along  remarkably well with a peripheral system much as
described by Marr and Ullman and a central  system  made  of
combinational  logic.   Chapman has built a system with such
an architecture.  Called Pengi, it plays a commercial  video
game  called  Pengo, in which a player controls a penguin to
defend itself against ornery and  unpredictable  bees.   The
game  requires  both moderately complex tactics and constant
attention to opportunities and contingencies.  I  will  out-
line our theory of activity, describe the Pengi program, and
indicate the directions of ongoing further research.

______________________________________________________________________

   This information is also kept in usr/public/seminars.

------------------------------

From: Peter Karp <KARP@SUMEX-AIM.STANFORD.EDU>
Subject: Seminar - Learning Conjunctive Concepts (SU)

                   [Forwarded from the AFLB list.]

David Haussler from UC Santa Cruz will be giving a talk at the GRAIL
learning seminar this Thursday 7/9 at the Welch Road Conference room at
1:15.  This is Room A1110 in Building A at 701 Welch Road, across from
the Stanford Barn.


                Learning Conjunctive Concepts in Structural Domains

David Haussler
Department of Computer Science,
University of California, Santa Cruz, CA 95064

We study the problem of inductively learning conjunctive concepts from
examples on structural domains like the blocks world.  This class of
concepts is formally defined and it is shown that even when each example
(positive or negative) is a two-object scene it is NP-complete to
determine if there is any consistent concept in this class.  We
demonstrate how this result affects the feasibility of Mitchell's
version space approach and how it shows that it is unlikely that this
class of concepts is polynomially learnable from random examples in the
sense of Valiant. On the other hand, we show that for any fixed number
of objects per scene this class is polynomially learnable from random
examples if
        (1) we allow a larger hypothesis space, or
        (2) we answer certain types of queries in addition to providing
        random examples.

------------------------------

Date: Mon, 6 Jul 87 14:43:14 PDT
From: Ken Kahn <Kahn.pa@Xerox.COM>
Reply-to: Kahn.pa@Xerox.COM
Subject: Conference - Concurrent Logic Programming, Open Systems
         Programming

We are pleased to announce that Xerox PARC with support from AAAI will
host a workshop on concurrent logic programming, meta-programming, and
open systems programming on September 8 and 9 (the first business days
after the Fourth IEEE Symposium on Logic Programming in San Francisco).
Participation is by invitation only.  The purpose of the workshop is to
promote informal scientific interchanges between members of various
laboratories doing research centered around concurrent logic programming
languages such as Guarded Horn Clauses and KL1 at ICOT, Parlog at Imperial
College, FCP at Weizmann Institute of Science, and Vulcan at Xerox PARC.
Other topics of interest include meta-programming to support programming
abstractions and issues related to programming large open distributed
systems. The format of the workshop will consist of informal
presentations and discussions of work in progress.  Presentations given
at the Fourth SLP are not to be repeated.  There will be several panel
discussions on topics such as the different proposals for dataflow
synchronization in these languages, the role of meta-programming in
supporting abstractions, and why it is that there are several independent
implementation efforts for different dialects of concurrent logic
programming languages (or are they committed choice programming
languages or open systems programming languages?).

Live demonstrations of software are encouraged.  Available computers
include Xerox computers running Xerox Common Lisp, Vaxes under Unix
4.2BSD, Sun 3's, IBM PC's, and Macintoshes (SE and II).

We will not be covering participants' transportation or living expenses.
Lunches will be provided.  We are expecting between 20 and 40
participants.  If you are interested in coming, or know someone who
might be, please send a letter or electronic message indicating what you
would like to talk about or demo to:

Kenneth Kahn
Xerox PARC
3333 Coyote Hill Road
Palo Alto, CA 94304
(415) 494-4390

or

ArpaNet: Kahn.pa@Xerox.Com


Here's the preliminary list of invitees:

Ehud Shapiro, Weizmann Institute
Shmuel Klinger, Weizmann Institute
Vijay Saraswat, CMU
Leon Sterling, Case Western Reserve
Keith Clark, Imperial College
Steve Gregory, Imperial College
Andrew Davison, Imperial College
M. Huntbach, Imperial College
Mitsuhiro Kishimoto, Fujitsu
Y. Takayama, ICOT
A. Okumura, ICOT
Y. Kimura, ICOT
H. Seki, ICOT
T. Chikayama, ICOT
Kazonuri Ueda, ICOT
K. Furukawa, ICOT
Fernando Pereira, SRI
Tony Kusalik, Univ. of Saskatchewan
Leon Alkalaj, UCLA
Richard O'Keefe, Quintus Computer Systems
Bill Kornfeld
Lee Naish, Melbourne University
G. Levi, University of Pisa
Walter Wilson, IBM
M. Maher, IBM
Carl Hewitt, MIT
Will Clinger, Tektronix
Mark Miller, Xerox PARC
Danny Bobrow, Xerox PARC
Curtis Abbott, Xerox PARC
Ken Kahn, Xerox PARC
Eric Tribble, Xerox PARC

------------------------------

Date: Wed, 8 Jul 87  21:34:05 CDT
From: Dan Cerys <cerys@XX.LCS.MIT.EDU>
Subject: Conference - NEXUS meeting at AAAI

I don't recall seeing the following on these lists, but this meeting is
probably interesting to those interested in the TI Explorer.  Please
post if it will appear before July 15.



        The National Explorer Users' Society will meet during AAAI-87 in
Conference Room A of the Conference Center House at Seattle Center in
Seattle, Washington on Wednesday, the fifteenth of July from three
o'clock until six o'clock.

        3:00  Welcome, Introductions, and Organization
                Rich Acuff, Stanford University
        3:20  Explorer II
                Chuck Corley, Strategic Systems Engineering, TI
        3:30  New Customer Support Offerings
                Phil Campbell, Technical Support Center, TI
        3:35  Release 3.0 Summary
                Joyce Statz, User Interface Branch, TI
        3:45  TGC, System Training
                Jim Mynatt, AI Technical Consultant, TI
        3:55  Networking, Namespace, and Generic Network Interface
                Roger Frech, Networking Branch, TI
        4:05  New Compiler Features
                Merrill Cornish, member, Group Technical Staff, TI
        4:15  Future Directions
                Henry Carr, Explorer Software Development, TI
        4:25  Educational Marketing Survey
                John Alden, Educational Marketing, TI
        4:30  Bi-Directional Question and Answer
        5:10  Break into groups to talk about
                Explorer II with Chuck Corley
                LX/Multiprocessing with Kari Karhi
                Networking with Roger Frech
                TI Prolog with Dan Cerys

        NEXUS, the National Explorer Users' Society, met last year as
the Explorer Users' Group at AAAI-86.  The purpose of the group is to
share technical information about the Explorer.  There are no dues or
membership fees.  Membership is open to all Explorer users.  To join,
send your name, address, phone number, and net address to either of the
following addresses:

        Rich Acuff
        Stanford University
        251 Medical School Office Building
        Stanford CA 94305
        acuff@sumex-aim.stanford.edu

        Glenda S. McKinney M/S 2201
        Texas Instruments
        P. O. Box 2909
        Austin TX 78769
        mckinney%dsg%ti-csl@csnet-relay

        Conference Room A is on the second floor of the Conference
Center House, in the northeast corner.  The Conference Center House is
across the plaza from the Coliseum.

------------------------------

Date: Fri, 10 Jul 87 11:01:21 EDT
From: laird@caen.engin.umich.edu (John Laird)
Subject: Conference - Fifth International Machine Learning Conference


                                 CALL FOR PAPERS
                 FIFTH INTERNATIONAL CONFERENCE ON MACHINE LEARNING
                               Ann Arbor, Michigan
                                June 12-15, 1988


 The Fifth International Conference on Machine Learning will be held at the
 University of Michigan, Ann Arbor, during June 12-15, 1988.  The goal of the
 conference is to bring together researchers from all areas of machine learning.
 The conference will have open attendance and registration fees.


                                  REVIEW CRITERIA

 In order to ensure high quality papers, each submission will be reviewed
 by two members of the program committee and judged on clarity, significance,
 and originality.  The best papers will be published in the proceedings, and
 their authors will be invited to give a talk on their work or describe it at
 a poster session.  All submissions should contain new work, new results, or
 major extensions to prior work.  Summaries and overviews are discouraged.

 The ideal paper will present a clear description of the learning task being
 addressed and the proposed solution to that problem.  If the paper describes
 a running system, it should explain that system's representation of inputs
 and outputs, its performance component, and its learning methods.  It should
 include a detailed example, as well as relate the work to earlier research.
 Most important, all papers should include some evaluation of the work in the
 form of substantive results.  Papers are not required to take this form, but
 authors are strongly encouraged to follow this format.


                                 SUBMISSION OF PAPERS

 Each paper must have a cover sheet with the title, authors' names, primary
 author's address and telephone number, and an abstract of about 200 words. The
 cover page should also give three keywords that specify the problem area,
 general approach, and evaluation criteria.  Some examples of each are:

 PROBLEM AREA               GENERAL APPROACH         EVALUATION CRITERIA
 Concept learning           Genetic algorithms       Empirical evaluation
 Learning and planning      Empirical methods        Theoretical analysis
 Language learning          Explanation-based        Psychological validity
 Learning and design        Connectionist
 Machine discovery          Analogical reasoning

 The body of the paper must not exceed 13 double-spaced pages in twelve point
 font, including figures but excluding references.  Authors should send four
 copies of their papers to:

        Machine Learning Conference
        Cognitive Science and Machine Intelligence Laboratory
        The University of Michigan
        904 Monroe Street
        Ann Arbor, MI 48109-1234
        Internet: ml88@csmil.umich.edu

 The deadline for submission of papers is January 15, 1988.  Authors will be
 notified of acceptance by March 1, 1988.  Final camera-ready copies of the
 papers will be due April 1, 1988.

Organizing Committee
 J. E. Laird (chairman)                      University of Michigan
 J. H. Holland, S. L. Lytinen, G. M. Olson   University of Michigan
 J. G. Carbonell, T. M. Mitchell             Carnegie-Mellon University
 P. Langley                                  University of California, Irvine
 R. S. Michalski                             University of Illinois

------------------------------

Date: Fri, 10 Jul 87 23:40:17 EDT
From: sriram@ATHENA.MIT.EDU
Subject: Conference - Second International Conference on AI in
         Engineering


                  SECOND INTERNATIONAL CONFERENCE ON
                          AI IN ENGINEERING

The  program  agenda  for the above conference, which is to be held in
Boston from August 3-8, 1987, can be obtained from Ms Sandra  Elliott,
Computational  Mechanics  Inst.,  25 Bridge Street, Billerica, MA 01821,
USA (Tel. No: (617)667 5841). I have a copy of the agenda  online.  If
you are interested in getting a copy, send me mail.

Some program highlights:
        Keynote speaker: Dr. Randy Davis, MIT, USA
        Banquet speaker: Dr. Mark Stefik, Xerox,USA
        Invited speakers:
                Dr. John Gero, Univ. of Sydney, Australia
                Dr. Jean-Claude Latombe, ITMI, France
                                         (Currently at Stanford Univ.)
                Dr. B. Chandrasekaran, OSU, USA
        Panels on:
                AI in Mechanical Engineering: The Commercial Reality
                AI in Electrical Engineering: The Commercial Reality
                AI in Engineering Design: The Research Issues
                AI in Engineering: The Engineer's Perspective
        Over 80 papers dealing with various applications of knowledge-based
        systems, robotics, and natural language processing will be presented.

Sriram@athena.mit.edu

------------------------------

End of AIList Digest
********************
11-Jul-87 22:55:04-PDT,19384;000000000000
Mail-From: LAWS created at 11-Jul-87 22:46:52
Date: Sat 11 Jul 1987 22:43-PDT
From: AIList Moderator Kenneth Laws <AIList-REQUEST@STRIPE.SRI.COM>
Reply-to: AIList@STRIPE.SRI.COM
US-Mail: SRI Int., 333 Ravenswood Ave., Menlo Park, CA  94025
Phone: (415) 859-6467
Subject: AIList Digest   V5 #175
To: AIList@STRIPE.SRI.COM


AIList Digest            Sunday, 12 Jul 1987      Volume 5 : Issue 175

Today's Topics:
  Queries - ANIMAL in BASIC & XLisp & Monkey and Bananas Benchmark &
    Conference on Production Planning and Control & Neural Networks &
    GLISP,
  Tools - Real Time Expert Systems,
  Programming - Software Reuse,
  Law - Liability in Expert Systems,
  Expert Systems - Plausible Reasoning

----------------------------------------------------------------------

Date: 9 Jul 87 03:04:14 GMT
From: David L. Brauer <nosc!humu!dbrauer@sdcsvax.ucsd.edu>
Subject: ANIMAL in BASIC ???

Somewhere in the darkest reaches of my memory I recall seeing a listing
of the game ANIMAL in BASIC.  It's that old standby introduction to rule-based
reasoning that tries to deduce what animal you have in mind by asking
questions like "Does it have feathers?", "Does it have hooves?" etc.
The problem is that I described this program to my wife and she now wants
to program it on an Apple IIc for her elementary school students.  I believe
I saw the listing in an "Intro to AI" article in some magazine but I'm not
sure.  I would prefer not to have to help her program the thing from
scratch so any pointers would be greatly appreciated.
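
[In case it helps while the original listing is being tracked down: the classic ANIMAL program is just a binary tree of yes/no questions with animal names at the leaves, and it "learns" by splicing in a new distinguishing question whenever it guesses wrong.  A minimal sketch follows, in Python rather than Applesoft BASIC, with invented names; it is an illustration of the idea, not the magazine listing. -- Ed.]

```python
# Sketch of the classic ANIMAL game: a binary tree of yes/no questions
# with animal names at the leaves.  A wrong guess teaches the program a
# new animal by splicing in a distinguishing question.

class Node:
    def __init__(self, text, yes=None, no=None):
        self.text = text          # a question, or an animal name at a leaf
        self.yes = yes
        self.no = no

    def is_leaf(self):
        return self.yes is None and self.no is None

def play(root, ask):
    """One round.  `ask` maps a question string to True/False."""
    node = root
    while not node.is_leaf():
        node = node.yes if ask(node.text) else node.no
    if ask("Is it a %s?" % node.text):
        return node.text          # guessed right
    # Guessed wrong: learn the player's animal by growing the tree.
    animal = input("I give up.  What was it? ")
    question = input("What question distinguishes a %s from a %s? "
                     % (animal, node.text))
    node.yes, node.no = Node(animal), Node(node.text)
    node.text = question
    return None

# A two-animal starting tree:
root = Node("Does it have feathers?", Node("duck"), Node("dog"))
```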

                        Thanks,

                        David C. Brauer
                        MilNet: dbrauer@NOSC.Mil

------------------------------

Date: Thu 9 Jul 87 08:51:29-PDT
From: BEASLEY@EDWARDS-2060.ARPA
Subject: clarification

I would like to clarify my request for information about XLISP.  The particular
version I have is XLISP Experimental Object-oriented Language Version 1.6
by David M. Betz for use on the IBM PC and others.  Any information would be
greatly appreciated.  By the way, I have the article from BYTE magazine.
The examples didn't work!!!!
Please send the info to beasley@edwards-2060.arpa.

joe

------------------------------

Date: Fri, 10 Jul 87 10:20:10 SET
From: "Adlassnig, Peter" <ADLASSNI%AWIIMC11.BITNET@wiscvm.wisc.edu>
Subject: Monkey and Bananas Benchmark

RE: Inquiry for Production Systems

Since we finished our PAMELA (PAttern Matching Expertsystem Language),
we have become interested in the Monkey and Bananas benchmark (NASA MEMO
FM7(86-51)).  How can we obtain the source code?

In addition, we would be interested in YAPS (Yet Another Production
System) running under VAX/UNIX.  Is there any information available?

I have no direct access to the ARPANET. Please return mails to my
friend's email address:
                       adlassni at awiimc11.bitnet

my postal address is: Franz Barachini
                      ALCATEL-ELIN Research Center
                      Floridusgasse 50
                      A-1210 Vienna
                      Austria

------------------------------

Date: 10 Jul 87 13:46:58 GMT
From: dhj@aegir.dmt.oz (Dennis Jarvis)
Subject: conference on production planning and control

In a (not so) recent posting to comp.ai.digest, it was announced that a
conference entitled "Expert Systems and the Leading Edge in Production
Planning and Control" would be held from May 10-13 in Charleston, South
Carolina. I would like to obtain a copy of the proceedings of that
conference - any assistance in this regard would be greatly appreciated.

________________________________________________________________________
Dennis Jarvis, CSIRO, PO Box 4, Woodville, S.A. 5011, Australia.

                        UUCP:  {decvax,pesnta,vax135}!mulga!aegir.dmt.oz!dhj
PHONE: +61 8 268 0156   ARPA:  dhj%aegir.dmt.oz!dhj@seismo.arpa
                        CSNET: dhj@aegir.dmt.oz

------------------------------

Date: Fri, 10 Jul 87 11:04:59 +0200
From: mcvax!idefix.laas.fr!helder@seismo.CSS.GOV (Helder Araujo)
Subject: Neural Networks


    I am just starting work on a vision system, for which I am
considering several different architectures.  I am interested in studying
the use of a neural network in such a system.  My problem is that I lack
information on neural networks.  I would be grateful if anyone could
suggest a bibliography and references on neural networks.  As I am not
a regular reader of AIList I would prefer to receive this information
directly.  My address:

      mcvax!inria!lasso!magnon!helder

  I will select the information and put it on AIlist.

      Helder Araujo
      LAAS
      mcvax!inria!lasso!magnon!helder
      7, ave. du Colonel-Roche
      31077 Toulouse
      FRANCE


  [I have forwarded this to the neuron%ti-csl.csnet@relay.cs.net
  neural-network list.  -- KIL]

------------------------------

Date: 10 Jul 87 14:45:41 GMT
From: uwmcsd1!leah!itsgw!nysernic!b.nyser.net!weltyc@unix.macc.wisc.edu
      (Christopher A. Welty)
Subject: Looking for GLISP


        I am looking for some references to G-LISP, something written
by Gordon Novak at Stanford.  I don't actually need G-LISP,
but I would like to see the papers or any other references.  Any help
would be much appreciated.  With enough interest I'll post to the
list.





Christopher Welty - Asst. Director, RPI CS Labs
weltyc@cs.rpi.edu       ...!seismo!rpics!weltyc

------------------------------

Date: Fri, 10 Jul 87 01:22:56 gmt
From: Aaron Sloman <aarons%cvaxa.sussex.ac.uk@Cs.Ucl.AC.UK>
Subject: Real Time expert systems

Hi,

I saw your enquiry about real time expert systems. A UK firm called
Systems Designers have used our Poplog system to implement a prototype
system called RESCU which can control production of detergent at ICI.

This was one of the UK Alvey Programme's "community club" projects,
i.e. a number of industrial firms potentially able to benefit from
the development helped to fund the prototype demonstration system.

They were so pleased with the result that the development work
is continuing.

They used Poplog on a VAX-730 connected to a variety of monitoring
devices, displays, etc.

The system was written in POP-11 extended by a task specific rule
language for which they implemented an incremental compiler using
the POP-11 compiler-building tools.

There have been various relatively short reports on RESCU in newspapers, etc.,
as well as conference presentations, but I have not seen a full write-up.

If you want to know more about RESCU write to:
    Mike Dulieu,
    Systems Designers Plc,
    Pembroke House,
    Pembroke Broadway
    Camberley, Surrey, GU15 3XD
    England
                                    Phone +44 276 686200

I hope this information is of some use.

Best wishes
Aaron Sloman,
U of Sussex, School of Cognitive Sciences, Brighton, BN1 9QN, England
    UUCP:     ...mcvax!ukc!cvaxa!aarons
    ARPANET : aarons%uk.ac.sussex.cvaxa@cs.ucl.ac.uk
    JANET     aarons@cvaxa.sussex.ac.uk

PS
Robin Popplestone at the University of Massachusetts at Amherst
(pop@edu.umass.cs) is taking over academic distribution of Poplog in the
USA.  He may have some information about RESCU.  He'll be at the Amherst
and Sun stands at the AAAI conference.

------------------------------

Date: 9 Jul 87 03:10:00 GMT
From: johnson@p.cs.uiuc.edu
Subject: Re: Software Reuse (short title)


Object-oriented programming languages like Smalltalk provide a great
deal of software reuse.  There seem to be several reasons for this.
One is that the late bound procedure calls (i.e. message sending)
provide polymorphism, so it is easier to write generic algorithms.
Late binding encourages the use of abstract interfaces, since the
interface to an object is the set of messages it accepts.  Another
reason is that class inheritance lets the programmer take some code
that is almost right and convert it without destroying the original,
i.e. it permits "programming by difference".  These two features
combine to encourage the creation of "application frameworks" or
"application toolkits", which are sets of objects and, more importantly,
interfaces that let the application developer quickly build an application
by mixing and matching objects from existing classes.
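
[The two mechanisms described above can be shown compactly.  The sketch below is in Python for brevity rather than Smalltalk, and the class names are invented; it is meant only to transpose the idea, not to reproduce any Smalltalk class library. -- Ed.]

```python
# The two reuse mechanisms described above, transposed from Smalltalk:
# late binding (any object answering `area` works) and "programming by
# difference" (a subclass changes only what differs, leaving the
# original class intact).

class Rectangle:
    def __init__(self, w, h):
        self.w, self.h = w, h

    def area(self):
        return self.w * self.h

class Square(Rectangle):
    # Programming by difference: only the constructor changes.
    def __init__(self, side):
        super().__init__(side, side)

def total_area(shapes):
    # A generic algorithm: late binding means this works for any object
    # that responds to the `area` message, including classes written later.
    return sum(s.area() for s in shapes)

print(total_area([Rectangle(2, 3), Square(4)]))  # prints 22
```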

There are a number of ways that an abstract algorithm can be expressed
in these languages.  An abstract sort or summation algorithm can be
built just using a polymorphic procedure.  Abstract "process all" and
reduction algorithms are provided by inheritance in the Collection
class hierarchy of Smalltalk, and a toolkit can be used to describe
the abstract design of a browser or editor from a set of abstract
data types, a display manager, and a dialog control component
(i.e. the Model/View/Controller system).

The Smalltalk programming environment also provides tools to help
the user find code and to figure out what it does.  While these tools
(and the language) could stand some improvement, they already provide
a lot of what is needed for code reuse.  And they don't use A.I.!

------------------------------

Date: Fri, 10 Jul 87 07:53:43 PDT
From: George Cross <cross%cs1.wsu.edu@RELAY.CS.NET>
Subject: Re: Liability in Expert Systems


Hi,
I don't know about any pending cases, but readers interested in this subject
should check the article by Christopher J. Gill, High Technology Law Journal,
Vol 1, #2, P483-520, Fall 1986 entitled "Medical Expert Systems: Grappling
with Issues of Liability."  An important legal issue is
whether the use of a medical expert system constitutes a product or a service.
If an expert system is a product, strict liability applies, whereas if it is a
service then a negligence standard applies.  Perhaps some lawyer reading
Risks or AILIST could read this article and summarize it for us.
It is not easy going.

 ---- George

 - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
 George R. Cross                cross@cs1.wsu.edu
 Computer Science Department    ...!ucbvax!ucdavis!egg-id!ui3!wsucshp!cs1!cross
 Washington State University    faccross@wsuvm1.BITNET
 Pullman, WA      99164-1210    Phone: 509-335-6319 or 509-335-6636

------------------------------

Date: Wed, 8 Jul 87 09:56 EDT
From: DON%atc.bendix.com@RELAY.CS.NET
Subject: Plausibility reasoning

>From: Jenny <ISCLIMEL%NUSVM.BITNET@wiscvm.wisc.edu>
>Subject: so what about plausible reasoning ?

>As I read articles on plausible reasoning in expert systems, I come to the
>conclusion that experts themselves do not exactly work with numbers as they
>solve problems.

You are correct in several senses.  One, the psychology literature has
shown time and time again that human belief revision does not conform to
Bayesian evidence accumulation (e.g., Edwards, 1968; Fischhoff &
Beyth-Marom, 1983; Robinson & Hastie, 1985; Schum, Du Charme, & DePitts,
1973; Slovic & Lichtenstein, 1971).  Two, it does not appear that
humans literally use any of the methods.

However, humans do appear to be weighing alternatives.  Although,
for a period, it may seem that they are performing sequential
hypothesis testing, in stochastic domains with non-trivial uncertainty
humans gather support for a large set of hypotheses at the same time.
They may appear to only gather support for their "favorite"; however, if
asked for an ordering over the alternatives or if asked how much they
believe the alternatives, it is obvious that they have allowed the
evidence to change their beliefs about the non-favorite hypotheses
(e.g., Robinson & Hastie, 1985).
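
To make "allowing the evidence to change beliefs about the non-favorite hypotheses" concrete, here is a small sketch of Bayesian evidence accumulation over a hypothesis set.  The hypothesis and likelihood values are invented purely for illustration:

```python
# Bayesian evidence accumulation over a hypothesis set: each piece of
# evidence re-weights every hypothesis, not just the current favorite.
# The likelihood values below are invented purely for illustration.

def update(priors, likelihoods):
    """priors: {H: P(H)}; likelihoods: {H: P(E|H)} -> posteriors P(H|E)."""
    unnorm = {h: priors[h] * likelihoods[h] for h in priors}
    z = sum(unnorm.values())
    return {h: p / z for h, p in unnorm.items()}

beliefs = {"flu": 1/3, "cold": 1/3, "allergy": 1/3}
# Evidence: fever, which favors flu but is also compatible with a cold.
beliefs = update(beliefs, {"flu": 0.9, "cold": 0.4, "allergy": 0.1})
# The favorite gains most, but the non-favorites' beliefs changed too.
```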

The question becomes, "what are they doing?"  For the sake of argument,
let's take your assertion and say they are not explicitly manipulating
numbers -- it does seem absurd that the automobile mechanic who can't
add simple integers without a calculator could possibly perform the
complex aggregations necessary to use numbers.

Another possibility is that they are performing a type of non-monotonic
logic with the choice of assumptions and generation and testing of
possible worlds. This possibility suggests that, if the human is not
using numbers at any level, the human's choice of one assumption over
another uses a simple set of context sensitive rules.  The only time the
human should change assumptions (generate an alternative path or
possible world) is if the current assumptions are defeated or if some
magical attentional process causes the human to arbitrarily try another
path.  When choosing another path, there should be a fixed set of
rules guiding the choice of alternative -- there can be no idea of
"this looks a little stronger than that" because such comparisons
require a comparison metric which is not built into non-monotonic
logics.
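
A toy version of the rule-driven choice just described might look as follows; it is only a sketch of the behaviour (a fixed preference order, no strength comparisons), not a real truth-maintenance system, and the fault names are invented:

```python
# Toy version of the rule-driven choice described above: an assumption
# stands until the evidence defeats it, and a fixed preference order --
# not a numeric strength comparison -- picks the next path to try.
# The fault names are invented for illustration.

def choose(assumptions, defeated):
    """Return the first assumption in fixed preference order that the
    evidence so far has not defeated; None if every path is defeated."""
    for a in assumptions:
        if a not in defeated:
            return a
    return None

paths = ["bad carburetor", "bad ignition", "bad fuel pump"]
defeated = set()
first = choose(paths, defeated)     # the preferred assumption
defeated.add(first)                 # evidence defeats that assumption
second = choose(paths, defeated)    # fall back to the next rule in order
```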

The psychological research on human search strategies (especially for
games such as chess) suggests that humans often abandon one search path
to test another which looks like it might be as strong or stronger and
then return to the original path.  This return to the original path
leads to a rejection of the hypothesis that humans maintain a set of
assumptions until evidence refutes those assumptions.  By my previous
argument, then, if non-monotonic logics model human decision making, the
humans must be choosing to change path generation based on an
attentional mechanism.  If numbers are not involved, then the
attentional mechanism is probably rule-driven.

Of course, I've laid out a straw man.  I've said it's either numbers
or rules; however, there are probably many other possibilities.
The most likely possibility is an analog process something akin to
comparisons of weights.  If we were to model this process in a computer,
we would use numbers; so, we're back to numbers.  The trouble with
just using numbers, of course, is determining how to combine them
under different circumstances and how to interpret them.  Plausibility
reasoning has been used because it, at least, suggests methods for
both of these processes.  Something, even an approximation, which
has validity at some level, is better than nothing.

Rather than turn this into a thesis, let's go on to your next point.

>And many of them are not willing to commit themselves into
>specifying a figure to signify their belief in a rule.

Hum, this sounds like something from Buchanan and Shortliffe.  Let's
think about the implications of this argument.  You're saying, if
humans find it difficult to generate numbers to represent their degrees
of belief, then numbers must be ineffective.  Perhaps even at a
higher-level, if humans find some piece of knowledge or knowledge
artifact difficult to specify, then it probably is ineffective.
What evidence do we have for these claims?  What are the implications
of these claims?  From a personal standpoint, I find any knowledge,
beyond the trivial, is difficult to specify in some external formalism
(including writing, rules, and probabilities).  It seems unlikely
that we will ever generate external formalisms which allow painless
knowledge transfer.  Does that imply that knowledge transfer is
hopeless?  Let's hope not because that is the modus operandi of the
human species.  Granted, it will not be perfect, it will be painful,
it will take time, but does that imply that it is worthless?

We "know" that human experts have knowledge which is effective.
There is growing evidence that purely logical formalisms for
representing this knowledge will not work for all problem domains
due to the stochastic nature of the domains or the incomplete
understanding of the domain.  Does this mean that automated problem
solving must be limited to non-stochastic domains in which there
is a full and complete understanding of the causal relations and
elements?

I fear that I have left the primary argument which I wanted to use in
response to your statement.  I looked at statements such as these and
asked myself whether "comfort" was a legitimate metric for determining
the effectiveness of knowledge.  This question suggested an experiment
in which different sets of experts were asked to generate the
comfortable MYCIN confidence factors, the uncomfortable but definable
conditional and a priori probabilities needed for Bayes' theorem, and
the interesting, but perhaps not well-defined, probability bounds for
the typical Dempster-Shafer formulation.

I ran this experiment in which the experts were matched for knowledge in
the domain.  Each expert was asked to provide the parameters needed for
only one of the plausibility reasoning formalisms.  The results were
that, at a superficial level, humans can provide better MYCIN and
Dempster-Shafer parameters than Bayesian numbers.  However, when
considering how these numbers are used and how errors in the numbers
propagate through repeated applications of the aggregation formulae, the
Bayesian parameters led to more effective automated decision making than
the MYCIN parameters.  The performance of the Dempster-Shafer parameters
was not significantly better or worse than either system in this test.
(This research is documented in two papers -- ask me for references.)
The conclusion: the domain expert's comfort is not a legitimate
determinant of knowledge effectiveness.

>If one obtains two conclusions with numbers indicating some significance,
>say 75 % and 80 %, can one say that the conclusion with 80% significance is
>the correct conclusion and ignore the other one ?

There is a fundamental problem here.  If you are referring to
percentages, then the numbers cannot add to more than 100.  You are
correct in that a decision theory for plausibility reasoning must
take into account the accuracy of the parameters, and I believe that
some researchers have not considered this problem; however, most
plausibility reasoning researchers consider the decision theory to
be an important component which must be given strict attention.

>These numbers do not seem to mean much since they are just beliefs or
>probabilities.

I alluded to this problem earlier.  Actually, if they are probabilities,
they mean a lot.  Probabilities have clear operational and theoretical
definitions.  Some, for example Shafer (1981), have suggested that
the definition of probabilities can be extended to better account
for the subjective nature of the probabilities used in most decision
support systems.  The real problem is with the MYCIN style confidence
factors.  Although Heckerman (1986) has developed a formal interpretation
of confidence factors, the interpretation is ad hoc and it seems
difficult to imagine that domain experts use this interpretation.
The meaningfulness of the numbers is an important criterion for
determining the successful application of the numbers and is one
of the strongest arguments for using probabilities and perhaps for
using Bayes' theorem.
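
For readers without Buchanan and Shortliffe to hand, the certainty-factor combining function under discussion is, as best I recall it, the following.  Treat this sketch as a memory aid rather than an authoritative statement of the MYCIN formulation:

```python
# The standard EMYCIN certainty-factor combining function, reproduced
# here from memory for concreteness -- a sketch, not an authoritative
# statement of the MYCIN formulation.

def combine_cf(x, y):
    """Combine two certainty factors in [-1, 1] for the same hypothesis."""
    if x >= 0 and y >= 0:
        return x + y * (1 - x)         # two confirming rules
    if x <= 0 and y <= 0:
        return x + y * (1 + x)         # two disconfirming rules
    return (x + y) / (1 - min(abs(x), abs(y)))   # mixed evidence

# Two independent confirming rules strengthen belief:
cf = combine_cf(0.6, 0.5)   # greater than either input, at most 1
```

Note how the result stays in [-1, 1] but, unlike a probability, has no operational definition in terms of frequencies or bets, which is exactly the interpretive gap discussed above.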

Donald H. Mitchell              Don@atc.bendix.com
Bendix Aero. Tech. Ctr.         Don%atc.bendix.com@relay.cs.net
9140 Old Annapolis Rd.          (301)964-4156
Columbia, MD 21045

------------------------------

End of AIList Digest
********************
11-Jul-87 22:59:01-PDT,15211;000000000000
Mail-From: LAWS created at 11-Jul-87 22:57:01
Date: Sat 11 Jul 1987 22:53-PDT
From: AIList Moderator Kenneth Laws <AIList-REQUEST@STRIPE.SRI.COM>
Reply-to: AIList@STRIPE.SRI.COM
US-Mail: SRI Int., 333 Ravenswood Ave., Menlo Park, CA  94025
Phone: (415) 859-6467
Subject: AIList Digest   V5 #176
To: AIList@STRIPE.SRI.COM


AIList Digest            Sunday, 12 Jul 1987      Volume 5 : Issue 176

Today's Topics:
  Binding - Interactive Fiction List,
  Philosophy of Science - Is AI a Science

----------------------------------------------------------------------

Date: 8 Jul 87 20:16:34 GMT
From: engst@tcgould.tn.cornell.edu  (Adam C. Engst)
Subject: Re: Interactive fiction


    For those of you who cannot (or don't want to) read the Usenet or Bitnet
discussion groups on interactive fiction we are back in mailing list form.
If you want to send mail to the list, the address is . . . . . . . . . .
              >>>>  gamemasters@parcvax.xerox.com   <<<<
Just include "Interactive fiction" on the Subject line so the moderator can
separate it out from the adventure game discussion messages.  If you want to
add yourself to the mailing list (so you get digests every day or so) send a
request to  . . . . . . . .
              >>>>  gamemasters-request@parcvax.xerox.com  <<<<
and ask to be added.  You can also ask to be deleted from the list, ask for
archived mail, or report a mailer failure at the request address.  I will be
sending the messages that come from Bitnet and Usenet as well, so everyone
will have access to all the messages.  If anyone has any questions, just
email me at either of the below addresses and I'll try to help.  Thanks a
lot for the discussion up to now and I hope that it will improve even more
with the increased audience.

                                               Adam C. Engst

engst@tcgould.tn.cornell.edu
pv9y@cornella.bitnet

------------------------------

Date: 9 Jul 87 14:37 PDT
From: Tony Wilkie /DAC/  <TLW.MDC@OFFICE-1.ARPA>
Subject: Is AI a Science? A Pragmatic Test Offered!

I'm inclined to believe that Don Norman is right, and that AI is not a science;
which is okay, there being a number of perfectly good, self-respecting fields
of study out there that are not sciences.

Still, it's likely that sensitivities have been offended and a defense is
to be anticipated. In lieu of a more respectable and formal argument in defense
of AI being a science, I am prepared to steal from William James and proffer a
pragmatic test.  The rationale is as follows:

    1. Grant moneys are issued by various public and private agencies for the
support of research in both sciences and non-sciences.

    2. Issuing agencies are generally authorized to finance projects falling
within their scope of study only.

    3. These agencies have some criteria for determining what appropriate
projects are.

THEREFORE:

    4. Any projects funded by an agency as a science (e.g. NSF) are science
projects reflecting scientific work (except for method or instrumentation
projects).

The challenge, then, is to find any researcher working on an AI project funded
by a science-supportive agency.

  If only it were all this easy...

    Tony Wilkie <TLW.MDC@Office-1.ARPA>

------------------------------

Date: Fri, 10 Jul 87 10:34:39 n
From: Paul Davis <DAVIS%EMBL.BITNET@wiscvm.wisc.edu>
Subject: AI, science & Don Norman


Briefly - seems to me that most everyone (including DN himself) has
missed out on two key points. First, after Searle, there isn't only
*one* AI but two (Searle's strong and weak AI): the first is a suitable
target of DN's critique since its whole raison d'etre can be summed up
in its idea of AI as `cognitive science', i.e., that computer science is
a way to approach an understanding of what *existing* intelligent systems
do and how they do it. However, let us not forget `weak' AI, which makes
no such claims - there is no assumption that the products of weak AI
function analogously to "real" intelligent systems, only that they
are capable of doing X by some means or another.

Second, given that `strong' AI *does* claim to have some intimate
relationship with cognitive science, it's worth asking "is there any other
way to study the brain/mind?".  Don Norman castigates (probably correctly) AI
for not being a science, but he also fails to point out the likely
impossibility of any non-AI-stimulated approaches ever coming to terms
with the complexity of the brain. AI models are *NOT* testable!!
Just imagine that a keen AI worker comes up with an implementation
of his/her model of human brain activity, and that this implementation
is so good, and so powerful that it saunters through Mr. Harnad's TTT
like a knife through butter.... it is vital to see that there is very
little information in this result bearing on the question "is this
the correct model of the brain ?". The ONLY way to confirm (test)
a `strong' AI model is to demonstrate functionally equivalent hardware
behaviour, and psychology is a century or more from being able to do this.
Norman seems right to castigate AI workers for excessive speculation
unsupported by `real experiments', and undoubtedly, if the aim of
`strong' AI is ever to succeed, then we *must* know what it is that
we are trying to model, but he should also recognize that AI
cannot be tested or developed as other sciences simply because it is
unique in studying one domain (computers) with the idea of understanding
another (the brain). When AI *is* a science, it will be called psychology..

too long..,

paul davis

EMBL, Heidelberg, FRG

bitnet: davis@embl      arpa: davis%embl.bitnet@wiscvm.wisc.edu
uucp: ...!psuvax1!embl.bitnet!davis

------------------------------

Date: Fri 10 Jul 87 09:43:03-PDT
From: Douglas Edwards <EDWARDS@Stripe.SRI.Com>
Subject: Don Norman on AI as nonscience

Don Norman assumes that he knows enough about scientific methods to
assert that AI doesn't use them.

I don't believe that he, or anyone else, has a good general
characterization of how science discovers what it discovers.
Especially, I don't believe that he has used scientific methods in
determining what scientific methods are.  Attempts at characterizing
the methods of science typically come from intuitive reflection, or
from philosophy, not from science.  There are some questions we have
to make educated guesses at, because scientific answers are not yet
available.

Norman's attack on AI is vitiated by the same weakness that vitiated
Dresher and Hornstein's earlier attack on AI.  The critics'
characterizations of scientific methods are far *less* firmly grounded
than most assertions being made from within the discipline being
attacked.

Among intuitive and philosophical theories of scientific method--the
only kind yet available--a priori reasoning of the type used in AI
plays a prominent role.  Exactly what relation such a priori reasoning
must have to experimental data is very much an open question.

My own background is in philosophy.  I have gotten involved in AI
partly because I believe, on intuitive grounds, that it *is* a
science, and that it has a better shot at giving rise to a truly
scientific characterization of scientific methods than philosophy,
psychology, linguistics, or neuroscience.  (I am not saying anything
against interdisciplinary cross-fertilization.)  I am now trying to
work out a logical characterization of hypothesis formation.

Douglas D. Edwards
EK225
SRI International
333 Ravenswood Ave.
Menlo Park CA  94025
(edwards@warbucks.sri.com)
(edwards@stripe.sri.com)

------------------------------

Date: 10 Jul 87 18:37:00 GMT
From: jbn@glacier.STANFORD.EDU (John B. Nagle)
Reply-to: jbn@glacier.UUCP (John B. Nagle)
Subject: Re:  AIList Digest   V5 #171


In article <8707062225.AA18518@brillig.umd.edu> hendler@BRILLIG.UMD.EDU
(Jim Hendler) writes:

>When I publish work on planning and
>claim ``my system makes better choices than <name of favorite
>planning program's>'' I cannot verify this other than by showing
>some examples that my system handles that <other>'s can't.  But of
>course, there is no way of establishing that <other> couldn't do
>examples mine can't and etc.  Instead we can end up forming camps of
>beliefs (the standard proof methodology in AI) and arguing -- sometimes
>for the better, sometimes for the worse.

     Of course there's a way of "establishing that <other> couldn't do
examples mine can't and etc."  You have somebody try the same problems on
both systems.  That's why you need to bring the work up to the point that others
can try your software and evaluate your work.  Others must repeat your
experiments and confirm your results.  That's how science is done.

     I work on planning myself.  But I'm not publishing yet.  My planning
system is connected to a robot and the plans generated are carried out in the
physical world.  This keeps me honest.  I have simple demos running now;
the first videotaping session was last month, and I expect to have more
interesting demos later this year.  Then I'll publish.  I'll also distribute
the code and the video.

     So shut up until you can demo.

                                                John Nagle

------------------------------

Date: Fri, 10 Jul 87 20:32:07 GMT
From: Caroline Knight <cdfk%hplb.csnet@RELAY.CS.NET>
Subject: AI applications

This is sort of growing out from the discussion on whether AI is a
science or not, although I'm more concerned with the status of AI
applications.

Ever since AI applications started to catch on there has been a
growing divide between those who build software as some form of
experiment (no comment on the degree of scientific method applied) and
those who are building software *FOR ACTUAL USE* using techniques
associated with AI.

Many people try to go about the second as though it were the first.
This is not so: an experimental piece of software has every right to
be "toy" in all those dimensions which can be shown to be unnecessary
for testing the hypotheses. A fancy interface with graphics does not
necessarily make this into a usable system. However most pieces of
software built to do a job have potential users some of whom can be
consulted right from the start.

I am not the first person to notice this, I know. See, for instance,
Woods' work on human strengths and weaknesses, or Alty and Coombs's
alternative paradigm for expert systems, or Kidd's work on expert
systems answering the wrong questions (sorry I haven't the refs to
hand - if you want them let me know and I'll dig them out).

I think I have a good name for it: complementary intelligence. By this
I mean complementary to human intelligence. I am not assuming that the
programmed part of the system need be seen as intelligent at all.
However this does not mean that it has nothing to do with AI or
cognitive psychology:

    AI can help build up the computer's strengths and define what
    will be weaknesses for sometime yet.

    Cog psy can help define what humans' strengths and weaknesses
    are.

Somehow we then have to work out how to put this information together
to support people doing various tasks.  It is currently much easier to
produce a usable system if the whole task can be given to a machine;
the real challenge for complementary intelligence is in how to share
tasks between people and computers.

All application work benefits from some form of systems analysis or
problem definition. This is quite different from describing a system
to show off a new theory. It also allows the builder to consider the
people issues:

    Job satisfaction - if the tool doesn't enrich the job how are you
    going to persuade the users to adopt it?

    Efficient sharing of tasks - just because you can automate some
    part does not mean you should!

    Redesign of process?

I could go on for ages about this. But back to the main point about
whether AI is a science or not.

AI is a rather fuzzy area to consider as a science. Various sub-parts
might well have gained the status. For instance, vision has good
criteria to measure the success of a hypothesis against.

I suggest that the area that I am calling complementary intelligence
consists of both a science and an engineering discipline. It is a
science in which experiments such as those of cog psy can be applied.
They are hard to make clear cut but so are many others (didn't you
ever have a standard classroom physics experiment fail at school?).
It is engineering because it must build a product.

And if we want to start a new debate off how about whether it is more
profitable to apply engineering methods to software production or to
consider it an art - I recently saw a film of Picasso painting in
front of a camera and I could see more parallels with some of the
excellent hackers I've observed than with what I've seen of engineers
at work. (This is valid AI stuff rather than just a software
engineering issue because it is about how people work and anyone
interested in creating the next generation of programmer's assistants
must have some views on this subject!).

Caroline Knight             This is my personal view.
Hewlett-Packard Ltd
Bristol, UK

------------------------------

Date: 11 Jul 87 04:48:04 GMT
From: isis!csm9a!japplega@seismo.CSS.GOV (Joe Applegate)
Subject: Re: Why AI is not a science


> From jlc@goanna.OZ.AU.UUCP Sat Feb  5 23:28:16 206
>
> May be AI is such unorthodox Science, or perhaps an Art.
> Let us keep AI this way!

I'm not sure there is any maybe about it!  AI development is, in my humble
opinion, the most creative expression of the programmer's art.  Any semi-
educated fool can code a program... but the creation of a useful,
productivity-enhancing application or system is far more art than science!
This is even more so in AI development: a query-and-answer style expert
system can be coded in BASIC by a high school hacker... but the true
application for AI is in sophisticated applications that employ high-quality
presentation techniques that eliminate the ambiguities so often present in
a text-only presentation.

One benefit of the advent of the personal computer is the redirection of
software product development away from the data-driven environment of DP
and accounting and towards the presentation-style environment of the non-DP
professional.  Fortunately, most AI development systems are acknowledging
this trend by providing graphical interfaces.

Art mimics science and the application of science is an art!

    Joe Applegate - Colorado School of Mines Computing Center
            {seismo, hplabs}!hao!isis!csm9a!japplega
                              or
 SYSOP @ M.O.M. AI BBS - (303) 273-3989 - 300/1200/2400 8-N-1 24 hrs.

       *** UNIX is a philosophy, not an operating system ***
 *** BUT it is a registered trademark of AT&T, so get off my back ***

------------------------------

End of AIList Digest
********************
12-Jul-87 21:47:44-PDT,17108;000000000000
Mail-From: LAWS created at 12-Jul-87 21:34:18
Date: Sun 12 Jul 1987 21:30-PDT
From: AIList Moderator Kenneth Laws <AIList-REQUEST@STRIPE.SRI.COM>
Reply-to: AIList@STRIPE.SRI.COM
US-Mail: SRI Int., 333 Ravenswood Ave., Menlo Park, CA  94025
Phone: (415) 859-6467
Subject: AIList Digest   V5 #177
To: AIList@STRIPE.SRI.COM


AIList Digest            Monday, 13 Jul 1987      Volume 5 : Issue 177

Today's Topics:
  Theory - Symbol Grounding Poll: Yea's

----------------------------------------------------------------------

Date: 9 Jul 87 03:32:45 GMT
From: mind!harnad@princeton.edu  (Stevan Harnad)
Subject: Results of Symbol Grounding Poll (1st of 3 parts)


In the poll on whether the symbol grounding discussion was useful and
worth continuing there were 24 yea's and 37 nays (with some ambiguous
ones I have tried to classify non-self-servingly), so the nays have it.
As promised, I am posting the results (yea's in part 2 and nays in
part 3) and I will abide by the decision. Perhaps I may be allowed a few
parting reflections:

(1) It is not entirely clear what the motivation of the nays was:
ecological/economic considerations about overuse of the airwaves, or
reluctance to perform the dozen or so keystrokes per week (or to
install the software filter) that would flush unwanted topic headers.

(2) There were distinct signs of the default option "I can't follow it,
therefore it makes no sense" running through some of the nays (and indeed
some of the discussion itself). This may be a liability of polling as a
method of advancing human inquiry.

(3) Along with several thoughtful replies, there was unfortunately also some
ad hominem abusiveness, both in the poll and in the discussion. This is the
ugly side of electronic networks: unmoderated noise from the tail end of the
gaussian distribution. It will certainly be a serious obstacle to making the
Net the reliable and respectable medium of scholarly communication that I
and (I trust) others are hoping it will evolve into. It may turn out that
moderated groups, despite the bottle-necking they add -- a slight step
backward from the unique potential of electronic nets -- will have
to be the direction this evolution takes.

(4) I continue to be extremely enthusiastic about and committed to
developing the remarkable potential of electronic networks for scholarly
communication and the evolution of ideas. I take the present votes to
indicate that the current Usenet Newsgroups may not be the place to attempt
to start this.

(5) Starting a special-interest Newsgroup every time a topic catches
on does not seem like the optimal solution. It is also unclear whether
even majority lack of interest should prevail over minority interest
when all that seems to be at issue is a keystroke. (Not only is there
software to screen out unwanted topics, but to filter multiple postings
as well. I have been posting to both comp.ai and comp.cog-eng because they
each have a relevant nonoverlapping sub-readership. I subscribe to both; my own
version of "rn" only displays multiple postings once. Secondary
digests like the ailist are another matter, but everyone knows that
half or more of it duplicates comp.ai anyway. The general ecology and economy
of the airwaves, on the other hand, should perhaps be deliberated at a higher
level, by whoever actually pays the piper.)
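[The duplicate screening described above can be sketched in a few lines. This is an illustration only, written in a modern language, and not the actual "rn" mechanism: all copies of a cross-posted article share one Message-ID, so remembering the IDs already displayed suffices to show each article once.]

```python
# Illustrative sketch (not the actual "rn" implementation): a reader that
# displays each cross-posted article only once, keyed by its Message-ID,
# which is shared by every copy of an article posted to several groups.

def unseen_articles(articles, seen=None):
    """Yield each article exactly once, remembering Message-IDs."""
    if seen is None:
        seen = set()
    for art in articles:
        msg_id = art["Message-ID"]
        if msg_id not in seen:
            seen.add(msg_id)
            yield art

batch = [
    {"Message-ID": "<977@mind.UUCP>", "Newsgroups": "comp.ai"},
    {"Message-ID": "<977@mind.UUCP>", "Newsgroups": "comp.cog-eng"},
    {"Message-ID": "<978@mind.UUCP>", "Newsgroups": "comp.ai"},
]
shown = list(unseen_articles(batch))   # the duplicate cross-post is dropped
```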

(6) The current majority status of engineers, computer scientists and
programmers on the Net also seems to be a constraint on the development of
its broader scholarly potential. Although these two disciplines developed the
technology and were the first to use it widely, it's now rather as if
Gutenberg and a legion of Linotype operators were largely determining not
just the form but the content of the printed page. The other academic
disciplines need *much* greater representation in the intellectual Newsgroups
(such as those devoted to biology, language, philosophy, music, etc.)
if the Net's scholarly contribution is ever to become serious and lasting; right
now these Newsgroups seem only to be outlets for the intellectual hobbies of the
two predominant disciplines. This may just be a quirk of initial conditions
and a matter of time. I will certainly do my best to get the other disciplines
involved in this unique and powerful new medium.

[N.B.: I am of course in no way deprecating the great value or contribution
to knowledge of the two disciplines I mentioned; I just believe that their
incidental monopoly over the electronic networks should be benignly dissolved
as soon as possible by the entry of the other disciplines that have a hand in
the written word, scholarly communication and the advancement of knowledge.
The interdisciplinary field of cognitive science happens to be a microcosm of
this larger problem of temporary disciplinary imbalance on the Net,
and the subfield of artificial intelligence -- though of course legitimately
skewed toward computer science -- seems to be showing some of its effects too,
especially on foundational topics like the symbol grounding problem.]

--

Stevan Harnad                                  (609) - 921 7771
{bellcore, psuvax1, seismo, rutgers, packard}  !princeton!mind!harnad
harnad%mind@princeton.csnet       harnad@mind.Princeton.EDU

------------------------------

Date: 10 Jul 87 11:32:00 EST
From: "Robert  Breaux" <breaux@ntsc-74.arpa>
Reply-to: "Robert  Breaux" <breaux@ntsc-74.arpa>
Subject: SYMBOL GROUNDING DIES DOWN

It occurs to me that the flaring up and then dying down of symbol grounding
on the AIList is an evolution not possible until recently.  I believe
it is good.  In the "old days" prior to electronic bulletin boards,
this argument would have raged for years, camps would have divided,
universities would have created "schools of thought", and perhaps books
would have been written that would not have stood the "test of time" as a
classic issue.

Now, we can have "face to face", so to speak, discussions early on,
resolve the issues which are not "classic" or seminal, and get on
with it.

It's GREAT, wouldn't you say?

------------------------------

Date: 9 Jul 87 03:41:27 GMT
From: mind!harnad@princeton.edu  (Stevan Harnad)
Subject: Results of Symbol Grounding Poll: Yea's (2nd of 3 parts)


[These are the 24 yea's in response to the poll on whether or not to
continue the symbol grounding discussion on comp.ai/comp.cog-eng. I have
removed names and addresses because I had not asked for permission to repost
them. If you wish to communicate with anyone, specify by number (*and* whether
"yea" or "nay") and I will forward it to the author.]
-------------------------
1.       I am finding the symbol grounding discussion very interesting and would
like it to continue. More generally, the community is better served by having
too much information flow than too little. I hope the discussion will continue
even if most respondents to your poll disagree.
----------------------
2.      I personally don't feel that it's Harwood's place to make a
recommendation such as the one he made (rude or otherwise). If the
discussion is germane to the stated purpose(s) of the newsgroup
(which it is), and is carried on in an intellectually responsible
manner (which it certainly has been), why should it not be allowed to
continue?
        Isn't the solution for those who don't find the topic interesting to
simply not read the messages bearing that topic on the subject line?
After all, any number of discussions can be carried on concurrently.
---------------------
3.      I vote to continue on symbol grounding. And by all means, keep going
with your good and interesting work.
-------------------
4.      I don't read this discussion anymore. I couldn't find the beginning,
and never felt that I really understood what the problem was.
        However, I have absolutely no objection to the discussion continuing.
I presume that the discussants get value out of it.
----------------------
5.      Although I only peruse most of the symbol grounding discussion I think
it is well placed in comp.ai and I vote to see it continue. Personally, I do
not see why intelligent use of the NET needs to be defended but apparently
there is always an 'offended' party.
---------------------
6.      [re. ailist] I initially found some of the symbol grounding discussion
interesting, but at the moment it is getting in my way, interfering with my
work of reviewing what is already too much material in AIList. Perhaps a
general solution to "what belongs on AIList" is to put lengthy, continuing
discussions which are of a temporary nature in separate issues, each clearly
titled so it can be deleted by the recipient at the title level without danger
of deleting other AIList topics.
[Ken Laws, Ailist's moderator, then replied that he was sorting already]
        Thanks for the reply. Indeed, you are sorting the material already.
Thanks for the reference to the mail scanning program. It, or an enhancement of
the one I am using, could fill the bill nicely. Perhaps a one-character
appendage to the digest name to indicate the issue pertains exclusively to a
continuing lengthy discussion?  Then, if desired, a smart mailer could
automatically omit or delete them.  Just a thought.
------------------------
7.      I would, with the following reservation, vote against splitting
off this discussion.  It is tangential to some important aspects
of AI and discussions of this sort tend to emphasize areas which
need further scientific exploration.
        My reservation, which I have until now contained, is that your
contributions do tend to be lengthy, wordy, vague, and full of
(sigh) ungrounded symbols.  At times they also appear to lack
respect for the views of other contributors.  If you're looking
for a soapbox, please find one that doesn't appear in my
mailbox.  If you have a point to make, and can do so precisely,
concisely, and with an open mind towards the responses you receive
and respect for their contributors, please contribute to the AIList.
        This is offered in the spirit of constructive criticism, and I hope
you can accept it as such.
----------------
8.      I think the symbol grounding discussion is *very* critical to
the AIList, and count me as pro-discussion on the AIList.
--------------------
9.      Ha!  I subscribe to quite a few bulletin boards. The symbol grounding
problem is the only discussion topic for which I religiously archive all notes.
It's far, FAR more important than 99.9% of the drivel you see on the net.
        What are your critics suggesting? Free up more slots for dumb jokes and
sophomoric opinions about the nature of intelligence? I say, "Right on! Keep
the symbol grounding discussion going."
        If you want to be magnanimous, you might request that the discussion
be confined to one bulletin board. It seems to inhabit ai, cog-eng, and language
boards, at least, now. If you decide to start your own board, however, please
let me know.
---------------------
10.     Please continue! Critics who care would notice that (in the ailist
version at least) these discussions are usually in a posting on their own, and
are thus easily discarded by those uninterested.
---------------------
11.     Mark one with thumbs up.
-------------------
12.     As per our phone conversation this morning...  continue the dialogue.
-------------------------
13.     Please continue the very enlightening discussion on symbol grounding in
its present arena. And thanks very much for the effort you put into explaining
quite carefully what you propose.
----------------------------
14.     I consider the recent discussion on the symbol grounding problem to be
very interesting and relevant.  Please continue.
--------------------------
15.     What I am doing is responding to your poll request. Please continue
the discussion of the symbol grounding problem.  I have not had time to
contribute, but I find the contributions, especially yours, quite valuable.
(Your contributions are good, but I also value "bad" contributions, since they
are often clear examples of the bad philosophy and epistemology which people
inflict on themselves and others.)   My vote:  continue posting.
-------------------
16.     Despite the complaints from McCarthy and Minsky, there does seem to be
some benefit of the Symbol Grounding discussion for us lurkers. Sometimes I
almost think I understand what the issue is.
        However, I do find it distracting that essentially the same material is
arriving by both comp.ai and comp.cog-eng newsgroups.  I don't want to
unsubscribe to either, but I'd like to have to see the material only once.
        Is it possible to move this discussion to just comp.cog-eng, since it
seems to be the (weak?) AI community that finds much of this correspondence
tiresome?
        I think if you simply announce your intention to operate on one group,
and then make all your submissions there (while monitoring both, of course),
the news stream will become a bit easier to cope with for many of us.
[See earlier material on filtering multiple postings.]
---------------------------
17.     I followed your early discussion on symbol grounding but now skip
over it. Maybe it's gone on too long? But * as long as Ken Laws [ailist]
separates it into its own volumes * (as he has been doing) I can skip it and
others can follow it as they wish. If he decides this is too much work for him,
I would suggest moving it to a different forum.
--------------------------
18.     I find the discussion of symbol grounding useful and worth
continuing.  I vote to continue.
-----------------------------------
19.     You get my vote for continuing the discussion.
---------------------------
20.     Simple response.  I don't participate, but I enjoy the discussion.
I'm a novice in this area, and seeing exchanges like this helps educate.
-----------------------
21.     Yes I find it useful and worth continuing.
[Mild ad hominem remarks about a prior rude poster deleted]
-----------------------------
22.     My response to your request for a vote: I am emphatically *FOR*
keeping discussions such as the symbol grounding discussion *ON* Ailist Digest.
Though I don't always read all of them (I'm amazed at your energy and
ability to sustain these discussions on "paper") as a philosopher I find
discussions such as yours the most important part of the digest. If
people think that AI is just computer science, let them start another list.
Laws obviously thinks that these discussions are part of AI and he's right.
        I think that your policy of initially ignoring the rude remarks made
against you was a good one. It is unfortunate that some people lose their
manners when they go electronic.
----------------------
23.     I vote that you continue the symbol grounding discussion and related
topics in the present forum.  I've found these articles to be far more
enlightening, useful, and relevant than the typical requests and responses
for the latest references on KB techniques or expert systems marketing.  Not
to say that such articles are inappropriate, but that this forum is for all
AI-related discussion.  Please continue to ignore Booth and Harwood.
-------------------------
24.     A difficult question. The discussion HAS been going on at considerable
length, but it evolves, and maintains a certain interest.  Many people
(including me) seem not to work from the same foundation as you, and
therefore you need many words to get across what often sounds like
reiterations.  But if you used fewer words, perhaps we might misunderstand
worse than we do.
        Personally, I think you skirt some important points about
categorization, which may be in your book: that it is probably required
only for communication (perhaps for a conversation within a single brain,
as Gordon Pask would insist); that it usually depends on the existence
of a catastrophe function (anywhere near the border of a category,
the data may lead unequivocally to more than one result depending
on historic and local context); that symbols need not be grounded
in real-world phenomena, but in agreed categories constrained by context
(people DO communicate about religion and politics, in which fields
there is unlikely to be any real-world grounding of the symbols).
        There are probably other issues.  As for continuing the discussion,
I would say yes if the contributions could be kept under 75 lines,
no otherwise.  Or else act as a moderator and submit weekly digests
of the arguments people send you privately.
------------------------
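[Respondent 24's "catastrophe function" point, that near a category border the same datum may be classified either way depending on historic and local context, can be illustrated with a toy classifier with hysteresis. This sketch is a construction for illustration only, not anything from the poll responses.]

```python
# Toy illustration of context-dependent categorization near a border:
# inside the ambiguous band the classifier keeps its historically held
# label, so the same datum can yield different results at different times.

def make_categorizer(low=0.4, high=0.6):
    """Return a classifier that is sticky inside the ambiguous band."""
    state = {"label": "A"}
    def categorize(x):
        if x < low:
            state["label"] = "A"
        elif x > high:
            state["label"] = "B"
        # between low and high, the historic context decides
        return state["label"]
    return categorize

c = make_categorizer()
c(0.9)                  # clearly "B"
near_border = c(0.5)    # ambiguous datum: stays "B" because of history
c(0.1)                  # clearly "A"
after = c(0.5)          # the very same datum, now classified "A"
```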
--

Stevan Harnad                                  (609) - 921 7771
{bellcore, psuvax1, seismo, rutgers, packard}  !princeton!mind!harnad
harnad%mind@princeton.csnet       harnad@mind.Princeton.EDU

------------------------------

End of AIList Digest
********************
12-Jul-87 22:18:29-PDT,19005;000000000000
Mail-From: LAWS created at 12-Jul-87 22:08:09
Date: Sun 12 Jul 1987 22:05-PDT
From: AIList Moderator Kenneth Laws <AIList-REQUEST@STRIPE.SRI.COM>
Reply-to: AIList@STRIPE.SRI.COM
US-Mail: SRI Int., 333 Ravenswood Ave., Menlo Park, CA  94025
Phone: (415) 859-6467
Subject: AIList Digest   V5 #178
To: AIList@STRIPE.SRI.COM


AIList Digest            Monday, 13 Jul 1987      Volume 5 : Issue 178

Today's Topics:
  Theory - Symbol Grounding Poll: Nays,
  Comment - Characteristics of Discussion Lists

----------------------------------------------------------------------

Date: 9 Jul 87 03:44:34 GMT
From: mind!harnad@princeton.edu  (Stevan Harnad)
Subject: Re: Results of Symbol Grounding Poll: Nays (3rd of 3 parts)


[These are the 37 nays in response to the poll on whether to continue
the symbol grounding discussion in comp.ai/comp.cog-eng. I have removed
names and addresses because I had not asked for permission to repost them.
If you wish to communicate with anyone, specify by number (*and* whether
"yea" or "nay") and I will forward it to the author.]
-------
[The first three nays (from Harwood, Minsky and Booth) preceded this
poll; enumeration accordingly begins with 4.]
-------
4.      Please do not take this personally. I have almost stopped reading
comp.ai because of the ridiculous quantity of material being posted by you
and Brilliant, and others. This discussion has been completely useless
to me, and I would really like to see it stopped. It is much more like
philosophy than AI to me, and I am sure there are others who feel the same way
but won't tell you. Please stop dominating this newsgroup.
-------
5.      Either you start another newsgroup or I unsubscribe to this one.
I cannot take any more.
-------
6.      Please start your own newsgroup!
-------
7.      My vote:  *do* start your own news group or private mailing list.
This discussion, however interesting it may be to the participants,
has gone on too long to continue in "comp.ai".
-------
8.      I enjoy skimming your symbol grounding writing, though for my research it
is totally irrelevant.  However, since there are relatively few people
who do AI who need to consider the TTT (most people in AI are just
trying to make machines more intelligent right now), I suspect that the
symbol grounding problem belongs in sci.philosophy.tech.  The
issues won't come up in the real world for at least 5 years because we
are not even close to human emulation at the moment.  On the other
hand, you may be working on psychological modelling.  If so, then there
must be a newsgroup or mailing list closer to that topic than
comp.ai.  All together, I suspect that sci.philosophy.tech is the best
place, along with periodic notes to comp.ai notifying people that there
is a discussion of importance to those who model human beings there.
This would get your messages to the relevant people.  Also, if
sci.philosophy.tech doesn't exist for some reason, then talk.philosophy
would be the next best thing.  If the problem is that you cannot
reach the ARPA world that way, I think there is a psychology mailing list.
        BTW about graded vs ungraded concepts, point taken.
On the other hand most of the verbs in any language are regular but
most of the verbs used by the speakers of a language are irregular.
The dictionary is not meant to be and is not a fair sample of usage.
Nor does the set of nouns in the language necessarily correspond to
the set of concepts employed by its speakers (it corresponds to the
set of concepts that the speakers find convenient to convey rapidly).
However, you have presented inconclusive evidence that most concepts
are not graded.  If you had a dictionary that was sorted by usage and
gave the usage of words rather than their definitions, you would
have better evidence that most concepts are not graded.
-------
9.      As to your polling request regarding the symbol grounding issue:
I am quite tired of all the traffic it has generated. Considering that no
real information has been revealed, I feel it is time to drop it. In the
recent time that these postings have filled the newsgroup, most all other
worthy postings have vanished.
        The newsgroup should address a range of pertinent issues that will
enlighten subscribers. I feel that the symbol grounding issue has only
enlightened me in the use of the 'n' key!
        While I am on the subject, the cross-postings to 'comp.cog-eng' are
atrocious. Either post to one group or the other. Almost every symbol grounding
article has appeared in both. This generates too much traffic on the net
and defeats the purpose of making special-purpose groups.
        I thank you for your willingness to take notice of fellow subscribers' views.
-------
10.     Can't we bag this damn symbol grounding discussion already?
If it *must* continue, how about instituting a symbol grounding news
group, and freeing the majority of us poor AILIST readers from the
burden of flipping past the symbol grounding stuff every morning.
-------
11.     I generally do not read the SGP articles simply because I do not
understand them (and they are so looong!). If there are a few people interested
in reading and discussing SGP, there is no reason to prevent such postings. But
if there are also many people who do not want to read that sort of things in
comp.ai, then it would be wise to consider the possibility of creating a
news-subgroup `comp.ai.sgp'.
-------
12.     The ramblings on this topic passed my threshold of boredom long ago.
I'm not proposing censorship, but if you choose to continue the discussion
with a smaller group of people who find this topic of interest, I will
applaud your good manners.
-------
13.     I vote you start your own newsgroup--I was bored with "Symbol Grounding"
about 500 kilobytes ago.  Ditto "The Total Turing Test" or whatever
your last filibuster was called. . . .
-------
14.     My vote is for ending the discussion on the symbol grounding problem.
Thanks.  p.s. If you are interested in finding out why I voted against
continuing the discussion, please let me know -- I will be glad to oblige.
-------
15.     Thank you for taking a poll on whether the symbol grounding problem
discussion should or should not continue in comp.ai. My vote is to remove the
discussion from this newsgroup. Maybe it could be moved to a new newsgroup
talk.symbolgroundingproblem ???
-------
16.       I think that the discussion has been out of hand for a long time now.
It doesn't seem to contain any useful insights, and is taking up inordinate
resources.  Not the least of which is the time spent by the authors
expounding their viewpoints.  I think that this sort of disagreement is
better done in position papers in and letters to journals.
        The odd use of terms hasn't helped keep the discussion on a high level.
Not to point fingers, but your nonstandard use of "analog" made a large number
of your posts completely incomprehensible to me until you said that you meant
something other than the usual meaning of the term.
        So, I vote to flush this discussion.
-------
17.     Personally, I have been skipping most of the articles in this discussion.
I was referred to this newsgroup as a forum for other discussion but have
seen little other than what appears to be a war of words from two opposing
camps.  By now the sides must be set--perhaps it is time to move the
discussion from "news" to an e-mail mailing-list.
-------
18.     Definitely neither useful nor worth continuing.
-------
19.     The manner in which the issue was raised *was* rather rude, but I regret
to say that I find much of what was stated about your extended discussions
very much to the point.  I tried to keep up with discussion; I found it
rather interesting at first.  But it rapidly became clear that you were
all talking at cross purposes, refusing to accept conventional usage or
even common-usage-for-the-purpose-of-debate of the key words in question.
The appalling level of quotation made things much, much worse and it became
well-nigh impossible to ferret out the pearls of insight in the flood of
verbiage.  I do not wish your discussion to completely vanish from the
airwaves, as it were, but without a bit of self-restraint all round,
together with some sincere efforts to try to answer one another's
objections, I don't think the discussion is particularly useful.  (e.g.
wrt all-or-none categories: pointing to concrete nouns in the dictionary
or to the very special categories that have "hardware support" is not,
in my opinion, a sincere effort to meet the objections to the contention
that categories are all (or mostly) all-or-none, a rather contrary-to-
common-observation position.)
        Perhaps the new policy on quotation will help: there has been a modest
improvement in a couple of the recent postings.  I remain hopeful.  All
I can say is, until things improve quite a bit, I will probably be
flushing all the digests with "Symbol Grounding" in the topics list. Sorry.
-------
20.     I do not find the symbol grounding problem discussion worthwhile.
Thank you for (politely) asking.
-------
21.     I vote for discontinuing the discussion.  It would be interesting except
that there is far too much confusion over who's using what terminology.
Probably dozens of articles have been wasted over "well, I don't know
what *you* mean by 'analog', but when *I* say 'analog' I mean etc etc etc".
-------
22.     You have made an unseemly attempt to bias this vote.  The question is
not whether your discussion is ``useful and worth continuing,'' but whether
we *ALL* need to read or even be sent the truly amazing volume that you
seem able to generate on this one topic !?!
**  Please remove your discussion from the AI-list (to a new bboard?).  **
{And if you find it absolutely necessary to be mad at how stupid and
unjust the rest  of the world is, go ahead and tally this as a vote for
your discussion being useless and not worth continuing}
-------
23.
1.  I find it neither interesting nor useful.
2.  The arguments, until I stopped following it sometime several weeks ago,
    are circular if not repetitive.
3.  I've speculated privately that the arguments were cranked out by a
    machine in someone's basement as a Turing Test on the rest of the net.
    Either that or ...
4.  But none of this justifies setting up another news group. comp.ai
    isn't being used for anything else.  For a heavily used group, see
    comp.sys.ibm.pc.
5.  Personally, I'd suggest that you take all of the correspondence.  Put
    it in a folder, and open it again at New Years.  Reread it, and write
    a real paper.
-------
24.     Please stop!
-------
25.     NO!  Please take this discussion to e-mail.  It's gone far
beyond the point where it's interesting to anyone other than
you and the few people still arguing.
-------
26.     Stop it!
-------
27.     The symbol grounding problem - please start your own newsgroup.
DEFINITELY!
-------
28.     Although I don't think that AI-list should be strictly limited to
discussions of algorithms and similarly down-to-earth items, I do think
that the symbol grounding discussion has gotten a bit out of hand and
should be conducted privately among the three or four major participants,
with perhaps a summary to appear at some future date.
-------
29.     In article <977@mind.UUCP> you write:
        >David Harwood has made two very rude requests
(Yes, he was way out of line.)
As a former philosophy undergrad and current A.I. grad student,
I've found the topic in general to be interesting.
BUT . . . I think it should in fact be moved to its own newgroup.
Comp.ai is now completely dominated by exchanges between you and
Marty Brilliant, Anders Weinstein, etc.  After a while, "listening"
to a few other people argue gets tedious, no matter how interesting
the topic.  Frankly, I think people have been frightened away from
the newsgroup in the past few months, with the result that there have
been no discussions other than this one, unless you count a few
requests for info on some language.
P.S. I enjoyed your "uncomplemented categories" talk at the Phil/Psych meetings.
-------
30.     I vote to cease the endless symbol grounding discussion!
-------
31.     I find the discussion neither useful or worth continuing.
------------
32.     Please stop it. I agree with Laws that most of the discussion can be
carried on through private mail. I can see that 'R' is easier to type than
mail ...%....@...... etc., but then use the facilities provided by
Unix, like aliases etc.  I am looking forward to your results.
-------
33.     You asked for votes.  Mine is... no more on the symbol grounding
problem.  Thanks for asking.
-------
34.     I for one would greatly appreciate having the discussion removed from
subsequent AIlists.  As in a conference presentation, if a heated topic goes
on for too long, the people involved should agree to meet later and discuss
the issue amongst themselves without burdening the whole group. You must know
by now who the interested parties are; can't you just send mail to each other?
-------
35.     It's not the discussion per se that I think people object to as
much as it is the size of the discussion.  The replies are very
large, each addressing 15 points of reply to the previous reply.
It takes a while to read through the text and extract
some salient points of interest.  Having real work to do, I sometimes
just file the message, thinking I'll get to it later.
        I save ALL my mod.ai mail for a time in the near future when I attempt
to complete my MS and want to scan back over the current "hot" topics.
Unfortunately, I've had to start a special archive just for this discussion,
and it's chewing my disk drive all to bits with saved mail.
        I find the discussion interesting and informative, but...
(Now for the poll): If the discussion continues to involve ginormous replies: END IT.
If the discussion stops taking over whole digests: KEEP IT.
-------
36.     I'm sorry but for me the discussion is no longer interesting.
-------
37.     I think that this discussion belongs to philosophy, not to AI. I hope
that it will relocate itself accordingly.
--

Stevan Harnad                                  (609) - 921 7771
{bellcore, psuvax1, seismo, rutgers, packard}  !princeton!mind!harnad
harnad%mind@princeton.csnet       harnad@mind.Princeton.EDU

------------------------------

Date: Thu 9 Jul 87 10:41:46-PDT
From: Ken Laws <LAWS@IU.AI.SRI.COM>
Subject: Characteristics of Discussion Lists

              [Excerpt from a message to Stevan Harnad.]


A problem with large, permanent lists is that they are primarily for
those on the fringes of a field who want to monitor or join what is
happening further in -- but not so far in that it becomes a full-time
occupation or involves incomprehensible jargon.  The professionals
already have channels of communication among themselves (including
personal visits, seminars, conferences, publications, and even e-mail
or phone calls) and have little time for list discussions that are
outside their own exceedingly narrow specialties.

As to the suggestion of continuing via e-mail, it's not really so bad.
Two options exist.  One is to cc everyone on each message, letting the
mailer propagate the cc list from one message to another.  It is usually
easy to add new members to such a discussion, but impossible to drop old
ones without retyping the whole list.  There is also a problem that BITNET
gateways don't add necessary routing information to messages that are
handed over to the Arpanet.  The other option is for one person to
maintain a file with all the addresses, headed by a "label:" to suppress
the information in the cc field of each message.  All traffic is sent
to this one individual, who then remails it to the distribution.  That's
a moderated list.  (Anyone can get in this business!)

One of the charges in your Nay summary was that discussion of other
topics has been down since the fundamentals discussion took over.
I believe that's true, although there seems no rational reason for
it.  Even queries and replies have been reduced, although that could
be a coincidence due to the end of the school year and of the proposal
year.  A few people have dropped off the list because of the volume,
many more have added themselves because AIList was beginning to
border on their interests.  The effects are complex, and certainly
not just a linear addition of your text to whatever would have been
present anyway.

I believe that the proper model of a discussion list is the town
meeting.  AIList began with my own announcement of myself as
moderator, or chairman/speaker of the house.  A group of interested
individuals formed, and through custom and convention we have worked
out an informal social contract that governs the proceedings.  Part
of the contract is that participants pay reasonable attention to
the proceedings, if only to avoid redundant or naive remarks.  This,
together with the serial nature of current message streams, implies
that only one person (more or less ...) has the floor.  Part of
my job as moderator is to ensure a balanced discussion, soliciting
(or forwarding) new topics and viewpoints.  Not every list is run
as a town meeting, but that's my view of AIList.

The symbol grounding discussion was carried out with great respect
for the participants and with incredible attention to detail.  AI
needs to grapple with the problems you raised.  (Whether AIList
needs to is debated in your vote summaries.)  The difficulty is simply
that people can't pay attention to everything, and your discussion
was demanding more attention than they could spare.  The other rings
of the circus require equal time.

Incidentally, much of the personal criticism has been sparked by the
one-against-all nature of your discussion.
If the level of discussion had been more approachable, we might have
had more people joining your cause and providing examples for your
position.  That would have been more interesting, and might have
reached an obvious conclusion or stalemate sooner.  It is a common
characteristic of net debates, however, that nothing is ever settled.
Points that are agreed to are simply dropped, with little or no mention
that agreement has been reached, and may even be picked up by some
other participant.  Net discussions generate a continuous stream of
ideas, but conclusions are lacking.  I thank you for repeatedly
reminding us that conclusions have not been reached in this particular
topic area, and hope you will continue to contribute to AIList.

                                        -- Ken

------------------------------

End of AIList Digest
********************
14-Jul-87 22:55:39-PDT,19313;000000000000
Mail-From: LAWS created at 14-Jul-87 22:43:16
Date: Tue 14 Jul 1987 22:35-PDT
From: AIList Moderator Kenneth Laws <AIList-REQUEST@STRIPE.SRI.COM>
Reply-to: AIList@STRIPE.SRI.COM
US-Mail: SRI Int., 333 Ravenswood Ave., Menlo Park, CA  94025
Phone: (415) 859-6467
Subject: AIList Digest   V5 #179
To: AIList@STRIPE.SRI.COM


AIList Digest           Wednesday, 15 Jul 1987    Volume 5 : Issue 179

Today's Topics:
  Review - Spang Robinson 3#6, 6/87 &
    Spang Robinson 3#7, 7/87 &
    Canadian AI, 7/87,
  Report - Strategy Learning with Connectionist Networks,
  Bibliography - Definitions for Leff a58C &
    Leff a58C (Part 1 of 2)

----------------------------------------------------------------------

Date: Sat, 11 Jul 1987 17:39 CST
From: Leff (Southern Methodist University)
      <E1AR0002%SMUVM1.BITNET@wiscvm.wisc.edu>
Subject: bm654 - Spang Robinson 3#6, 6/87

Summary of Spang Robinson Report on Artificial Intelligence
June 1987, Volume 3, No. 6

AI and The Military

In 1985, DOD AI activity was $91.1 million, with funding estimated at
$500 million/year by 1992.

The rest of the issue summarizes military activities, hopes, and prospects
in the AI field, including disillusionment on the part of some in industry.
Gary Martins of Intelligent
Software is quoted as saying

"Early returns from the first two major AI projects under the strategic
computing program show few real accomplishments... The autonomous land vehicle
project resulted in the construction of a handsome test track and a huge,
lumbering van stuffed with computers running expert systems software.  If it
travels slowly enough (under three m.p.h.), the van is sometimes able to
make it all the way around the brightly lit, carefully marked, optically
smooth course without serious mishap."  "The pilot's associate project
aims to produce a refrigerator-sized computing system, having functionality
comparable to a 3-inch by 5-inch check list card."

Charles Anderson of the SDI group said AI use would be quite low
in the SDI project, with no increase in the SDI budget for AI applications
in spite of the fact that the SDI budget itself is growing.  However,
the SDI is still spending $200 million per year on AI.

Rome Air Force Development Center is building a system to help decide
if foreign rocket launches are threats.  They also have systems to schedule
pilots and aircraft hours.  They also have an expert system that links
together various office automation tools and can generate its own forms.

()()()()()()()()()()()()()()()()()()()()()()()()()()()()()()
Shorts

AION corporation's ADS is being extended to CICS and IMS and other
IBM data base products.

Lockheed has set up a 4.5 million dollar AI center.

Symbolics has announced a single chip LISP processor which fits on one
card after adding interface and memory chips.

Coopers and Lybrand has developed an expert system to monitor brokerage
accounts for irregularities.

Allan Levine will be manager of Gold Hill's Los Angeles sales office.
James McGowan will be Palladian's vice president of sales and
marketing and Thomas Murphy will be their director of sales.

40% of the Japanese Information Processing Association's presentations
were related to AI.

*(*(*(*(*(*(*(*(*(*(*(*(*(*(*(*(*(*(*(*(*(*(*(*(*(*(*(*(*(*(

This issue also includes a directory of people working at various companies,
agencies and the like in Military Artificial Intelligence and announcements
of various tools and expert systems at the above show.

------------------------------

Date: Sat, 11 Jul 1987 17:39 CST
From: Leff (Southern Methodist University)
      <E1AR0002%SMUVM1.BITNET@wiscvm.wisc.edu>
Subject: bm660 - Spang Robinson 3#7, 7/87

Summary of Spang Robinson Report on AI, Volume 3, No. 7, July 1987

The New AI Pioneers: The Knowledge Merchants

The market for pre-built expert systems was estimated at $10 to $15 million
for 1986, with expected growth to $40 million in 1987.  Many developers
found extensive customization was needed for each customer while there
were many rules that were common to everybody in the application domain.

The issue gives info on various expert systems being sold, including data
on how many were sold and time/cost to develop.  UNDERWRITER saves three percent
in insurance losses while Syntelligence reports a five to ten percent
improvement in loss ratios.

The numbers on the left are the development costs or times; the
numbers on the right are the purchase prices.

40 man years: APEX Plan Power (125 sold) ~$34,500
20 man years: APEX Client Profiling ~$100,000
50 man years: Palladian operations planning system ~$100,000
50 man years: Palladian project management system ~$100,000
              Sterling Wentworth: PLANMAN, PC based planning system
                   (800 copies, 7500 rules)
8 million:    Syntelligence Syntel (risk assessment) ~$500,000
              Expert Technologies (yellow page layout)
              Cogensys: judgement processing for financial service applications
                 (9 systems installed.) ~ $250,000
              Composition Systems: publishing systems
              Eloquent Systems: Hotel Inventory Processing
              Applicon: circuit design
              Direct Marketing: Persorft
              Transform Logic: Computer Aided Software Engineering
                (generates COBOL)
              General Data System, RATER and UNDERWRITER for insurance ~$250,000

_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-

Real Time Expert Systems on PC's and micros.

Texas Instruments developed an expert system in FORTH to control a water
treatment plant.  McDonnell Douglas is using a fuzzy-logic-based Forth
on the NOVIX Forth engine, running 30,000 rules per second.
UME Corporation offers an Expert Controller
box which is a self-contained controller using expert system technology
supporting 5000 rules/second and 16,000 rules total.
It is being used in automotive hood stamping process control and for industrial
clothes driers.

ONSPEC sells a Stand Alone System for $895.00 and Superintendent, intended
for running Programmable Logic Controllers.  The systems support a
user-friendly operator interface for the final system, explicit handling
of unknown data, and retraction of facts.
The system handles 1000 rules and 50 rules per second.  (A review of
this software is in the issue.)

()()()()()()()()()()()()()()()()()()()()()()()()()()()()()()

Shorts:

Natural Language Incorporated has a product licensing and equity financing
agreement with Microsoft.
Data General will be distributing Gold Hill's products.
Teknowledge has named a former Under Secretary of Defense to
its board of directors.
Nestor, a maker of a neural-network based system,
reported a net loss of $539,252 on revenues of $8,016.

MicroProducts is marketing PowerLisp, a virtual memory based system,
for Intel 286 and 386 based PC's.
Programs in Motion is now offering
an expert system with code generators for Pascal, C, dbase III interfacing
and form design capabilities.

Automated Reasoning is developing expert systems for ATE programming
and generates source code in BASIC, C, ATLAS, ADA or Pascal.

------------------------------

Date: Sat, 11 Jul 1987 17:39 CST
From: Leff (Southern Methodist University)
      <E1AR0002%SMUVM1.BITNET@wiscvm.wisc.edu>
Subject: bm668 - Canadian AI, 7/87

Summary of Canadian Artificial Intelligence, July 1987, No. 12

Discussion of the Canadian Government's research initiative.

The Canadian AI Conference for 1988 will be June 6-10, 1988 in Edmonton.
It will be held simultaneously with the Canadian Image Processing and Pattern
Recognition Society and Canadian Man-Computer Communications Society
meetings.

Jose A. Ambros-Ingerson of the University of California, Irvine is collecting
info on AI applications and efforts in Third World countries.

Canada is setting up a research consortium for AI and robotics.
This is similar to MCC and other efforts in that companies produce
research that they all can use before competitive additions and
applications are made.

There was a report on the National Meeting of the Fifth Generation Society.
There are a variety of research infrastructures in Canada involving
joint industry-academic type efforts.

New bindings:

Nick Cercone will be Director of the Centre for Systems Science at
Simon Fraser University.
Randy Goebel is now at the University of Alberta.
Brian Schaefer, Beverly Smith, Ian Morrison and Julian Siegel are now
at Acquired Intelligence, 2304 Epworth Street, Victoria B. C. V8R 5L2.

Report on Research at University of Toronto:

Hector Levesque and Ray Reiter are working on formal foundations of
knowledge-based systems.
John Mylopoulos is working on AI applications to software engineering
and databases.
Russ Greiner on learning by analogy.
Effort to develop an autonomous vision-guided robot.
Interpretation of Remotely Sensed Images, e.g. from satellites.
  Applications include river or lake ice measurements and interpreting
  weather data for storm forecasting
Knowledge Based Debugging system  based on MRS.

Reviews of "Robotics Research: The Third International Symposium"
  New Horizons in Educational Computing by Masoud Yazdani
  The Mathematics of Inheritance Systems by David S. Touretzky
  Robotics and AI: An Introduction to Applied Machine Intelligence
    by Andrew C. Staugaard.

Abstracts of papers in Computational Intelligence and some
AI Technical Reports.

Report on the recent CHI+GI '87 conference on Computer-Human Interaction
and Graphics Interfaces.

------------------------------

Date: Mon, 13 Jul 87 14:23:53 EDT
From: Chuck Anderson <cwa0%gte-labs.csnet@RELAY.CS.NET>
Subject: Technical Report:  Strategy Learning with Connectionist
         Networks

      Strategy Learning with Multilayer Connectionist Representations

                          Chuck Anderson
                        (cwa@gte-labs.csnet)

                   GTE Laboratories Incorporated
                           40 Sylvan Road
                         Waltham, MA  02254

                             Abstract


      Results are presented that demonstrate the learning and
fine-tuning of search strategies using connectionist mechanisms.
Previous studies of strategy learning within the symbolic,
production-rule formalism have not addressed fine-tuning behavior.
Here a two-layer connectionist system is presented that develops its
search from a weak to a task-specific strategy and fine-tunes its
performance.  The system is applied to a simulated, real-time,
balance-control task.  We compare the performance of one-layer and
two-layer networks, showing that the ability of the two-layer network
to discover new features and thus enhance the original representation
is critical to solving the balancing task.

(Also appears in the Proceedings of the Fourth International Workshop on
Machine Learning, Irvine, June, 1987)

------------------------------

Date: Sat, 11 Jul 1987 17:39 CST
From: Leff (Southern Methodist University)
      <E1AR0002%SMUVM1.BITNET@wiscvm.wisc.edu>
Subject: defs for a58C

D MAG115 Pattern Recognition\
%V 20\
%N 1\
%D 1987
D MAG116 1985 International Test Conference\
%D 1985
D MAG117 Proceedings IEEE International Symposium on Circuits and Systems\
%C Kyoto, Japan\
%D JUN 5-7 1985
D MAG118 Proceedings of the Second Australian Conference on Applications of Expert Systems\
%C Sydney\
%D 14-16 May 1986
D BOOK66 International Conference on Computers in Engineering Conference and Exhibit (Las Vegas)\
%D 1984\
%I American Society for Mechanical Engineers
D MAG119 Proceedings of the 1986 International Test Conference\
%D SEP 9-11, 1986
D MAG120 1986 IEEE International Conference on Computer Design (Port Chester, NY)\
%D October 6-9, 1986
D BOOK67 1985 Engineering Software IV\
%I Springer Verlag\
%C Berlin-Heidelberg New York\
%D 1985\
%E R. A. Edey
D MAG121 American Control Conference (Seattle, WA)\
%D JUN 18-20 1986
D MAG122 1985 Proceedings Annual Reliability and Maintainability Symposium\
%D 1985
D MAG123 Proceedings of the 1986 International Computers and Engineering Conference (Chicago, Ill.)\
%D JUL 1986
D MAG124 International Conference on Computer Aided Design (Santa Clara, CA)\
%D 1986
D MAG130 AT&T Technical Journal\
%V 65\
%N 5\
%D SEP-OCT 1986
D MAG131 Pattern Recognition Letters\
%V 5\
%N 3\
%D MAR 1987
D BOOK80 Mathematical Foundations of Computer Science\
%S Lecture Notes in Computer Science\
%V 233\
%I Springer-Verlag\
%C Berlin-New York\
%D 1986
D MAG132 J. Logic Programming\
%V 3\
%N 3\
%D 1986
D BOOK81 GWAI-85 Proceedings of the Ninth German Workshop on Artificial Intelligence\
%E Herbert Stoyan\
%S Technical Reports on Information Science\
%V 118\
%I Springer-Verlag\
%C Berlin-New York\
%D 1986
D BOOK82 Eighth International Conference on Automated Deduction (Oxford 1986)\
%P 470-488\
%S Lecture Notes in Computer Science\
%V 230\
%I Springer-Verlag\
%C Berlin-New York\
%D 1986
D BOOK83 Algebra, Combinatorics and Logic in Computer Science, Vols. I, II (Gyor, 1983)\
%S Colloq. Math. Soc. Janos Bolyai\
%V 42\
%I North-Holland\
%C Amsterdam-New York\
%D 1986
D BOOK84 Category Theory and Computer Programming (Guildford, 1985)\
%S Lecture Notes in Computer Science\
%V 240\
%I Springer-Verlag\
%C Berlin-New York\
%D 1986
D MAG135 Journal of Logic Programming\
%V 3\
%N 4\
%D 1986
D MAG136 IEEE Transactions on Geoscience and Remote Sensing\
%V 25\
%N 3\
%D MAY 1987
D MAG137 Soviet Journal of Computer and Systems Sciences\
%V 24\
%N 6\
%D NOV-DEC 1986

------------------------------

Date: Sat, 11 Jul 1987 17:39 CST
From: Leff (Southern Methodist University)
      <E1AR0002%SMUVM1.BITNET@wiscvm.wisc.edu>
Subject:      a58C  (Part 1 of 2)

%A M. J. Amundsen
%T The Compact LISP Machine, a Lisp Machine in a Shoe Box
%J IEEE National Aerospace and Electronics Conference
%V 4
%D 1986
%P 1309-1314
%K H02

%A Robert Buday
%T LISP-Machine Maker Symbolics, Spawned at MIT, is Growing Up
%J Information Week
%N 5
%D MAR 3, 1986
%P 34-37
%K H02 AT16

%A M. Carlsson
%T A Microcoded Unifier for LISP Machine Prolog
%B Symposium on Logic Programming
%D 1985
%P 162-171
%K T02 H02

%A H. Maegawa
%T Fast LISP Machine and Lisp Evaluation Processor Eval II-Processor
Architecture and Hardware Configuration
%J Journal of Information Processing (Japan)
%V 8
%N 2
%D 1985
%P 121-126
%K H02 GA01

%A S. Sakamoto
%T The Design of a Firmware LISP Machine
%R Technology Reports of the Seikei University
%I Faculty of Engineering, Fukuoka, Japan
%N 41
%D 1986
%P 2751-2752
%K H02

%A H. Schotel
%A J. Pijls
%T A Prototype for Grammatical Instruction on a LISP Machine
%J Informatie (Netherlands)
%V 28
%N 1
%D 1986
%P 48-50

%A J. Spoerl
%T The Architecture of the Symbolics LISP Machine
%J Informatique
%V 1
%D 1986
%P 140-144

%A J. M. Switlik
%A R. J. Short
%T The Database Environment and the LISP Machine
%B Artificial Intelligence and Advanced Computer Technology Conference and
Exhibition. Proceedings.
%D 1986

%A M. Yuhara
%T Evaluation of the FACOM Alpha LISP Machine
%B Thirteenth Annual International Symposium on Computer Architecture
%D 1986
%P 184-190
%K H02

%A V. W. Zue
%T The Development of the MIT LISP-Machine Based Research Workstation
%J International Conference on Acoustics, Speech and Signal Processing. Proceedings
%V 1
%D 1986
%P 329-332

%A Y. J. Chao
%T Image Processing Methods in Ductile Fracture of Solids
%J Mechanics
%V 14
%N 1
%D JAN-FEB 1987
%P 57-60
%K AA05 AI06

%A Yu. S. Afonin
%T Blocked Branch and Bound Method
%J Automation and Remote Control
%V 47
%N 8 Part II
%D AUG 1986
%P 1107
%K AI03

%A I. B. Muchnik
%A P. M. Snegirev
%T Algorithm to Estimate the Approximation Accuracy of an Empirical Dependence
%J Automation and Remote Control
%V 47
%N 8 Part II
%D AUG 1986
%K O06 AI04 O04

%A J. L. Nevins
%T Information-Control Aspects of Sensor Systems for Intelligent Robotics
%J Journal of Robotic Systems
%V 4
%N 2
%D APR 1987
%P 215-228
%K AI07 AI06

%A Hooshang Hemami
%A Ralph E. Goddard
%T Recognition of Geometrical Shape by a Robotic Probe
%J Journal of Robotic Systems
%V 4
%N 2
%D APR 1987
%P 237-258
%K AI06 AI07

%A Ren C. Luo
%T Microcomputer-Based Robot Dynamic Sensing Using Linear Array Sensor for
Object Recognition and Manipulation
%J Journal of Robotic Systems
%V 4
%N 2
%D APR 1987
%P 199-214
%K AI06 AI07 H01

%A C. Morandi
%A F. Piazza
%A R. Capancioni
%T Digital Image Registration by Phase Correlation Between Boundary Maps
%J IEE Proceedings-E
%V 134
%N 2 Part E
%P 101-104
%D MAR 1987
%K AI06

%A J. Mantas
%T Methodologies in Pattern Recognition and Image Analysis -- A Brief
Survey
%J MAG115
%P 1-6
%K AI06

%A R. W. Smith
%T Computer Processing of Line Images: A Survey
%J MAG115
%P 7-16
%K AI06

%A S. J. Roan
%A J. K. Aggarwal
%A W. N. Martin
%T Multiple Resolution Imagery and Texture Analysis
%J MAG115
%P 17-34
%K AI06

%A S. Basu
%A K. S. Fu
%T Image Segmentation by Syntactic Method
%J MAG115
%P 35-44
%K AI06

%A Zhen Zhang
%A M. Simaan
%T A Rule-Based Interpretation System for Segmentation of Seismic Images
%J MAG115
%P 45-54
%K AI06

%A Maylor K. Leung
%A Yee-Hong Yang
%T Human Body Motion Segmentation in a Complex Scene
%J MAG115
%P 55-64
%K AI06

%A D. J. Peuquet
%A Zhang Ci-Xiang
%T An Algorithm to Determine the Directional Relationship Between Arbitrarily-
Shaped Polygons in the Plane
%J MAG115
%P 65-74
%K AI06

%A L. G. Shapiro
%A R. S. MacDonald
%A S. R. Sternberg
%T Ordered Structural Shape Matching with Primitive Extraction by Mathematical
Morphology
%J MAG115
%P 75-90
%K AI06

%A M. R. Korn
%A C. R. Dyer
%T 3-D Multiview Object Representations for Model-Based Object Recognition
%J MAG115
%P 91-104
%K AI06

%A Toshifumi Tsukiyama
%A T. S. Huang
%T Motion Stereo for Navigation of Autonomous Vehicles in Man-Made Environments
%J MAG115
%P 105-114
%K AI06 AA19

%A S. Y. Lee
%A S. Yalamanchili
%A J. K. Aggarwal
%T Parallel Image Normalization on a Mesh Connected Array Processor
%J MAG115
%P 115-124
%K AI06 H03

%A H. D. Cheng
%A K. S. Fu
%T VLSI Architectures for String Matching and Pattern Matching
%J MAG115
%P 125-142
%K AI06 O06 H03

%A H. Mellink
%A H. Buffart
%T Abstract Code Network as a Model of Perceptual Memory
%J MAG115
%P 143
%K AI08

%A K. N. Ngan
%A A. A. Kassim
%A H. S. Singh
%T Parallel Image-Processing System Based on the TMS 32010 Digital
Signal Processor
%J IEE Proceedings E
%V 134
%N 2 Part E
%D MAR 1987
%K AI06 H03

%A Soundar R. T. Kumara
%A R. L.  Kashyap
%A C. L. Moodie
%T Expert System for Industrial Facilities Layout Planning and Analysis
%J Computers and Industrial Engineering
%V 12
%N 2
%D 1987
%K AA05 AI01

------------------------------

End of AIList Digest
********************
14-Jul-87 22:59:05-PDT,23672;000000000000
Mail-From: LAWS created at 14-Jul-87 22:51:34
Date: Tue 14 Jul 1987 22:48-PDT
From: AIList Moderator Kenneth Laws <AIList-REQUEST@STRIPE.SRI.COM>
Reply-to: AIList@STRIPE.SRI.COM
US-Mail: SRI Int., 333 Ravenswood Ave., Menlo Park, CA  94025
Phone: (415) 859-6467
Subject: AIList Digest   V5 #180
To: AIList@STRIPE.SRI.COM


AIList Digest           Wednesday, 15 Jul 1987    Volume 5 : Issue 180

Today's Topics:
  Bibliography - Leff a58C  (Part 2 of 2)

----------------------------------------------------------------------

Date: Sat, 11 Jul 1987 17:39 CST
From: Leff (Southern Methodist University)
      <E1AR0002%SMUVM1.BITNET@wiscvm.wisc.edu>
Subject: a58C  (Part 2 of 2)

%A D. Driankov
%T An Outline of a Fuzzy Sets Approach to Decision Making with Interdependent
Goals
%J Fuzzy Sets and Systems
%V 21
%N 3
%D MAR 1987
%P 275-288
%K O04 AI13

%A J. J. Buckley
%T The Fuzzy Mathematics of Finance
%J Fuzzy Sets and Systems
%V 21
%N 3
%D MAR 1987
%P 257-274
%K AA06 O04

%A  S. K. M. Wong
%A W. Ziarko
%T Comparison of the Probabilistic Approximate Classification and the Fuzzy
Set Model
%J Fuzzy Sets and Systems
%V 21
%N 3
%D MAR 1987
%P 357-362
%K O04

%A W. Karwowski
%A N. O. Mulholland
%A T. L. Ward
%T A Fuzzy Knowledge Base of an Expert System for Analysis of Manual Lifting
Tasks (Case Studies and Applications Contribution)
%J Fuzzy Sets and Systems
%V 21
%N 3
%D MAR 1987
%P 363
%K AA05 O04


%A S. S. Rao
%T Description and Optimum Design of Fuzzy Mechanical Systems
%J Journal of Mechanisms, Transmissions and Automation in Design
%V 109
%N 1
%D MAR 1987
%K AA05 O04
%P 126-132

%A Heiko Krumm
%T Logical Verification of Concurrent Programs
%J Angewandte Informatik
%N 4
%D APR 1987
%P 131-140
%K AA08

%A Janice I. Glasgow
%A Glenn H. MacEwen
%T The Development and Proof of a Formal Specification for a Multilevel
Secure System
%J ACM Transactions on Computer Systems
%V 5
%N 2
%D May 1987
%P 151
%K AA08

%A A. Pashtan
%T A Prolog Implementation of an Instruction-level Simulator
%J Software Practice and Experience
%V 17
%N 5
%D MAY 1987
%P 309-318
%K AA08 AA04 T02

%A James L. Flanagan
%T Speech Processing: An Evolving Technology
%J MAG130
%P 2-11
%K AI05

%A James G. Josenhans
%A John F. Lynch, Jr.
%A Marian R. Rogers
%A Richard R. Rosinski
%A Wendy P. VanDame
%T Speech Processing Application Standards
%J MAG130
%P 23-33
%K AI05

%A Robert J. Perdue
%A Eugene L. Rissanen
%T Conversant 1 Voice System: Architecture and Applications
%J MAG130
%P 34-47
%K AI05
%X Conversant is a Registered Trademark

%A John G. Ackenhusen
%A Syed S. Ali
%A James G. Josenhans
%A John W. Moffett
%A Reuel R. Robertson
%A Jaime R. Tormos
%T Speech Processing for AT&T Workstations
%J MAG130
%P 60-67
%K AI05

%A John G. Ackenhusen
%A Syed S. Ali
%A David Bishop
%A Louis F. Rosa
%A Reed Thorkildsen
%T Single-Board General-Purpose Speech Recognition System
%J MAG130
%P 48-59
%K AI05

%A Martha Birnbaum
%A Larry A. Cohen
%A Frank X. Welsh
%T A Voice Password System for Access Security
%J MAG130
%P 68-74
%K AI05

%A Bishnu S. Atal
%A Lawrence R. Rabiner
%T Speech Research Directions
%J MAG130
%P 75-88
%K AI05

%A Knut Conradsen
%A Gert Nilsson
%T Data Dependent Filters for Edge Enhancement of Landsat Images
%J Computer Vision, Graphics, and Image Processing
%V 38
%N 2
%D MAY 1987
%P 101-121
%K AI06

%A Ken-Ichi Kanatani
%T Structure and Motion from Optical Flow Under Perspective Projection
%J Computer Vision, Graphics, and Image Processing
%V 38
%N 2
%D MAY 1987
%P 122-146
%K AI06

%A Azriel Rosenfeld
%T Picture Processing: 1986
%J Computer Vision, Graphics, and Image Processing
%V 38
%N 2
%D MAY 1987
%P 147
%K AI06

%A W. Greblicki
%A M. Pawlak
%T Necessary and Sufficient Conditions for Bayes Risk Consistency of a Recursive
Kernel Classification
%J IEEE Transactions on Information Theory
%D MAY 1987
%V 33
%N 3
%P 408-411
%K O04

%A V. Weispfenning
%T The Complexity of the Word Problem for Abelian l-Groups
%J Theoretical Computer Science
%V 48
%N 1
%D 1986
%P 127
%K AI14 AI10

%A A. V. Zhozhikashvili
%A V. L. Stefanyuk
%T The Category Theory in Problems of Knowledge Representation and Learning
%J Soviet Journal of Computer and Systems Sciences
%V 24
%N 5
%D SEP-OCT 1986
%P 11-23
%K AI16 AI04

%A Ye. K. Gordiyenko
%T Implementation of Search Functions of the FRL Language Using a Two-Tag
Associative Memory
%J Soviet Journal of Computer and Systems Sciences
%V 24
%N 5
%D SEP-OCT 1986
%P 43-58
%K AI03

%A L. I. Feygin
%T Estimation of the Value of the Planning Horizon in the Case of Fuzzy
Durations of the Operations
%J Soviet Journal of Computer and Systems Sciences
%V 24
%N 5
%D SEP-OCT 1986
%P 97-101
%K AI09 O04

%A Ronald R. Yager
%T On the Dempster-Shafer Framework and New Combination Rules
%J Information Sciences
%V 41
%N 2
%D MAR 1987
%P 93-138
%K O04

%A J. C. A. Van Der Lubbe
%A D. E. Boekee
%A Y. Boxma
%T Bivariate Certainty and Information Measures
%J Information Sciences
%V 41
%N 2
%D MAR 1987
%P 139-170
%K O04

%A M. A. Zuenkev
%A A. S. Kulguskin
%A A. G. Poletykin
%T Forming Similarity Relations in Analogy-Driven Systems
%J Automation and Remote Control
%V 47
%N 11 Part 2
%D NOV 1986
%P 1543-1551
%K AI16

%A S. Daley
%A K. F. Gill
%T Attitude Control of a Spacecraft Using an Extended Self-Organizing
Fuzzy Logic Control
%J Proceedings of the Institution of Mechanical Engineers Part C
%V 201
%N 2
%D 1987
%P 97-106
%K AA05 O04

%A G. Jumarie
%T A Concept of Observed Weighted Entropy and its Application to Pattern
Recognition
%J MAG131
%P 191-194
%K AI06

%A J. H. Kim
%T Distributed Inference for Plausible Classification
%J MAG131
%P 195-202
%K AI06

%A J. Ma
%A X. Lu
%A C. Wu
%T A Motion Constraint Equation Under Space-Varying or Time-Varying
Illumination
%J MAG131
%P 203-206
%K AI06

%A M. Werman
%A A. Y. Wu
%A R. A. Melter
%T Recognition and Characterization of Digitized Curves
%J MAG131
%P 207-214
%K AI06

%A G. Cristobal
%A J. Bescos
%A J. Santamaria
%A J. Montes
%T Wigner Distribution Representation of Digital Images
%J MAG131
%P 215-222
%K AI06

%A S. Peleg
%A D. Keren
%A L. Schweitzer
%T Improving Image Resolution Using Subpixel Motion
%J MAG131
%P 223-226
%K AI06

%A M. C. Yuan
%A J. G. Li
%T A Production System for LSI Chip Anatomizing
%J MAG131
%P 227-232
%K AI06

%A R. D. Grisell
%T Noniterative Correction of Images and Motion Sequences
%J MAG131
%P 223-242
%K AI06

%A P. Fua
%A A. J. Hanson
%T Resegmentation Using Generic Shape: Locating General Cultural Objects
%J MAG131
%P 243
%K AI06

%A A. M. Rustamov
%A N. G. Dzhanibekova
%A V. G. Zakiev
%T Development of the Automated System on the Analysis of Reader Demand in
Regional Integral Automated Library-Bibliography Systems
%J Nauchno-Tekhnicheskaya Informatsiya, Seriya II - Informatsionnye
Protsessy I Sistemy
%N 3
%D 1987
%P 11-18
%K AA14

%A I. A. Bolshakov
%T Pure Automatic Spelling Correction Based on the Keyboard Model of
Common Errors
%J Nauchno-Tekhnicheskaya Informatsiya, Seriya II - Informatsionnye
Protsessy I Sistemy
%N 3
%D 1987
%P 11-18

%A K. V. K. K. Prasad
%A T. S. Lamba
%T Natural Language Interface Based on Keyword Extraction Using AWK
%J Microprocessors and Microsystems
%V 11
%N 3
%D APR 1987
%K AI02
%P 157-160

%A A. N. Averkin
%A V. B. Tarasov
%T The Fuzzy Modeling Relation and its Application to Artificial Intelligence
%J MAG122
%P 3-24
%K O04

%A A. V. Alexeyev
%A A. N. Borisov
%A V. I. Glushkov
%A O. A. Krumberg
%A G. V. Merkuryeva
%A V. A. Popov
%A N. N. Slyadz
%T A Linguistic Approach to Decision-Making Problems
%J MAG123
%P 25-42
%K AI02 AI13 O04

%A R. A. Aliev
%T Production Control on the Basis of Fuzzy Models
%J MAG123
%P 43-56
%K O04

%A A. F. Blishun
%T Fuzzy Learning Models in Expert Systems
%J MAG123
%P 57-70
%K AI01 AI04 O04

%A V. E. Zhukovin
%A F. V. Burshtein
%A E. S. Korelov
%T A Decision Making Model with Vector Fuzzy Preference Relation
%J MAG123
%P 71-80

%A S. G. Svarovski
%T Usage of Linguistic Variable Concept for Human Operator Modelling
%J MAG123
%P 107-114
%K O04 AI02

%A D. A. Pospelov
%T Fuzzy Reasoning in Pseudo-Physical Logics
%J MAG123
%P 115-120
%K O04

%A S. V. Chesnokov
%T The Effect of Semantic Freedom in the Logic of Natural Language
%J MAG123
%P 121-154
%K AI02 O04

%A D. I. Shapiro
%T Human Specifics, Fuzzy Categories and Counteraction in Decision
Making Problems
%J MAG123
%P 155-170
%K AI13 O04

%A I. A. Newman
%A R. P. Stallard
%A M. C. Woodward
%T A Hybrid Multiple Processor Garbage Collection Algorithm
%J The Computer Journal
%V 30
%N 2
%D APR 1987
%P 110-118
%K T01 H03

%A J. L. Dupouey
%T Using Artificial Intelligence Languages for the Calculation of
Inbreeding Coefficients - New Tools for an Old Problem
%J Computers in Biology and Medicine
%V 17
%N 2
%D 1987
%P 71-74
%K AA10

%A Rob Gerth
%A W. P. de Roever
%T Proving Monitors Revisited: a First Step Towards Verifying Object
Oriented Systems
%J Fund. Inform.
%V 9
%D 1986
%N 4
%P 371-399
%K AA08

%A P. T. Cox
%T On Determining the Causes of Nonunifiability
%J J. Logic Programming
%V 4
%D 1987
%N 1
%P 33-58
%K AI14 AI10

%A Peter van Emde Boas
%T A Semantical Model for Integration and Modularization of Rules
%B BOOK80
%P 78-92
%K AI01 AI16

%A Ken Hirose
%T An Approach to Proof Checker
%B BOOK80
%P 113-127
%K AA13 AI14 AI11

%A Guy Jumarie
%T New Decision Rules in Statistical Pattern Recognition
%J Kybernetes
%V 16
%D 1987
%N 1
%P 11-18
%K AI06

%A A. V. Kabulov
%A B. I. Zufarov
%T Logical Methods for the Design of Optimal Correctors of Heuristic
Algorithms
%B "Fan"
%C Tashkent
%D 1985
%P 11-17
%K AI16

%A I. V. Kotel'nikov
%T An Algorithm for Constructing a Set of Irredundant Fuzzy Sets
%J Avtomat. i. Telemekh.
%D 1986
%N 9
%P 139-144
%K O04

%A M. A. Nait Abdallah
%T Al-Khowarizmi: A Formal System for Higher Order Logic Programming
%B BOOK80
%P 545-553
%K AI10

%A Zbigniew W. Ras
%A Maria Zemankova
%T Learning in Knowledge Based Systems, a Possibilistic Approach
%B BOOK80
%P 630-638
%K AI04 O04

%A D. Snyers
%T Theorem Proving Techniques and P-Functions for Logic Design and
Logic Programming
%J Philips J. Res
%V 41
%D 1986
%N 5
%P 560-505
%K AA04 AI11 AI10

%A Zbigniew M. Wojcik
%T The Rough Sets Utilization for Linguistic Pattern Recognition
%J Bull. Polish Acad. Sci. Tech. Sci
%V 34
%D 1986
%N 5-6
%P 285-312
%K AI06 AI02

%A S. K. M. Wong
%T Algorithm for Inductive Learning
%J Bull. Polish Acad. Sci. Tech. Sci.
%V 34
%D 1986
%N 5-6
%P 271-276
%K AI04

%A S. K. M. Wong
%A Wojciech Ziarko
%T Remarks on Attribute Selection Criterion in Inductive Learning Based
on Rough Sets
%J Bull. Polish Acad. Sci. Tech. Sci
%V 34
%D 1986
%N 5-6
%P 273-283
%K AI04

%A W. Bibel
%A Ph. Jorrand
%T Fundamentals of Artificial Intelligence. An Advanced Course.
%S Lecture Notes in Computer Science
%V 232
%I Springer-Verlag
%C Berlin-New York
%D 1986
%K AI16 AT15

%A V. Arvind
%A Somenath Biswas
%T An O($N sup 2$) algorithm for the Satisfiability Problem of a Subset
of Propositional Sentences in CNF that Includes all Horn Sentences
%J Inform. Process. Lett
%V 24
%D 1987
%P 67-69
%K O06 AI10

%A Luis Farinas del Cerro
%A Martti Penttonen
%T A Note on the Complexity of the Satisfiability of Modal Horn Clauses
%J J. Logic Programming
%V 4
%D 1987
%N 1
%P 1-10
%K AI11 O06
%A Francoise Fogelman-Soulie
%A Gerard Weisbuch
%T Random Iterations of Threshold Networks and Associative Memory
%J SIAM J. Comput
%V 16
%D 1987
%N 1
%P 203-220
%K AI16 AI08

%A Erik Tiden
%T First-order Unification in Combinations of Equational Theories (Ph. D.
Thesis)
%I Royal Institute of Technology
%C Stockholm
%D 1986
%K AI14 AI11

%A Moshe Y. Vardi
%T Querying Logical Databases
%J J. Comput. System Sci
%V 33
%D 1986
%N 2
%P 142-160
%K AA09 AI10

%A Zbigniew M. Wojcik
%T Contextual Information Research within Sentence with the Aid of the Rough
Sets
%J Bull. Polish Acad. Sci. Tech. Sci
%V 34
%D 1986
%N 5-6
%P 313-330
%K AI02 O04

%A Friedhelm Hinz
%T Regular Chain Code Picture Languages of Nonlinear Descriptional
Complexity
%B BOOK80
%P 414-421
%K AI06

%A Stephen D. Brookes
%T A Fully Abstract Semantics and a Proof System for an ALGOL-like language
with Sharing
%B Mathematical Foundations of Programming Semantics
%P 59-100
%S Lecture Notes in Computer Science
%I Springer-Verlag
%C Berlin-New York
%D 1986
%K AA08

%A Susanne Graf
%T A Complete Inference System for an Algebra of Regular Acceptance Models
%B BOOK80
%P 386-395
%K AI10

%A Laszlo Bela Kovacs
%T Automated Protocol Verification
%B Kozl.-MTA Szamitastech. Automat. Kutato Int. Budapest
%N 33
%D 1985
%P 37-45

%A M. J. Beeson
%T Proving Programs and Programming Proofs
%B Logic, Methodology and Philosophy of Science, VII
%S Stud. Log Foundations Math.
%V 114
%I North-Holland
%C Amsterdam-New York
%D 1986
%K AA08 AI16

%A Anne-Marie Derouault
%A Bernard Merialdo
%T Language Modelling Using a Hidden Markov Chain with Application
to Automatic Transcription of French Stenotypy
%B Semi-Markov Models
%I Plenum
%C New York-London
%D 1986
%K AI02

%A A. J. Baddeley
%T Stochastic Geometry and Image Analysis
%B Mathematics and Computer Science (Amsterdam 1983)
%P 1-18
%S CWI Monographs
%V 1
%I North-Holland
%C Amsterdam-New York
%D 1986
%K AI06

%A A. G. Ivakhnenko
%A S. A. Petukhova
%T Objective Computerized Clustering.  I. Theoretical Questions
%J Soviet J. Automat. Inform. Sci
%V 19
%D 1986
%N 3
%P 1-9
%K O06

%A Hassan Ait-Kaci
%T LOGIN: A Logic Programming Language with Built-in Inheritance
%J MAG132
%P 185-215
%K AI10

%A Marco Bellia
%A Giorgia Levi
%T The Relation Between Logic and Functional Languages: A Survey
%J MAG132
%P 217-236
%K AT08

%A Karl-Hans Blasius
%T Equality Reasoning with Equality Paths
%B BOOK81
%P 57-76
%K AI14

%A Wolfram Buttner
%T Unification in the Data Structure Sets
%B BOOK82
%P 470-488
%K AI14 AA08

%A Ahlem Ben Cherifa
%A Pierre Lescanne
%T An Actual Implementation of a Procedure that Mechanically Proves
Termination of Rewriting Systems Based on Inequalities Between
Polynomial Interpretations
%B BOOK82
%P 42-51
%K AI14 AI11

%A P. Ciancarini
%A P. Degano
%T An Approach to Proving Properties of Nonterminating Logic Programs
%B BOOK83
%P 223-243
%K AI14 AA08 O02

%A Hubert Comon
%T Sufficient Completeness, Term Rewriting Systems and "Anti-Unification"
%B BOOK82
%P 128-140
%K AI14 AI11

%A P. T. Cox
%A T. Pietrzykowski
%T Causes for Events: Their Computation and Applications
%B BOOK82
%K AI11 temporal reasoning

%A A. J. J. Dick
%A R. J. Cunningham
%T Using Narrowing to Do Isolation in Symbolic Equation Solving
%B BOOK82
%P 272-280
%K AI14

%A Roland Dietrich
%T Relating Resolution and Algebraic Completion for Horn Logic
%B BOOK82
%P 62-78
%K AI14 AI10 AI11

%A B. Fronhofer
%T On Refinements of the Connection Method
%B BOOK83
%P 391-401

%A Isabelle Gnaedig
%A Pierre Lescanne
%T Proving Termination of Associative Commutative Rewriting Systems by
Rewriting
%B BOOK82
%P 52-61
%K AI14 AI11

%A Richard Gobel
%T Completion of Globally Finite Term Rewriting Systems for Inductive
Proofs
%B BOOK81
%P 101-110
%K AI11 AI14

%A I. R. Goodman
%T Some Asymptotic Results for the Combination of Evidence Problem
%J Math. Modelling
%V 8
%D 1987
%P 216-221
%K O04 O06

%A Alexander Herold
%T Combination of Unification Algorithms
%B BOOK82
%P 450-469
%K AI11 AI14

%A Douglas Howe
%T Implementing Number Theory: an Experiment with Nuprl.
%B BOOK82
%P 404-415
%K AA13 AI11 AI14

%A Tadashi Kanamori
%A Hiroshi Fujita
%T Formulation of Induction Formulas in Verification of Prolog Programs
%B BOOK82
%P 281-299
%K AI14 AI11 O02

%A Deepak Kapur
%A Paliath Narendran
%A Hantao Zhang
%T Proof by Induction Using Test Sets
%B BOOK82
%P 99-117
%K AI14 AI11

%A Deepak Kapur
%A Paliath Narendran
%T NP-Completeness of the Set Unification and Matching Problems
%B BOOK82
%P 489-495
%K O06 AI11

%A Thomas Kaufl
%T Program Verifier "Tatzelwurm": Reasoning About Systems of Linear
Inequalities
%B BOOK82
%P 300-305
%K AA13 AA08 AI11

%A Younghwan Lim
%T The Heuristics and Experimental Results of a New Hyperparamodulation: HL-
Resolution
%B BOOK82
%P 240-253
%K AI11

%A Rasiah Loganantharaj
%A Robert A. Mueller
%T Parallel Theorem Proving with Connection Graphs
%B BOOK82
%P 337-352
%K AI11 H03

%A Zohar Manna
%A Richard Waldinger
%T How to Clear a Block: Plan Formulation in Situational Logic
%B BOOK82
%P 622-640
%K AI07 AI09 AI11

%A Ursula Martin
%A Tobias Nipkow
%T Unification in Boolean Rings
%B BOOK82
%P 506-513
%K AI14 AI11

%A Jalel Mzali
%T Matching with Distributivity
%B BOOK82
%P 496-502
%K O06 AI11

%A Sanjai Narain
%T A Technique for Doing Lazy Evaluation in Logic
%J MAG132
%P 259-276
%K AI10

%A Hung T. Nguyen
%T On Modeling of Expert Knowledge and Admissibility of Uncertainty Measures
%J Math. Modelling
%V 8
%D 1987
%P 222-226
%K O04 AI01

%A Hans-Jurgen Ohlbach
%T Theory Unification in Abstract Clause Graphs
%B BOOK81
%P 77-100
%K AI14 AI11

%A F. Oppacher
%A E. Suen
%T Controlling Deduction with Proof Condensation and Heuristics
%B BOOK82
%P 384-393
%K AI11 AI14

%A Lawrence C. Paulson
%T Natural Deduction as Higher-Order Resolution
%J MAG132
%P 237-258
%K AI10 AI11

%A David A. Plaisted
%T Abstraction Using Generalization Functions
%B BOOK82
%P 365-376
%K AI11

%A D. Rydeheard
%T A Categorical Unification Algorithm
%B BOOK84
%K AI14 AI11

%A Manfred Schmidt-Schauss
%T Unification in Many-Sorted Equational Theories
%B BOOK82
%P 538-552
%K AI14 AI11

%A Manfred Schmidt-Schauss
%T Unification in a Many Sorted Calculus with Declarations
%B BOOK81
%P 118-132
%K AI14 AI11

%A Hans-Albert Schneider
%T An Improvement of Deduction Plans: Refutation Plans
%B BOOK82
%P 377-383
%K AI11

%A O. Stepankova
%A P. Stepanek
%T And/or Schemes and Logic Programs
%B BOOK83
%P 765-776
%K AI10 AI03

%A Mandayam Thathachar
%A P. S. Sastry
%T Learning Optimal Discriminant Functions Through a Cooperative Game of
Automata
%J IEEE Trans. Systems Man Cybernet.
%V 17
%D 1987
%N 1
%P 73-85
%K AI12 AI04

%A Erik Tiden
%T Unification in Combinations of Collapse-Free Theories with Disjoint
Sets of Function Symbols
%B BOOK82
%P 431-449
%K AI11 AI14

%A F. Winkler
%A B. Buchberger
%T A Criterion for Eliminating Unnecessary Reductions in the Knuth-Bendix
Algorithm
%B BOOK83
%P 849-869
%K AI14 AI11

%A L. Wos
%A W. McCune
%T Negative Paramodulation
%B BOOK82
%P 229-239
%K AI14 AI11

%A Martin Abadi
%A Zohar Manna
%T Modal Theorem Proving
%B BOOK82
%P 172-189
%K AI11

%A Peter B. Andrews
%T Connections and Higher-Order Logic
%B BOOK82
%P 1-4
%K AI11 AI10

%A Leo Bachmair
%A Nachum Dershowitz
%T Commutation, Transformation, and Termination
%B BOOK82
%P 5-20
%K AI11 AI14

%A Julian Besag
%T On the Statistical Analysis of Dirty Pictures
%J J. Royal Statistical Society Series B
%V 48
%D 1986
%N 3
%P 259-302
%K AI06

%A R. Book
%T On the Unification Hierarchy
%B BOOK81
%P 111-117
%K AI14 AI11

%A Frank Malloy Brown
%T A Commonsense Theory of Nonmonotonic Reasoning
%B BOOK82
%P 209-228
%K AI15

%A Hans-Jurgen Burckert
%T Some Relationships Between Unification, Restricted Unification, and
Matching
%B BOOK82
%P 514-524
%K AI11 AI14 O06

%A Cynthia Dwork
%A Paris Kanellakis
%A Larry Stockmeyer
%T Parallel Algorithms for Term Matching
%B BOOK82
%P 416-430
%K AI11 O06 H03 AI14

%A Norbert Eisinger
%T What You Always Wanted to Know About Clause Graph Resolution
%B BOOK82
%P 316-336
%K AI11

%A M. Falaschi
%A Giorgia Levi
%A C. Palamidesi
%T The Formal Semantics of Processes and Streams in Logic Programming
%B BOOK83
%P 363-378
%K AI10 O02

%A Jieh Hsiang
%A Michael Rusinowitch
%T A New Method for Establishing Refutational Completeness in Theorem
Proving
%B BOOK82
%P 141-152
%K AI14 AI11

%A Gerhard Jaeger
%T Some Contributions to the Logical Analysis of Circumscription
%B BOOK82
%P 154-171
%K AI15 AI11

%A Kurt Konolige
%T Resolution and Quantified Epistemic Logics
%B BOOK82
%P 199-208
%K AI10 AI11 AI14

%A Xu Hua Liu
%T Generalized Resolution Using Paramodulation
%J Kexue Tongbao (English Edition)
%V 31
%D 1986
%N 21
%P 1441-1444
%K AI11 AI14

%A Neil V. Murray
%T Theory Links in Semantic Graphs
%B BOOK82
%P 353-364
%K AI16

%A David A. Plaisted
%T A Simple Nontermination Test for the Knuth-Bendix Algorithm
%B BOOK82
%P 69-88
%K AI11 AI14

%A Patrick Saint-Dizier
%T An Approach to Natural-Language Semantics in Logic Programming
%J MAG135
%P 329-356
%K AI02 AI10

%A P. H. Schmitt
%T Computational Aspects of Three-Valued Logic
%B BOOK82
%P 190-198
%K AI11 O04

%A Yoshihito Toyama
%T How to Prove Equivalence of Term Rewriting Systems without Induction
%B BOOK82
%P 118-127
%K AI11 AI14

%A Jonathan Traugott
%T Nested Resolution
%B BOOK82
%P 394-402
%K AI11

%A Kestutis Urba
%T Redundancy of Features in a Classification Problem
%J Statist. Problemy Upravleniya
%N 72
%D 1986
%P 56-63
%K O04
%X Russian with English and Lithuanian Summaries

%A Christoph Walther
%T A Classification of Many-Sorted Unification Problems
%B BOOK82
%P 525-537
%K AI11 AI14

%A Tie Cheng Wang
%T ECR: An Equality Conditional Resolution Proof Procedure
%B BOOK82
%P 254-271
%K AI11

%A Yuan Yuan Wang
%T A Generalized Paramodulation-Resolution Method
%J Nanjing Daxue Xuebao Ziran Kexue Ban
%V 22
%D 1986
%N 2
%P 205-210
%K AI11
%X Chinese with English Summary

%A Richard Cole
%A Chee K. Yap
%T Shape From Probing
%J J. Algorithms
%V 8
%D 1987
%N 1
%P 19-38
%K AI06 AI07

%A Peter Hall
%A D. M. Titterington
%T On Some Smoothing Techniques Used in Image Restoration
%J J. Roy. Statist. Soc. Ser. B.
%V 48
%D 1986
%N 3
%P 330-343
%K AI06

%A R. Schott
%T Nonlinear Filtering and Stochastic Textures
%J Math. Modelling
%V 8
%D 1987
%P 167-169
%K AI06

%A Miguel Filgueiras
%T Cooperating Rewrite Processes for Natural-Language Analysis
%J MAG135
%P 299-328
%K AI11 AI02

%A Horst Reichel
%T Behavioral Program Specification
%B BOOK83
%P 390-411
%K AA08

%A Eugenio Moggi
%T Categories of Partial Morphisms and the $lambda sub p$ - Calculus
(extended abstract)
%B  BOOK84
%P 242-251
%K AA08
%A P. Hajek
%T Some Conservativeness Results for Nonstandard Dynamic Logic
%B BOOK83
%P 443-449
%K AI10
%A Thomas M. Fischer
%T On the Average Complexity of Searching for Partial Match Queries in
Multidimensional Search Trees
%B BOOK83
%P 379-390
%K O06
%A Werner Alexi
%T Extraction and Verification of Programs through the Analysis of
Formal Proofs
%B BOOK81
%P 135-152
%K AA08

%A P. Borowik
%A W. Korczynski
%A T. Kudla
%T An Axiomatic Characterisation of an Algebra of Processes
%B BOOK83
%P 141-150
%K AA08

------------------------------

End of AIList Digest
********************
14-Jul-87 23:17:59-PDT,21809;000000000000
Mail-From: LAWS created at 14-Jul-87 23:00:54
Date: Tue 14 Jul 1987 22:58-PDT
From: AIList Moderator Kenneth Laws <AIList-REQUEST@STRIPE.SRI.COM>
Reply-to: AIList@STRIPE.SRI.COM
US-Mail: SRI Int., 333 Ravenswood Ave., Menlo Park, CA  94025
Phone: (415) 859-6467
Subject: AIList Digest   V5 #181
To: AIList@STRIPE.SRI.COM


AIList Digest           Wednesday, 15 Jul 1987    Volume 5 : Issue 181

Today's Topics:
  Classification - Natural kinds & Fuzzy Categories,
  Comment - Need for Harnad-Style Discussions

----------------------------------------------------------------------

Date: 10 Jul 87  1019 PDT
From: John McCarthy <JMC@SAIL.STANFORD.EDU>
Subject: Natural kinds

Recently philosophers, Hilary Putnam I think, introduced the concept
of natural kind which, in my opinion, is one of the few things they
have done that is useful for AI.  Most nouns designate natural kinds,
uncontroversially "bird", and in my opinion, even "chair".  (I don't
consider "natural kind" to be a linguistic term, because there may
be undiscovered natural kinds and never articulated natural kinds).

The clearest examples of natural kind are biological species -
say penguin.  We don't have a definition of penguin; rather we
have learned to recognize penguins.  Penguins have many properties
I don't know about; some unknown even to penguin specialists.
However, I can tell penguins from seagulls without a precise definition,
because there aren't any intermediates existing in nature.
Therefore, the criteria used by people or by the programs we build
can be quite rough, and we don't all need to use the same criteria,
because we will come out with the same answer in the cases that
actually arise.

In my view the same is true of chairs.  With apologies to Don Norman,
I note that my 20 month old son Timothy recognizes chairs and tables.
So far as I know, he is always right about whether the objects
in our house are chairs.  He also recognizes toy chairs, but just
calls them "chair" and similarly treats pictures of chairs in books.
He doesn't yet say "real chair", "toy chair" and "picture of a chair",
but he doesn't try to sit on pictures of chairs.  He is entirely
prepared to be corrected about what an object is.  For example, he
called a tomato "apple" and accepted correction.

We should try to make AI systems as good as children in this respect.
When an object is named, the system should generate a
gensym, e.g. G00137.  To this symbol should be attached the name
and what the system is to remember about the instance.  (Whether it
remembers a prototype or a criterion is independent of this discussion;
my prejudice is that it should do both if it can.  The utility of
prototypes depends on how good we have made it in handling similarities.)
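
McCarthy's gensym proposal can be sketched in a few lines.  This is my
own illustrative sketch, not code from the posting; the function names
and the choice of Python are assumptions, and the "prototype vs.
criterion" question is left as open as the text leaves it.

```python
import itertools

# Sketch of the gensym idea: each newly named object gets a fresh
# symbol (G00137-style) to which the name and the system's defeasible
# knowledge about the instance are attached.
_counter = itertools.count(137)
memory = {}

def name_object(name, observations):
    """Create a fresh symbol for a newly named object and record
    what the system is to remember about the instance."""
    sym = "G%05d" % next(_counter)
    memory[sym] = {
        "name": name,
        "observations": list(observations),
        "complete": False,   # presume there is more to the concept
        "revisable": True,   # ...and that some of it may be wrong
    }
    return sym

def correct(sym, new_name):
    """Accept correction, as the child relabels 'apple' as 'tomato'."""
    memory[sym]["name"] = new_name

sym = name_object("apple", ["red", "round"])
correct(sym, "tomato")
```

The point of the `revisable` and `complete` flags is only to mark the
two presumptions McCarthy lists; a real system would encode them as
defaults in its reasoning, not as booleans.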

The system should presume (defeasibly) that there is more to the concept
than it has learned and that some of what it has learned may be wrong.
It should also presume (although this will usually be built into the design
rather than be linguistically represented) that the new concept is
a useful way to distinguish features of the world, although some new
concepts will turn out to be mere social conventions.

Attaching if-and-only-if definitions to concepts will sometimes be
possible, and mathematical concepts often are introduced by definitions.
However, this is a rare case in common sense experience.

I'm not sure that philosophers will agree with treating chairs as
natural kinds, because it is easy to invent intermediates between
chairs and other furniture.  However, I think it is psychologically
correct and advantageous for AI, because we and our robots exist
in a world in which doubtful cases are rare.

The mini-controversy about penguins can be treated from this point of
view.  That penguins are birds and whales are mammals has been discovered
by science.  Many of the properties that penguins have in common with
other birds have not even been discovered yet, but we are confident that
they exist.  It is not a matter of definition.  He who gets fanatical
about arbitrary definitions will make many mistakes - for example,
classifying penguins with seals will lead to not finding tasty penguin
eggs.

------------------------------

Date: Sat 11 Jul 87 21:45:36-PDT
From: Ken Laws <Laws@Stripe.SRI.Com>
Reply-to: AIList-Request@STRIPE.SRI.COM
Subject: Natural Kinds

I would not be so quick to thank recent philosophers for the concept
of natural kinds.  While I am not familiar with their contributions,
the notion seems similar to "species" in biology and "cluster" in
engineering and statistics.  Cluster and discriminant analysis go
back to at least the 1930s, and have always depended on the tendency
of objects under study to group into classes.

                                        -- Ken

------------------------------

Date: 13 Jul 87 16:31:17 GMT
From: uwslh!lishka@rsch.wisc.edu  (Christopher Lishka)
Subject: Re: The symbol grounding problem: "Fuzzy" categories?

In article <3930@sunybcs.UUCP> dmark@marvin.UUCP (David M. Mark) writes:
>In article <974@mind.UUCP> harnad@mind.UUCP (Stevan Harnad) writes:
>>
>>
>>In Article 185 of comp.cog-eng sher@rochester.arpa (David Sher) of U of
>>Rochester, CS Dept, Rochester, NY responded as follows to my claim that
>>"Most of our object categories are indeed all-or-none, not graded. A penguin
>>is not a bird as a matter of degree. It's a bird, period." --
>>
>>>     Personally I have trouble imagining how to test such a claim...
>>
>>Try sampling concrete nouns in a dictionary.
>
>Well, a dictionary may not always be a good authority for this sort of
>thing.

I don't want to start a huge discussion on a related topic, but I guess I'll
throw in my two-cents worth.

Mr. Harnad states that one should try sampling concrete nouns in a
dictionary.  It seems to me that a short while ago there was some
discussion around the country as to what a dictionary's purpose
actually is, to which a prominent authority on the subject replied
that a dictionary is *only* a description of what people are commonly
using certain words for.  Now, one upshot of this seems to be that a
dictionary, in the end, is NOT a final authority on many words (if not
all of them included).  It can only provide a current description of
what the public in general is using the word for.

In the case of some words, many people will use them for many
different things.  This may be one reason for the problems with the
word 'map.'  In the case of a penguin, scientifically it is considered
a bird.  I consider it a bird, although a penguin certainly does not
fly in the air.  However, if every English-speaking person except a
few, say myself and Mr. Harnad, suddenly decided to think of a penguin
as something other than a bird, then a dictionary's description would
need to be changed, for myself and Mr. Harnad would be far outweighed.
I suspect that the dictionary would have some entry as to the
historical meaning of 'penguin' (i.e. a penguin used to be considered
a bird, but now it is something else).  However, since a dictionary is
supposed to be descriptive of a language in its current usage, the
entry for penguin would have to be modified.

Which brings me to my point.  Given that a dictionary is a descriptive
tool that seeks to give a good view of a language as it is currently being
used, can it really be used as a final authority?  My feeling is no;
just look at all the different uses of a certain word among your
friends, not to mention the entire state you live in, not to mention
your continent, not to mention the entire English-speaking population
of the world.  Holy cow!  You've suddenly got a lot of little
differences in meaning for a certain word.  Not to mention slang and
local terms (e.g. has anyone ever heard of the word 'bubbler?'  It
means a 'Water Fountain' here in Wisconsin, but you'd be surprised how
many people don't know this term).  In this case you can only look at
words as a 'graded' term, not an all-or-none term if you are using a
dictionary as the basis for a definition.  Sure, if you want to use a
scientific definition for penguin, go ahead...since science seems to
seek to be unambiguous (unlike general spoken language), then you will
have a better all-or-none description.  But I don't think you can go
about using a dictionary, which is a descriptive tool, as an
all-or-none decisive authority on what a word means.  If I remember
back to a Linguistics course I took, this is the same difference as
denotation vs. connotation.

A couple notes: if you notice above (and right here), I use the word
'you' (as a technical writer would use the word 'one') to refer to a
person in general (i.e. the reader).  This is not generally accepted
as proper English by the people who seek to define proper English, but
it is the term that is used by most people that I have known (here in
Wisconsin).  It seems to me that this is further evidence of my
argument above, because I do not think twice in using this term 'you;'
it is how I was raised.

Also, please don't start a discussion on language in this group unless
it pertains to A.I. (and in some case it does); I just felt that
someone ought to speak up on the ambiguity of words, and how for
different people there might be problems with using a dictionary as a
basis for judgement.  If you want to continue this discussion, please
e-mail me, and I will respond in a decent amount of time (after I cool
off in the case of flames ;-)


--
Chris Lishka                    /lishka@uwslh.uucp
Wisconsin State Lab of Hygiene <-lishka%uwslh.uucp@rsch.wisc.edu
                                \{seismo, harvard,topaz,...}!uwvax!uwslh!lishka

------------------------------

Date: 10 Jul 87 16:47:54 GMT
From: trwrb!aero!venera.isi.edu!smoliar@ucbvax.Berkeley.EDU  (Stephen
      Smoliar)
Subject: Re: The symbol grounding problem: "Fuzzy" categories?

In article <3930@sunybcs.UUCP> dmark@marvin.UUCP (David M. Mark) writes:
> we conducted
>a number of experiments and found many ambiguous stimuli near the boundary
>of the concept "map".  Air photos and satellite images are an excellent
>example: they fit the dictionary definition, and some people feel very
>strongly that they *are* maps, others sharply reject that claim, etc.
>Museum floor plans, topographic cross-profiles, digital cartographic
>data files on tape, verbal driving directions for navigation, etc., are
>just some examples of the ambiguous ("fuzzy"?) boundary of the concept
>to which the English word "map" correctly applies.  I strongly suspect
>that "map" is not unique in this regard!


Indeed, it almost seems as if "What is a map?" is not really the appropriate
question.  The better question might be "What can be used as a map?" or
perhaps "How can I use a FOO as a map?"  Furthermore, I agree that "map"
is probably not unique.  There are probably any number of bindings for
BAR for which "What is a BAR?" runs into similar difficulty and for which
"How can I use a FOO as a BAR?" is the more useful question.

One candidate I might propose to discuss along these lines is the concept
of "algorithm."  There are any number of entities which might be regarded
as being used as algorithms, ranging from Julia Child's recipes to
chromosomes.  It would seem that any desire to classify such entities
as algorithms is only valuable to the extent that we are interested in
the algorithmic properties such entities possess.  For example, we might
be interested in the nature of recipes which incorporate "while loops"
because we are concerned with how such loops terminate.

In an earlier posting, Harnad gave the example of how we classify works of
art according to particular styles.  Such classifications may also be
susceptible to this intermediate level of interpretation.  Thus, you
may or may not choose to view a particular tapestry as an allegory.
You may or may not choose to view it as a pastoral.  Such decisions
influence the way you see it and "parse" it as part of your artistic
appreciation, regardless of whether or not your particular view coincides
with that of the creator!

I suspect there is a considerable amount of such relativity in the way we
detect categories.  That relativity is guided not by what the categories
are or what their features are but by how we intend to put those
categories to use.  (In other words, the issue isn't "What features
are present?" but "What features do we want to be present?")

------------------------------

Date: 14 Jul 87 15:37:00 GMT
From: apollo!laporta@beaver.cs.washington.edu  (John X. Laporta)
Subject: Re: The symbol grounding problem: "Fuzzy" categories?

In article <245@uwslh.UUCP> lishka@uwslh.UUCP (Christopher Lishka) writes:

>Given that a dictionary is a descriptive
>tool that seeks to give a good view of a language as it is currently being
>used, can it really be used as a final authority?  My feeling is no;

SUMMARY

(1) You are absolutely right. There is no 'final authority' because language
changes even as one tries to pin it down, with a dictionary, for example.

(2) AI programs designed to 'understand' natural language must include
an encyclopedic as well as a lexicological (dictionary) competence.

(3) The nonexistence to date of perfect artificial understanders of natural
language should not be surprising, given the enormity of the task of
constructing an artificial encyclopedic competence.

(4) The encyclopedia in this instance must grow with the language, preserving
past states, simulating present states, and predicting future states.

ELABORATION

Tackling (2) first:

While dictionary definitions are helpful guides in some respects, the nature of
linguistic competence is encyclopedic rather than lexicological. For instance,
you might hear someone say:

    Because I was going to give a cocktail party, I went to the mall
    to buy whiskey, peanuts, and motor oil.

A lexicological competence would deem this sentence grammatical and
unremarkably consistent, since 'mall' includes the availability of all the
items mentioned. An encyclopedic competence, on the other hand, would
mark this sentence as strange, since 'motor oil' is not a part of 'cocktail
party,' unless, I suppose, you were willing to assume that some of the guests
needed mechanical, not social, lubrication. Even this conjecture is unlikely,
however, because 'cocktail party' includes humans consuming alcoholic
beverages. A case of Billy Beer at the local Exxon is not a cocktail party.
Car mechanics do not come to work in little black dresses.  An encyclopedic
competence is able (a) to isolate the assumptions an utterance requires for
coherence, (b) to rank their probability, and (c) thus to evaluate the
coherence of the utterance as a whole.
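
The (a)-(b)-(c) procedure above can be sketched as a toy checker.
Everything in it -- the frame, the probabilities, the threshold -- is
invented for illustration; it is not Eco's machinery, only a minimal
sketch of what "encyclopedic competence" would have to compute.

```python
# Toy sketch of an encyclopedic competence: a concept frame assigns
# each item a probability of belonging to the concept, and an
# utterance is coherent only if every item fits well enough.
FRAMES = {
    "cocktail party": {"whiskey": 0.9, "peanuts": 0.8, "motor oil": 0.01},
}

def assess(frame_name, items, threshold=0.5):
    """(a) isolate the assumption each item requires, (b) rank its
    probability under the frame, (c) judge overall coherence."""
    frame = FRAMES[frame_name]
    assumptions = {item: frame.get(item, 0.05) for item in items}  # (a), (b)
    coherent = all(p >= threshold for p in assumptions.values())   # (c)
    return assumptions, coherent

ranked, ok = assess("cocktail party", ["whiskey", "peanuts", "motor oil"])
# 'motor oil' receives a very low probability, so the sentence about
# buying whiskey, peanuts, and motor oil for the party is flagged.
```

A lexicological competence, by contrast, would stop after checking
that each word has an entry at all -- which is exactly why it finds
the motor-oil sentence unremarkable.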

Further, 'encyclopedic' in this context includes more than is found in the
_Brittanica_. A humorist might write (in the character of a droll garage
mechanic) about a parley to negotiate sale of a gas station. He decides to
provide a little festive atmosphere by bringing along some beer. But even
this hypothesis doesn't eliminate all strangeness: why is the mechanic
buying motor oil at the mall? Certainly he could get a better price
from his distributor.

This sentence is a mine of linguistics lessons, but the above should be
enough to suggest my point. Encyclopedic competence, however provided,
(scripts or semantically marked graphs of words, to give two examples
which are not mutually exclusive) is crucial to understanding even the
topic of an utterance.

The wider question evolves from (1) ... :

Language is an elaboration of symbols which refer to other symbols. The
'last stop' (the boundary of semiotic analysis, not the boundary of the
linguistic process itself in actual beings or machines) is the connection of
certain signs to 'cultural units.' These pieces of memory are what ground
symbol nets to whatever they are grounded upon. (I prefer Harnad's
formulation, but that is not crucial for this discussion.) When Og the Caveman
remembers one morning the shape of the stone that he used as a scraper
yesterday, a cultural unit exists, and stones of that shape are the first signs
dependent upon it. To oversimplify, the process continues infinitely as signs
are connected to other signs, new cultural units are formed, signs modify
other signs, etc.

... and concludes with (3) and (4):

Meaning is 'slippery' because language changes as it is used. A historically
amnesiac encyclopedic competence for 1980 would mark as improbable
sentences used daily at American slave auctions of the 1840's.

SOURCE NOTE:  Nearly everything I have said here has been elaborated by
Umberto Eco in his book 'A Theory of Semiotics' and subsequent writings.

------------------------------

Date: 14 Jul 87 20:25:01 GMT
From: ritcv!cci632!dwp@cs.rochester.edu  (Dana Paxson)
Subject: Re: Thanks. (was  Re: Results of Symbol Grounding Poll)

In article <1010@mind.UUCP> ghn@mind.UUCP (Gregory Nelson) writes:
>In article <993@mind.UUCP> harnad@mind.UUCP (Stevan Harnad) writes:
>>[]
>>[make the] Net the reliable and respectable medium of scholarly communication
>>that I and (I trust) others are hoping it will evolve into.
>>        ...
>>(4) I continue to be extremely enthusiastic about and committed to
>>developing the remarkable potential of electronic networks for scholarly
>>communication and the evolution of ideas. I take the present votes to
>>indicate that the current Usenet Newsgroups may not be the place to attempt
>>to start this.
>
>                             ...  Perhaps you should take some time off to
>look at some of the other newsgroups.  The comp.xxxx discussions are naturally
>oriented to computer people, but things like rec.xxx and sci.xxx are much
>more "broadminded" (if you will.)  If you want a real surprise, try tuning
>in to the Deja Vu discussion on misc.psi or something like that.
>

I realize that this is belated input.

As one who followed along with an occasional understanding of
the discussion on symbol grounding, I have been attracted both
to the discussion and to the way in which Stevan Harnad
conducted it.  I admire the discipline and rigor evident in his
postings, and see his work as an example of how a newsgroup
functioning often as a bulletin board with limited scope can
be enriched by some really difficult exploration.  Some of the
other contributors to the discussion appeared to work well at a
level near Mr. Harnad's.  It has been an exciting series of
exchanges.

I regret the loss of the discussion from the newsgroup.  Any
reader of the most potent material on computer science will find
that the authors reach out to many fields to gain inspiration,
illustration, and, yes, even forms of grounding(!) for their
work.  Especially grounding.

Like any other science area with meaning, computer science does
not begin in words (or bytes) and end in bytes (or words).  It
ends in application, or at least applicability, to our lives.
In the AI realm, that applicability is becoming an intimate
metamorphism, a mapping/transformation, of how we work rather
than a translation of what we do.  If I can characterize an
aspect of the symbol grounding discussion, it is a knife-sharp
exploration of the type of problem dismissed by so many as
having a self-evident solution.  This class of problem is
precisely the type which is most difficult even to see, let
alone solve.  Witness the depth and detail of the exchanges we
have seen.  If others become impatient with the material, they
don't have to read it; but this topic area appears to be poorly
understood by anybody, and desperately needs close dialogue.
Personally, I feel strongly the need to extend my cognitive
framework with such powerful and challenging material.

Perhaps the outcomes from discussions like this one have too
much potential for making a lot of funded thesis work and
product development irrelevant... but then some outcomes can
unfold whole new realms of exploration and advancement.  Unless
I am mistaken, these newsgroups can play an active role in this
unfoldment.  I don't want to see anything this good be relegated
to an obscure electronic cranny, or lumped with a lot of diffuse
and irrelevant outpourings.  Computer scientists have a lot to
learn from the symbol-grounding exchanges right here.

I sense that there are many quiet readers out there who have
powerful ideas relating to this subject, but who have kept
silent on seeing contemptuous and abusive complaints of
others about the length and content of the postings.  For
complaints, it seems reasonable to address the complaints to
authors privately, or to the moderator if there is one; but
open criticism on the net discourages its use by those whose
insight and sensitivity exceed their boldness.  Making one's
views public is an intimidating process in itself, so why should
we raise the level of intimidation?

For my part, I would like to ask for a citation for Mr. Harnad's
original article on the subject of symbol grounding; I want to
read it to find out what started the interchange I have seen.  I
tuned in late in the process.

Thanks to all of the participants in this probing discussion.

The views expressed here are my own.

Dana Paxson
Systems Engineering
Computer Consoles, Incorporated
Rochester, New York
716 482-5000
CIS User ID:  76327,65

------------------------------

End of AIList Digest
********************
14-Jul-87 23:27:38-PDT,18937;000000000000
Mail-From: LAWS created at 14-Jul-87 23:23:29
Date: Tue 14 Jul 1987 23:20-PDT
From: AIList Moderator Kenneth Laws <AIList-REQUEST@STRIPE.SRI.COM>
Reply-to: AIList@STRIPE.SRI.COM
US-Mail: SRI Int., 333 Ravenswood Ave., Menlo Park, CA  94025
Phone: (415) 859-6467
Subject: AIList Digest   V5 #182
To: AIList@STRIPE.SRI.COM


AIList Digest           Wednesday, 15 Jul 1987    Volume 5 : Issue 182

Today's Topics:
  Logic Programming - ICOT Prolog Progress,
  Humor - AI Justification of Star Wars,
  Speculation - Moravec on Immortality,
  Philosophy of Science - AI as a Science

----------------------------------------------------------------------

Date: Wed, 15 Jul 87 10:32:20 JST
From: Chikayama Takashi <chik%icot.jp@RELAY.CS.NET>
Reply-to: chik@icot.icot.JUNET (Chikayama Takashi)
Subject: Re: Say, what ever happened to ... ICOT Prolog?????

In article <8706111231.AA18169@mitre.arpa> elsaesser%mwcamis@MITRE.ARPA writes:
>It seems ages ago that the 5th generation project was going to
>reinvent AI in a Prolog "engine" that was to do 10 gazillion "
>LIPS".  Anyone know what happened?  I mean, if you can make so many
>"quality" cars (sans auto transmission, useful A/C, paint that can take
>rain and sun, etc.), why can't you make a computer that runs an NP-complete
>applications language in real time???  Semi-seriously, what is the status
>of the 5th generation project, anyone got an update?

Well, we are sorry for not distributing enough information to the AI
community.  Most papers related to ICOT's research are distributed to
the logic programming community but not to the AI world (I guess you
know what poor propagandists we Japanese are :-).  Many are reported in:
        International Conference on Logic Programming
        IEEE Symposium on Logic Programming
Please look into the proceedings of these conferences.

For the 10-gazillion-LIPS computers: what our research over these 5
years has revealed is that highly parallel hardware can never be
practical without much software effort, including new concepts in
programming languages.  More stress is now put on software than in the
original project plan.  Indeed, VLSI technology has been dropped from
the project.  Our experience shows that VLSI technology is NOT the most
difficult point on the way to realistic highly parallel computer
systems.  An efficient system with 256 processors may be built without
changing the software at all.  But for systems with 4096 processors,
we need a drastic change.  And this is what we need to achieve 10
gazillion LIPS.  NOT that VLSI technology has become easier, but that
we have found MORE difficult problems, unfortunately.

Where are we?  Well, one of our recent hardware achievements is the
development of the PSI-II machine, which executes 400 KLIPS (much less
than 10 gazillion, I guess :-).  It is a sequential machine and will
be used as the element processor of our prototype parallel processor
Multi-PSI V2 (with 64 PE's), whose hardware is scheduled to come up at
the end of this year.

If you are interested in our research, a survey of mine titled
        "Parallel Inference System Researches in the FGCS Project"
will be presented at the IEEE Symposium on Logic Programming, held in
San Francisco during Aug 31-Sep 4, 1987.  If you are more interested
in our project, please join the FGCS'88 conference.  It will be held
in Tokyo during Nov 28-Dec 2, 1988.

Takashi Chikayama

------------------------------

Date: 14-Jul-1987 2028
From: minow%thundr.DEC@decwrl.dec.com  (Martin Minow THUNDR::MINOW
      ML3-5/U26 223-9922)
Subject: Book Report

From "Dirk Gently's Holistic Detective Agency," by Douglas Adams.
(New York: Simon and Schuster, 1987):

    "Well," he said, "it's to do with the project which first made
    the software incarnation of the company profitable.  It was
    called _Reason_, and in its own way it was sensational."

    "What was it?"

    "Well, it was a kind of back-to-front program.  It's funny how
    many of the best ideas are just an old idea back-to-front.  You
    see, there have already been several programs written that help
    you make decisions by properly ordering and analysing all the
    relevant facts.... The drawback with these is that the decision
    which all the properly ordered and analyzed facts point to is not
    necessarily the one you want.

    "... Gordon's great insight was to design a program which allowed
    you to specify in advance what decision you wished it to reach,
    and only then to give it all the facts.  The program's task, ...
    was simply to construct a plausible series of logical-sounding
    steps to connect the premises with the conclusion." ....

    "Heavens.  and did the program sell very well?"

    "No, we never sold a single copy.... The entire project was bought
    up, lock, stock, and barrel, by the Pentagon.  The deal put WayForward
    on a very sound financial foundation.  Its moral foundation, on the
    other hand, is not something I would want to trust my weight to.
    I've recently been analyzing a lot of the arguments put forward in
    favor of the Star Wars project, and if you know what you're looking
    for, the pattern of the algorithms is very clear.

    "So much so, in fact, that looking at Pentagon policies over the
    last couple of years I think I can be fairly sure that the US
    Navy is using version 2.00 of the program, while the Air Force for
    some reason only has the beta-test version of 1.5.  Odd, that."

------------------------------

Date: Wed 8 Jul 87 16:19:25-PDT
From: Ken Laws <LAWS@IU.AI.SRI.COM>
Subject: Moravec on Immortality

    [Forwarded with permission of Hans.Moravec@ROVER.RI.CMU.EDU.]


From AP Newsfeatures, June 14, 1987
By MICHAEL HIRSH
Associated Press Writer
    PITTSBURGH (AP) - If you can survive beyond the next 50 years or so,
you may not have to die at all - at least, not entirely.  [...]
    Hans Moravec, director of Mobile Robot Laboratory of the Robotics
Institute at Carnegie Mellon University, believes that computer
technology is advancing so swiftly there is little we can do to avoid
a future world run by superintelligent robots.
    Unless, he says, we become them ourselves.
    In an astonishingly short amount of time, scientists will be able to
transfer the contents of a person's mind into a powerful computer,
and in the process, make him - or at least his living essence -
virtually immortal, Moravec claims.
    ''The things we are building are our children, the next
generations,'' the burly, 39-year-old scientist says. ''They're
carrying on all our abilities, only they're doing it better. If you
look at it that way, it's not so devastating.''  [...]
    ''I have found in traveling throughout all of the major robotics and
artificial intelligence centers in the U.S. and Japan that the ideas
of Hans Moravec are taken seriously,'' says Grant Fjermedal, author
of ''The Tomorrow Makers,'' a recent book about the future of
computers and robotics.  [He] devotes the first five chapters of
his book to the work of Moravec and his proteges at CMU.
    MIT's Gerald J. Sussman, who wrote the authoritative textbook on
artificial intelligence, agreed that computerized immortality for
people ''isn't very long from now.''
    ''A machine can last forever, and even if it doesn't you can always
make backups,'' Sussman told Fjermedal. ''I'm afraid, unfortunately,
that I'm the last generation to die. Some of my students may manage
to survive a little longer.''  [...]
    CMU's Allen Newell, one of the so-called founding fathers of
artificial intelligence, cautions that while little stands in the way
of intelligent machines, the transfer of a human mind into one is
''going down a whole other path.''
    ''The ability to create intelligent systems is not at all the same
as saying I can take an existing mind and capture what's in that
mind. You might be able to create intelligence but not (capture) the
set of biological circumstances that went into making a particular
mind,'' he says.
    In Moravec's forthcoming book, ''Mind Children,'' he argues that
economic competition for faster and better information-processing
systems is forcing the human race to engineer its own technological
Armageddon, one that a nuclear catastrophe can only delay.
    Natural evolution is finished, he says. The human race is no longer
procreating, but designing, its successors.
    ''We owe our existence to organic evolution. But we owe it little
loyalty,'' Moravec writes. ''We are on a threshold of a change in the
universe comparable to the transition from non-life to life.''
    Moravec's projections are based on his research showing that, on the
average, the cost of computation has halved every two years from the
time of the primitive adding machines of the late 19th century to the
supercomputers of the 1980s.  [...]
    Moreover, the rate is speeding up, and the technological pipeline is
full of new developments, like molecule-sized computer circuits and
recent advances in superconductors, that can ''sustain the pace for
the foreseeable future,'' he says.
    The implications of a continued steady decrease in computing costs
are even more mind-boggling.
    It is no surprise that studies in artificial intelligence have shown
sparse results in the last 20 years, Moravec says. Scientists are
severely limited by the calculating speed and capacity of laboratory
computers. Today's supercomputers, running at full tilt, can match in
power only the 1-gram brain of a mouse, he says.
    But by the year 2010, assuming the growth rate of the last 80 years
continues, the best machines will be a thousand times faster than
they are today and equivalent in speed and capacity to the human
mind, Moravec argues.  [...]
    ''All of our culture can be taken over by robots. It'll be boring to
be human. If you can get human equivalence by 2030, what will you
have by 2040?'' Moravec asks, laughing.
    ''Suppose you're sitting next to your best friend and you're 10
times smarter than he is. Are you going to ask his advice? In an
economic competition, if you make worse decisions, you don't do as
well,'' he says.
    ''We can't beat the computers. So that opens up another possibility.
We can survive by moving over into their form.''
    There are a number of different scenarios of ''digitizing'' the
contents of the human mind into a computer, all of which will be made
plausible in the next 50 to 100 years by the pace of current
technology, Moravec says.
    One is to hook up a superpowerful computer to the corpus callosum,
the bundle of nerve fibers that connects the two hemispheres of the
brain. The computer can be programmed to monitor the traffic between
the two and, eventually, to teach itself to think like the brain.
    After a while, the machine begins to insert its own messages into
the thought stream. ''The computer's coming up with brilliant
solutions and they're just popping into your head,'' Moravec says [...]
    As you lose your natural brain capacity through aging, the computer
takes over function by function. And with advances in brain scanning,
you might not need any ''messy surgery,'' Moravec says. ''Perhaps you
just wear some kind of helmet or headband.'' At the same time, the
person's aging, decrepit body is replaced with robot parts.
    ''In the long run, there won't be anything left of the original. The
person never noticed - his train of thought was never interrupted,''
he says.
    This scenario is probably more than 50 years away, Moravec says, but
because breakthroughs in medicine and biotechnology are likely to
extend people's life spans, ''anybody now living has a ticket.''
    Like many leading artificial intelligence researchers, Moravec
discounts the mind-body problem that has dogged philosophers for
centuries: whether a person's identity - in religious terms, his soul
- can exist independently of the physical brain.
    ''If you can make a machine that contains the contents of your mind,
then that machine is you,'' says MIT's Sussman.
    Moravec believes a machine-run world is inevitable ''because we
exist in a competing economy, because each increment in technology
provides an advantage for the possessor . . . Even if you can keep
them (the machines) slaves for a long time, more and more
decision-making will be passed over to them because of the
competitiveness.
    ''We may still be left around, like the birds. It may well be
that we can arrange things so the machines leave us alone. But sooner
or later they'll accidentally step on us. They'll need the material of
the earth.''
    Such talk is dismissed as sheer speculation by Moravec's detractors,
among them his former teacher, Stanford's John McCarthy, who is also
one of the founding fathers of artificial intelligence research.
    McCarthy says that while he respects Moravec's pioneering work on
robots, his former Ph.D. student is considered a ''radical.''
    ''I'm more uncertain as to how long it (human equivalence) will
take. Maybe it's five years. Maybe it's 500. He has a slight tendency
to believe it will happen as soon as computers are powerful enough.
They may be powerful enough already. Maybe we're not smart enough to
program them.''
    Even with superintelligent machines, McCarthy says, it's hardly
inevitable that computers will take over the world.
    ''I think we ought to work it out to suit ourselves. In particular
it is not going to be to our advantage to give things with
human-level intelligence human-like emotions (like ambition). You
might want something to sit there and maybe read an encyclopedia
until you're ready to use it again,'' he says.
    George Williams, an emeritus professor of divinity at Harvard
University, called Moravec's scenario ''entirely repugnant.'' [...]
    McCarthy, however, insists there's no need to panic.
    ''Because the nature of the path that artificial intelligence will
take is so unknown, it's silly to attempt to plan any kind of social
policy at this early time,'' he says.

------------------------------

Date: Sun 12 Jul 87 19:45:34-PDT
From: Lee Altenberg <CCOCKERHAM.ALTENBERG@BIONET-20.ARPA>
Subject: AI is not a science

        This discussion has brought to my mind the question of undecidability
in cellular automata, as discussed by S. Wolfram.  For some rules and initial
sequences, the most efficient way of finding out how the automaton will behave
is simply to run it.  Now, what is the status of knowledge about the
behavior of automata and the process of obtaining this knowledge?  Is it a
science or not?
        Invoking some of the previous arguments regarding AI, it could be
said that it is not a science because knowing something about an
automaton tells one nothing about the actual world.  That is why mathematics
has been called not a science.
        Yet, to find out how undecidable automata behave, one needs to
carry out the experiment of running them.  In this way they are just like
worldly phenomena, knowledge of which comes from observing them.
One must take an empirical approach to undecidable systems.
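A minimal sketch of what "just run it" means in practice (the rule
number, lattice width, and step count below are my own illustrative
choices, not anything from the posting): an elementary cellular
automaton is a one-line update rule, and for undecidable rules
simulation is in general the only route to knowing the outcome.

```python
# Minimal elementary cellular automaton simulator (Wolfram-style rules).
# The 8 bits of `rule` give the new cell value for each 3-cell
# neighborhood, indexed as (left << 2) | (center << 1) | right.

def step(cells, rule):
    """Apply an elementary CA rule to one row of 0/1 cells (wrap-around)."""
    n = len(cells)
    out = []
    for i in range(n):
        left, center, right = cells[i - 1], cells[i], cells[(i + 1) % n]
        neighborhood = (left << 2) | (center << 1) | right  # value 0..7
        out.append((rule >> neighborhood) & 1)
    return out

def run(width, steps, rule=110):
    """Run from a single live cell; return the full history of rows."""
    row = [0] * width
    row[width // 2] = 1
    history = [row]
    for _ in range(steps):
        row = step(row, rule)
        history.append(row)
    return history

if __name__ == "__main__":
    # The only way to see the pattern is to compute it, row by row.
    for row in run(31, 15):
        print("".join("#" if c else "." for c in row))
```

Running this prints the familiar growing triangle of Rule 110; the
point of the posting is that, for such rules, no summary formula
replaces the run itself.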
        But there is another angle of evaluation.  Naturalists have been
belittled as "not doing science" because their work is largely descriptive.
Does science consist, then, in making general statements?  Or to be more
precise, does science consist of redescribing reality in terms of some general
statements plus smaller sets of statements about the world, which when combined
can generate the full (the naturalist's) description of reality?  If this
is the case, then all examples of undecidable (and chaotic, I would
guess) processes fall outside the dominion of science, which seems to me
overly restrictive.

------------------------------

Date: Mon, 13 Jul 87 15:08:07 bst
From: Stefek Zaba <sjmz%hplb.csnet@RELAY.CS.NET>
Subject: AI as science: establishing generality of algorithms

In response to the points of Jim Hendler and John Nagle, about whether
your favourite planning system can be shown to be more general than the
Standard Reference:

At the risk of drawing the slings and arrows of people who sincerely believe
Formalism to be the kiss of death to AI, I'd argue that there *are* better
characterisations of the power of algorithms than a battery of test cases -
or, in the case of the typical reported AI program, described in necessarily
space-limited journals, a tiny number thereof.  Such characterisations are in
the form of more formal specs of the algorithm - descriptions of it which
strip away implementation efficiency tricks, and typically use quantification
and set operations to get at the gist of the algorithm.  You can then *prove*
the correctness of your algorithm *under given assumptions*, or "equivalently"
derive conditions under which your algorithm produces correct results.

Such proofs are usually (and, I believe, more usefully) "rigorous but
informal"; that is, a series of arguments with which your colleagues cannot
find fault, rather than an immensely long and tortuous series of syntactic
micro-steps which end up with a symbol string representing the desired
condition.  Often it's easier to give sufficient (i.e. stronger than
necessary) conditions under which the algorithm works than a precise set of
necessary-and-sufficient ones.  *Always* it's harder (for mere mortals like
me, anyway) than just producing code which works on some examples.

An example of just such a judicious use of formalism which I personally found
inspiring is Tom Mitchell's PhD thesis covering the version space algorithm
(Stanford 1978, allegedly available as STAN-CS-78-711).  After presenting a
discursive description of the technique in chapter 2, chapter 3 gives a formal
treatment which introduces a minimum of new terminology, and gives a simple
and *testable* condition under which the algorithm works: "A set of patterns P
with associated matching predicate M is said to be an admissible pattern
language if and only if every chain [totally ordered subset] of P has a
maximum and a minimum element".
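To give the flavor of the quoted condition without the thesis at hand
(a toy sketch of my own, not Mitchell's code): a conjunctive language
with a "?" wildcard is admissible in this sense, since every chain of
patterns has a maximum (all wildcards) and a minimum (a fully specific
tuple), so the most specific consistent hypothesis is well defined:

```python
# Toy conjunctive pattern language: a hypothesis is a tuple of attribute
# values, where "?" matches anything.  This sketches only the specific
# boundary (Find-S style), not the full version-space algorithm.

def matches(hypothesis, example):
    """True if every attribute of the example fits the hypothesis."""
    return all(h == "?" or h == e for h, e in zip(hypothesis, example))

def minimally_generalize(hypothesis, example):
    """Least generalization of `hypothesis` that also covers `example`."""
    return tuple(h if h == e else "?" for h, e in zip(hypothesis, example))

def find_s(positive_examples):
    """Most specific hypothesis consistent with all positive examples."""
    it = iter(positive_examples)
    s = tuple(next(it))          # start maximally specific: first example
    for example in it:
        if not matches(s, example):
            s = minimally_generalize(s, example)
    return s

if __name__ == "__main__":
    positives = [("red", "round", "small"),
                 ("red", "round", "large")]
    print(find_s(positives))     # -> ('red', 'round', '?')
```

The admissibility condition is what guarantees the "least
generalization" step always exists and is unique in this language.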

Stefek Zaba, Hewlett-Packard Labs, Bristol, England.
[Standard disclaimer concerning personal nature of views applies]

------------------------------

Date: 13 Jul 87 05:23:56 GMT
From: ihnp4!lll-lcc!esl.ESL.COM!ssh@ucbvax.Berkeley.EDU (Sam)
Reply-to: ssh@esl.UUCP (Sam)
Subject: is AI a science?


[There are several components of AI, as there are of CS, but...]

Let's take a step back.  Is "Computer Science" a science?  -- Sam

------------------------------

End of AIList Digest
********************
16-Jul-87 23:19:35-PDT,20285;000000000001
Mail-From: LAWS created at 16-Jul-87 23:08:59
Date: Thu 16 Jul 1987 23:05-PDT
From: AIList Moderator Kenneth Laws <AIList-REQUEST@STRIPE.SRI.COM>
Reply-to: AIList@STRIPE.SRI.COM
US-Mail: SRI Int., 333 Ravenswood Ave., Menlo Park, CA  94025
Phone: (415) 859-6467
Subject: AIList Digest   V5 #183
To: AIList@STRIPE.SRI.COM


AIList Digest            Friday, 17 Jul 1987      Volume 5 : Issue 183

Today's Topics:
  Philosophy - Natural Kinds & Philosophy of Science &
    Categorization & Symbol Grounding

----------------------------------------------------------------------

Date: 15 Jul 87 14:19 PDT
From: Tony Wilkie /DAC/  <TLW.MDC@OFFICE-1.ARPA>
Subject: Natural Kinds

I may get sizzled for this, but I will suggest that the term "natural kind",
while a fairly recent addition to the philosophical lexicon, is a conceptual
descendant of Plato's Forms, and is closer in meaning to
Aristotle's discussions of 'kinds' in his Metaphysics.

Chairs would certainly be a paradigm example of a Platonic Form, and Aristotle
in his Metaphysics used his horse, Bucephalus, as an example in his discussion
of kinds. Given his inclination as sort of a teleological guerilla, Aristotle
would have (and may have) had a tough time separating his 'kinds' concept from
'species' in the biological cases. Still, I think it safe to say that
philosophical discussion of ontology preceded the development of a formal
concept of species.

   Tony L. Wilkie <TLW.MDC@Office-1.ARPA>

------------------------------

Date: Thu, 16 Jul 87 15:42:11 EDT
From: mclean@nrl-css.arpa (John McLean)
Subject: Natural Kinds

Even "recent" philosophical discussions of natural kinds go back 20 years
and much further if you count Nelson Goodman's stuff on projectibility of
predicates (why do we assume emeralds are green and not grue, i.e.,
green until the year 2000 and then blue?) or much of the stuff written
in response to Hempel's problem of whether a nonblack nonraven could
count as a confirming instance of the claim that all ravens are black (since
the claim that all P's are Q's is logically equivalent to the claim that
all nonQ's are nonP's).  But I think you can also view much of what Plato
had to say about forms and what Aristotle had to say about substance as
being concerned with the problem of natural kinds as well.
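The logical equivalence in that parenthesis can be checked
mechanically; a throwaway illustration of my own, not part of the
original posting:

```python
# Mechanical check of the contrapositive equivalence behind Hempel's
# ravens: "all P's are Q's" (P -> Q) is logically equivalent to
# "all non-Q's are non-P's" (not Q -> not P).

from itertools import product

def implies(a, b):
    """Material implication: a -> b."""
    return (not a) or b

# The two forms agree on every truth assignment:
assert all(implies(p, q) == implies(not q, not p)
           for p, q in product([False, True], repeat=2))
print("P -> Q is equivalent to (not Q) -> (not P) on all assignments")
```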

However, I think the issue being raised about recognizing penguins,
chairs, etc. goes back to Wittgenstein's _Philosophical_Investigations_:

   For if you look at them you will not see something that is common to
   all, but similarities, relationships, and whole series of them at
   that...I can think of no better expression to characterize these
   similarities than "family resemblance"...

John McLean

------------------------------

Date: 16 Jul 87  2207 PDT
From: John McCarthy <JMC@SAIL.STANFORD.EDU>
Subject: re: AIList Digest   V5 #181

[In reply to message sent Tue 14 Jul 1987 22:58-PDT.]

The distinction I had in mind between natural kind and cluster is
the presumed existence of as yet unknown properties of a natural
kind.

When I said "doubtful cases are rare", I left myself open to misunderstanding.
I meant that in case of chairs in Timothy's experience doubtful cases
are rare.  Therefore, for a child to presume a natural kind on hearing
a word or seeing an object is advantageous, and it will also be advantageous
to build AI systems with this presumption.

Finally, a remark concerning the "symbol grounding" discussion.  My
problems with it were mainly quantitative - there was just too much
to follow.  I suspect that Stevan Harnad's capacity to follow very
long discussions is exceptional.  I would welcome a summary of the
different points of view by someone who did follow it and feels himself
sufficiently uncommitted to any single point of view.

------------------------------

Date: Thu, 16 Jul 87 17:18 EDT
From: Nichael Cramer <nichael@JASPER.PALLADIAN.COM>
Reply-to: Nichael Cramer <NICHAEL%JASPER@LIVE-OAK.LCS.MIT.EDU>
Subject: AIList Digest   V5 #182

>>
>>    Let's take a step back.  Is "Computer Science" a science?  -- Sam
>>

There is the old chestnut that one should be leery of any discipline that feels
such a need to justify itself that it appends the term "Science" to its own
name.  Witness "Social Science".  Or, more to the point, "Creation Science"
[sic].

[Standard disclaimer concerning personal nature of views applies]
NICHAEL

Rednecks for Rainforest

------------------------------

Date: 14 Jul 87 22:20:56 GMT
From: mcvax!botter!klipper!biep@seismo.css.gov  (J. A. "Biep" Durieux)
Subject: Definition of science and of scientific method.

1) I think this discussion belongs in sci.philosophy.tech, and perhaps in
sci.research, but definitely not in any of the other groups. Please let's
move out of the wrong newsgroups. This article is meant as a merger of
two discussions, one in sci.med (and other places), and one in comp.ai.
Followups will go to sci.philosophy.tech *only*.

2) There are multitudes of definitions for science, and even more usages.
Here I talk just about a rather generally accepted stance.

3) There is craft (what engineers and the like do), art (about which I
don't want to speak), science (the methodical unraveling of the
secrets of the world, "world" in a broad sense), and philosophy (the
necessary building of footholds, standing on which science can be done).

4) Philosophy starts with quarreling about whether God exists, then whether
I exist (some say the other way round - for "God" some read "anything at all"),
then whether an outside world exists, then how we should look at that world
(yielding things like epistemology, ethics, aesthetics, etc.), and,
choosing epistemology, which ways of getting knowledge there are and which
ones have what value.  One of these methods (as many philosophers hold)
is reason, and here logic and mathematics come around the corner.
There is still much dispute (intuitionism, for example - could you give us
an intro, Lambert Meertens? - or "what constitutes a proof", "what is
`mathematical rigour'", etc.) and uncertainty (the liar's paradox) around,
as the means of thinking are still being defined, so they cannot be used
freely yet.  Perhaps that is a good working definition of philosophy:
thinking where the means for thinking are not yet finished.

5) Science starts (or: sciences start) from the results of the philosophers'
work (unhappily the philosophers aren't ready yet, so those results are
not as sure as they should be, and certainly not as sure as they are often
thought to be by non-philosophical scientists) exploring the world.

6) The definition of "science", and of scientific method, is by its very
nature a philosophical, not a scientific, matter.  Otherwise one would
get paradoxes like:

Ockham's razor tells us to throw away any non-necessary principles.
The principle of Ockham's razor is non-necessary.
So let's throw away Ockham's razor.
(Happily, the director of the British Museum will not let you touch it,
but anyway, the case is clear.)

7) The above is highly simplified, but I believe that simple introductions
are wanting on usenet.  Too often I fall into a discussion which presupposes
knowledge I don't have, or that I see some participants don't have.

8) If this spawns serious discussion (only in sci.philosophy.tech, please!)
I would be more than pleased.
--
                                                Biep.  (biep@cs.vu.nl via mcvax)
Unix is a philosophy, not an operating system. Especially the latter.

------------------------------

Date: 15 Jul 87 15:45:00 GMT
From: apollo!laporta@beaver.cs.washington.edu  (John X. Laporta)
Subject: Re: The symbol grounding problem: "Fuzzy" categories?

In article <3183@venera.isi.edu> smoliar@vaxa.isi.edu.UUCP (Stephen
Smoliar) writes:
>There are probably any number of bindings for BAR for which "What is
>a BAR?" runs into ... difficulty and for which "How can I use a FOO
>as a BAR?" is the more useful question.
>
>In an earlier posting, Harnad gave the example of how we classify
>works of art ... Such classifications may also be susceptible to this
>intermediate level of interpretation.  Thus, you may or may not choose
>to view a particular tapestry as an allegory ... [or] as a pastoral.
>Such decisions influence the way you see it and "parse" it as part of
>your artistic appreciation, regardless of whether or not your
>particular view coincides with that of the creator!
>
>I suspect there is a considerable amount of such relativity in the way we
>detect categories.  That relativity is guided not by what the categories
>are or what their features are but by how we intend to put those
>categories to use.  (In other words, the issue isn't "What features
>are present?" but "What features do we want to be present?")

Umberto Eco writes in "Eugène Sue and _Les Mystères de Paris_" about this
problem. Sue was a sort of gentleman pornographer in post-Napoleonic France.
One of his series, about a character like the Shadow who worked revenge on
decadent aristocratic evildoers, with a lot of bodice-ripping along the way,
caught on with the newly literate general working public. They consumed his
book in vast quantities and took it as a call to arms so seriously that Paris
was barricaded by people inspired by it. A sex-and-violence pornographic
thriller became a call to political reform and the return of morality.

The relevant semiotic category is "closure." Roughly speaking, a
closed work is one that uses a tight code to tell a tale to an
audience sharply defined by their sharing of that code. Superman
Comics is an example of a closed work. (There is an entertaining study
somewhere of explanations offered by New Guinean tribesmen of a
Superman Comic.) Closed works don't ring, so to speak, with the
resonance of the entire semiotic continuum, while open works do.
Closed works are thus easily subject to gross misinterpretation by
readers who don't share the code in which those works are written.

Open works, on the other hand, enforce their own interpretation. While
there is drift over time in these interpretations, it is far smaller
than the vastly divergent interpretations offered of closed works by
varying interpreters in the same era.  Open works connect to the
entire semiotic continuum - indeed, the (broadly) rhetorical methods
(tropoi) they use bespeak a purpose of educating the reader about the
subjects (topoi) they treat. _Remembrance of Things Past_ is an
example of an open work. While a great deal of unfamiliar material and
controversial analysis is offered to any reader of those 3000 pages,
the mere act of reading them enforces what is, for the purpose of
semiotics, a uniform interpretation (read disambiguated topical
hypothesis).

It is very easy to 'use' a closed work by correlating the elements of
an external symbol system with the opaque code the work presents. Of
course, if the 'grounding' of one's symbol system bears no relation to
that which the work employs, one is just as much 'used' by the work as
a consequence. (Imagine, for example, using a rectangular bar of
plastic explosive as a straightedge.)

It is far more difficult to impose an arbitrary interpretation on an
open work, since it contains material that tends to contradict
incorrect or incomplete hypotheses about its topos.  For example,
while we are 'told' that Superman comes from the planet Krypton, etc.,
we learn by watching Marcel what his origins are, and while Superman
comes as a given from space, Marcel's character defines itself in our
consciousness by our 'observation' of his life. Furthermore, while
Superman is always Superman, Marcel has an origin and a destiny.
Marcel changes with time, he breaks with Albertine; Superman always
almost, but actually never marries Lois Lane. (Spiderman's recent
marriage to Mary Jane is an interesting twist. Certainly by comparison
with Superman's, Spiderman's story is an open work.)

Historians who based hypotheses about 20th century American attitudes
on an analysis of Superman comics would have to confirm them by
considerable reference to external sources, while students of early
20th century France would likely use _Remembrance of Things Past_ to
confirm their ideas.

IN SUMMARY: The relativity of categorization is an inverse index of
the 'openness' of the thing categorized. Dr. Morbius in "Forbidden
Planet" was able to divine the purpose of Krell instrumentation
because the science on which it was founded, while more advanced than
his own, shared the same basis in physical reality and hypothesis
testing. The space-given monolith in "2001" is indecipherable (a real
'black box', but with undefined input and output), and thus can be
'used' for any purpose at all.

------------------------------

Date: 13 Jul 87 18:16:06 GMT
From: linus!philabs!sbcs!bnl!allard@husc6.harvard.edu  (rick allard)
Subject: Re: The symbol grounding problem: Again... grounding?

In article <931@mind.UUCP> harnad@mind.UUCP (Stevan Harnad) writes:

>Categorization preformance (with all-or-none categories) is highly reliable
>(close to 100%) and MEMBERSHIP is 100%. ...

Why add this clause about "real" membership?  Isn't the bulk of the
discussion about us humble humans doing the categorizing?  If we do
start wondering about this larger realm, does it bear on categorizing?

Rick
--
ooooooooooooootter#spoon in bowl
!!!!!!!!!!!!&   RooM    &
!!!!!!!!!!!!R   oooo    M

------------------------------

Date: 15 Jul 87 19:08:35 GMT
From: diamond.bbn.com!aweinste@husc6.harvard.edu  (Anders Weinstein)
Subject: Re: The symbol grounding problem meta-discussion

Since I will shortly be posting a follow-up to Harnad's last reply to me on
the SGP, I guess I ought to address the meta-discussion.

I think that different standards apply in the two domains in which this
discussion has been taking place.

I recognize that AI-List subscribers rightfully expect some selectivity from
an edited digest, and I will understand completely if the moderator chooses
not to redistribute my follow-up because the volume on this subject has
exceeded the limits demanded by his readership.

On the other hand, I see no justification for attempting to squelch the
discussion on the Usenet side of things (from which I am participating). This
unmoderated forum is avowedly anarchic, and the wishes of a supposed majority
are irrelevant -- perhaps no single topic interests a majority of readers. If
you're not interested in a discussion that's clearly appropriate for this
newsgroup, the right thing to do is just ignore it.  The software makes it
easy to "kill" a topic you don't care about; do so, and you'll never even
*see* the messages.  I really don't understand the problem.

Anders Weinstein
BBN Labs

------------------------------

Date: 15 Jul 87 20:00:29 GMT
From: diamond.bbn.com!aweinste@husc6.harvard.edu  (Anders Weinstein)
Subject: Re: The symbol grounding problem

In a previous message, I was prompted by Stevan Harnad's postings to try to
explain something I find very interesting, namely, why the psychology of
categorical perception won't do much to illuminate the difficult question of
how formal symbols should be semantically interpreted, i.e. what the symbols
really *mean*.  Harnad sent a long reply (message 972@mind.UUCP) explaining
the nature of his approach in great detail. The upshot, I think, is that in
spite of some of the rhetoric about "symbol grounding", Harnad's project is
not really *attempting* to do any such thing.  It merely aims to discover the
mechanisms underlying certain recognition skills. Since this more modest aim
was precisely what I was urging, I am satisfied that there is no major
disagreement between us.

I want to make clear that I am not here trying to pose any *objection* to
Harnad's model considered as a bit of psychology. I am only trying to
downplay its significance for philosophical issues.

Remember that the traditional conception of "meanings" or "concepts" involves
certain properties: for example, meanings are supposed to contain a criterion
which determines the correct application of the term, in effect defining the
metaphysical essence of the concept in question; they are supposed to serve
as elementary constituents of more complex concepts and thoughts; and they
are supposed to license analytic implications, such as "all bachelors are
unmarried". Since none of these properties seem to be required of the
representations in Harnad's theory, it is in a philosophical sense *not* a
theory of "concepts" or "meanings" at all, as Harnad should be happy to
concede.

But I want to emphasize again an important reason for this which Harnad
seemed not to acknowledge.  There is a vast difference between the
quick, observational categorization that psychologists tend (rightly) to
focus on and the processes involved in what might be called "conclusive"
classification.  This is the difference between the ability to recognize
something as fish-like in, say, 500 milliseconds, and the ability to
ascertain that something *really* is a fish and not, say, an aquatic mammal.

Now the former quick and largely unconscious ability seems at least a
plausible candidate for revealing fundamental cognitive mechanisms.  The
latter, however, may involve the full exercise of high-level cognition --
remember, conclusive classification can require *years* of experiment,
discussion and debate, and potentially involves everything we know. The
psychology of conclusive categorization does *not* deal with some specialized
area of cognition -- it's just the psychology of all of science and human
rationality, the cognitive scientist's Theory of Everything. And I don't
expect to see such a thing any time soon.

Confusion can result from losing sight of the boundary between these two
domains, for results from the former do not carry over to the latter. And I
think Harnad's model is only reasonably viewed as applying to the first of
these.  The rub is that it seems that the notion of *meaning* has more to do
with what goes on in the second.  Indeed, what I find most interesting in all
this is the way recent philosophy suggests that concepts or meanings in the
traditional sense are essentially *outside* the scope of foreseeable psychology.

Some other replies to Harnad:

Although my discussion was informed by Quine's philosophy in its reference to
"meaning holism", it was otherwise not all that Quinean, and I'm not sure
that Quine's highly counter-intuitive views could be called "standard." Note
also that I was *not* arguing from Quine's thesis of the indeterminacy of
translation; nor did I bring up Putnam's Twin-Earth example. (Both of these
arguments would be congenial to my points, but I think they're excessively
weighty sledgehammers to wield in this context). The distinction between
observational and "conclusive" classification, however, does bear in mind
Putnam's points about the non-necessity of stereotypical properties.

I also don't think that philosophers have been looking for "the wrong thing
in the wrong way." I think they have made a host of genuine discoveries about
the nature of meaning -- you cite several in your list of issues you'd prefer
to ignore.  The only "failure" I mentioned was the inability to come up with
necessary and sufficient definitions for almost anything. (Not at all, by the
way, a mere failure of "introspection".)

I *do* agree that the aims of philosophy are different than those of
psychology. Indeed, because of this difference of goals, you shouldn't feel
you have to argue *against* Quine or Putnam or even me. You merely have to
explain why you are side-stepping those philosophical issues (as I think you
have done). And the reason in brief is that philosophers are investigating
the notion of meaning and you are not.

Anders Weinstein
BBN Labs

------------------------------

End of AIList Digest
********************
19-Jul-87 21:57:19-PDT,17223;000000000001
Mail-From: LAWS created at 19-Jul-87 21:47:43
Date: Sun 19 Jul 1987 21:41-PDT
From: AIList Moderator Kenneth Laws <AIList-REQUEST@STRIPE.SRI.COM>
Reply-to: AIList@STRIPE.SRI.COM
US-Mail: SRI Int., 333 Ravenswood Ave., Menlo Park, CA  94025
Phone: (415) 859-6467
Subject: AIList Digest   V5 #184
To: AIList@STRIPE.SRI.COM


AIList Digest            Monday, 20 Jul 1987      Volume 5 : Issue 184

Today's Topics:
  Queries - Cooperating Expert Systems & Garbage Collection Suppression,
  Comments - Expert System for Rocket Launching &
    Automatic Implementation of Abstract Specifications &
    ANIMAL in BASIC & Immortality via Computer,
  Correction - Spang Robinson Report, June 1987,
  Perception - Natural Kinds

----------------------------------------------------------------------

Date: Wed, 15 Jul 87 01:24 EDT
From: Arnold@DOCKMASTER.ARPA
Subject: Query - Cooperating Expert Systems

I am looking for information on cooperating expert systems.  Any pointers,
references, etc would be appreciated.

Terry S. Arnold

Merdan Group
4617 Ruffner St.
San Diego
CA 92111
Arnold -at Dockmaster
(619) 571-8565

------------------------------

Date: Fri, 17 Jul 87 08:32:41 edt
From: nancy@grasp.cis.upenn.edu (Nancy Orlando)
Subject: Garbage Collection Suppression


Are there any "accepted" methods of writing code that minimize a LISP's
tendency to garbage-collect? I don't mean a switch to turn it off;
just a means of minimizing the need for it. I'm dealing particularly with
DEC VAX lisp. I have assumed that iteration as opposed to recursion was
one way; is this correct? Are there other techniques?
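The techniques asked about here come down to avoiding the creation of short-lived structure in the first place. A purely illustrative modern sketch (in Python, since VAX LISP specifics are not at hand; function names are hypothetical) of the difference between code that generates garbage on every step and code that grows one structure in place:

```python
# Illustration of allocation-avoidance: each step that builds a fresh
# structure creates garbage for the collector; mutating one
# preallocated structure in place does not.

def squares_consing(n):
    # Allocates a brand-new list on every iteration (n short-lived lists).
    result = []
    for i in range(n):
        result = result + [i * i]   # fresh allocation each time
    return result

def squares_in_place(n):
    # One list, grown in place; far less intermediate garbage.
    result = []
    for i in range(n):
        result.append(i * i)
    return result

assert squares_consing(5) == squares_in_place(5) == [0, 1, 4, 9, 16]
```

In Lisp terms this is the difference between consing fresh list structure at each step and filling or extending a structure destructively; iteration instead of deep recursion likewise avoids piling up frames and intermediate results for the collector to reclaim.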

Nancy Sliwa
nancy@grasp.cis.upenn.edu  or  nesliwa%telemail@orion.arpa

------------------------------

Date: 15 Jul 87 16:42:47 GMT
From: jbn@glacier.STANFORD.EDU (John B. Nagle)
Reply-to: jbn@glacier.UUCP (John B. Nagle)
Subject: Re: bm654 - Spang Robinson 3#6, 6/87


>Rome Air Force Development Center is building a system to help decide
>if foreign rocket launches are threats.

      I saw the RFP for that one go by when I was at Ford Aerospace.
I recommended that we not bid, pointing out that an expert system to
make launch-on-warning decisions was a singularly bad idea.  Seen
in that light, no one at Ford wanted to have anything to do with the program.
Nevertheless, RADC apparently found somebody willing to spend their money.

      Fortunately, most of what RADC funds never gets deployed.

                                        John Nagle

------------------------------

Date: 14 Jul 87 16:00:32 GMT
From: eagle!icdoc!esh@ucbvax.Berkeley.EDU  (Edward Hayes)
Subject: Re: Automatic implementation of abstract specifications

I just saw an article giving an inexact reference to an MIT technical report
by M. K. Srivas.  The exact reference (I just happened to have it on my desk) is:

MIT/LCS/TR-276

Automatic Synthesis of Implementations
           for
  Abstract Data Types from
  Algebraic Specifications

Mandayam K Srivas
June 1982


        - hope this is of help.

------------------------------

Date: 17 Jul 87 06:30:19 GMT
From: psivax!polyslo!mshapiro@seismo.CSS.GOV (Mitch Shapiro)
Reply-to: psivax!polyslo!mshapiro@seismo.CSS.GOV (Mitch Shapiro)
Subject: Re: ANIMAL in BASIC ???


In article <8707090304.AA15222@humu.ARPA> dbrauer@humu.UUCP (David L.
Brauer) writes:
>Somewhere in the darkest reaches of my memory I recall seeing a listing
>of the game ANIMAL in BASIC.  It's that old standby introduction to rule-based
>reasoning that tries to deduce what animal you have in mind by asking
>questions like "Does it have feathers?", "Does it have hooves?" etc.

That very program, written in BASIC, originally shipped with the Apple II's
(and maybe with subsequent machines as well).  It learned new
animals and stored them in a text file (I think).  But it did
learn them.  Find someone you know who has an Apple II.  I believe this
was shipped with DOS 3.1.  -- Yes, I have a pretty old Apple.  #7919
just in case anyone out there cares.


Mitch Shapiro
mshapiro@polyslo (well, for all of another 3 days, that is.)

"It has been said that when Science climbs the crest of the hill,
   it will see that religion has been sitting there all along."
                ---  Dr. Harry Wolper

------------------------------

Date: 15 Jul 87 19:12:42 GMT
From: David L. Brauer <humu!dbrauer%nosc.UUCP@sdcsvax.ucsd.edu>
Reply-to: dbrauer@humu.nosc.mil.UUCP (David L. Brauer)
Subject: Re: ANIMAL in BASIC ???


Thanks to all who responded to my request for pointers to Animal in
BASIC.  The listing can be found in 101 BASIC Games by David H. Ahl.
There also may be a version on one of the Apple DOS distributions,
although I haven't found it yet.  Please, no more lectures on why
Animal should not be called a rule-based or expert system.  I'm aware
that it is a simple tree traversal algorithm. Merely a misnomer on
my part.  I thought I had seen the listing in an "Intro to AI" slick,
that is why I worded the request that way.
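The tree-traversal scheme Animal uses is simple enough to sketch. The following is a hypothetical modern Python rendering (not Ahl's BASIC listing): internal nodes hold yes/no questions, leaves hold animal guesses, and a wrong guess grafts a new question node into the tree, which is the program's "learning."

```python
# Minimal sketch of Animal's binary decision tree: descend by answering
# questions until a leaf (an animal) is reached; on a wrong guess,
# replace the leaf with a question separating the old and new animals.

class Node:
    def __init__(self, text, yes=None, no=None):
        self.text = text   # a question (internal node) or an animal (leaf)
        self.yes = yes
        self.no = no

    def is_leaf(self):
        return self.yes is None and self.no is None

def guess(node, answer):
    # answer: a function mapping a question string to True/False.
    while not node.is_leaf():
        node = node.yes if answer(node.text) else node.no
    return node

def learn(leaf, new_animal, question, answer_for_new):
    # Graft: the failed leaf becomes a question node with two children.
    old = Node(leaf.text)
    new = Node(new_animal)
    leaf.text = question
    leaf.yes, leaf.no = (new, old) if answer_for_new else (old, new)

# Start with one question and two animals, then teach it a third.
root = Node("Does it have feathers?", Node("bird"), Node("dog"))
wrong = guess(root, lambda q: False)                  # reaches "dog"
learn(wrong, "horse", "Does it have hooves?", True)
assert guess(root, lambda q: q == "Does it have hooves?").text == "horse"
```

As the follow-up notes, there is no rule-based inference here at all, only traversal of a stored tree plus grafting; the "knowledge" is exactly the question tree accumulated over past games.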

                        David L. Brauer
                        MilNet: dbrauer@NOSC.mil

------------------------------

Date: Fri, 17 Jul 87 08:39 EST
From: MNORTON%rca.com@RELAY.CS.NET
Subject: Re: Immortality via Computer


Concerning the AP story on attaining immortality via computers, readers
of AIList interested in thinking more about this may wish to read
Frederik Pohl's new book, "The Annals of the Heechee", the fourth book in
the series which began with "Gateway."  Mr. Pohl explores some of the
implications of computer subsumption of consciousness, which he calls
'vastening' in the story.  Some of the topics touched on include
altered perception of reality, differing time-rates between biologicals
and computers, and non-corporeal being.

Mark J. Norton,  RCA Advanced Technology Laboratories, AI Lab.

------------------------------

Date: Wed, 15 Jul 1987 19:31 CST
From: Leff (Southern Methodist University)
      <E1AR0002%SMUVM1.BITNET@wiscvm.wisc.edu>
Subject: Corrections

Response to errors discovered by Linda Mead:

  In the summary of the June 1987 Spang Robinson Report, the following
  corrections should be noted:

     1) "The pilot's associate project aims to produce a refrigerator
         sized computing system, having functionality comparable to a
         3-inch by 5-inch checklist card."  The d in "card" was missing.
     2) Charles Anderson was not precisely identified:
         He is a Lt. Col., deputy of technology development for SDI in the
         Command and Control Directorate at Rome Air Development Center at
         Griffis Air Force Base.
     3) The statement regarding "AI research for SDI" was that it "would
        be relatively nil for awhile."   No specific statement on its "use"
        was made by Spang Robinson Report.
          (The paragraph on this subject in the summary had an extraneous
           double quote character due to a typo.  A direct quote was not
           made.)

------------------------------

Date: 17 Jul 87 13:33:46 GMT
From: mcvax!botter!hansw@seismo.css.gov  (Hans Weigand)
Subject: Re: natural kinds


It seems to me that _at least_ three kinds of "natural kinds" should be
distinguished:
 (1) genetic kinds, existing by virtue of reproduction
     ("a horse is a horse because it is born from a horse")
     Examples: animal and vegetable species
 (2) mimetic kinds, existing by virtue of imitation, to be
     subdivided in
     (a) iconic kinds (by causally determined  representation)
     (the "Xerox-principle" of Dretske: an image of an image of x is
     again an image of x)
     Examples: all linguistic symbols (graphic or phonemic)
     (b) artificial kinds (by imitation on purpose),
     existing by virtue of preconceived design followed by
     numerous production (the "Ford-principle" |-) )
     Examples: car models, coins
     (c) fashion kinds (by copying behavior, largely uncontrolled)
     Examples: social groups (punks, yuppies, ..), styles of art, etc.
 (3) anthropic/functional kinds, existing by virtue of readiness_to_hand
     Examples: chair, cup, house, knife, game

The last one needs some comments. Each human being needs
certain things in order to survive and live in a satisfactory way.
These things are mainly determined by the functioning of the
human body and community, although there are also environmental and
historical-cultural influences.  Thus we may recognize an
Eskimo igloo and an African pile-dwelling both as "houses".
I think it is not so much the form (iconicity) that matters,
but rather that we feel that, when we would live in Greenland
(resp. the jungle), we would naturally appreciate or use these things
as houses too (to protect us against cold, dangers). Similar
arguments can be made for chair etc.. Moreover, (3) combines
with (2). We are born into a human society. Our parents
had the same needs as we have, so each generation copies these
"anthropic kinds" and transfers them to the next generation. This
makes it all the easier to recognize a (say Western) house. [In
most discussions on "family kinds" and so on, (2) and (3) are
not properly distinguished].

   "Don't ask what a kind _is_, but rather how it _persists_"

Hans Weigand (hansw@cs.vu.nl)

------------------------------

Date: Sat 18 Jul 87 15:17:43-CDT
From: Robert L. Causey <AI.CAUSEY@R20.UTEXAS.EDU>
Subject: Natural Kinds


In a message posted 7/15, John McCarthy says that philosophers
have recently introduced the concept of natural kind, and he
suggests how this concept may be useful in AI.  I think this
deserves serious comment, both historical and substantive.  The
following is lengthy, but it may illustrate some general
characteristics about the relationships between philosophy and AI.

                         HISTORY
In their messages, Ken Laws and others are correct -- the idea of
natural kinds is not new.  It is at least implicit in some
Pre-Socratic Greek philosophy, and Aristotle extensively
developed the idea and applied it in both philosophy and biology.
Aristotle's conception is too "essentialist" to fit what McCarthy
refers to.

In the late 1600's John Locke developed an impressive empiricist
analysis of natural kinds.  Further developments were contributed
in the 1800's in J. S. Mill's _A_System_Of_Logic_.  Mill also
made important contributions to our understanding of inductive
reasoning and scientific explanation; these are related to
natural kinds.

In our century a number of concepts of natural kinds have been
proposed, ranging from strongly empiricist "cluster" approaches
(which need NOT preclude expanding the cluster of attributes
through the discovery of new knowledge, cf.  McCarthy 7/17), to
various modal analyses, to some intermediate approaches.  Any of
these analyses may have some value depending on the intended
application, but the traditional notion of natural kinds has
almost always been connected somehow with the idea of natural
laws.

                    SUBSTANTIVE ISSUES
1.  Whatever one's favorite analysis might be, it is important to
distinguish between a NATURAL kind (e.g., the compound silicon
dioxide, with nomologically determined physical and chemical
attributes), and a functional concept like "chair".  There is
generally not a simple one-to-one correspondence between our
functional classifications of objects and the classification
systems that are developed in the natural sciences.  This is true
in spite of the fact that we can learn to recognize sand,
penguins, and chairs.  But things are not always so simple:
suppose that Rip van Winkle learns in 1940 to recognize at sight
a 1940-style adding machine; he then sleeps for 47 years.  Upon
waking in 1987 he probably would not recognize at sight what a
thin, wallet calculator is.  Functional classifications are
useful, but we should not assume that they are generated and
processed in the same ways as natural classifications.  In
particular, since functional classifications often involve an
abstract understanding of complex behavioral dispositions, they
are particularly hard to learn once one gets beyond simple things
like chairs and tables.

2.  Even discovering the classic examples of NATURAL kinds (like the
classification of the chemical elements) can be a long and
difficult process.  It requires numerous inductive
generalizations to confirm that the attributes in a certain Set
of attributes each apply to gold, and that the attributes in some
other Set of attributes apply to iodine, etc.  We further
recognize that our KNOWLEDGE of what are the elements of these
Sets of attributes grows with the general growth of our
scientific knowledge.  Also, we need not always use the same set
of attributes for IDENTIFICATION of instances of a natural kind.
Most of this goes back to Locke, and philosophers have long
recognized the connection between induction and classification;
Carnap, Hempel, Goodman, and others, have sharpened some of the
issues during the last 50 years.

3.  Now, getting back to McCarthy's suggestion -- in his second
message (7/17) he writes: "...for a child to presume a natural
kind on hearing a word or seeing an object is advantageous, and
it will also be advantageous to built (sic) AI systems with this
presumption." His 7/15 message says, "When an object is named,
the system should generate a gensym, e.g., GOO137.  To this
symbol should be attached the name and what the system is to
remember about the instance." This is an interesting suggestion,
but it prompts some comments and questions:

i) Assuming that children do begin to presume natural kinds at
some stage of development, what inductive processes are they
using, what biologically determined constraints are affecting
these processes, and what prior acquired knowledge is directing
their inductions?  These are interesting psychological questions.
But, depending on our applications, we may not even want to build
robots that emulate young children.  We can attach a name
to a gensym, but it is not at all easy to decide "...what the
system is to remember about the instance,"  or to specify how
it is to process all of the stuff it generates in this manner.

ii) Children receive much corrective feedback from other people;
how much feedback will we be willing or able to give to the
"maturing" robots? Will the more mature robots help train the
naive ones?

iii) Given that classification does involve complex inductive
reasoning, we need to learn a lot more about how to implement
effective inductive procedures, where "induction" is understood
very broadly.

iv) If the AI systems (robots, etc.) are to learn, and reason with,
functional concepts, then things get even more complex.  Ability
to make abstractions and perform complex analogical reasoning
will be required.  In my judgment, we (humans) still have a lot
to learn just about the representation of functional knowledge.
If my Rip van Winkle story seems farfetched, here is a true
story.  I know a person who is familiar with the appearance and
use of 5 1/4 inch floppy diskettes.  Upon first seeing a 3.5 inch
mini-diskette, she had no idea what it was until its function was
described.  Knowledge of diskettes can extend to tracks, sectors,
etc.  The concept of natural kinds is relatively simple (though
often difficult to apply); functional concepts and their
relations with physical structures are harder subjects.

------------------------------

Date: 18 Jul 87  2315 PDT
From: John McCarthy <JMC@SAIL.STANFORD.EDU>
Subject: re: [Robert L. Causey <AI.CAUSEY@R20.UTEXAS.EDU>: Natural
         Kinds]

[In reply to message from AI.CAUSEY@R20.UTEXAS.EDU sent Sat 18 Jul 87.]

I agree with Bob Causey's comments and agree that the open questions he
lists are unsolved and important.  I have one caveat.  The distinction
between nomological and functional kinds exists in sufficiently elaborate
mental structures, but I don't think that children under 2 make the
distinction, i.e. have different mechanisms for learning them.  For this
reason, it is an open question whether it should be a primary distinction
for robots.  In a small child's world, chairs are distinguished from other
objects by appearance, not by function.  Evidence: a child doesn't refer
to different-appearing objects on which he can also sit as chairs.
Concession:  there may be such a category "sittable" in "mentalese", and
languages with such categories might be as easily learnable as English.
What saves the child from having to make the distinction between kinds
of kinds at an early age is that so many of the kinds in his life are
distinguishable from each other in many ways.  The child might indeed
be fooled by the different generations of calculator, but usually he's
lucky.

I hope to comment later on how robots should be programmed to identify
and use kinds.

------------------------------

End of AIList Digest
********************
19-Jul-87 22:04:08-PDT,19848;000000000000
Mail-From: LAWS created at 19-Jul-87 21:59:00
Date: Sun 19 Jul 1987 21:52-PDT
From: AIList Moderator Kenneth Laws <AIList-REQUEST@STRIPE.SRI.COM>
Reply-to: AIList@STRIPE.SRI.COM
US-Mail: SRI Int., 333 Ravenswood Ave., Menlo Park, CA  94025
Phone: (415) 859-6467
Subject: AIList Digest   V5 #185
To: AIList@STRIPE.SRI.COM


AIList Digest            Monday, 20 Jul 1987      Volume 5 : Issue 185

Today's Topics:
  Perception - Seeing-Eye Robots,
  Philosophy - Searchability in Humans vs. Machines

----------------------------------------------------------------------

Date: 16 Jul 87 17:54:49 GMT
From: ihnp4!homxb!houdi!marty1@ucbvax.Berkeley.EDU  (M.BRILLIANT)
Subject: Seeing-Eye robots

Suppose one wanted to build a robot that does what a Seeing-Eye dog
does (that is, helping a blind person to get around), but communicates
in the blind person's own language instead of by pushing and pulling.

Clearly this robot does not have to imitate a human being.  But it does
have to recognize objects and associate them with the names that humans
use for them.  It also has to interpret certain situations in its
owner's terms: for instance, walking in one direction leads to danger,
and walking in another direction leads to the goal.

What problems will have to be solved to build such a robot?  Will its
hypothetical designers have to deal with the problem of mere
recognition, or the deeper problem of grounding symbols in meaning?
Could it be built by hardwiring sensors to a top-down symbolic
processor, or would it require a hybrid processor?

M. B. Brilliant                                 Marty
AT&T-BL HO 3D-520       (201)-949-1858
Holmdel, NJ 07733       ihnp4!houdi!marty1

------------------------------

Date: 17 Jul 87 19:31:00 GMT
From: ihnp4!inuxc!iuvax!merrill@ucbvax.Berkeley.EDU
Subject: Re: Seeing-Eye robots


In comp.ai, marty1@houdi (M.B. Brilliant) writes:
>  Suppose one wanted to build a robot that does what a Seeing-Eye dog
>  does (that is, helping a blind person to get around), but communicates
>  in the blind person's own language instead of by pushing and pulling.

>  [Commentary on some of the essential properties of the robot.]

>  What problems will have to be solved to build such a robot?  Will its
>  hypothetical designers have to deal with the problem of mere
>  recognition, or the deeper problem of grounding symbols in meaning?
>  Could it be built by hardwiring sensors to a top-down symbolic
>  processor, or would it require a hybrid processor?

I seriously doubt that recognition itself would be adequate. As
Brilliant observes, one of the functions that the robot must perform
is the detection of "danger to its master."  Consider the problem of
crossing a street.  Is it enough to recognize cars (and trucks, and
motorcycles, and other already-known objects?)  No.

The robodog has to generalize beyond simply cars and trucks and
busses, since their shapes change, to "things travelling along this
stretch of road {and what's a stretch of road?} which are a) moving
{and what does it mean to move?} b) fast {and what is fast?  Why,
fast enough to be dangerous...which begs the question} c) in this
direction."  At this point, I think that we have exceeded the bounds
of recognition and entered a realm where "judgement" is required, but,
if not, I imagine that I can probably extend this situation to meet most
specific objections.  (I assume that the blind woman needs to cross roads
without undue delay.  Traffic lights don't eliminate these problems,
since the robodog must "recognize" drivers who are turning, some of
whom would be safe, since they're either stopped or slow-moving, but some
of whom (at least, here in Bloomington) would run *any* pedestrian
down. !-))

BTW:  I like this example very much.  It raises quite nicely the
underlying issue in the symbol grounding problem discussion without
using the terminology that many of the readers of comp.ai seem to have
objected to.  Congratulations, Mr. Brilliant!

John Merrill
merrill@iuvax.cs.indiana.edu  UUCP:seismo!iuvax!merrill

Dept. of Comp. Sci.
Lindley Hall 101
Indiana University
Bloomington, Ind. 47405

------------------------------

Date: 14 Jul 87 21:21:33 GMT
From: berke@locus.ucla.edu
Subject: An Unsearchable Problem of Elementary Human Behavior


      An Unsearchable Problem of Elementary Human Behavior

                           Peter Berke
                UCLA Computer Science Department

The Artificial Intelligence assumption that  all  human  behavior
can  eventually be  mimicked by computer behavior has been stated
in  various  ways.   Since  Newell  stated  his   Problem   Space
Hypothesis  in   1980,  it has taken on a clearer, and thus, more
refutable form.  Newell stated his hypothesis thus:

"The fundamental organizational unit of all  human  goal-oriented
symbolic activity is the problem space." - Newell, 1980.

In the 1980 work, Newell says  his claim   "hedges   on   whether
all cognitive activity is symbolic."     Laird,  Rosenbloom,  and
Newell  (1985)  ignore this hedge and the  qualification   "goal-
oriented    symbolic"   when  they  propose:   "Our  approach  to
developing  a  general  learning  mechanism  is   based   on  the
hypothesis   that   all   complex   behavior   -  which  includes
behavior concerned with learning - occurs as  search  in  problem
spaces."  They reference Newell (1980), but their claim is larger
than Newell's original claim.

The purpose of this note is to show that, to  be  true,  Newell's
hypothesis  must  be  taken  to  mean  just that goal-search in a
state-space is a formalism that is equivalent to computing.  Then
Newell's  Problem  Space Hypothesis is simply a true theorem. The
reader  is  invited  to  sketch   a   proof   of    the    mutual
simulatability of Turing computation and a process of goal-search
in a state space.   Such a proof has been constructed  for  every
other  prospective  universal  formalism, e.g.,  lambda calculus,
recursive function theory,  and  Post  tag  systems.   That  such
universal  formalisms  are  equivalent  in  this sense led Church
(1936, footnote 3) to speculate that human  calculating  activity
can be given no more general a characterization.

But human behavior is  not  restricted  to  calculating  activity
(though   it   seems   that  at  least  some  human  behavior  is
calculating).   If the Problem  Space  Hypothesis  is  taken   to
be  a  stronger statement,  that  is,  as a statement about human
behavior rather than about the  formalism  of  goal-search  in  a
state-space,  then   I  claim  that the following counter-example
shows it to be false.

Understanding a name is an inherently unsearchable problem; it
cannot  be  represented  as  search  in a state or problem space.
Well, it can be so represented, but  then  it  is  not  the  same
problem.   In  searching our states for our goal we are solving a
different problem than the original one.

To understand that understanding is (or how it can be) inherently
unsearchable,  it  is  necessary to distinguish between ambiguity
and equivocacy.   At first the distinction seems  contrived,  but
it  is  required  by  the  assumption   that   there are discrete
objects called 'names' that have  discrete  meaning (some   other
associated object or objects, see Church 1986, Berke 1987).

An equivocal word/image has  more  than  one  clear  meaning,  an
ambiguous  word/image  has  none.   What  is usually meant by the
phrase "lexical ambiguity"  is  semantic equivocacy.   Equivocacy
occurs even in formal languages and systems, though in setting up
a formal system one aims to avoid equivocacy.   For  example,  an
expression  in  a  computer  language may be equivocal ("of equal
voices"), such as: 'IF a THEN IF b THEN c  ELSE  d'.   The  whole
expression  is  equivocal  depending  on which 'IF' the 'ELSE' is
paired with.  In this case there are two clear meanings, one  for
each choice of 'IF'.

On the other hand, 'ELSE' taken in isolation is ambiguous ("like
both"): its meaning is not one alternative or many, but is like all
of them.  [The reader, especially one who may
claim  that  'ELSE'  has  no  meaning  in isolation,  may find it
valuable to pause at this point to write down what 'ELSE'  means.
Several  good  attempts  can  be  generated  in very little time,
especially with the aid of a dictionary.]


Resolving equivocacy can be represented  as  search  in  a  state
space;  it  may very well BE search in a state space.   Resolving
ambiguity cannot be represented  as  search  in  a  state  space.
Resolving  environmental  ambiguity  is  the  problem-formulation
stage of decision making; resolving objective  ambiguity  is  the
object-recognition  phase  of  perception.

The difference between ambiguity and equivocacy is a  reason  why
object-recognition    and   problem-formulation   are   difficult
programming   and   management   problems,    only    iteratively
approximable  by computation or rational thought.   A state space
is, by definition,  equivocal rather than  ambiguous.     If   we
confuse  ambiguity   with  equivocacy,  ambiguity  resolution may
seem like search in goal space, but this ignores the  process  of
reducing  an  ambiguous   situation  to  an  equivocal  one  much
the way Turing (1936) consciously ignores  the  transition  of  a
light  switch  from OFF to ON.

A digital process  can  approximate  an  analog  process  yet  we
distinguish the digital process from the analog one.   Similarly,
an equivocal problem can approximate an  ambiguous  problem,  but
the   approximating  problem  differs from  the approximated one.
Even if a bank of mini-switches can  simulate  a   larger   light
switch  moving  from  OFF  to  ON,  we don't evade the problem of
switch transition, we push it "down" a  level,  and  then  ignore
it.     Even   if   we   can   simulate an ambiguity by a host of
equivocacies, we don't  thereby  remove  ambiguity,  we  push  it
"down" a level, and then ignore it.

Ambiguity resolution cannot be accomplished by goal-search  in  a
state   space.   At  best  it  can  be  pushed  down some levels.
Ambiguity must still be resolved at the lower levels.  It doesn't
just  go  away;  ambiguity  resolution is the process of it going
away.   Representation may require ambiguity resolution,  so  the
general problem of representing something (e.g., problem solving,
understanding a name) as goal-search in a state space cannot
be represented as goal-search in a state space.

This  leads  me  to  suspect  what  may  be  a  stronger  result:
"Representing   something"   in   a  given  formalism  cannot  be
represented in  that  formalism.  For  example,  "representing  a
thought  in words," that is, expression, cannot be represented in
words.   "What it is to be a word" cannot be expressed in  words.
Thus there can be no definition of 'word' nor then of 'language'.
Understanding a word, if it  relies  on  some  representation  of
"what  it  is  to  be  a word" in words, cannot be represented in
words.

The meaning of a word is in this way  precluded  from  being  (or
being  adequately  represented by) other words.  This agrees with
our daily observations  that  "the  meaning  of  a  word,"  in  a
dictionary is incomplete.  Not all words need be impossible to
define completely; only some must be, for this argument to hold.
It also agrees with Church's 1950 arguments on the contradictions
inherent in taking words to be the meaning of other words.

If understanding cannot be represented in words, it can never  be
well-defined  and  cannot be programmed.   In programming, we can
and must ignore the low-level process of bit-recognition  because
it  is,  and  must  be,  implemented  in  hardware.    Similarly,
hardware  must  process   ambiguities   into   equivocacies   for
subsequent "logical" processing.

We are thus precluded from saying how  understanding  works,  but
that  does  not preclude us from understanding.   Understanding a
word can be learned  as  demonstrated  by  humans  daily.    Thus
learning  is  not  exhausted  by  any (word-expressed) formalism.
One example  of  a  formalism  that  does  not  exhaust  learning
behavior  is  computation  as defined (put into words) by Turing.
Another is goal-search in a  state-space  as  defined  (put  into
words) by Newell.


References:

Berke, P., "Naming and Knowledge: Implications of Church's
Arguments about Knowledge Representation," in revision for
publication, 1987.

Church, A., "An Unsolvable Problem of Elementary Number Theory"
(presented to the American Mathematical Society, April 19, 1935),
American Journal of Mathematics, 58 (1936), 345-363.

Church,  A.,  "On Carnap's Analysis of  Statements  of  Assertion
and Belief," Analysis, 10:5, pp. 97-99, April, 1950.

Church,  A.,  "Intensionality  and  the  Paradox  of   the   Name
Relation," Journal of Symbolic Logic, 1986.

Laird, J.E., P.S. Rosenbloom, and A. Newell, "Towards Chunking as
a General Learning Mechanism,"  CMU-CS-85-110.

Newell, A., "Reasoning, Problem Solving, and Decision Processes: The
Problem Space as a Fundamental Category," Chapter 35 in R. Nickerson
(Ed.), Attention and Performance VIII, Erlbaum, 1980.

Turing, A.M., "On Computable Numbers, with an Application to the
Entscheidungsproblem," Proceedings of the London Mathematical
Society, ser. 2, 42 (1936-7), 230-265; correction, ibid., 43 (1937),
544-546.

------------------------------

Date: 16 Jul 87 09:23:07 GMT
From: mcvax!botter!roelw@seismo.css.gov  (Roel Wieringa)
Subject: Berke's Unsearchable Problem

In article 512 of comp.ai Peter Berke says that
1. Newell's hypothesis that all human goal-oriented symbolic activity
is searching through a problem-space must be taken to mean that human
goal-oriented symbolic activity is equivalent to computing, i.e. that
it is equivalent (mutually simulatable) to a process executed by a
Turing machine;
2. but human behavior is not restricted to computing, the process of
understanding an ambiguous word (one having no clear meaning, as opposed
to an equivocal word, which has more than one) being a case in
point. Resolving equivocality can be done by searching a problem
space; ambiguity cannot be so resolved.

If 1 is correct (which requires a proof, as Berke says), then if 2 is
correct, we can conclude that not all human behavior is searching
through a problem space; the further conclusion then follows that
classical AI (using computers and algorithms to reach its goal)
cannot reach the goal of implementing human behavior as search
through a state space.

There are two problems I have with this argument.

First, barring a quibble about the choice of the terms "ambiguity" and
"equivocality", it seems to me that ambiguity as defined by Berke is really
meaninglessness. I assume he does not mean that part of the surplus
capacity of humans over machines is that humans can resolve meaninglessness
whereas machines cannot, so Berke has not said what he wants to say.

Second, the argument applies to classical AI. If one wishes to show
that "machines cannot do everything that humans can do," one should
find an argument which applies to connection machines, Boltzmann
machines, etc. as well.

Supposing for the sake of the argument that it is important to show
that there is an essential difference between man and machine, I
offer the following as an argument which avoids these problems.

1. Let us call a machine any system which is described by a state
evolution function (if it has a continuous state space) or a state
transition function (discrete state space).
2. Let us call a description explicit if (a) it is communicable to an
arbitrary group of people who know the language in which the
description is stated, (b) it is context-independent, i.e. mentions
all relevant aspects of the system and its environment to be able to
apply it, (c) describes a repeatable process, i.e. whenever the same
state occurs, then from that point on the same input sequence will
lead to the same output sequence, where "same" is defined as
"described by the explicit description as an instance of an input
(output) sequence." Laws of nature which describe how a natural process
evolves, computer programs, and radio wiring diagrams are explicit
descriptions.

Now, obviously a machine is an explicitly described system.
The essential difference between man and machine I propose is that
man possesses the ability to explicate whereas machines do not. The
*ability* to explicate is defined as the ability to produce an
explicit description of a range of situations which (i.e. the range
is) not described explicitly. In principle, one can build a machine
which produces explicit descriptions of, say, objects on a conveyor
belt. But the set of kinds of objects on the belt would then have to
be explicitly described in advance, or at least it would in
principle be explicitly describable, even though the description
would be large or difficult to find.  The reason for this is that a
machine is an explicitly described system, so that, among other
things, the set of possible inputs is explicitly described.
  On the other hand, a human being in principle can produce
reasonably explicit descriptions of a class of systems which has no
sharp boundaries. I think it is this capability which Berke means
when he says that human beings can disambiguate whereas algorithmic
processes cannot. If the set of inputs to an explication process carried
out by a human being is itself not explicitly describable, then
humans have a capability which machines don't have.

A weak point in this argument is that human beings usually have a
hard time producing totally explicit descriptions; this is why
programming is so difficult.  Hence the qualification "reasonably
explicit" above. This does not invalidate the comparison with
machines, for a machine built to produce reasonably explicit
descriptions would still be an explicitly described system, so that
the sets of inputs and outputs would be explicitly described (in
particular, the reasonableness of the explicitness of its output
would be explicitly described as well).

A second argument deriving from the concepts of machine and
explicitness focuses on the three components of the concept of
explicitness. Suppose that an explication process executed by a human
being were explicitly describable.
1. Then it must be communicable; in particular the initial state must be
communicable; but this seems one of the most incommunicable mental states
there is.
2. It must be context-independent; but especially the initial stage
of an explication process seems to be the most context-sensitive
process there is.
3. It must be repeatable; but put the same person in the same
situation (assuming that we can obliterate the memory of the previous
explication of that situation) or put identical twins in the same
situation, and we are likely to get different explicit descriptions
of that situation.

Note that these arguments do not use the concept of ambiguity as
defined by Berke and, if valid, apply to any machine, including
connection machines. Note also that they are not *proofs*. If they
were, they would be explicit descriptions of the relation between a
number of propositions, and this would contradict the claim that the
explication process has very vague beginnings.

Roel Wieringa

------------------------------

End of AIList Digest
********************
21-Jul-87 22:36:36-PDT,15166;000000000000
Mail-From: LAWS created at 21-Jul-87 22:28:10
Date: Tue 21 Jul 1987 22:25-PDT
From: AIList Moderator Kenneth Laws <AIList-REQUEST@STRIPE.SRI.COM>
Reply-to: AIList@STRIPE.SRI.COM
US-Mail: SRI Int., 333 Ravenswood Ave., Menlo Park, CA  94025
Phone: (415) 859-6467
Subject: AIList Digest   V5 #186
To: AIList@STRIPE.SRI.COM


AIList Digest           Wednesday, 22 Jul 1987    Volume 5 : Issue 186

Today's Topics:
  Queries - AI Application for DBase of New Chemical Substances &
    Software Reuse,
  Science Fiction - Immortality via Computer,
  Techniques - Garbage Collection Suppression,
  Philosophy - Natural Kinds,
  Courses - Philosophy Courses on Artificial Intelligence &
    Logic and Computability, AI and Formal Learning Theory

----------------------------------------------------------------------

Date: 20 Jul 1987 1220-EDT
From: Holger Sommer <SOMMER@C.CS.CMU.EDU>
Subject: AI/Expert system application for DBase of New Chemical Substances

I am involved in an EPA/NIOSH sponsored project with the title:
"Engineering and Toxic Characterisation Studies and Development of Unit
Operations Predictive Models for New Chemicals"
This project intends to develop an intelligent database and predictive
models to help EPA in the evaluation of premanufacture notices.
The research is directed and conducted to establish a data base model for
use in predicting workplace releases and exposures resulting from
manufacturing, processing, use or disposal of new chemical substances.
The main activities of this project are :

  * Formulation of conceptual data base models for filtration and drying
    unit operation
  * Assessment and characterization of worker exposure in manufacturing
    plants and pilot plants
  * Incorporation of sampling data and other relevant information into
    the data base framework
  * Development and validation of computerized predictive models for
    assessment of workplace releases and exposures.

What we try to accomplish in this project is to automate the evaluation
process for premanufacture notices and provide a systematic data base to
assist in this evaluation.
My questions to the AI-list audience are :
1) Are there projects underway with similar content (not particularly
   related to chemicals, but to other domains)?
2) We need information about existing data base programs which
   interface with predictive models. We are looking for a flexible
   programming tool to accomplish the above described assignments.

Thank you for any information I will receive through this network.
Please send responses to :  H.T. Sommer .... Sommer@c.cs.cmu.edu

------------------------------

Date: 18 Jul 87 15:17:05 GMT
From: cbmvax!vu-vlsi!ge-mc3i!sterritt@RUTGERS.EDU  (Chris Sterritt)
Subject: Re: Software Reuse (short title)

Hello,
        I've been following the discussion of this avidly, but am new to the
programming languages (?) ML, SML, and LML.  Could someone (ideally mail
me directly so as not to clog the net!) send me information on these languages,
so that I might find out more?
        Along the lines of the discussion, if I remember my Computability
theory correctly -- doesn't it make some sense that to show an algorithm
(either that it is computable, or to prove it correct) you need to give an
almost algorithmic description, as in an inductive proof?  So isn't this
what Lisp is?  (I'm a lisp hacker at work.)  I'd think that Church's Lambda
Calculus would shed some light
on this discussion, as I believe that that was what he was trying to do with
the calculus.  Generally, I agree that to specify an algorithm IN ENOUGH DETAIL,
you will probably wind up writing at least as much information down as the code
itself.  I think that 'Requirements' as we define them in 'Software Engineering'
presume a *lot* of human intelligence.
        Any comments?
        Chris Sterritt

------------------------------

Date: 20 Jul 87 13:21:39 GMT
From: sunybcs!rapaport@RUTGERS.EDU (William J. Rapaport)
Reply-to: sunybcs!rapaport@RUTGERS.EDU (William J. Rapaport)
Subject: Re: Immortality via Computer


In article <8707200504.AA05729@ucbvax.Berkeley.EDU> MNORTON@rca.COM writes:
>
>Concerning the AP story on attaining immortality via computers, readers
>of AIList interested in thinking more about this may wish to read ...

... or Justin Leiber's _Beyond Rejection_.  Leiber is a philosopher and
also the son of SF writer Fritz Leiber.  The novel is about a society in
which brain tapes are made and installed in new bodies; the minds tend
to reject the bodies.

------------------------------

Date: Mon, 20 Jul 87 21:43:00 EDT
From: Chester@UDEL.EDU
Subject: Re: Garbage Collection Suppression

The direct way to avoid garbage collection in lisp is to define your own `cons'
function that prefers to get cell pairs from an `available list', calling the
regular `cons' only when the `available list' is empty.  A `reclaim' function
that puts cell pairs on the `available list' (using `rplacd') will be needed
also.  See any book on data structures.  The technique can be used for cell
pairs and gensym atoms, if needed, but in my experience, not with strings or
numbers.  String manipulations can usually be avoided, but a program that
crunches a lot of numbers cannot avoid consuming memory and eventually
triggering garbage collection (at least in VAX lisp).  I wish there were some
way for a user to reclaim numbers so that they could be reused as cell pairs
can.  If so, I could write all my lisp programs so that they don't need to
garbage collect.  It would also be nice to have a built-in `reclaim' function
that would work in conjunction with the built-in `cons'; it would be dangerous
for novices, but handy for the experienced.

By the way, recursion in itself doesn't cause garbage collection; VAX lisp is
smart enough to reclaim the memory used for the function stack automatically.

Daniel Chester
chester@dewey.udel.edu

------------------------------

Date: 21 Jul 87 17:05:53 GMT
From: rlw@philabs.philips.com (Richard Wexelblat)
Reply-to: rlw@philabs.philips.com (Richard Wexelblat)
Subject: Re: Natural Kinds


It is amusing and instructive to study and speculate on children's language
and conceptualization.  (Wow! That construct's almost Swiftean!)  For those
who would read further in this domain, I recommend:

Brown, Roger
A First Language -- The Early Stages
Harvard Univ. Press, 1973

MacNamara, John
Names for Things -- A Study of Human Learning
MIT Press, 1984

------------------------------

Date: 21 Jul 87 16:56:08 GMT
From: rlw@philabs.philips.com (Richard Wexelblat)
Reply-to: rlw@briar.philips.com (Richard Wexelblat)
Subject: Re: Natural Kinds


In article <8707161942.AA13065@nrl-css.ARPA> mclean@NRL-CSS.ARPA
(John McLean) writes:

>However, I think the issue being raised about recognizing penguins,
>chairs, etc. goes back to Wittgenstein's _Philosophical_Investigations_:

Actually, the particular section chosen is a bit too terse.  Here is more
context:

   Consider, for example the proceedings that we call `games.'  I mean board-
games, card-games, ball-games, Olympic games, and so on.  What is common to
them all?--Don't say:  ``There must be something common, or they would not be
called `games' ''--but look and see whether there is anything common to all.
--For if you look at them you will not see something that is common to all,
but similarities, relationships, and a whole series of them at that ...  a
complicated network of similarities overlapping and criss-crossing; sometimes
overall similarities, sometimes similarities of detail.
   I can think of no better expression to characterize these similarities
than ``family resemblances''; for the various resemblances between the
members of a family: build, features, colour of eyes, gait, temperament,
etc.  etc. overlap and criss-cross in the same way.--And I shall say: `games'
form a family.

                                   * * *

This sort of argument came up in a project on conceptual design tools a few
years ago in attempting to answer the question:  ``What is a design and how
do you know when you have one?''  We attempted to answer the question and got
into the question of subjective classifications of architecture.  What is a
``ranch'' or ``colonial'' house?  If you can get a definition that will
satisfy a homebuyer, you are in the wrong business.

                                   * * *

Gratis, here are two amusing epigrams from W's Notebooks, 1914-1916:

        There can never be surprises in logic.
                  ~~~~~

        One of the most difficult of the philosopher's tasks is to
        find out where the shoe pinches.

------------------------------

Date: 17 Jul 1987 1505-EDT
From: Clark Glymour <GLYMOUR@C.CS.CMU.EDU>
Subject: Philosophy Courses on Artificial Intelligence


                      SEMINAR IN LOGIC AND COMPUTABILITY:
              ARTIFICIAL INTELLIGENCE AND FORMAL LEARNING THEORY

   - Offered by: Department of Philosophy, Carnegie-Mellon University

   - Instructor: Kevin T. Kelly

   - Graduate Listing: 80-812

   - Undergraduate Listing: 80-510

   - Place: Baker Hall 131-A

   - Time: Wednesday, 1:30 to 4:30 (but probably not the full period).

   - Intended Audience: Graduate students and sophisticated undergraduates
     interested  in  inductive  methods,  the   philosophy   of   science,
     mathematical   logic,   statistics,   computer   science,  artificial
     intelligence, and cognitive science.

   - Prerequisites: A good working knowledge  of  mathematical  logic  and
     computation theory.

   - Course Focus: Convergent realism is the philosophical thesis that the
     point of inquiry is to converge (in some sense) to the truth  (or  to
     something  like  it).    Formal  learning theory is a growing body of
     precise results concerning the  possible  circumstances  under  which
     this  ideal  is  attainable.   The basic idea was developed by Hilary
     Putnam in  the  early  1960's,  and  was  extended  to  questions  in
     theoretical  linguistics  by  E.  Mark  Gold.    The main text of the
     seminar will be Osherson and Weinstein's  recent  book  Systems  that
     Learn.    But  we  will also examine more recent efforts by Osherson,
     Weinstein, Glymour and Kelly to apply the  theory  to  the  inductive
     inference  of  theories  expressed  in  logical languages.  From this
     general standpoint, we will move to more detailed  projects  such  as
     the recent results of Valiant, Pitt, and Kearns on polynomial
     learnability.  Finally, we will examine the extent to which formal
     learning  theory  can  assist  in  the  demonstrable  improvement  of
     learning systems published in the A.I. machine  learning  literature.
     There  is  ample opportunity to break new ground here.  Thesis topics
     abound.

   - Course Format: Several introductory lectures, Seminar reports, and  a
     novel research project.

                    PROBABILITY AND ARTIFICIAL INTELLIGENCE

   - Offered by: Department of Philosophy, Carnegie-Mellon University

   - Instructor: Kevin T. Kelly

   - Graduate Course Number: 80-811

   - Undergraduate Course Number: 80-312

   - Place: Porter Hall, 126-B

   - Time: Tuesday, Thursday, 3:00-4:20

   - Intended Audience: Graduate students and sophisticated undergraduates
     interested  in  inductive  methods,  the   philosophy   of   science,
     mathematical   logic,   statistics,   computer   science,  artificial
     intelligence, and cognitive science.

   - Prerequisites: Familiarity with mathematical logic, computation,  and
     probability theory

   - Course Focus:  There are several ways in which the combined system of
     a rational agent and its environment can be stochastic.  The  agent's
     hypotheses   may   make   claims  about  probabilities,  the  agent's
     environment  may  be  stochastic,  and  the  agent  itself   may   be
     stochastic,  in  any  combination.    In this course, we will examine
     efforts to study computational agents in each  of  these  situations.
     The aim will be to assess particular computational proposals from the
     point of view of logic and probability theory.   Example  topics  are
     Bayesian  systems,  Dempster-Shafer  theory,  medical expert systems,
     computationally  tractable  learnability,  automated  linear   causal
     modelling,   and   Osherson   and   Weinstein's   results  concerning
     limitations on effective Bayesians.

   - Course Format: The grade will be  based  on  frequent  exercises  and
     possibly a final project.  There will be no examinations if the class
     keeps up with the material.

------------------------------

Date: 17 Jul 87 16:54:45 EDT
From: Terina.Jett@b.gp.cs.cmu.edu
Subject: Seminar - Logic and Computability, AI and Formal Learning
         Theory


                    SEMINAR IN LOGIC AND COMPUTABILITY
           ARTIFICIAL INTELLIGENCE AND FORMAL LEARNING THEORY



Offered by:            Department of Philosophy
Instructor:            Kevin T. Kelly
Grad Listing:          80-812
Undergrad Listing:     80-510
Place:                 Baker Hall 131-A
Time:                  Wed, 1:30 - 4:30

Intended Audience:    Graduate students and sophisticated undergraduates
interested in inductive methods, the philosophy of science, mathematical
logic, statistics, computer science, artificial intelligence, and cogni-
tive science.

Prerequisites:  A good working knowledge of mathematical logic and comp-
utation theory.

Course Focus:  Convergent realism is the philosophical thesis that the
point of inquiry is to converge (in some sense) to the truth (or to
something like it).  Formal learning theory is a growing body of precise
results concerning the possible circumstances under which this ideal is
attainable.  The basic idea was developed by Hilary Putnam in the early
1960's, and was extended to questions in theoretical linguistics by E.
Mark Gold.  The main text of the seminar will be Osherson and Weinstein's
recent book Systems That Learn.  But we will also examine more recent
efforts by Osherson, Weinstein, Glymour and Kelly to apply the theory to
the inductive inference of theories expressed in logical languages.  From
this general standpoint, we will move to more detailed projects such as
the recent results of Valiant, Pitt, and Kearns on polynomial
learnability.  Finally, we will examine the extent to which formal learning
theory can assist in the demonstrable improvement of learning systems
published in the A.I. machine learning literature.  There is ample
opportunity to break new ground here.  Thesis topics abound.

Course Format:  Several introductory lectures, seminar reports, and
a novel research project.

------------------------------

End of AIList Digest
********************
26-Jul-87 23:42:07-PDT,18922;000000000000
Mail-From: LAWS created at 26-Jul-87 23:32:49
Date: Sun 26 Jul 1987 23:31-PDT
From: AIList Moderator Kenneth Laws <AIList-REQUEST@STRIPE.SRI.COM>
Reply-to: AIList@STRIPE.SRI.COM
US-Mail: SRI Int., 333 Ravenswood Ave., Menlo Park, CA  94025
Phone: (415) 859-6467
Subject: AIList Digest   V5 #187
To: AIList@STRIPE.SRI.COM


AIList Digest            Monday, 27 Jul 1987      Volume 5 : Issue 187

Today's Topics:
  Journal Issue - Planning (Int. J. for AI in Engineering),
  Seminar - Abstraction in Knowledge-Based Systems (MCC),
  Course - Probability and AI (CMU),
  Conferences - CD-ROM & 7th Distributed Computing Systems &
    R&D in Information Retrieval &
    International Neural Network Society

----------------------------------------------------------------------

Date: Fri, 24 Jul 87 09:29:43 EDT
From: sriram@ATHENA.MIT.EDU
Subject: Journal Issue - Planning (Int. J. for AI in Engineering)


             INTERNATIONAL JOURNAL FOR AI IN ENGINEERING
                      SPECIAL ISSUE ON PLANNING
                              APRIL 1988

The   April  1988  issue  of  the  International  Journal  for  AI  in
Engineering will be dedicated to Planning. The guest editors for  this
issue  are: Prof. Chris Hendrickson, Dept. of Civil Engineering, C-MU,
Pittsburgh,  PA  15213  (hendrickson@cive.ri.cmu.edu)  and  Mrs  Julie
Gadsden,  Admiralty  Research  Establishment,  Procurement  Executive,
XCC5.2, Portsdown, Portsmouth, Hants PO6 4AA, UK. Papers in all  areas
of  engineering,  as  related  to planning, are solicited.  Each paper
should not exceed 10,000  words  (roughly  30  doubly  spaced  pages),
including  figures.  The deadline for submission is September 1, 1987.
Please send the papers to either of the guest editors.

Sriram & McCallum (Editors)

------------------------------

Date: Fri 24 Jul 87 11:59:41-CDT
From: Betti Bunce <Ai.Betti@MCC.COM>
Subject: Seminar - Abstraction in Knowledge-Based Systems (MCC)

All interested parties are invited to attend the following:

TALK BY:  B. Chandrasekaran
          Laboratory for AI Research
          Department of Computer and Information Science
          The Ohio State University
          Columbus, OH 43210

DATE:     August 5, 1987
TIME:     10:00 a.m.
WHERE:    MCC Auditorium
          3500 West Balcones Center Drive

CONTACTS: Charles Petrie - MCC
          Ben Kuipers - UT

TITLE:    THE GENERIC TASK TOOLKIT FOR KNOWLEDGE-BASED SYSTEMS:
          BUILDING BLOCKS AT THE ``RIGHT'' LEVEL OF ABSTRACTION

ABSTRACT:

The first part of the talk is a critique of the level of abstraction
of much of the current discussion on knowledge-based systems.  It will
be argued that the discussion at the level of
rules-logic-frames-networks is the ``civil engineering'' level, and
there is a need for a level of abstraction that corresponds to what
the discipline of architecture does for construction of buildings.
The constructs in architecture, viewed as a language of habitable
spaces, can be implemented using the constructs of civil engineering,
but are not reducible to them.  Similarly, the level of abstraction
that we advocate is the language of generic tasks, types of knowledge,
and control regimes.

In the second part of the talk, I will outline the elements of a
framework at this level of abstraction for expert system design that
we have been developing in our research group over the last several
years.  Complex knowledge-based reasoning tasks can often be
decomposed into a number of generic tasks each with associated types
of knowledge and family of control regimes.  At different stages in
reasoning, the system will typically engage in one of the tasks,
depending upon the knowledge available and the state of problem
solving.  The advantages of this point of view are manifold:  (i)
Since typically the generic tasks are at a much higher level of
abstraction than those associated with first generation expert system
languages, knowledge can be represented directly at the level
appropriate to the information processing task.
(ii) Since each of the generic tasks has an appropriate control
regime, problem solving behavior may be more perspicuously encoded.
(iii)  Because of a richer generic vocabulary in terms of which
knowledge and control are represented, explanation of problem solving
behavior is also more perspicuous.  We briefly describe six generic
tasks that we have found very useful in our work on knowledge-based
reasoning:  classification, state abstraction, knowledge-directed
retrieval, object synthesis by plan selection and refinement,
hypothesis matching, and assembly of compound hypotheses for
abduction.

Finally, we will describe how the above approach leads naturally to a
new technology:  a toolbox which helps one to build expert systems by
using higher-level building blocks.  We will review the toolbox,
outline what sorts of systems can be built with it, and describe what
advantages accrue from this approach.

------------------------------

Date: 20 Jul 87 12:15:08 EDT
From: Terina.Jett@b.gp.cs.cmu.edu
Subject: Course - Probability and AI (CMU)


                   PROBABILITY AND ARTIFICIAL INTELLIGENCE


Offered by:             Department of Philosophy, CMU
Instructor:             Kevin T. Kelly
Grad Course No:         80-811
Undergrad Course No:    80-312
Place:                  Porter Hall, 126-B
Time:                   Tuesday, Thursday, 3:00-4:00


Intended Audience:  Graduate students and sophisticated undergraduates
interested in inductive methods, the philosophy of science, mathematical
logic, statistics, computer science, artificial intelligence, and
cognitive science.

Prerequisites:  Familiarity with mathematical logic, computation, and
probability theory.

Course Focus:  There are several ways in which the combined system of a
rational agent and its environment can be stochastic.  The agent's
hypotheses may make claims about probabilities, the agent's environment
may be stochastic, and the agent itself may be stochastic, in any
combination.  In this course, we will examine efforts to study computational
proposals from the point of view of logic and probability theory.  Example
topics are Bayesian systems, Dempster/Shafer theory, medical expert systems,
computationally tractable learnability, automated linear causal modelling,
and Osherson and Weinstein's results concerning limitations on effective
Bayesians.

Course Format:  The grade will be based on frequent exercises and possibly
a final project.  There will be no examinations if the class keeps up with
the material.

------------------------------

Date: Fri, 17 Jul 1987 14:58 CST
From: Leff (Southern Methodist University)
      <E1AR0002%SMUVM1.BITNET@wiscvm.wisc.edu>
Subject: Conferences - CD-ROM & 7th Distributed Computing Systems

   AI at Upcoming Conferences

CD-ROM Expo, New York City, September 21-23
T-8 Using CD-ROM in Expert Systems
H-1 Helping the Non-Expert Use CD-ROM, Artificial Intelligence and Expert
Systems


Seventh International Conference on Distributed Computing Systems
Berlin (West), 21-25th September 1987

Thursday, 24 Sep 1987, 11.00-12.30
On the Application of AI in Decentralized Control: An Illustration by
Mutual Exclusion


1987 International Conference on Parallel Processing

Tutorial 10:30AM Dr. Benjamin W. Wah, Computers for Artificial Intelligence
Processing.

A Parallel Model and Architecture for Production Systems
by A. O. Oshisanwo and P. P. Dasiewicz
Parallel Link Resolution of Connection Graph Refutation and its Implementation
by R. Loganantharaj (Logan)
Combinators as Control Mechanisms in Multiprocessing Systems by D. L. Knox
and C. T. Wright
An AND-OR Parallel Execution System for Logic Program Evaluation
by N. S. Woo and R. Sharma
PESA I - A Parallel Architecture for Production Systems
by F. Schreiner and G. Zimmermann
A New Parallel Graph Reduction Model and its Machine Architecture
by M. Amamiya
Parallel Garbage Collection on a Virtual Memory System by S. G. Abraham
and J. H. Patel
A Knowledge-Based Parallelization Tool in a Programming Environment
by T. Brandes and M. Sommer
A Heuristic Algorithm for Conflict Resolution Problem in Multistage
Interconnection Networks
by J. S. Deogun and Z. Fang
Exploiting Locality of Reference in MIMD Parallel Symbolic Computation
by Y. Eisenstadter and G. Q. McGuire, Jr.
Efficient Image Template Matching on Hypercube SIMD Arrays
by V. K. P. Kumar and V. Krishnan
Practical Algorithms for Image Component Labeling on SIMD Mesh Connected
Computers
by R. E. Cypher, J. L. C. Sanz and L. Snyder
A Parallel O(log N) Algorithm for Finding Connected Components in Planar
Images by A. Agrawal, L. Nekludova and W. Lim
Large Scale Unification Using a Mesh-Connected Array of Hardware Unifiers
by Shih and K. B. Irani
On Source to Source Transformation of Sequential Logic Programs to AND-
parallelism
by A. K. Bansal and L. S. Sterling
An Overlapping Unification Algorithm and its Hardware Implementation
by W. T. Chen and K. R. Hseih
Pipelined Evaluation of Conjunctive Problems by S. C. Sheu
Analysis and Design of Parallel Algorithms and Implementations for Some
Image Processing Operations
by M. Yasrebi, J. C. Browne and D. P. Agrawal
Parallel Image Processing on Enhanced Arrays
by V. K. P. Kumar and D. Reisis
Parallel Pattern Clustering on a Multiprocessor with Orthogonally Shared
Memory
by K. Hwang and D. Kim
A General Purpose VLSI Array for Efficient Signal and Image Processing
by S. Sastry and V. K. P. Kumar
Computing the Two-Dimensional Discrete Fourier Transform on the ASPEn
Parallel Computer Architecture by A. L. Gorin, A. Silberger
and L. Auslander

------------------------------

Date: Mon, 20 Jul 87 13:40:35 CDT
From: Don <kraft%lsu.edu@RELAY.CS.NET>
Subject: Conference - R&D in Information Retrieval

I have just received a travel grant from the National Science Foundation for
twenty or so stipends covering airfare, so that U.S. residents can attend the
ACM/SIGIR International Conference on Research and Development in Information
Retrieval, to be held in Grenoble, France on June 13-15, 1988.

The conference will include the topics of retrieval system modeling, artificial
intelligence and information retrieval, evaluation techniques, hardware
developments for retrieval systems, natural language processing, database
management and information retrieval, user interfaces, and advanced
applications.

Anyone interested in receiving a travel stipend should contact me.  The deadline
for applying for a travel stipend is March 1, 1988.

Papers (four copies of either a full paper of not more than 20-25 pages,
or an extended abstract of about ten pages), with complete author
identification and an abstract of about one hundred words, must be
submitted by January 15, 1988 to:
     Professor Gerard Salton
     Department of Computer Science
     4130 Upson Hall
     Cornell University
     Ithaca, NY  14853-7501
     USA
Acceptance notifications will be sent by March 21, 1988, and final copy
is due May 16, 1988.

Don Kraft
kraft@lsu.edu

------------------------------

Date: Tue, 21 Jul 87 09:39 EDT
From: MIKE%BUCASA.BITNET@wiscvm.wisc.edu
Subject: Conference - International Neural Network Society


INTERNATIONAL NEURAL NETWORK SOCIETY
1988 ANNUAL MEETING

September 6--10, 1988
Boston, Massachusetts

The International Neural Network Society (INNS) is an association of
scientists, engineers, students, and others seeking to learn about and advance
our understanding of the modelling of behavioral and brain processes, and the
application of neural modelling concepts to technological problems. The INNS
invites all those interested in the exciting and rapidly expanding field of
neural networks to attend its 1988 Annual Meeting. The planned conference
program includes plenary lectures, symposia on selected topics, contributed
oral and poster presentations, tutorials, commercial and publishing
exhibits, a placement service for employers and educational institutions,
government agency presentations, and social events.

Individuals from fields as diverse as engineering, psychology, neuroscience,
computer science, mathematics, and physics are now engaged in neural network
research. This diversity is reflected in both the 1988 INNS Annual Meeting
Advisory Committee and in the Editorial Board of the INNS journal, Neural
Networks. In order to enhance the effectiveness of these multidisciplinary
ventures and to inform a wide audience, organization of the INNS Annual
Meeting will be carried out with the active participation of several
professional societies.

Meeting Advisory Committee includes:

Demetri Psaltis---Meeting Chairman
Larry Jackel---Program Chairman
Gail Carpenter---Organizing Chairman

Shun-ichi Amari
James Anderson
Maureen Caudill
Walter Freeman
Kunihiko Fukushima
Lee Giles
Stephen Grossberg
Robert Hecht-Nielsen
Teuvo Kohonen
Christoph von der Malsburg
Carver Mead
Edward Posner
David Rumelhart
Terrence Sejnowski
George Sperling
Harold Szu
Bernard Widrow

CALL FOR ABSTRACTS: The INNS announces an open call for abstracts to be
considered for oral or poster presentation at its 1988 Annual Meeting.
Meeting topics include:

--Vision and image processing
--Speech and language understanding
--Sensory-motor control and robotics
--Pattern recognition
--Associative learning
--Self-organization
--Cognitive information processing
--Local circuit neurobiology
--Analysis of network dynamics
--Combinatorial optimization
--Electronic and optical implementations
--Neurocomputers
--Applications

Abstracts must be typed on the INNS abstract form in camera-ready format.
An abstract form and instructions may be obtained by returning the
enclosed request form to: Neural Networks, AT&T Bell Labs, Room 4G-323,
Holmdel, NJ 07733 USA.

In order to be considered for presentation at the INNS 1988 Annual Meeting,
an abstract must be POSTMARKED NO LATER THAN March 31, 1988. Acceptance
notifications will be mailed by June 30, 1988. An individual may make at
most one oral presentation during the contributed paper sessions. Abstracts
accepted for presentation at the Meeting will be published as a supplement
to the INNS journal, Neural Networks. Published abstracts will be available
to participants at the conference.

***** ABSTRACT DEADLINE: MARCH 31, 1988 *****

CONFERENCE SITE: The 1988 Annual Meeting of the International Neural Network
Society will be held at the Park Plaza Hotel in downtown Boston. A block of
rooms has been reserved for the INNS at the rate of $91 per night plus tax
(single or double). Reservations may be made by contacting the hotel directly.
Be sure to give the reference "Neural Networks". A one-night deposit will be
requested.

HOTEL RESERVATIONS:
Boston Park Plaza Hotel
"Neural Networks"
1 Park Plaza at Arlington Street
Boston, MA 02117 USA
(800) 225-2008 (continental U.S.)
(800) 462-2022 (Massachusetts only)
Telex 940107

INTERNATIONAL RESERVATIONS:
Steigenberger, Utell International
KLM Golden Tulip, British Airways
REF: "Neural Networks"


Please note that other nearby hotel accommodations are typically more expensive
and may also sell out quickly.

CONFERENCE REGISTRATION: To register for the 1988 INNS Annual Meeting, return
the enclosed conference registration form, with registration fee; or contact:
UNIGLOBE---Neural Networks 1988, 40 Washington Street, Wellesley Hills, MA
02181 USA, (800) 521-5144 or (617) 235-7500.

The great interest and attention now being devoted to the field of neural
networks promises to generate a large number of meeting participants.
Conference room size and hotel accommodations are limited. Therefore early
registration is strongly advised.

For information about INNS membership, which includes a subscription to the
INNS journal, Neural Networks, write: Dr. Harold Szu---INNS, NRL Code 5756,
Washington, DC 20375-5000 USA, (202) 767-1493.

ADVANCE REGISTRATION FEE SCHEDULE

                       INNS Member  Non-member
Until March 31, 1988   $125         $170*
Until July 31, 1988    $175         $220*
Full-time student      $50          $85*

* Includes the option of electing one-year INNS membership and subscription
to the INNS journal, Neural Networks, free of charge.

The conference registration fee schedule has been set to cover abstract
handling costs, the book of abstracts, a buffet dinner reception, coffee
breaks, informational mailings, and administrative expenses. Anticipated
financial support by government and corporate sponsors will cover additional
basic meeting costs.

Tutorials and other special programs will require payment of additional fees.

STUDENTS AND VOLUNTEERS: Students are particularly welcome to join the INNS
and to participate fully in its Annual Meeting. Reduced registration and
membership rates are available for full-time students. In addition, financial
support is anticipated for students and meeting volunteers. To apply, please
enclose with the conference registration application a letter of request and a
brief description of interests.

-----ABSTRACT REQUEST FORM-----

INTERNATIONAL NEURAL NETWORK SOCIETY
1988 ANNUAL MEETING

September 6--10, 1988
Boston, Massachusetts

Please send an abstract form and instructions to:

Name:
Address:
Telephone(s):

All abstracts must be submitted camera-ready, typed on the INNS abstract form
and postmarked NO LATER THAN March 31, 1988.

MAIL TO:

Neural Networks
AT&T Bell Labs
Room 4G-323
Holmdel, NJ 07733 USA


-----REQUEST FOR INFORMATION-----

INTERNATIONAL NEURAL NETWORK SOCIETY
1988 ANNUAL MEETING

September 6--10, 1988
Boston, Massachusetts

Please send information on the following topics to:

Name:
Address:
Telephone(s):


(  ) Placement/Interview service
     (  ) Employer
     (  ) Educational institution
     (  ) Candidate
(  ) Hotel accommodations
(  ) Travel and discounted fares
     Discounts of up to 60% off coach fare can be obtained on conference
     travel booked through UNIGLOBE: (800) 521-5144 or (617) 235-7500.
(  ) Volunteer and student programs
(  ) Proposals for symposia and special programs
(  ) Exhibits
     (  ) Commercial vendor
     (  ) Publisher
     (  ) Government agency
(  ) Tutorials
(  ) Press credentials
(  ) INNS membership

MAIL TO:

Center for Adaptive Systems---INNS
Boston University
111 Cummington Street, Room 244
Boston, Massachusetts 02215 USA

ELECTRONIC MAIL TO:

mike@bucasa.bu.edu

------------------------------

End of AIList Digest
********************
26-Jul-87 23:45:47-PDT,16081;000000000000
Mail-From: LAWS created at 26-Jul-87 23:39:38
Date: Sun 26 Jul 1987 23:37-PDT
From: AIList Moderator Kenneth Laws <AIList-REQUEST@STRIPE.SRI.COM>
Reply-to: AIList@STRIPE.SRI.COM
US-Mail: SRI Int., 333 Ravenswood Ave., Menlo Park, CA  94025
Phone: (415) 859-6467
Subject: AIList Digest   V5 #188
To: AIList@STRIPE.SRI.COM


AIList Digest            Monday, 27 Jul 1987      Volume 5 : Issue 188

Today's Topics:
  Queries - Graphics/AI Bibliography &
    Blackboard Architectures in Prolog &
    VPExpert Parameters &
    Knowledge Representation in Sanskrit,
  Techniques - Garbage Collection Suppression,
  Philosophy - Natural Kinds & AI, Science, and Pseudo-Science

----------------------------------------------------------------------

Date: 22-JUL-1987 15:13:29
From: THOWARD%graphics.computer-science.manchester.ac.uk@Cs.Ucl.AC.UK
Subject: Graphics-AI bibliography

I am currently investigating what work has been done on connecting/integrating
AI methods and computer graphics. I would be very grateful if anyone can
send me any references, or bibliographies (or comments!) etc in this area.
If there's enough interest, I will summarise responses. Thanks...
______________________________________________________________________________
                          - Toby Howard -
Computer Graphics Unit, Department of Computer Science
Manchester University, England, M13 9PL. Phone: 061 273 7121 x5429/5406
Janet: thoward@uk.ac.man.cs.cgu
ARPA:  thoward%cgu.cs.man.ac.uk@cs.ucl.ac.uk

------------------------------

Date: Wed 22 Jul 87 11:52:28-CDT
From: OLIVIER J. WINGHART <CS.WINGHART@R20.UTEXAS.EDU>
Subject: Blackboard architectures in Prolog

I am looking for natural ways of implementing a blackboard architecture
in Prolog. Has anyone already thought about this, and are there any papers
that I could look at?  I would appreciate any pointer.
Olivier
cs.winghart@utexas.edu

------------------------------

Date: Fri, 24 Jul 87 18:00:05 EDT
From: Brady@UDEL.EDU
Subject: VPExpert Parameters

The VPExpert manual says that data can be passed to a batch
file, and that this is the only way to directly pass parameters
to an external program. But when I try to do this, the system
tells me the syntax of my call is wrong. I am sure my error is
not in the call to the batch file itself, since I am able to call
and execute a batch file that does not require parameters.

Anyone out there using this shell who has figured
out how to pass parameters to a batch file, please send me
mail. I will post answers back to the net. Thank you.
/////////
joe brady

------------------------------

Date: Thu, 23 Jul 87 10:54:47 PDT
From: bwidlans%zodiac@ads.arpa (Bob Widlansky)
Subject: Knowledge Representation in Sanskrit


Recently, I read a short intriguing article in AI magazine about the
First International Conference on Knowledge Representation and
Inference in Sanskrit (held in Bangalore, India between December
20-22, 1986).

Does anyone know where I can get a copy of the proceedings?

If you do, please contact me at bwidlans@ads.ARPA

Thank you,

Bob Widlansky

------------------------------

Date: 22 Jul 87 14:28:51 GMT
From: "J. A. \"Biep\" Durieux" <mcvax!cs.vu.nl!biep@seismo.CSS.GOV>
Reply-to: "J. A. \"Biep\" Durieux"
          <mcvax!cs.vu.nl!biep@seismo.CSS.GOV>
Subject: Re: Garbage Collection Suppression


In article <8707202143.aa23792@Dewey.UDEL.EDU> Chester@UDEL.EDU writes:
>The direct way to avoid garbage collection in lisp is to define your own `cons'
>function that prefers to get cell pairs from an `available list' (...).

Also handy in many cases (small functions like append, alist-functions,
subst) is icons:

    (defun icons (a d cell)
      (cond ((and (eq (car cell) a) (eq (cdr cell) d)) cell)
            (t (cons a d))))

In this way whenever it turns out the new cells weren't really needed, the
old ones are used again (as in (append x nil)). Be aware, however, that your
copy-function may not work any more if it's defined as (subst nil nil x)!
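To make the payoff concrete, here is a sketch (ISUBST is an invented name
for this example, not a standard function) of a subst-like copier built
on icons; wherever no substitution actually happens, the original cells
come back unchanged and no garbage is created:

```lisp
;; Sketch only: ISUBST is a hypothetical subst-like copier built on
;; ICONS.  Untouched subtrees are returned EQ to the originals, so no
;; new cells are consed for them.
(defun icons (a d cell)
  (cond ((and (eq (car cell) a) (eq (cdr cell) d)) cell)
        (t (cons a d))))

(defun isubst (new old tree)
  (cond ((eq tree old) new)         ; substitution point
        ((atom tree) tree)          ; leaves pass through untouched
        (t (icons (isubst new old (car tree))
                  (isubst new old (cdr tree))
                  tree))))          ; reuse this cell if nothing changed
```

When nothing matches, as in (isubst 'b 'q '(x y)), the result is EQ to
the argument itself -- which is exactly why a copy function written as
(subst nil nil x) stops copying once subst is rebuilt this way.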
--
                                                Biep.  (biep@cs.vu.nl via mcvax)
                        Never confound beauty with truth!

------------------------------

Date: Wed, 22 Jul 1987  10:43 EDT
From: MINSKY%OZ.AI.MIT.EDU@XX.LCS.MIT.EDU
Subject: Natural Kinds (Re: AIList Digest   V5 #186)


About natural kinds.  In "The Society of Mind", pp123-129, I propose a
way to deal with Wittgenstein's problem of defining terms like "game"
or "chair".  The basic idea was to probe further into what
Wittgenstein was trying to do when he talked about "family
resemblances" and tried to describe a game in terms of properties, the
way one might treat members of a human family: build, features, colour
of eyes, gait, temperament, etc.

In my view, Wittgenstein missed the point because he focussed on
"structure" only.  What we have to do is also take into account the
"function", "goal", or "intended use" of the definition.  My trick is
to catch the idea between two descriptions, structural and functional.
Consider a chair, for example.

  STRUCTURE: A chair usually has a seat, back, and legs - but
     any of them can be changed in so many ways that it is hard
     to make a definition to catch them all.

  FUNCTION: A chair is intended to be used to keep one's bottom
     about 14 inches off the floor, to support one's back
     comfortably, and to provide space to bend the knees.

If you understand BOTH of these, then you can make sense of that list
of structural features - seat, back, and legs - and engage your other
worldly knowledge to decide when a given object might serve well as a
chair.  This also helps us understand how to deal with "toy chair" and
such matters.  Is a toy chair a chair?  The answer depends on what you
want to use it for.  It is a chair, for example, for a suitable toy
person, or for reminding people of "real" chairs, or etc.
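As a toy sketch of catching a concept between the two descriptions (the
capability vocabulary and function names here are invented for
illustration, not notation from the book): an object counts as a chair,
for a given purpose, when it can supply the intended function, whatever
its structure.

```lisp
;; Sketch only: invented vocabulary, not Minsky's notation.
;; The functional half of the "chair" description:
(defvar *chair-function* '(supports-bottom supports-back knee-room))

;; An object serves as a chair when its capabilities cover the
;; intended function -- the structural parts (seat, back, legs)
;; never appear in the test at all.
(defun serves-as-p (object-capabilities required-function)
  (subsetp required-function object-capabilities))
```

A tree stump offering only '(supports-bottom) fails the test, while a
beanbag offering all three capabilities passes despite having no seat,
back, or legs; and a toy chair passes once the required function is
scaled to a toy person's purpose.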

In other words, we should not worship Wittgenstein's final defeat, in
which he speaks about vague resemblances - and, in effect, gives up
hope of dealing with such subjects logically.  I suspect he simply
wasn't ready to deal with intentions - because nothing comparable to
Newell and Simon's GPS theory of goals, or McCarthy's meta-predicate
(Want P) was yet available.

I would appreciate comments, because I think this may be an important
theory, and no one seems to have noticed it.  I just noticed, myself,
that I didn't mention Wittgenstein himself (on page 130) when
discussing the definition of "game".  Apologies to his ghost.

------------------------------

Date: Wed, 22 Jul 87 12:40:58 EDT
From: mckee%corwin.ccs.northeastern.edu@RELAY.CS.NET
Subject: AI, science, and pseudo-science


In AIlist Digest v5 #171, July 6, 1987, Don Norman
        <norman%ics@sdcsvax.ucsd.edu> wrote:
> [Here's why] many of us otherwise friendly folks in the sciences that
> neighbor AI [are] frustrated with AI's casual attitude toward theory:
> AI is not a science and its practitioners are woefully untutored in
> scientific method."
        [ 15 lines deleted ]
> AI worries a lot about methods and techniques, with many books and
> articles devoted to these issues.  But by methods and techniques I
> mean such topics as the representation of knowledge, logic,
> programming, control structures, etc.  None of this method includes
> anything about content.  And there is the flaw: nobody in the field of
> Artificial Intelligence speaks of what it means to study intelligence,
> of what scientific methods are appropriate, what empirical methods are
> relevant, what theories mean, and how they are to be tested.  All the
> other sciences worry a lot about these issues, about methodology,
> about the meaning of theory and what the appropriate data collection
> methods might be.  AI is not a science in this sense of the word.
        [ 22 more lines deleted ]

I think he's found an issue of critical importance here, so I'm going
to pull it out of context even further and repeat it again:

"nobody in the field of Artificial Intelligence speaks of what it means
to *study* intelligence" (my emphasis).

No wonder those of us outside the field have trouble figuring out
what AI is really about.  My impression is that AI researchers try
to study intelligence by building artifacts that will make a convincing
show of intelligent behavior.  This might be why books on AI methods are all
about sophisticated representations and fancy program structures -
they're techniques of building more complex (hopefully more intelligent)
programs.  But this is nearsighted.  Intelligence is the *difference*
between unintelligent and intelligent behavior. The study of intelligence
begins when the programming stops.  And on what to do then, the AI textbooks
are silent.
        Now I don't want to spend time talking about the consequences
of this failure; Don did that much better than I can.  (However, I can't
resist throwing in my excuse: programming is fun; science is hard, often
boring, work.  Science is far more rewarding, though.) What I'm going to
discuss in the rest of this note stems from his remark that AI workers
are "woefully untutored in scientific method".  Assuming for the purposes
of discussion that we know enough about intelligence to make principled
distinctions between it and stupidity (counterintelligence?), what would
the scientific study of intelligence look like?

One way of answering this question is to look at some of the enterprises
that claim to be scientific, but aren't.  The main distinction in the
list below is between those fields that are unarguably sciences, and those
that fail to be scientific in one way or another.  True science, the authentic,
natural sciences, are ones like astronomy, geology, biology, physics, or
chemistry.  False sciences are harder to characterize, but here goes:

Here's a list of examples of different claimants to the name "science";
mostly impostors, all of them can be called "quasi-sciences".  By looking
at them, we can gain some sense of what qualities are necessary for
real sciences, since the quasi-sciences don't have them.

* Fraudulent sciences: Creation Science, Lysenkoism, Scientology
        (the most generous thing I can say about these is that they
         appear to proceed by trusting exceptional, one-of-a-kind
         reports, and denying persistent, repeated, quantitative,
         skeptical observations.  In rhetoric this is called "appeal
         to authority.")

* Trivial sciences: Clairol Science, barbeque science, accelerator science
        (Clairol Science has discovered a new way to make your
         hair silkier and more full-bodied.  Barbeque science has
         conclusively determined that mesquite smoke is superior to
         hickory smoke. We need to build the superconducting supercollider
         so America won't fall behind in accelerator science.)

* Semi-sciences: Theoretical Physics, Descriptive Linguistics
        (complementary halves of their respective fields.)

* Interdisciplinary Sciences: Materials Science, Neuroscience
        (characterized by their subject matter not yielding coherently
         to any single experimental technique or theoretical paradigm.)

* Artifact Sciences: Economics, Political Science, Anthropology
        (Herbert Simon's "sciences of the artificial" - these study artifacts
         of human society - without civilization, they wouldn't exist.
         However, civilization is big and complex enough that techniques
         developed to deal with natural phenomena give useful insights.)

* Synthetic Sciences: Mathematics, Computer Science
        (These study the consequences of small sets of fundamental concepts.
         Mathematics under Russell & Whitehead and Bourbaki has been "nothing
         but" an incredibly vast and elegant elaboration of set theory,
         while [I claim with a certain trepidation] that the fundamental
         basis of the scientific part of computer science lies in the
         elaboration of the consequences of the notion of an algorithm.)

The authentic, natural sciences, on the other hand, are the body of analytic,
experimental studies of phenomena that go on whether or not the experimenter
is there to observe them, [philosophers can complain about "naive realism" --
I'll confess to the realism, but not the naivete] and the results,
conclusions, and theoretical relations that tie the studies together.
The key concepts here are "experimental" and "objective".  If a researcher
(or a team of them) isn't doing experiments on some external phenomenon,
then it ain't real science.
        What do you get from real science? Reality. Not wishful thinking,
not hallucinations, not mythology, not common sense. (Strictly speaking,
what you get is the most compact model of reality consistent with the
most reliable, most detailed, widest ranging set of observations.)
Uncommon sense.
        What you don't get is completeness, or even closure.  First of all,
there's too much knowledge, as anyone with a Ph.D. in a natural science will
tell you.  Second of all, the universe isn't closed under observation: there's
always more detail to be examined, further frontiers to be explored, greater
complexities to be explained.  And most exciting of all, there's the
possibility of revolution - that a new model will explain more data,
resolve old inconsistencies, or be statable more succinctly, hopefully
all at once.
        The natural sciences generate an interconnected web of explanations
that should contain a place for AI, if AI is a science.  It's in this
explanatory web that people claim to see the bugaboo of reductionism
(without which no discussion of scientific method would be complete).
Stripped of the argumentative mumbo-jumbo that keeps philosophers in business,
a reductionist would claim that a pile of parts on the floor is equivalent to
an assembled machine, while a holist would claim that the parts are irrelevant
to any description of the machine.  Both views are incomplete, but there is
indeed an ordering by "is explained in terms of" that reductionists
have grabbed onto.  Because it's only a partial ordering, I'd like to borrow
a term from evolutionary biology and suggest that scientific knowledge has
the same kind of familial, clade structure as do charts of the genetic
relations among organisms.  Reading "<--" as "is used to explain", we have

One path through a Cladistic epistemology:
        Particle Physics <--
         Condensed-matter physics <--
          Quantum Chemistry <--
           Organic Chemistry <--
            Molecular Biology/Genetics <--
             Developmental Biology <--
              Neuroscience <--
               Ethology <--
                Psychology <--
                 Cognitive Science <--
                  Mathematics

I would put intelligence in at the same level as mathematics.  Congratulations!
Scientific AI would be among the most complex of sciences.  However,
in reality the picture isn't this clean.  Aside from those sciences that
aren't in a direct explanatory line to intelligence, there are shortcuts
among levels, due to the logic of experimental science, that make it possible
to do things like manipulate genetic structure and get a behavioral result.

But this note is already too long to go into this further, and I've barely
alluded to the formal role of the hypothesis.

Hope this helps,
        - George McKee
          College of Computer Science
          Northeastern University, Boston 02115
CSnet: mckee@Corwin.CCS.Northeastern.EDU
Phone: (617) 437-5204
Usenet: in New England, it's not unusual to have to say
                "can't get there from here."

------------------------------

End of AIList Digest
********************
26-Jul-87 23:49:11-PDT,23781;000000000000
Mail-From: LAWS created at 26-Jul-87 23:44:52
Date: Sun 26 Jul 1987 23:42-PDT
From: AIList Moderator Kenneth Laws <AIList-REQUEST@STRIPE.SRI.COM>
Reply-to: AIList@STRIPE.SRI.COM
US-Mail: SRI Int., 333 Ravenswood Ave., Menlo Park, CA  94025
Phone: (415) 859-6467
Subject: AIList Digest   V5 #189
To: AIList@STRIPE.SRI.COM


AIList Digest            Monday, 27 Jul 1987      Volume 5 : Issue 189

Today's Topics:
  Bibliography - Leff File a55AB

----------------------------------------------------------------------

Date: Sat, 18 Jul 1987 10:37 CST
From: Leff (Southern Methodist University)
      <E1AR0002%SMUVM1.BITNET@wiscvm.wisc.edu>
Subject: Bibliography - Leff File a55AB

%A Rudolph E. Seviora
%T Knowledge-Based Program Debugging Systems
%J IEEE Software
%V 4
%N 3
%D May 1987
%P 20-32
%K AA08
%X Divides debugging, whether done by human or by computer into three
categories:
a) those that look at the code and compare to specifications
b) those that look at the output
c) those that look at the internal trace
In the latter category, there exists only one system, MTA, which is
a Prolog-based system that views internal traces from communication-based
software, which is often implemented using finite-state machines.
Falosy is an example of a system that tries to debug by comparing
the output with expectations and then reasoning back to the program.
Proust, Laura, and Pudsy are examples of systems that look at the code
and compare it to the specification.

%T New Products
%J ComputerWorld
%D APR 13, 1987
%V 21
%N 15
%P 44
%K H01 AT02 AI06
%X AST's new read right software, associated with its page scanner,
will handle mixed fonts, variable character sizes and spacing, and
reduced and enlarged photocopies.

%T Breaking the Lisp Language Barrier with COBOL
%J Mini-Micro Systems
%D May 1987
%P 27
%V 20
%N 5
%K AA06 AT02
%X Cullinet has announced a series of expert system programs based on
COBOL: OrderEXL, SalesEXL, VoiceEXL, and DMS applications expert.

%A John Goach
%T Coming: A System for Real Time Dialog
%J Electronics
%D APR 30, 1987
%P 38-39
%K AI05
%V 60
%N 9
%X Spicos handles a 1000-word vocabulary in continuous speech.  The system
gives spoken answers and also uses the context of a word to help identify it.

%A Henry Eric Firdman
%T Which Comes First -- Development Or Specs
%J ComputerWorld
%D APR 13, 1987
%V 21
%N 15
%P 69-73
%K O02 AI01 rapid prototyping AA08
%X Discusses whether a prototype model of development is appropriate
for expert systems as well as software projects in general and what actions
are appropriate and not appropriate under such a model of development.

%A Jean S. Bozman
%T MDBS Develops Guru for VAX
%J ComputerWorld
%V 21
%D May 25, 1987
%N 21
%K H01 T03 AI09 AT02
%X GURU, which currently runs on PCs, will be ported to
both VMS and Ultrix, with costs of $17,000 to $60,000.

%A Charles Babcock
%T Quick and Dirty Fixes May Work Best
%J ComputerWorld
%V 21
%D May 25, 1987
%N 21
%P 25
%K AA08
%X A study has shown that advanced development techniques lead to more
costly maintenance, not less.  "Programmers who perform impromptu fixes
without checking documentation may be just as effective as those who
follow more structured approaches."  They also found that a greater
percentage of the requests made by users were implemented in systems
where the users understood the system.

%A Rosemary Hamilton
%T DG Courts LISP Machine
%J ComputerWorld
%D APR 13, 1987
%V 21
%N 15
%P 93+
%K H02 AT16 Lisp Machine Inc. LMI Data General Bankruptcy
%X Data General made an offer to buy Lisp Machine, which is subject
to approval by an LMI creditors' committee.  Lisp Machine has filed
under Chapter 11 of the U. S. Bankruptcy Law.

%A Paul Wallich
%T Putting Speech Recognizers to Work
%J IEEE Spectrum
%D APR 1987
%P 55-57
%K AI05 H01
%V 24
%N 4
%X List of current products available.
.DS L
SSB-1000, Speaker dependent isolated word, 144 words, 95% accuracy, $250
VoDialer, Speaker Dependent isolated word, 48 words, 95% accuracy, $349
   (for allowing cellular telephone users to dial numbers)
Dragon Systems Voice Scribe, Speaker-dependent, isolated word, 1000 words, $1195
IBM, Speaker-dependent isolated word, 64 words, 95-98% accuracy, $1195
Intel, Speaker Dependent Isolated Word, 200 words
Interstate Voice Products, Speaker-dependent, connected speech, 400 words,
  98% accuracy, $395
Interstate Voice Products, Speaker-dependent continuous speech, 100 words,
  99% accuracy, $4000
Kurzweil Applied Intelligence, Speaker-dependent, isolated word, 1000 words,
   $6000
NEC, SAR 10, Speaker-dependent, isolated word, 250 words, 98% accuracy, $599
NEC, SR10, Speaker-Dependent, isolated words, 128 words, 98% accuracy, $600
NEC, DP-200, Speaker-Dependent, connected speech, 150 words, $7500
Speech Systems, speaker-dependent, connected speech, 20000 words, 90% accuracy,
$5000.00
TI, speaker-dependent, isolated word, 1000 words, $995.00
Voice Connection, speaker-dependent, isolated-word, 400 words, 98% accuracy,
  $495.00
Voice Control Systems, Speaker-independent, isolated word, 40 words, 98.5%
  accuracy, $1000.00
Votan, Speaker-independent isolated-word, 13 words, 98% accuracy, $1350
Votan, speaker-dependent, continuous speech, 640 words, 94% accuracy, $1200.00
.DE

%A H. Sardar Amin Saleh
%T Artificial Intelligence and Computer Aided Design in Civil Engineering
%B Applications of Artificial Intelligence in Engineering Problems
%E D. Sriram
%E R. Adey
%V 2
%I Computational Mechanics Publications
%C Woburn, Massachusetts
%D 1986
%P 781-789
%K AA05 AI01 T02

%A Shuichi Fukuda
%T Development of an Expert System for the Design Support of an Oil Storage
Tank
%B Applications of Artificial Intelligence in Engineering Problems
%E D. Sriram
%E R. Adey
%V 2
%I Computational Mechanics Publications
%C Woburn, Massachusetts
%D 1986
%P 791-796
%K AA05 AI01 GA01
%X This system copes with such issues as corrosion and local regulations,
and interfaces with numerical software to assist in the design of oil
storage tanks.

%A John F. Brotchie
%A Ron Sharpe
%A Bertil Marksjo
%A Michael Georgeff
%T Introducing Intelligence and Knowledge Into CAD
%B Applications of Artificial Intelligence in Engineering Problems
%E D. Sriram
%E R. Adey
%V 2
%I Computational Mechanics Publications
%C Woburn, Massachusetts
%D 1986
%P 797-810
%K AA05 optimization quadratic programming
%X discusses applications of AI to quadratic programming with nonconvex
solutions.

%A U. Flemming
%A R. Coyne
%A T. Glavin
%A M. Rychener
%T A Generative Expert System for the Design of Building Layouts
%B Applications of Artificial Intelligence in Engineering Problems
%E D. Sriram
%E R. Adey
%V 2
%I Computational Mechanics Publications
%C Woburn, Massachusetts
%D 1986
%P 811-821
%K AA05 kitchen bathroom generate and test AI09 DENDRAL
%X This system designs kitchens and bathrooms using an approach based
upon DENDRAL, with a generator generating possible layouts and a
tester evaluating them against the constraints.

%A S. F. Jozwiak
%T Applications of Artificial Intelligence in Structural Optimization
%B Applications of Artificial Intelligence in Engineering Problems
%E D. Sriram
%E R. Adey
%V 2
%I Computational Mechanics Publications
%C Woburn, Massachusetts
%D 1986
%P 823-834
%K AA05 AI04
%X AI techniques are used to reduce the computer time needed to determine
the optimal positions of the nodes in a
three-dimensional truss.  This type of optimization is done by setting up
a constraint corresponding to each member of the truss, ensuring that
that member does not bear unacceptable stresses.  This work compares
known truss structures against the one being optimized to determine
which elements are likely to have stresses lower than adjacent
elements, so their stresses need not be computed.
Same content as a paper appearing in \fIComputers
in Structures\fR by the same author.

%A T. J. Ross
%A F. S. Wong
%T Structural Damage Assessment Using AI Techniques
%B Applications of Artificial Intelligence in Engineering Problems
%E D. Sriram
%E R. Adey
%V 2
%I Computational Mechanics Publications
%C Woburn, Massachusetts
%D 1986
%P 835-846
%K AI01 AA05 AA18
%X This system helps assess the possible damage to buried concrete
boxes from nearby nuclear explosions.

%A Peter W. Mullarkey
%T A Geotechnical KBS Using Fuzzy Logic
%B Applications of Artificial Intelligence in Engineering Problems
%E D. Sriram
%E R. Adey
%V 2
%I Computational Mechanics Publications
%C Woburn, Massachusetts
%D 1986
%P 843-860
%K AA05 AI01 O04
%X This system helps interpret the results of cone penetrometer tests in
determining the soil conditions where some structure will have its foundation.

%A Kenneth R. Maser
%T Automated Interpretation of Sensor Data for Evaluating In-Situ Conditions
%B Applications of Artificial Intelligence in Engineering Problems
%E D. Sriram
%E R. Adey
%V 2
%I Computational Mechanics Publications
%C Woburn, Massachusetts
%D 1986
%P 861-888
%K AA05 radar signal interpretation AI06 AI01
%X The system helps model bridge deck deterioration using ground penetrometer
studies.  The expert system deals with considerations from radar signal
analysis, radar/concrete physics, and bridge engineering.

%A Yoon-Pin Foo
%A Hideaki Kobayashi
%T A Framework for Managing VLSI CAD Data
%B Applications of Artificial Intelligence in Engineering Problems
%E D. Sriram
%E R. Adey
%V 2
%I Computational Mechanics Publications
%C Woburn, Massachusetts
%D 1986
%P 889-898
%K AA05 AA09 AA16
%X Compares a frame-based system using is-a type inheritance with an
INGRES database approach, showing that operations are performed
about sixty percent faster in the frame-based system.
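Frame lookup with is-a inheritance of the kind benchmarked here can be
sketched as below; the VLSI-flavored frame names and slots are assumptions
for illustration, not details from the paper.

```python
# Toy frame system: each frame has local slots and an optional is-a
# parent; slot lookup walks up the inheritance chain.
class Frame:
    def __init__(self, name, isa=None, **slots):
        self.name, self.isa, self.slots = name, isa, dict(slots)

    def get(self, slot):
        """Return the slot value, inheriting from the is-a parent."""
        if slot in self.slots:
            return self.slots[slot]
        if self.isa is not None:
            return self.isa.get(slot)
        raise KeyError(slot)

cell = Frame("cell", width=10)                 # generic parent frame
nand = Frame("nand-gate", isa=cell, inputs=2)  # specialized child
inherited_width = nand.get("width")            # found via the is-a link
local_inputs = nand.get("inputs")              # found locally
```

The attraction over a relational store is that shared properties live in one
parent frame rather than being repeated in every row.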

%A Nikhil Balram
%A William P. Birmingham
%A Sean Brady
%A Robert Tremain
%A Daniel P. Siewiorek
%T The MICON System for Single Board Computer Design
%B Applications of Artificial Intelligence in Engineering Problems
%E D. Sriram
%E R. Adey
%V 2
%I Computational Mechanics Publications
%C Woburn, Massachusetts
%D 1986
%P 899-910
%K AA04 AI01
%X This system constructs microcomputer boards.  It deals with
the problems of interfacing I/O chips from one family, such as a
Z80 SIO chip, to some other microprocessor.  The system also handles
analog constraints such as bus propagation, etc.

%A Jeffrey L. Dawson
%T Excirsize - An Expert System for VLSI Transistor Sizing
%B Applications of Artificial Intelligence in Engineering Problems
%E D. Sriram
%E R. Adey
%V 2
%I Computational Mechanics Publications
%C Woburn, Massachusetts
%D 1986
%P 911-916
%K AA04 AI01
%X Describes how to size transistors for NMOS fabrication, where
the goal is to achieve some propagation constraint at minimum
power consumption.

%A Ravi Malhotra
%A Ken Chao
%A Osama Mowafi
%T A Knowledge-Based System for Network Communication Design
%B Applications of Artificial Intelligence in Engineering Problems
%E D. Sriram
%E R. Adey
%V 2
%I Computational Mechanics Publications
%C Woburn, Massachusetts
%D 1986
%P 917-924
%K AA08
%X discusses applications to designing the backbone part of a network
consisting of high speed communication paths and the access part consisting
of low speed lines connecting to various cities.

%A Stuart C. Shapiro
%A Sargur N. Srihari
%A Ming-Ruey Taie
%A James Geller
%T VMES: A Network-Based Versatile Maintenance System
%B Applications of Artificial Intelligence in Engineering Problems
%E D. Sriram
%E R. Adey
%V 2
%I Computational Mechanics Publications
%C Woburn, Massachusetts
%D 1986
%P 925-936
%K AA21
%X Shows how, in a structure-based diagnostic expert system, to save
memory for common parts.  I.e., if there are several op-amps in
the circuit, one doesn't want to store in memory the capacitors, etc.,
that comprise each op-amp.  Techniques are developed
to phase in the detailed description of a part when needed, to save computer
time.  The article also discusses the graphic interface to VMES
and shows how the system chooses what to display and how to arrange
these items on the screen.

%A Tao Li
%T Heuristic Search in Digital System  Diagnosis
%B Applications of Artificial Intelligence in Engineering Problems
%E D. Sriram
%E R. Adey
%V 2
%I Computational Mechanics Publications
%C Woburn, Massachusetts
%D 1986
%P 937-946
%K AI03 AA04
%X Shows a variation of the A* algorithm for use in detecting
faults in sequential circuits.  The article also shows how to handle
circuits that are not "resettable," i.e., where no signal is
guaranteed to force the system into a known state.
Various theoretical results regarding such fault detection are also
provided.
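The A* core that the paper adapts can be sketched generically; the grid
domain and Manhattan heuristic below stand in for the circuit state space,
which this entry does not detail.

```python
# Generic A* search returning the cost of a cheapest path.  The paper
# specializes this to fault detection in sequential circuits; the 2-D
# grid used below is only an illustrative domain.
import heapq

def astar(start, goal, neighbors, h):
    """A*: expand nodes in order of g(n) + h(n), an admissible h."""
    frontier = [(h(start), 0, start)]
    best = {start: 0}  # cheapest known cost to each node
    while frontier:
        f, g, node = heapq.heappop(frontier)
        if node == goal:
            return g
        for nxt, step in neighbors(node):
            ng = g + step
            if ng < best.get(nxt, float("inf")):
                best[nxt] = ng
                heapq.heappush(frontier, (ng + h(nxt), ng, nxt))
    return None  # goal unreachable

# Unit moves on a grid with a Manhattan-distance heuristic.
def nbrs(p):
    x, y = p
    return [((x + dx, y + dy), 1) for dx, dy in ((1, 0), (-1, 0), (0, 1), (0, -1))]

cost = astar((0, 0), (2, 3), nbrs, lambda p: abs(p[0] - 2) + abs(p[1] - 3))
```

For fault detection, states would be circuit states and the heuristic an
estimate of the distance to a fault-revealing state.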

%A William P. C. Ho
%T A Plan Patching Approach to Switchbox Routing
%B Applications of Artificial Intelligence in Engineering Problems
%E D. Sriram
%E R. Adey
%V 2
%I Computational Mechanics Publications
%C Woburn, Massachusetts
%D 1986
%P 947-958
%K AA04 AI09
%X When a conventional routing system reports failure, i.e., all the
rules it has available have been tried, this system comes in
and tries to patch the almost-completed solution into a successful
routing of the switchbox.


%A Bryant W. York
%T KBTA: An Expert Aid for Chip Test
%B Applications of Artificial Intelligence in Engineering Problems
%E D. Sriram
%E R. Adey
%V 2
%I Computational Mechanics Publications
%C Woburn, Massachusetts
%D 1986
%P 959-970

%A Jozsef Vancza
%T CODEX: A Coding Expert for Programmable Logic Controllers
%B Applications of Artificial Intelligence in Engineering Problems
%E D. Sriram
%E R. Adey
%V 2
%I Computational Mechanics Publications
%C Woburn, Massachusetts
%D 1986
%P 971-984
%K AA05
%X This system takes a generic description of the control task
to be performed and translates it into the language for a specific
make and model of programmable controller.

%A Ernesto Guerrieri
%A Vinod Grover
%T Octtree Solid Modeling with Prolog
%B Applications of Artificial Intelligence in Engineering Problems
%E D. Sriram
%E R. Adey
%V 2
%I Computational Mechanics Publications
%C Woburn, Massachusetts
%D 1986
%P 985-1002
%K T02 AA05
%X Shows how the OCTTREE data structure for representing objects can be
entered as Prolog facts, and how union, interference checking, and
neighbor finding are performed upon them.
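Union and interference checking on octrees reduce to simple recursion over
the eight octants; the three-valued node representation below is an
illustrative assumption (the paper encodes the tree as Prolog facts, not as
the structure shown here).

```python
# Toy octree: a node is "full", "empty", or a list of 8 child octants.
# Trees are assumed normalized (no all-empty subdivided nodes).

def union(a, b):
    """Recursive union of two octrees."""
    if a == "full" or b == "full":
        return "full"
    if a == "empty":
        return b
    if b == "empty":
        return a
    return [union(ca, cb) for ca, cb in zip(a, b)]

def interferes(a, b):
    """True if the two solids overlap anywhere."""
    if a == "empty" or b == "empty":
        return False
    if a == "full" or b == "full":
        return True
    return any(interferes(ca, cb) for ca, cb in zip(a, b))

a = ["full"] + ["empty"] * 7            # solid occupying octant 0
b = ["empty", "full"] + ["empty"] * 6   # solid occupying octant 1
merged = union(a, b)
clash = interferes(a, b)
```

In Prolog the same recursion would be stated as clauses over octant facts,
with unification doing the case analysis.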

%A G. Goldbogen
%A D. Ferrucci
%T Extending the Octree Model to Include Knowledge for Manufacturing
%B Applications of Artificial Intelligence in Engineering Problems
%E D. Sriram
%E R. Adey
%V 2
%I Computational Mechanics Publications
%C Woburn, Massachusetts
%D 1986
%P 1003-1012
%K T01 T02 AA26
%X Describes feature extraction algorithms on octtrees for
features such as hole boundaries.

%A C. B. Bouleeswaran
%A H. G. Fischer
%T A Knowledge Based Environment for Process Planning
%B Applications of Artificial Intelligence in Engineering Problems
%E D. Sriram
%E R. Adey
%V 2
%I Computational Mechanics Publications
%C Woburn, Massachusetts
%D 1986
%P 1013-1028
%K AA26 AI01
%X describes a system that generates process plans for rotational parts
such as screws.  The system supports integrated design of the part
to be machined and the manufacturing process to use on it.

%A Joao P. Martins
%A Stuart C. Shapiro
%T Hypothetical Reasoning
%B Applications of Artificial Intelligence in Engineering Problems
%E D. Sriram
%E R. Adey
%V 2
%I Computational Mechanics Publications
%C Woburn, Massachusetts
%D 1986
%P 1029-1042
%K AI16
%X Discusses a general-purpose tool to allow users to raise hypotheses,
reason from them, discard various hypotheses, and perform the appropriate
truth maintenance.  The system uses contexts to avoid backtracking.
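The context mechanism described, raising hypotheses and later discarding
them without backtracking, can be sketched by tagging each derived belief
with the set of hypotheses it depends on; the facts and hypothesis names
below are assumptions for illustration.

```python
# Each belief carries the set of hypotheses it rests on; discarding a
# hypothesis invalidates exactly the beliefs that depend on it, with no
# chronological backtracking.
beliefs = {}  # fact -> frozenset of supporting hypotheses

def assume(fact, hypotheses):
    """Record a belief together with the hypotheses it depends on."""
    beliefs[fact] = frozenset(hypotheses)

def discard(hypothesis):
    """Drop every belief that depends on the discarded hypothesis."""
    for fact in [f for f, h in beliefs.items() if hypothesis in h]:
        del beliefs[fact]

assume("wet-grass", {"rained"})
assume("slippery", {"rained", "tiled-path"})
assume("sunny", set())   # depends on no hypothesis; always survives
discard("rained")
```

Full assumption-based truth maintenance also tracks justifications between
beliefs; this sketch shows only the dependency-tagging idea.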



%A Robert Milne
%T Fault Diagnosis Using Structure and Function
%B Applications of Artificial Intelligence in Engineering Problems
%E D. Sriram
%E R. Adey
%V 2
%I Computational Mechanics Publications
%C Woburn, Massachusetts
%D 1986
%P 1043-1054
%K AA19 AI01
%X A troubleshooting paradigm called the "Theory of Responsibilities"
is introduced and applied to testing circuits.   It works from "second
principles" in assigning responsibility for various parts of the output
waveform to various components of the circuit.

%A D. Sharma
%A B. Chandrasekaran
%A D. Miller
%T Dynamic Procedure Synthesis, Execution, and Failure Recovery
%B Applications of Artificial Intelligence in Engineering Problems
%E D. Sriram
%E R. Adey
%V 2
%I Computational Mechanics Publications
%C Woburn, Massachusetts
%D 1986
%P 1055-1072
%K AA05 nuclear power plant AI01 AI09
%X Describes a system for planning failure recovery, synthesis,
monitoring for nuclear power plants.  A comparison of the "event-oriented"
and "function oriented" approaches to nuclear power plant management
is provided.  The nuclear industry is shifting to the latter in reaction
to the TMI difficulties.  The implications of this for expert system
applications and an example from reactor scram concerns are also
provided.  Various plan templates and blackboards are used in processing.
The final expert system consists of system specialists, specialists in
various kinds of undesirable events and specialists in various kinds of goals
such as reducing radioactivity.

%A B. Demo
%A M. Tilli
%T Expert System Functionalities for Database Design Tools
%B Applications of Artificial Intelligence in Engineering Problems
%E D. Sriram
%E R. Adey
%V 2
%I Computational Mechanics Publications
%C Woburn, Massachusetts
%D 1986
%P 1073-1082
%K AI01 AA09
%X Discusses the CARS system, an expert system for the design of databases.

%A Geoffrey D. Gosling
%A Anna M. Okseniuk
%T SLICE - A System for Simulation Through a Set of Cooperating Experts
%B Applications of Artificial Intelligence in Engineering Problems
%E D. Sriram
%E R. Adey
%V 2
%I Computational Mechanics Publications
%C Woburn, Massachusetts
%D 1986
%P 1083-1096
%X This paper describes simulation tools to investigate the application
of expert systems to aircraft control environments.

%A T. J. Grant
%T Maintenance Engineering Management Applications of Artificial Intelligence
%B Applications of Artificial Intelligence in Engineering Problems
%E D. Sriram
%E R. Adey
%V 2
%I Computational Mechanics Publications
%C Woburn, Massachusetts
%D 1986
%P 1097-1122
%K AA21 AI01 AA05
%X This is a survey of potential applications of artificial intelligence
to managing the maintenance of aircraft.  An interesting comment is
that twenty percent of all faults are novel (no one had ever seen them before).
These faults required twice as many repair hours to fix as the average
fault.  For any given diagnostician, the number of faults that he never
saw before approaches sixty percent.  It is interesting to note that
63 percent of the Royal Air Force's manpower is employed doing maintenance.

%A Benoit Faller
%T Expert Systems in Meteorology
%B Applications of Artificial Intelligence in Engineering Problems
%E D. Sriram
%E R. Adey
%V 2
%I Computational Mechanics Publications
%C Woburn, Massachusetts
%D 1986
%P 1123-1127
%K AA16 AI01
%X Presents expert systems for forecasting airport fog-in conditions,
storms, and avalanche risks.  Fogs are predicted in the afternoon
for the following morning.

%A Karl-Erik Arzen
%T Expert Systems for Process Control
%B Applications of Artificial Intelligence in Engineering Problems
%E D. Sriram
%E R. Adey
%V 2
%I Computational Mechanics Publications
%C Woburn, Massachusetts
%D 1986
%P 1127-1138
%K AA05 AI01
%X Discusses uses of expert systems with various control-system concepts
such as the Ziegler-Nichols auto-tuner and the smart PID controller.


%A Atsumi Imamiya
%A Akoio Kondoh
%A Akiyoshi Miyatake
%T An Artificial Intelligence Approach to the Modeling of the User-Computer
Communications
%B Applications of Artificial Intelligence in Engineering Problems
%E D. Sriram
%E R. Adey
%V 2
%I Computational Mechanics Publications
%C Woburn, Massachusetts
%D 1986
%P 1139-1151
%K AA08 AA15 AI01
%X describes a system to automate the production of help systems
for software.

%A John R. Hogley
%A Alan R. Korncoff
%T Artificial Intelligence in Engineering: A Revolutionary Change
%B Applications of Artificial Intelligence in Engineering Problems
%E D. Sriram
%E R. Adey
%V 2
%I Computational Mechanics Publications
%C Woburn, Massachusetts
%D 1986
%P 1155-1160
%K AA05 AI01
%X This is a general article devoid of technical content.

%A Kai-li Kan
%T Expert Systems in Telecommunications Network Planning and Design
%B Applications of Artificial Intelligence in Engineering Problems
%E D. Sriram
%E R. Adey
%V 2
%I Computational Mechanics Publications
%C Woburn, Massachusetts
%D 1986
%P 1161-1165
%K AI01 AA08
%X Discusses the implications of expert systems in designing networks.
The author belongs to the "Strategic Technology Assessment" department
of Pacific Bell.

%A Ye-Sho Chen
%T Expert System for On-Line Quality Control
%B Applications of Artificial Intelligence in Engineering Problems
%E D. Sriram
%E R. Adey
%V 2
%I Computational Mechanics Publications
%C Woburn, Massachusetts
%D 1986
%P 1165-1174
%K AI01 AA05 AA21 automotive O04 Pareto
%X Discusses a diagnostic system for automobile brakes.
The system uses Pareto optimality to assist in uncertainty calculus.

%A K. M. Chalfan
%T An Expert Executive Which Integrates Heterogenous Computational Programs
%B Applications of Artificial Intelligence in Engineering Problems
%E D. Sriram
%E R. Adey
%V 2
%I Computational Mechanics Publications
%C Woburn, Massachusetts
%D 1986
%P 1175-1174
%K AI01 AA05 aerospace preliminary design
%X Discusses a proposed system to automate weight, aerodynamics,
propulsion and performance codes in the preliminary design of airplanes.

%A A. Kissil
%A A. Kamel
%T An Expert System Finite Element Modeler
%B Applications of Artificial Intelligence in Engineering Problems
%E D. Sriram
%E R. Adey
%V 2
%I Computational Mechanics Publications
%C Woburn, Massachusetts
%D 1986
%P 1179-1186
%K AI01 AA05
%X Discusses an expert system for use in generating meshes for
finite elements.  Includes a discussion of heuristics that will generate
a mesh with a desired accuracy.  This is done by comparing a parametric
distortion of a region whose stresses are known with the unknown region.

%A Paul F. Monaghan
%A James G. Doheny
%T Knowledge Representation in the Conceptual Design Process for Building
Energy Systems
%B Applications of Artificial Intelligence in Engineering Problems
%E D. Sriram
%E R. Adey
%V 2
%I Computational Mechanics Publications
%C Woburn, Massachusetts
%D 1986
%P 1187-1192
%K HVAC AA05 AI01
%X Discusses using expert systems and hierarchies in the design of
HVAC systems (Heating, Ventilation, and Air Conditioning).

%A Prem Kumar Kalra
%T Development of Expert System for Fault Diagnosis in HVDC Systems Using
Spectral Approach
%B Applications of Artificial Intelligence in Engineering Problems
%E D. Sriram
%E R. Adey
%V 2
%I Computational Mechanics Publications
%C Woburn, Massachusetts
%D 1986
%P 1193-1198
%K AA21 AA05 AI01
%X Discusses using the Fast Fourier Transform, the Fast Walsh Transform,
and expert systems to help diagnose
high-voltage DC and analog systems.

------------------------------

End of AIList Digest
********************
28-Jul-87 23:34:14-PDT,14082;000000000000
Mail-From: LAWS created at 28-Jul-87 23:25:15
Date: Tue 28 Jul 1987 23:21-PDT
From: AIList Moderator Kenneth Laws <AIList-REQUEST@STRIPE.SRI.COM>
Reply-to: AIList@STRIPE.SRI.COM
US-Mail: SRI Int., 333 Ravenswood Ave., Menlo Park, CA  94025
Phone: (415) 859-6467
Subject: AIList V5 #190 - Msc.
To: AIList@STRIPE.SRI.COM


AIList Digest           Wednesday, 29 Jul 1987    Volume 5 : Issue 190

Today's Topics:
  Queries - AI/Graphics & Examples of KEE Frames &
    Problem Recognition in Prolog Databases &
    NLP Front Ends to INGRES,
  Policy - Virtual Sublists,
  Philosophy - Natural Kinds

----------------------------------------------------------------------

Date: Mon, 27 Jul 87 14:03:36 BST
From: mcvax!ux.cs.man.ac.uk!arnold@seismo.CSS.GOV
Reply-to: thoward@uk.ac.man.cs.cgu
Subject: AI/Graphics: help wanted


I am currently investigating what work has been done on connecting/
integrating AI methods and computer graphics. I would be very grateful
if anyone can send me any references, or bibliographies (or comments!)
etc in this area. If there's enough interest, I will summarise responses.
Please mail to me directly, *not* to the source of this posting, as it's
not my own account. Thanks...

Toby Howard                    Janet: thoward@uk.ac.man.cs.cgu
University of Manchester       ARPA:  thoward%cgu.cs.man.ac.uk@cs.ucl.ac.uk
Computer Graphics Unit         Phone: +44 61 273 7121 x5429/5406

------------------------------

Date: 28 Jul 87 04:46:37 GMT
From: munnari!uqcspe.OZ!twine@uunet.UU.NET (Steven Twine)
Subject: Examples of KEE frames Requested


I am currently revising a semantic analysis of KEE's frame language.
By semantic analysis, I mean trying to answer the question
   What facts does X encode about the current Universe of Discourse
where X is each of the syntactic ingredients in a KEE knowledge base
(units, slots, links etc).
This is not as simple as it seems, because a given KEE construct can
represent many different things (as Brachman showed for IsA links).

Anyway, in revising this paper, I would like to add many more examples
of KEE structures that have been used in practice, for the
purpose of analysing the facts that they encode.  I am particularly
interested in any ambiguous or otherwise tricky examples that I can
test my interpretations out on.  I would appreciate any examples of
KEE units etc that people could send me for this purpose (examples in
other frame languages may also be useful, but KEE is preferred)

All senders will get a lovely acknowledgement at the end of the paper
(what an incentive!) as well as my heartfelt gratitude.

Thanks in advance,  folks!

=========================================================================
Steven Twine,                   ARPA:   twine%uqcspe.oz@seismo.css.gov
Department of Computer Science, ACSnet: twine@uqcspe.oz
University of Queensland,       UUCP:   seismo!munnari!uqcspe.oz!twine
St Lucia, 4067.                 CSNET:  twine@uqcspe.oz
AUSTRALIA.                      JANET:  uqcspe.oz!twine@ukc

------------------------------

Date: 26 Jul 87 20:37:16 GMT
From: dartvax!balu.UUCP@seismo.css.gov (Balu Raman)
Subject: Problem recognition in Prolog database


I am working on recognizing problem instances in a Prolog database.  The
problems can be typical graph-coloring, linear programming, or critical path
problems, etc.  Does anybody in netland have references, pointers, or Prolog
programs to do what I am trying to do?

thanks in advance.
Balu Raman.

------------------------------

Date: Mon, 27 Jul 87 08:37:24 PDT
From: vor!cris%esosun.UUCP@sdcsvax.ucsd.edu (Cris Kobryn)
Subject: NLP Front-Ends to INGRES

I am interested in developing an NLP front-end to INGRES.  Lest I
reinvent: Is there any "stock" software which already does this?
(INTELLECT does not *currently* accommodate INGRES; I've heard "DataTalker"
mentioned as a possibility, but have no details--capabilities, company name,
phone#, etc.)

Re building an NLP front-end:  Prolog's DCG's (Definite Clause Grammars)
seem to provide an attractive tool to construct an NLP front-end.  I would
appreciate feedback re their effectiveness, and pointers to work done or
being done relevant to this interest.
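In the spirit of the DCG approach mentioned above, even a toy grammar can
map a fixed English pattern to a query; the sentence pattern and schema
below are assumptions for illustration and are not INGRES- or
INTELLECT-specific.

```python
# Tiny NL-to-query sketch: parse "list NOUN with FIELD VALUE" into a
# SQL-like string.  A Prolog DCG would express this grammar
# declaratively; the fixed five-word pattern is a toy assumption.
def parse(sentence):
    words = sentence.lower().split()
    if len(words) == 5 and words[0] == "list" and words[2] == "with":
        table, field, value = words[1], words[3], words[4]
        return f"SELECT * FROM {table} WHERE {field} = '{value}'"
    raise ValueError("sentence not covered by the grammar")

query = parse("list employees with dept geophysics")
```

A real front end would need a far richer grammar plus a mapping from nouns
to relations, which is where DCGs earn their keep.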

I will be glad to summarize and post if the response merits it.

-- Cris Kobryn

+----------------------------------------------------------------------------+
|   Cris Kobryn                         UUCP:  {sdcsvax|seismo}!esosun!cris  |
|   Geophysics Division, MS/22          ARPA:  esosun!cris@seismo.css.gov    |
|   SAIC                                SOUND: (619)458-2697                 |
|   10210 Campus Point Drive                                                 |
|   San Diego, CA  92121                                                     |
+----------------------------------------------------------------------------+

------------------------------

Date: 27 Jul 87 14:28 PDT
From: Ghenis.pasa@Xerox.COM
Subject: PROPOSAL: We need "virtual sublists"

The recent meta-discussion on what to include in the Digest was rather
similar to the one about whether to include the AI Expert code listings.
At that time I made a proposal that may have drowned in the noise. I
still think it would solve the filtering problem so here it goes:


PROBLEM:

You can't tell what is inside the digest until you start reading it. The
title is non-descriptive. How does an AIList reader filter unwanted
topics?

If a reader has an unsophisticated mail reading channel, there is an
irritating time cost to opening an unwanted 20,000 character message.
This is even worse for folks who read their mail through a modem
connection.

Proposing the creation of a new list for each topic that generates a
large mail volume is not only unrealistic but also unnecessary.



SOLUTION:

The moderator is already thoughtful enough to segregate topics so that
each digest is fairly homogeneous. Now if only the "Subject:" line could
read

        AIList V5 #183 - Symbol Grounding

instead of

        AIList Digest   V5 #183

then it would be easy to filter topics even with the crudest of mail
programs, and our personal archives would also be much more descriptive
at the table-of-contents level.
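With a descriptive subject line, even the crudest mail program could apply
a filter like the sketch below; the topic names are assumptions standing in
for whatever a given reader wants to skip.

```python
# Route a digest by matching its Subject: header against a personal
# list of unwanted topics.
UNWANTED = ("symbol grounding", "source code")

def keep(subject):
    """True if the digest's subject matches none of the unwanted topics."""
    s = subject.lower()
    return not any(topic in s for topic in UNWANTED)

wanted = keep("AIList V5 #183 - Natural Kinds")
filtered = keep("AIList V5 #184 - Symbol Grounding")
```

This is exactly the filtering that a bare "AIList Digest V5 #183" subject
line makes impossible.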



I believe that this scheme would address the objections of folks who
voted against continuing to distribute symbol grounding messages or
source code listings.



MODERATOR: Would this be a difficult change to implement?

FELLOW READERS: Is this proposal missing the point? Is there anything
else we could do to better prepare for the next large discussion? Should
we move this discussion to the META-META-DISCUSSIONS list? :-)


Pablo Ghenis
Xerox Artificial Intelligence Systems
Educational Services


  [This has been suggested several times, by several people, so
  I might as well give it a try.  I am reminded, though, of a
  parody of Reader's Digest that condensed an entire Hemingway
  novel to the word "Bang!".  A good many digests will have to
  be tagged as "Msc.", including this one.

  I really don't see the advantage in the longer subject line,
  but perhaps that is because my mailer clips the subject at about
  40 characters.  The cost of examining the full Topics section
  is only about one page of data.  (Are there really mailers out
  there that let you read the subject line without the cost of
  "pulling in" the entire digest?)

  What is really needed here is an intelligent mail-reading system.
  I'm sure that special digest-reading commands could -- but
  probably won't -- be added to any of our mailers.  Even better
  would be an intelligent Information Lens system.  Won't someone
  take this on as an AI project?  -- KIL]

------------------------------

Date: 27 Jul 87 09:45:19 edt
From: Walter Hamscher <hamscher@ht.ai.mit.edu>
Subject: Natural Kinds (Re: AIList Digest   V5 #186)

Your functional description of "chair" does capture more of "what's
essential to chairs" than the structural description could.  Some
quibbles, however.  First, it includes couches since it doesn't say
that it's for exactly one person.  Second, it doesn't seem to include
"Balenz" chairs, those kind in which the person rests on his/her
shins, since the "support for one's back" is rather indirect -- what
they do is to make it easier to balance the spine by tilting the
pelvis forward.  Third, some people might say that Balenz chairs
aren't chairs at all, but stools, because the back support is indirect
-- the point being that the functional description might have to take
into account who's saying what about chairs to whom.  Probably, other
Ailist readers will come up with more borderline cases, which brings
me to the speculation that functional descriptions may end up with as
many exceptions as structural descriptions do.

------------------------------

Date: Mon, 27 Jul 1987  11:16 EDT
From: MINSKY%OZ.AI.MIT.EDU@XX.LCS.MIT.EDU
Subject: Natural Kinds (Re: AIList Digest   V5 #186)


I agree:
   1. Yes, I think we'd all agree that a chair is for 1 person to sit on.
   2. The boundary is fuzzy, indeed, and some people might not
      consider a Balenz chair to be a chair.
   3. Yes, indeed, the "functional description" does indeed depend on
whose "intention" is involved, and upon who is saying what to whom.

My point is not that such terms can be defined in foolproof, clear-cut
ways.  There are really two sorts of points.

1.  You can get much further in making good definitions by squeezing
in from both structural and functional directions - and surely others as well.

2.  In Society of Mind, section 30.1 I discuss how meanings must depend on
speakers, etc.

As Ken Laws remarked, we should not be too hasty to thank philosophers
for the concept of "natural kind".  McCarthy makes useful remarks about
penguins, which form a clear-cut cluster because of the speciation
mechanism of sexual reproduction.  The class is un-fuzzy even though,
as McCarthy notes, penguins have properties that scientists have not
yet discovered.

But then, I think, McCarthy defeats this clarity by proceeding to
discuss how children learn about chairs - and tries to subsume this,
too, into natural kinds.  He describes what seems clearly to be not
"natural" aspects of chairs, but the clustering and debugging
processes a child might use.

My conclusion - and, I'd bet, Ken Laws would agree - is that the
concept of "natural kind" has an illusory generality.  It seems to me
that, rather than good philosophy, it is merely low-grade science
contaminated by naive, traditional common sense concepts.  The
clusters that have good boundaries, in the world, usually have them
for good - but highly varied reasons.  Animals form good clusters
because of Darwinian speciation of various sorts.  Certain metals,
like Gold, have "natural" boundaries because of the Pauli exclusion
principle which causes things like periodic tables of elements.
Philosophers like to speak about gold - but their arguments won't work
so well for Steel, whose boundary is fuzzy because there are so many
ways to strengthen iron.  All in all, the clusters we perceive that
have sharp boundaries are quite important, pragmatically, but exist
for such a disorderly congeries of reasons that I consider the
philosophical discussion of them to be virtually useless in this
sense: the class of clusters with "suitably sharp" boundaries, such as to
deserve the title "natural kinds", is itself too fuzzy a concept to
help us clarify the nature of how we think about things.

------------------------------

Date: Mon, 27 Jul 87 09:57:26 MDT
From: shebs@cs.utah.edu (Stanley Shebs)
Reply-to: cs.utah.edu!shebs@cs.utah.edu (Stanley Shebs)
Subject: Re: Natural Kinds (Re: AIList Digest   V5 #186)

In article <MINSKY.12320404487.BABYL@MIT-OZ> MINSKY@OZ.AI.MIT.EDU writes:

>About natural kinds.  In "The Society of Mind", pp123-129, I propose a
>way to deal with Wittgenstein's problem of defining terms like "game"-
>or "chair".  The basic idea was to probe further into what
>Wittgenstein was trying to do when he talked about "family
>resemblances" and tried to describe a game in terms of properties, the
>way one might treat members of a human family: build, features, colour
>of eyes, gait, temperament, etc.

>[... details of Wittgenstein vs Minsky :-) ...]

>I would appreciate comments, because I think this may be an important
>theory, and no one seems to have noticed it. [...]

I recently finished reading "Society of Mind", and quite enjoyed it.
There are a lot of interesting ideas.  There are also many that are
familiar to people in the field, but with new syntheses that make the
ideas much more plausible than in the past.  I had been getting cynical
about AI, but after reading this, I wanted to go and hack out programs
to test the hypotheses about action, and memory, and language.  But there's
a serious problem: how *can* these hypotheses be tested?  The society of
mind follows human thinking so closely that any implementation is going
to be a model of human minds rather than minds in general, and will probably
be handicapped by being too small and simple to be recognizably human-like
in its behavior.  Tracing a mind society's behavior will generate lots
of data but little insight.  So my ardor has been replaced by odd moments
speculating on tricky but believable tests, and a greater appreciation for
people interested in a more formal approach to minds.

Getting down to specifics, the theory about recognition of objects by either
structure or functions was one of the parts I really liked.  A robot should
be able to sit on a desk without getting neurotic, or to sit carefully on
a chair that's missing one leg...

                                                        stan shebs

------------------------------

End of AIList Digest
********************
29-Jul-87 22:03:07-PDT,12036;000000000000
Mail-From: LAWS created at 29-Jul-87 21:56:03
Date: Wed 29 Jul 1987 21:51-PDT
From: AIList Moderator Kenneth Laws <AIList-REQUEST@STRIPE.SRI.COM>
Reply-to: AIList@STRIPE.SRI.COM
US-Mail: SRI Int., 333 Ravenswood Ave., Menlo Park, CA  94025
Phone: (415) 859-6467
Subject: AIList V5 #191 - LISP Techniques
To: AIList@STRIPE.SRI.COM


AIList Digest           Thursday, 30 Jul 1987     Volume 5 : Issue 191

Today's Topics:
  Techniques - Graphics-AI References &
    Garbage Collection Suppression

----------------------------------------------------------------------

Date: Tue, 28 Jul 87 13:56:14 pdt
From: Eugene Miya N. <eugene@ames-pioneer.arpa>
Subject: Re: Graphics-AI bibliography

>I am currently investigating what work has been done on connecting/integrating
>AI methods and computer graphics. I would be very grateful if anyone can
>send me any references, or bibliographies (or comments!) etc in this area.
>                          - Toby Howard -

Computer graphics: get the current edition of ACM Computer Graphics (the
Quarterly).  It has the yearly bibliography in computer graphics (June 1987).
There is a biblio for June 1986 which covers the year 1985.  Recently, we
had a meeting where we had a speaker from SRI cover some of the common
ground (Sandy Pentland) because we perceived that the AI people were
reinventing what the graphics people invented 20 years ago.

--eugene miya
  Bay Area ACM/SIGGRAPH

------------------------------

Date: Wed, 29 Jul 87 11:58 PDT
From: nesliwa%telemail@ames.arpa (NANCY E. SLIWA)
Subject: Garbage Collection Suppression (Response Summary)


My thanks to all the respondents to my question about garbage
collection suppression. As several people asked for the results, I'm
posting them for all:

    Date: Friday, 17 July 1987  07:32-CDT
    From: nancy at grasp.cis.upenn.edu (Nancy Orlando)

    Are there any "accepted" methods of writing code that minimize a LISP's
    tendency to garbage-collect? I don't mean a switch to turn it off;
    just a means of minimizing the need for it. I'm dealing particularly with
    DEC VAX lisp. I have assumed that iteration as opposed to recursion was
    one way; is this correct?

From: Chester@UDEL.EDU
Subject:  Re: Garbage Collection Suppression

The direct way to avoid garbage collection in lisp is to define your own `cons'
function that prefers to get cell pairs from an `available list', calling the
regular `cons' only when the `available list' is empty.  A `reclaim' function
that puts cell pairs on the `available list' (using `rplacd') will be needed
also.  See any book on data structures.  The technique can be used for cell
pairs and gensym atoms, if needed, but in my experience, not with strings or
numbers.  String manipulations can usually be avoided, but a program that
crunches a lot of numbers cannot avoid consuming memory and eventually
triggering garbage collection (at least in VAX lisp).  I wish there were some
way for a user to reclaim numbers so that they could be reused as cell pairs
can.  If so, I could write all my lisp programs so that they don't need to
garbage collect.  It would also be nice to have a built-in `reclaim' function
that would work in conjunction with the built-in `cons'; it would be dangerous
for novices, but handy for the experienced.
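
In outline, the free-list technique Chester describes might look like
this (a sketch only; the names `my-cons', `reclaim', and
`*available-list*' are invented for illustration):

```lisp
(defvar *available-list* nil)          ; dead cell pairs awaiting reuse

(defun my-cons (a d)
  (if *available-list*
      (let ((cell *available-list*))
        (setq *available-list* (cdr cell))
        (rplaca cell a)
        (rplacd cell d)
        cell)
      (cons a d)))                     ; free list empty: call the real cons

(defun reclaim (cell)
  (rplacd cell *available-list*)       ; link the cell onto the free list
  (setq *available-list* cell))
```

As long as the program reclaims cells it knows are dead, `my-cons' never
grows the heap and the garbage collector never needs to run for list
storage.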

By the way, recursion in itself doesn't cause garbage collection; VAX lisp is
smart enough to reclaim the memory used for the function stack automatically.

Daniel Chester
chester@dewey.udel.edu

Date: Mon, 20 Jul 87 01:36:44 PDT
From: woutput@ji.Berkeley.EDU (Andrew Purshottam)
Subject: Re: Garbage Collection Suppression

Forgive me if my response is too trivial, but you omitted
the most important technique for reducing gc use: limiting the use,
implicit and explicit, of cons. Particularly nasty is the use of
append or append1 (not sure what that is called in CL) to build up a list
by adding elements to its end. This method uses O(n^2) cons cells,
where n is the length of the list built. Standard solutions include the use
of "accumulators", arguments which hold a partial result which
is modified in inner recursions and finally returned as the
value when the function returns; building the list backward
and maybe reversing it at the end; nconc, which uses O(n^2) time but
only O(n) space; or tconc structures, which keep a pointer to
the end of the list. (In prolog we have a cute method available:
putting an uninstantiated element at the end of the list, effectively
a "hole" that can be filled by an element and another hole.)

Note also that some popular functional programming techniques,
particularly those involving streams and higher-order procedures,
are quite greedy in cons cells, as they build intermediate lists,
most of whose elements are thrown away. The apply-append-mapcar
trick, the set functions like (filter 'pred 'list), union, and intersect
all do this if implemented in the obvious way, with the sets represented
as fully computed lists. The Black Book (Charniak/McDermott, AI Programming)
discusses more efficient ways to deal with this using generators, where
no more elements are computed than needed. (See also Abelson/Sussman
for a very readable (we inflict it on freshmen!) discussion of delay
and force.)

Again, excuse if this is too simple, no offense intended.

    Andy
--
    Cheers, Andy (...!ucbvax!woutput woutput@ji.berkeley.edu)
(cond ((lovep you (quote LISP)) (honk)) (t (return ())))

Date: Mon, 20 Jul 1987  05:32 CDT
From: AI.DUFFY@R20.UTEXAS.EDU
Subject: Garbage Collection Suppression

No.  You make garbage when you create data structures. Recursion v.
iteration has nothing to do with it, unless VAXlisp is more
brain-damaged than I already know it to be.

    Are there other techniques?

Use destructive list operations (e.g., NCONC instead of APPEND) when
you can.  If you have any arrays, structures, etc., that you are using
temporarily, you can resource them (make a bunch of them and push them
onto a list, pop one off when you want to use it, and when you are
finished with it, nullify its slots and push it back onto the list).
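
Duffy's resourcing scheme, in outline (a sketch only; the pool variable
and function names are invented for illustration):

```lisp
(defvar *scratch-pool* nil)            ; pool of reusable 100-element vectors

(defun allocate-scratch ()
  (if *scratch-pool*
      (pop *scratch-pool*)             ; reuse one rather than consing
      (make-array 100)))

(defun free-scratch (v)
  (fill v nil)                         ; nullify the slots first
  (push v *scratch-pool*))
```

After a warm-up period the pool reaches a steady size and no further
allocation occurs, so the garbage collector has nothing to do for these
structures.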

Your best bet, of course, is to get more memory.

Date: 20 Jul 87 09:32:01 edt
From: Walter Hamscher <hamscher@ht.ai.mit.edu>
Subject: Garbage Collection Suppression

Iteration vs. Recursion is orthogonal.  By using recursion where you
could have used iteration, you may be using _stack_ space, but that's
trivially `garbage collected' every time you return from a function
(this is non-tail recursion I'm talking about).  The only surefire way
to reduce garbage collection is to call CONS and MAKE-ARRAY (and
things that call them) less often.  There are a number of implications
of that for coding style (e.g. pass functions down instead of passing
consed structures up), but using iteration is not one of them.
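
Hamscher's "pass functions down" style might be sketched like this
(function names invented for illustration):

```lisp
;; Consing style: build and return a fresh list of matches.
(defun collect-matches (pred list)
  (let ((acc '()))
    (dolist (x list (nreverse acc))
      (when (funcall pred x) (push x acc)))))

;; Non-consing style: pass the consumer function down instead,
;; so no intermediate list is ever allocated.
(defun map-matches (fn pred list)
  (dolist (x list)
    (when (funcall pred x) (funcall fn x))))
```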

Date: Mon, 20 Jul 1987  10:12 EDT
From: "Scott E. Fahlman" <Fahlman@C.CS.CMU.EDU>
Subject: Consing

Nancy,

In order to avoid GC's, you have to write your code in a way that avoids
consing up data structures, especially in inner loops.  In the Vax
Common Lisp, I think that recursion is faster than the equivalent
iteration, but consing is not the reason; what you're seeing is the
difference between access to just a few variables (in registers or in
the cache) versus spreading out copies of those registers on the stack,
with all the associated memory references for pushing and popping.

How to avoid consing in any given Common Lisp is a complex topic.
Perhaps the DEC people have some training materials on how to do this in
their Lisp.  But there are a few things to watch for:

Make sure all your code is compiled.  A lot of Lisps cons furiously in
the interpreter while consing very much less in compiled code.

If you are consing up vectors and strings in some inner loop solely for
communication with other routines, consider passing the info in a single
pre-allocated vector instead.
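
Fahlman's pre-allocated-vector suggestion, as a sketch (the buffer and
the consumer argument are invented for illustration; the callee must
read the buffer immediately and never retain it):

```lisp
(defvar *comm-buffer* (make-array 3))  ; allocated once, reused on every call

(defun pass-point (consumer x y z)
  (setf (aref *comm-buffer* 0) x)
  (setf (aref *comm-buffer* 1) y)
  (setf (aref *comm-buffer* 2) z)
  (funcall consumer *comm-buffer*))    ; no fresh vector per call
```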

Some Lisps cons when passing &rest args and &keyword args.  Check this.

Often a bit of consing can make code clearer and easier to maintain.
Find who is doing the consing that is bothering you and squeeze that
part of the code for maximum efficiency; don't just squeeze everything,
because maintainability will be harmed.

-- Scott Fahlman

Date: Mon, 20 Jul 87 10:15:53 EDT
From: Mario O. Bourgoin <mob@MEDIA-LAB.MEDIA.MIT.EDU>
Subject: Re: Garbage Collection Suppression

Hello,
        What you want to do is avoid storage allocation operations.
Most of the methods for doing this are implementation dependent.  For
example, in Scheme iteration constructs expand to tail-recursive calls
so there's no point in trying to change function calls to do-loops.
Furthermore, good Lisp implementations optimize function calls since
they are the most used operation; they are usually cheaper than the
looping alternatives.
        Lisp compilers can usually do excellent optimizations if you
use the implementation's features.  For example, ZetaLisp offers a LOOP
iteration macro which allows the programmer to communicate to the
compiler the necessary information for the latter to produce the most
efficient code possible.
        What you can do reliably is to avoid using operations that
cons a lot such as `append' and use their structure modifying
alternatives such as `nconc'.  You should be careful to write your
programs with the modifying operations from the beginning to avoid
encountering problems with them if you change over from the consing
operations.
        Remember that operations such as the arithmetic functions must
allocate storage for their result.  It might be worth your while to
code basic operations and inner loops in another language such as `C'
to avoid allocation.

--Mario O. Bourgoin

(This next was in response to a follow-up request of mine, asking if call-outs
to non-lisp external routines helped decrease garbage collection.)

Date:     Thu, 23 Jul 87 9:11:18 EDT
From: Chester@UDEL.EDU
Subject:  Re:  garbagecollection

We have no experience with calling out to another language just to do
number crunching.  My guess is that the overhead of switching languages
and of communicating between them and lisp would be too much, but that is
just a guess.  If you find out differently, let me know.


Date: 22 Jul 87 14:28:51 GMT
From: "J. A. \"Biep\" Durieux" <mcvax!cs.vu.nl!biep@seismo.CSS.GOV>
Subject: Re: Garbage Collection Suppression


In article <8707202143.aa23792@Dewey.UDEL.EDU> Chester@UDEL.EDU writes:
>The direct way to avoid garbage collection in lisp is to define your own `cons'
>function that prefers to get cell pairs from an `available list' (...).

Also handy in many cases (small functions like append, alist-functions, subst)
is icons:

    (defun icons (a d cell)
      (cond ((and (eq (car cell) a) (eq (cdr cell) d)) cell)
            (t (cons a d))))

In this way whenever it turns out the new cells weren't really needed, the
old ones are used again (as in (append x nil)). Be aware, however, that your
copy-function may not work any more if it's defined as (subst nil nil x)!
--
                                             Biep.  (biep@cs.vu.nl via mcvax)

*****************************************************************************
I also noticed that the current (Aug.-Sept. 87) issue of LISP Pointers
has two good articles about garbage collection: "Overview of Garbage
Collection in Symbolic Computing," by Timothy J. McEntree (TI) and
"Address/Memory Management For A Gigantic LISP Environment or, GC
Considered Harmful," by Jon L. White (MIT). LISP Pointers subscriptions
are available from:
        LISP Pointers
        Mary S. Van Deusen, Editor
        IBM Watson Research
        PO Box 704
        Yorktown Heights, NY 10598

------------------------------

End of AIList Digest
********************
29-Jul-87 22:05:19-PDT,8479;000000000000
Mail-From: LAWS created at 29-Jul-87 22:01:27
Date: Wed 29 Jul 1987 21:59-PDT
From: AIList Moderator Kenneth Laws <AIList-REQUEST@STRIPE.SRI.COM>
Reply-to: AIList@STRIPE.SRI.COM
US-Mail: SRI Int., 333 Ravenswood Ave., Menlo Park, CA  94025
Phone: (415) 859-6467
Subject: AIList V5 #192 - Philosophy
To: AIList@STRIPE.SRI.COM


AIList Digest           Thursday, 30 Jul 1987     Volume 5 : Issue 192

Today's Topics:
  Philosophy - Philosophy-Bashing & AI as a Science &
    Natural Kinds & Iconic Representation

----------------------------------------------------------------------

Date: Mon, 27 Jul 87 15:40:12 pdt
From: ladkin@kestrel.ARPA (Peter Ladkin)
Subject: philosophy-bashing

i wish contributors to the ailist who indulge in philosophy could
refrain from including diffuse comments alluding to the lack of
worth of philosophy. philosophers do the same thing, so it's hard
to keep track of who's who.

peter ladkin
ladkin@kestrel.arpa

------------------------------

Date: 29 Jul 87 15:58:46 GMT
From: sbrunnoc@hawk.CS.ULowell.Edu (Sean Brunnock)
Reply-to: sbrunnoc@hawk.cs.ulowell.edu (S. Brunnock)
Subject: Re: Why AI is not a science

      Gentlemen, please! (my apologies to any women reading this)

      AI is a very young branch of science. Computer science as a whole
   is only a little more than 40 years old. How can you compare AI with
   mathematics or physics which are thousands of years old?

      Aristotle made some of the first stabs at elemental chemistry and
   gravitation. From our enlightened viewpoint, can we call him a scientist?

      Give it time; it's too early to tell.


                                   S. Brunnock

------------------------------

Date: 29 July 1987, 14:55:35 EDT
From: Andrew Taylor <ATAYLOR@ibm.com>
Subject: in defence of penguins (natural kinds)

Penguins have been the topic of some discussion. I'd like to correct
some misconceptions. Penguins are not one species; currently they
are classified into 18 species. Their inability to fly is not a
deficiency: their wings are merely adapted to a denser medium, water.
They are not the only flightless birds; there are 40+ species
of flightless birds (0.5% of all bird species).

It is not certain penguins are birds. In the past it was believed
that they were independently descended from the reptiles. It is possible
fossils will be found which will cause this belief to rise again.

Penguins may form a clear-cut group (order) to ornithologists, but
people less expert could easily classify other birds of similar
appearance and habits (e.g. auks) into the same group.

Unfortunately species are sometimes not clear cut either.
When two populations are separated, it can be difficult to decide
whether they are 1 or 2 species. Biologists often merge or split
species in new classifications.

People living close to nature (e.g. Amazon Indians) have "kinds"
which mostly correspond to species. Most of us are content with
kinds which lump together a number of species on the basis
of superficial similarities. These kinds often differ from
the classifications biologists make.

Andrew Taylor

------------------------------

Date: Wed, 29 Jul 87 08:43:05 -0200
From: Eyal mozes <eyal%wisdom.bitnet@jade.berkeley.edu>
Subject: Re: natural kinds

An important theory that has so far not been mentioned in the
discussion on "natural kinds" is the Objectivist theory of concepts.
In essence, this theory regards universal concepts, such as "chair" or
"bird", as the result of a process of "measurement-omission", which
mentally integrates objects by omitting the particular measurements of
their common characteristics.  The theory takes into account the point
mentioned in Minsky's recent message about structure and function, and
completely solves Wittgenstein's problem.

The theory is presented in the book "Introduction to Objectivist
Epistemology" by Ayn Rand, and, more recently, in the paper "A theory
of abstraction" by David Kelley (Cognition and Brain Theory, vol. 7
no. 3&4, summer/fall 1984, pp. 329-357).

        Eyal Mozes

        BITNET:                 eyal@wisdom
        CSNET and ARPA:         eyal%wisdom.bitnet@wiscvm.wisc.edu
        UUCP:                   ...!ihnp4!talcott!WISDOM!eyal

------------------------------

Date: Wed, 29 Jul 87 08:03:56 EDT
From: powell%mwcamis@mitre.arpa
Subject: Natural Kinds

  Minsky's notion of natural types involving both structure and function
does seem plausible.  One could think of each natural type as a
bipartite graph where one node class represents structural components
and where the other node type represents each function of the natural
type.  Connections between the two node classes would represent
(in a crude way) the way in which portions of each class relate to
the nodes of the other class.

  Even more specifically, the entire design foundations
as would be recorded in the data dependency net of an ATMS recording
the design process (function to structure) would capture still more about
the natural type.  This seems
like a bizarrely specific way to define a hazy notion like natural types,
but it does appear to follow naturally from Minsky's proposal.

------------------------------

Date: Wed 29 Jul 87 11:28:26-PDT
From: Ken Laws <Laws@Stripe.SRI.Com>
Subject: Structure, Function, and Intention

Minsky's initial message described function (of a chair) in terms
of intended use.  I don't believe he elaborated, but it seems
obvious that it could be either the designer of the chair or the
user who provides the intention.  (For instance, a chair designed
for one person does not become a couch just because two kids sit
on it at the same time.)  Semantic classification thus requires
at least three viewpoints: structure, intended function, and
perceived or implemented function.

                                        -- Ken

------------------------------

Date: Wed, 29 Jul 87 16:11:23 edt
From: amsler@flash.bellcore.com (Robert Amsler)
Subject: Re: Structural and Functional descriptions

Another division of information which I find significant is that of
visual vs. the combined structural and functional descriptions.  While a
visual description might be termed `structural' I think there is a
significant difference.  Visual information, i.e. information
obtained from looking at a visual still or moving image of an object,
is often not available in pre-recorded structural form.  It `may' be
possible to describe visual information in symbolic text, but it
can prove very hard to extract it from existing descriptions because
there is so much visual information to represent and often the
description doesn't contain the key element needed to answer a
question.

I first encountered this when looking at the information dictionaries
present for a word such as `horse'. They give definitions of all the
parts of a horse, but you cannot assemble a horse from these part
definitions accurately enough to answer a simple question such as
whether the horse's head is higher than its tail. (Dictionaries
almost universally have an illustration for a horse, which suggests
they know something about how hard it is to describe one by
definitions only). Initially I saw this as demonstrating the
complementarity of visual and definitional information, much in the
same manner that Minsky sees the complementarity of the structural
and functional descriptions. But now, it looks to be a more basic
problem. Even if you could assemble a horse from the definition plus
the static visual knowledge (e.g. add coordinates and a wire frame model of
a horse to the description), you couldn't animate it well enough to
answer questions (are all the feet ever off the ground simultaneously
while running?).

This probably suggests a simulation as the correct representation,
but often a simulation is really just a means of displaying the
visual representation of the object so you can perform the
observation needed on the simulated entity rather than on the real
entity. What this seems to imply is that ultimately the `description'
of an object should be a simulation accurate enough to permit direct
observation and generation of the functional and structural
information we know about the object.

------------------------------

End of AIList Digest
********************