----------

Date: 28 May 84 12:55:37-PDT (Mon)
From: hplabs!hao!seismo!cmcl2!floyd!vax135!cornell!jqj @ Ucb-Vax.arpa
Subject: Re: KS300 Question
Article-I.D.: cornell.195

KS300 is owned by (and a trademark of) Teknowledge, Inc.  Although
it is largely based on Emycin, it was extensively reworked for
greater maintainability and reliability, particularly for Interlisp-D
environments (the Emycin it was based on ran only on DEC-20
Interlisp).

Teknowledge can be reached by phone (no net address, I think)
at (415) 327-6600.

------------------------------

Date: Wed 30 May 84 19:41:17-PDT
From: Dikran Karagueuzian <DIKRAN@SU-CSLI.ARPA>
Subject: CSLI Report

         [Forwarded from the CSLI newsletter by Laws@SRI-AI.]

                New CSLI-Report Available

``Lessons from Bolzano'' by Johan van Benthem, the latest CSLI-Report,
is now available. To obtain a copy of Report No. CSLI-84-6, contact
Dikran Karagueuzian at 497-1712 (Casita Hall, Room 40) or Dikran at SU-CSLI.

------------------------------

Date: Thu 31 May 84 11:15:35-PDT
From: Al Davis <ADavis at SRI-KL>
Subject: Hardware Prototyping


On the issue of the Stone-Shaw wars: I doubt that there really is
a viable "research paradigm shift" in the holistic sense.  The main
problem we face in the design of new AI architectures is that there
is a distinct possibility that we can't simply let existing ideas
evolve.  If this is true, then the new systems will have to
incorporate a lot of new strategies, which creates a number of
complex problems:

        1.  Each new area means that our experience may not be
            valid.

        2.  Interactions between these areas, rather than the
            individual design choices, may be the problem; in
            particular, efficient consistency is a difficult
            thing to achieve.

In this light, it will be hard to do true experiments in which one
factor gets isolated and tested.  Computer systems are complex beasts,
and the problem is even harder when there are few fundamental metrics
that can be applied microscopically to indicate success or failure.
Macroscopically, there is always cost/performance for job X or set of
tasks Y.

The experience will come at some point, but not soon in my opinion.
It will be important for people like Shaw to go out on a limb and
communicate the results to the extent that they are known.  At some
point from all this chaos will emerge some real experience that will
help create the future systems which we need now.  I for one refuse to
believe that an evolved Von Neumann architecture is all there is.

We need projects like DADO, Non-Von, the Connection Machine, ILLIAC,
STAR, Symbol, the Cosmic Cube, MU5, S1, ... (the list goes on for a
long time): given the opportunity, a lot can be learned about
alternative ways to do things.  In my view the product of research is
knowledge about what to do next.  Even at the commercial level, very
interesting machines have failed miserably (cf. the B1700 and the CDC
STAR), while rather ho-hum dingers (the M68000, the IBM 360, and the
Prime clones) have been tremendous successes.

I applaud Shaw and company for giving it a go, along with countless
others.  They will almost certainly fail to beat IBM in the
marketplace.  Hopefully they aren't even trying.  Every 7 seconds
somebody buys an IBM PC -- if that isn't an inspiration for any
budding architect to do better, then what is?

Additionally, the big debate over whether CS or AI is THE way is
absurd.  CS has a lot to do with computers and little to do with
science, and AI has a lot to do with artificial and little to do with
intelligence.  Both have given us, and will give us, something
worthwhile, and a lot of drivel too.  The "drivel factor" could be
radically reduced if egotism and ambition were replaced with honesty
and responsibility.

Enough said.

                                        Al Davis
                                        FLAIR

------------------------------

Date: Mon, 28 May 84 14:28:32 PDT
From: Charlie Crummer <crummer@AEROSPACE>
Subject: Identity

    The thing about sameness and difference is that humans create them; back
to the metaphor and simile question again.  We say, "Oh, he's the same old
Bill," and in some sense we know that Bill differs from "old Bill" in many
ways we cannot know.  (He got a heart transplant, ...)  We define by
declaration the context within which we organize the set of sensory
perceptions we call Bill, and within that context we recognize "the same old
Bill" and think that the sameness is an attribute of Bill!  No wonder the
eastern sages say that we are asleep!

[Read Hubert Dreyfus' book "What Computers Can't Do".]

  --Charlie

------------------------------

Date: Wed, 30 May 1984  16:15 EDT
From: MONTALVO%MIT-OZ@MIT-MC.ARPA
Subject: A restatement of the problem (phil/ai)

  From: (Alan Wexelblat) decvax!ittvax!wxlvax!rlw @ Ucb-Vax

  Suppose that, while touring through the grounds of a Hollywood movie
  studio, I approach what, at first, I take to be a tree.  As I come
  near to it, I suddenly realize that what I have been approaching is,
  in fact, not a tree at all but a cleverly constructed stage prop.

  So, let me re-pose my original question: As I understand it, issues of
  perception in AI today are taken to be issues of feature-recognition.
  But since no set of features (including spatial and temporal ones) can
  ever possibly uniquely identify an object across time, it seems to me
  (us) that this approach is a priori doomed to failure.

Spatial and temporal features, and other properties of objects that
have to do with continuity and coherence in space and time DO identify
objects in time.  That's what motion, location, and speed detectors in
our brains do.  Maybe they don't identify objects uniquely, but they
do a good enough job most of the time for us to make the INFERENCE of
object identity.  In the example above, the visual features remained
largely the same or changed continuously --- color, texture normalized
by distance, certainly continuity of boundary and position.  It was
the conceptual category that changed: from tree to stage prop.  These
latter properties are conceptual, not particularly visual (although
presumably it was minute visual cues that revealed the identity in the
first place).  The bug in the above example is that no distinction is
made between visual features and higher-level conceptual properties,
such as what a thing is for.  Also, identity is seen to be this
unitary thing, which, I think, it is not.  Similarities between
objects are relative to contexts.  The above stage prop had
spatio-temporal continuity (i.e., identity) but not conceptual
continuity.
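
To make the point concrete, here is a minimal sketch of identity-by-
continuity (Python is chosen only for illustration; the frame
representation, the "position" and "features" fields, and the
thresholds are assumptions of the sketch, not anything proposed
above).  A detection inherits the identifier of whatever previous
object it continues in position and low-level appearance; the
conceptual label is never consulted.

    import math

    def feature_distance(a, b):
        # Euclidean distance in a simple feature space (e.g. color, texture).
        return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

    def track_identity(prev_objects, detections,
                       max_move=10.0, max_feature_change=0.2):
        # prev_objects: {object_id: {"position": ..., "features": ...}}
        # detections:   [{"position": ..., "features": ...}, ...]
        # Returns {object_id: detection}, reusing an old id whenever a
        # detection continues that object's position and appearance.
        assignments = {}
        next_id = max(prev_objects, default=0) + 1
        for det in detections:
            best_id, best_cost = None, None
            for obj_id, obj in prev_objects.items():
                move = feature_distance(obj["position"], det["position"])
                change = feature_distance(obj["features"], det["features"])
                if move <= max_move and change <= max_feature_change:
                    cost = move + change
                    if best_cost is None or cost < best_cost:
                        best_id, best_cost = obj_id, cost
            if best_id is None:          # no continuous predecessor: new object
                best_id, next_id = next_id, next_id + 1
            assignments[best_id] = det   # conceptual label never enters into it
        return assignments

On this account, the "tree" and the stage prop in the example above
would keep one and the same identifier throughout the approach; only
the conceptual category attached to that identifier changes.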

Fanya Montalvo

------------------------------

Date: Wed, 30 May 84 09:18 EDT
From: Izchak Miller <Izchak%upenn.csnet@csnet-relay.arpa>
Subject: The experience of cross-time identity.

      A follow-up to Rosenberg's reply [greetings, Jay].  Most
commentators on Alan's original statement of the problem have failed to
distinguish between two different (even if related) questions:
   (a) what are the conditions for the cross-time (numerical) identity
       of OBJECTS, and
   (b) what are the features constitutive of our cross-time EXPERIENCE
       of the (numerical) identity of objects.
The first is an ontological (metaphysical) question; the second is an
epistemological question--a question about the structure of cognition.
      Most commentators addressed the first question, and Rosenberg suggests
a good answer to it. But it is the second question which is of importance to
AI. For, if AI is to simulate perception, it must first find out how
perception works. The reigning view is that the cross-time experience of the
(numerical) identity of objects is facilitated by PATTERN RECOGNITION.
However, while it does indeed play a role in the cognition of identity, there
are good grounds for doubting that pattern recognition can, by itself,
account for our cross-time PERCEPTUAL experience of the (numerical) sameness
of objects.
     The reasons for this doubt originate from considerations of cases of
EXPERIENCE of misperception.  Put briefly, two features are characteristic of
the EXPERIENCE of misperception: first, we undergo a "change of mind"
regarding the properties we attribute to the object; we end up attributing to it
properties *incompatible* with properties we attributed to it earlier. But--
and this is the second feature--despite this change we take the object to have
remained *numerically one and the same*.
     Now, there do not seem to be constraints on our perceptual "change of
mind": we can take ourselves to have misperceived ANY (and any number) of the
o