From in%@vtcs1 Mon Dec  1 10:58:13 1986
Date: Mon, 1 Dec 86 10:57:57 est
From: vtcs1::in% <LAWS@SRI-STRIPE.ARPA>
To: ailist@sri-stripe.arpa
Subject: AIList Digest   V4 #267
Status: RO


AIList Digest           Wednesday, 26 Nov 1986    Volume 4 : Issue 267

Today's Topics:
  Queries - Lisp or Smalltalk for Amiga & XLISP 1.8,
  Philosophy - Searle, Turing, Nagel

----------------------------------------------------------------------

Date: 24 Nov 86 11:21:33 PST (Monday)
From: Tom.EdServices@Xerox.COM
Subject: Lisp, Smalltalk for Amiga


Does anyone know of Smalltalk or any Lisps (besides Xlisp and Cambridge
Lisp) for the Commodore-Amiga?  What I really want is a Common Lisp.

Thanks for any help.

------------------------------

Date: 24 Nov 86 17:42:57 GMT
From: mcvax!ukc!einode!tcdcs!omahony@seismo.css.gov  (O'Mahony Donal)
Subject: Looking for source of XLISP 1.8

I am looking for the source of Dave Betz's XLISP version 1.8.  This is
a version of LISP with object oriented extensions.  I understand that
it is available on the BIX bulletin board, but it is difficult to
gain access from here.  I would be grateful if somebody would post a copy.

Donal O'Mahony,
Trinity College,
Dublin,
Ireland

------------------------------

Date: 22 Nov 86 21:46:13 GMT
From: rutgers!princeton!mind!harnad@titan.arc.nasa.gov  (Stevan
      Harnad)
Subject: Re: Searle, Turing, Nagel


On mod.ai, rjf@ukc.UUCP <8611071431.AA18436@mcvax.uucp>
Rob Faichney (U of Kent at Canterbury, Canterbury, UK) made
nonspecific reference to prior discussions of intelligence,
consciousness and Nagel. I'm not altogether certain that his
contribution was intended as a followup to the discussion that has
been going on lately under the heading "Searle, Turing, Categories,
Symbols," but since it concerns the issues of that discussion, I am
responding on the assumption that it was. R. Faichney writes:

>       [T. Nagel's] paper [See Mortal Questions, Cambridge University Press
>       1979, and The View From Nowhere, Oxford University Press 1986]
>       is not ... strictly relevant to a discussion of machine
>       intelligence, because what Nagel is concerned with is not intelligence,
>       but consciousness. That these are not the same, may be realised on a
>       little contemplation. One may be most intensely conscious while doing
>       little or no cogitation. To be intelligent - or, rather, to use
>       intelligence - it seems necessary to be conscious, but the converse
>       does not hold - that to be conscious it is necessary to be intelligent.
>       I would suggest that the former relationship is not a necessary one
>       either - it just so happens that we are both conscious and (usually)
>       intelligent.

It would seem that if you believe that "to use intelligence...it seems
necessary to be conscious" then that amounts to agreeing that Nagel's
paper on consciousness is "relevant to a discussion of machine
intelligence." It is indisputable that intelligence admits of degrees,
both as a stable trait and as a fluctuating state. What is at issue in
discussions of the turing test is not the proposition that consciousness
is the same as intelligence. Rather, it is whether a candidate has
intelligence at all. It seems that consciousness in man is a sufficient
condition for being intelligent (i.e., for exhibiting performance that is
validly described as "intelligent" in the same way we would apply that
term to our own performance). Whether consciousness is a necessary
condition for intelligence is probably undecidable, and goes to the
heart of the mind/body problem and its attendant uncertainties.

The converse proposition -- that intelligence is a necessary condition for
consciousness -- is synonymous with the proposition that consciousness is
a sufficient condition for intelligence, and this is indeed being
claimed (e.g., by me). The argument runs like this: The issue in
turing-testing is sorting out intelligent performance from its unintelligent
look-alikes. As a completely representative example, consider my asking
you how much 2 + 2 is, and your replying "4" -- as compared to my writing
a computer program whose only function is to put out the symbol "4" whenever
it encounters the string of symbols "How much is 2 + 2?" (this is basically
Searle's point too). There you have it all in microcosm. If the word
"intelligence" has any meaning at all, over and above displaying ANY
arbitrary performance  at all (including a rock sliding down a hill, or,
for that matter, a rock NOT sliding down a hill), then we need a principled
way of distinguishing these two cases. That's what the Total Turing
Test I've proposed is meant to do; it amounts to equating
intelligence with total performance capacities indistinguishable from
our own. This also coincides with our only basis for inferring that
anyone else but ourselves has a mind (i.e., is conscious).
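
A minimal sketch of the canned-response program described above (the
rendering into a few lines of Python, and the function name, are
illustrative only):

    def canned_answer(symbols):
        # No arithmetic and no understanding: just one stored response
        # keyed to one particular string of symbols.
        if symbols == "How much is 2 + 2?":
            return "4"
        return ""                                # silent on everything else

    print(canned_answer("How much is 2 + 2?"))   # prints: 4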

There is no contradiction between agreeing that intelligence admits
of degrees and that mind is all-or-none. The Total Turing Test does
not demand the performance capacity of Newton or Bach, only that of an
(undistinguished) person indistinguishable from any other person one might
know for a lifetime. Moreover, the Total Turing Test admits of
variants for other species, although this involves problems of ecological
knowledge and intuitions that humans may lack for any other species but
their own. It even admits of pathological variants of our own species
(retardation, schizophrenia, aphasia, paralysis, coma, etc. as discussed
in other iterations of this discussion, e.g., with J. Cugini) although
here too intuitions and validity probably break down.

>       Animals probably are conscious without being intelligent. Machines may
>       perhaps be intelligent without being conscious.  If these are defined
>       separately, the problem of the intelligent machine becomes relatively
>       trivial (though that may seem too good to be true): an intelligent
>       machine is capable of doing that which would require intelligence in
>       a person, eg high level chess.

Not too good to be true: Too easy. And it would fail to capture
almost all of our relevant pretheoretic generalizations or intuitions.
Animals ARE intelligent (in addition to being conscious), although, as usual,
their intelligence admits of degrees, and can only be validly assessed
relative to their ecological or adaptive contexts (although even
relative to our own ecology, many other species display some degree of
intelligence). The machine intelligence problem -- which is the heart
of the matter -- cannot be settled so quickly and easily. Moreover,
the empirical question of what intelligence is cannot be settled by a
definition (remember "2 + 2 = 4" and the rolling stone, above). Many
intelligent people (with minds) can't play high-level chess, but no
machine can currently do EVERYTHING that the least intelligent of
these people can do. That's the burden of the Total Turing Test.

>       Nagel views subjectivity as irreducible to objectivity, indeed the
>       latter derives from the former, being a corrected and generalised
>       version of it. A maximally objective view of the world must admit
>       the reality of subjectivity.

Nagel is one of the few thinkers today who doesn't lapse into
arbitrary hand-waving on the issue of consciousness and its
"reducibility" to something else. Nagel's point is that there is
something it's "like" to have experience, i.e., to be conscious, and
that it's only open to the 1st person point of view. It's hence radically
unlike all other "objective" or "intersubjective" phenomena in science
(e.g., meter-readings), which anyone else can verify as being independent of
one's "point of view" (although Nagel correctly reminds us that even
objectivity is parasitic on subjectivity). The upshot of his analysis
is that utopian scientific mind-science (cognitive science?)
-- that future complete theory that will predict and explain it all --
will be essentially "incomplete" in a way that utopian physics will not be:
Both will successfully predict and explain all their respective observable
(objective) data, but mind-science will be left with something
irreducible, hence unexplained.

For me, this is not a great problem, since I regard the mission of
devising a candidate that can pass the Total Turing Test to be an abundantly
profound and challenging one, and I regard its potential results -- a
functional explanation of the objective features of the mind -- as
sufficiently desirable and useful, so that the part it will FAIL to
explain does not bother me. That may well forever remain philosophy's
province. But I do keep reminding the overzealous that that utopian
mind science will be turing-indistinguishable from a mindless one. I
keep doing this for two reasons: First, because I believe that this
Nagelian point is correct, and worth keeping in mind. And second, because
I believe that attempts to capture or incorporate consciousness in cognitive
science more "directly" are utterly misguided, and lead in the direction of
highly subjective over-interpretations, hermeneutics and self-delusion,
instead of down the only objective scientific road to be traveled: modeling
lifesize performance capacity (i.e., the Total Turing Test). It is for
this reason that I recommend "methodological epiphenomenalism" as a
research strategy in cognitive science.

>       So what, really, is consciousness?  According to Nagel, a thing is
>       conscious if and only if it is like something to be that thing.
>       In other words, when it may be the subject (not the object!) of
>       intersubjectivity.  This accords with Minsky (via Col. Sicherman):
>       'consciousness is an illusion to itself but a genuine and observable
>       phenomenon to an outside observer...'  Consciousness is not
>       self-consciousness, not consciousness of being conscious, as some
>       have thought, but is that with which others can identify. This opens
>       the way to self-awareness through a hall of mirrors effect - I
>       identify with you identifying with me...  And in the negative mode
>       - I am self-conscious when I feel that someone is watching me.

The Nagel part is right, but unfortunately all the rest
(Minsky/Sicherman/hall-of-mirrors) has it all wrong, and is precisely
the type of lapse into hermeneutics and euphoria I warned against earlier.
The quote above (via the Colonel) is PRECISELY THE OPPOSITE of Nagel's
point. The only aspect of conscious experience that involves direct
observability is the subjective, 1st-person aspect (and the fact THAT I
am having a conscious experience is certainly no illusion since
Descartes at least, although what it tells me about the outside world may be,
at least since Hume). Let's call this private terrain Nagel-land.
The part others "can identify" is Turing-land: Objective, observable
performance (and its structural and functional substrates). Nagel's point
is that Nagel-land is not reducible to Turing-land.

Consciousness is the capacity to have subjective experience (or perhaps
the state of having subjective experience). The rest of the "mirrors"
business is merely metaphor and word-play; such subject matter may make for
entertaining and thought-provoking reading, as in Doug Hofstadter's books,
but it hardly amounts to an objective contribution to cognitive science.

>       It may perhaps be supposed that the concept of consciousness evolved
>       as part of a social adaptation - that those individuals who were more
>       socially integrated, were so at least in part because they identified
>       more readily, more intelligently and more imaginatively with others,
>       and that this was a successful strategy for survival. To identify with
>       others would thus be an innate behavioural trait.

Except that Nagel would no doubt suggest (and I would agree) that
there's no reason to believe that the asocial or minimally social
animals are not conscious too. But apart from that, there's a much
deeper reason why it is probably futile to try to make evolutionary
conjectures about the adaptive function of conscious experience:
According to standard evolutionary theory, the only traits that are
amenable to the kind of trial-and-error selection on the basis of
their consequences for the survival of the organism and propagation of its
genes are (what Nagel would call) OBJECTIVE traits: structure,
function and behavior. Standard evolutionary conjectures about the
putative adaptive function of consciousness are open to precisely the
same objection as the utopian mind-science spoken of earlier:
Evolution is blind to the difference between organisms that are
actually conscious and organisms that merely behave as if they were
conscious. Turing-indistinguishability again. On the other hand, recent
variants of standard evolutionary theory would be compatible with a
NON-selectional origin of consciousness, as an epiphenomenon.

(In pointing out the futility of adaptive scenarios for the origin of
consciousness, I am drawing on my own theoretical failures. I tried
that route in an earlier paper and only later realized that such
"Just-SO" stories suffer from even worse liabilities in speculations
about the evolutionary origins of consciousness than they do in
speculations about the evolutionary origins of behaviors; radically
worse liabilities, for the reason given above. Caveat Emptor.)

>       ...When I suppose myself to be conscious, I am imagining myself
>       outside myself - taking the point of view of an (hypothetical) other
>       person.  An individual - man or machine - which has never communicated
>       through intersubjectivity might, in a sense, be conscious, but neither
>       the individual nor anyone else could ever know it.

I'm afraid you've either gravely misunderstood Nagel or left him far
behind here. When I feel a pain -- when I am in the qualitative state of
knowing what it's like to be feeling a pain -- I am not "supposing"
anything at all. I'm simply feeling pain. If I were not conscious, I
wouldn't be feeling pain, I'd just be acting as if I felt pain. The
same is true of you and of animals. There's nothing social about this.
Nor is "imagination" particularly involved (except perhaps in whatever
external attributions are made to the pain, such as, "there must be something
wrong with my tooth"). Even what is called clinically "imaginary" or
psychosomatic pain -- such as phantom-limb pain or hysterical pain --
is subjectively real, and that's the point: When I'm really feeling
pain, I'm not imagining I'm in pain; I AM in pain.

This is referred to by philosophers as the "incorrigibility" of 1st-person
experience.  Although it's not without controversy, it's useful to keep in
mind, because it's what's really at issue in the problem of artificial
minds. We are asking whether candidates have THAT sort of qualitative,
conscious experience. (Again, the "mirror" images about
self-consciousness, etc., are mere icing or fine-tuning, compared to
the more basic issue of whether or not, to put it bluntly, a machine
can actually FEEL pain, or merely ACTS as if it did.)

>       Subjectively, we all know that consciousness is real.  Objectively,
>       we have no reason to believe in it.  Because of the relationship
>       between subjectivity and objectivity, that position can never be
>       improved on.  Pragmatism demands a compromise between the two
>       extremes, and that is what we already do, every day, the proportion
>       of each component varying from one context to another.  But the
>       high-flown theoretical issue of whether a machine can ever be
>       conscious allows no mere pragmatism.  All we can say is that we do
>       not know, and, if we follow Nagel, that we cannot know - because the
>       question is meaningless.

Some crucial corrections that may set the whole matter in a rather different
light: Subjectively (and I would say objectively too), we all know that
OUR OWN consciousness is real. Objectively, we have no way of knowing
that anyone else's consciousness is real. Because of the relationship
between subjectivity and objectivity, direct knowledge of the kind we
have in our own case is impossible in any other. The pragmatic
compromise we practice every day with one another is called the Total
Turing Test: Ascertaining that others behave indistinguishably from our
paradigmatic model for a creature with consciousness: ourselves. We
were bound to come face-to-face with the "high-flown theoretical
issue" of artificial consciousness as soon as we went beyond everyday naive
pragmatic considerations and took on the burden of constructing a
predictive and explanatory causal theory of mind.

We cannot know directly whether any other organism OR device has a mind,
and, if we follow Nagel, our inferences are not meaningless, but in some
respects incomplete and undecidable.


--

Stevan Harnad                                  (609) - 921 7771
{allegra, bellcore, seismo, rutgers, packard}  !princeton!mind!harnad
harnad%mind@princeton.csnet

------------------------------

End of AIList Digest
********************

From in%@vtcs1 Mon Dec  1 10:57:51 1986
Date: Mon, 1 Dec 86 10:57:32 est
From: vtcs1::in% <LAWS@SRI-STRIPE.ARPA>
To: ailist@sri-stripe.arpa
Subject: AIList Digest   V4 #268
Status: RO


AIList Digest           Wednesday, 26 Nov 1986    Volume 4 : Issue 268

Today's Topics:
  Philosophy - Searle, Turing, Symbols, Categories

----------------------------------------------------------------------

Date: 22 Nov 86 12:13:02 GMT
From: mcvax!lambert@seismo.css.gov  (Lambert Meertens)
Subject: Re: Searle, Turing, Symbols, Categories

In article <229@mind.UUCP> harnad@mind.UUCP (Stevan Harnad) writes:
> I know directly that my
> performance is caused by my mind, and I infer that my
> mind is caused by my brain. I'll go even further (now that we're
> steeped in phenomenology): It is part of my EXPERIENCE of my behavior
> that it is caused by my mind. [I happen to believe (inferentially) that
> "free will" is an illusion, but I admit it's a phenomenological fact
> that free will sure doesn't FEEL like an illusion.] We do not experience our
> performance in the passive way that we experience sensory input. We
> experience it AS something we (our minds) are CAUSING. (In fact, that's
> probably the source of our intuitions about what causation IS. I'll
> return to this later.)

I hope I am not suffering from a terrible disease like incipient
schizophrenia, but for me it is not the case that I perceive/experience/
am-directly-aware-of my performance being caused by anything.  It just
happens.  I have some indirect evidence that there is some relation between
the performance I can watch happening and some sensations (such as anxiety
or happiness) that I can somehow experience directly whereas others have no
such direct access and can only infer the presence or absence of these
sensations within me by circumstantial evidence.

How do I know I have a mind?  This reminds me of the question put to a
priest (teaching religion) by one of the pupils: "Father, how do we know
that people have a soul?"  "Well," said the priest, "here I have a card in
memory of Klaas de Vries.  Look, here it says: `Pray for the soul of Klaas
de Vries.'  They wouldn't put that there if people had no souls, would
they?"  There is something funny with this debate: it is hardly
translatable into Dutch.  The problem is that if you look up "mind" in an
English-Dutch dictionary, some eight translations are suggested, none of
which has "mind" as their primary meaning if translated back to English,
except for idiomatic reasons (like in: "So many men, so many minds").
Instead, we find (1) memory; (2) meaning; (3) thoughts; (4) ghost; (5)
soul; (6) understanding; (7) attention; (8) desire.  Of these, I contend,
"ghost" and "soul" are closest in meaning if someone says: "I know I have
a mind.  But how can I know that other people have minds?"

OK, if you substitute "consciousness" for "mind", then this does no
essential harm to the debate and things become translatable to Dutch.  What
you gain is that you lose the suggestion evoked (at least to me) by the
word "mind" that it is something perhaps not quite, but almost, tangible,
something that you could lock up in a box, or cut in three, or take a
picture of with a camera using aura-sensitive film.  "Consciousness" is
more like "appetite": you can have it and you can loose it, but even though
it is functionally related to bodily organs, you normally don't think of it
as something located somewhere.  Does our appetite cause our eating?  ("My
appetite made me eat too much.")  How can we know for sure that other
people have appetites as well?  I propose to consider the question, "Can
machines have an appetite?"

Now why is consciousness "real", if free will is an illusion?  Or, rather,
why should the thesis that consciousness is "real" be more compelling than
the analogous thesis for free will?  In either case, the essential argument
is: "Because I [the proponent of that thesis] have direct, immediate,
evidence of it."  Sometimes we are conscious of certain sensations.  Do
these sensations disappear if we are not conscious of them?  Or do they go
on on a subconscious level?  That is like the question whether a falling tree
the middle of a forest makes a sound in the absence of creatures capable of
hearing.  That is a matter of the most useful (convenient) definition.  Let
us agree that the sensations continue at least if it can be shown that the
person involved keeps behaving as if the concomitant sensations continued,
even though professing in retrospect not to have been aware of them.  So
people can be afraid without realizing it, say, or drive a car without
being conscious of the traffic lights (and still halt for a red light).

How can you know that you have been conscious of something that you reacted
upon?  You stopped in front of a red light (or so others tell you) while
involved in a heated argument.  You have no remembrance whatsoever of that
light being red, or of your slowing down (or of having been at that
intersection at all).  Maybe your attention was so completely focussed on
the argument that the reaction to the traffic light was fully automatic.
Now someone tells you: No, it wasn't automatic.  You muttered something
unfriendly about that other car driver who made as if he was going to drive
on and then suddenly braked.  And now, zzzap!, the whole episode pops up in
your mind.  You remember that car, the intersection, the traffic light, its
jumping to red, the slight annoyance at not making it, and the anger about
that *@#$%!!! other driver whose car you almost crashed into.

Maybe everything is conscious.  Maybe stones are conscious of lying on the
ground, being kicked against, being picked up.  Their problem is, they can
hardly tell us.  The other problem is, they have no memory (lacking an
appropriate substrate for storing a trace of these experiences).  They are
like us with that traffic light, if there hadn't been that other car with
that idiot driver.  Even if we experience something consciously, if we
lose all remembrance of it, there is no way in which we can tell for sure
that there was a conscious experience.  Maybe we can infer consciousness by
an indirect argument, but that doesn't count.  Indirect evidence can be
pretty strong, but it can never give certainty.  Barring false memories, we
can only be sure if we remember the experience itself.  Now maybe
everything we experience is stored in memory.  It may be that we cannot
recall it like that, but using special techniques (hypnosis, electro-
stimulation, mnemonic drugs) it could be retrieved.  On the other hand, it
is more plausible that not quite everything is stored in memory, since that
would require a tremendous channel width for storing things, which is not
really functional, or, at least, there are presumably better trade-offs in
terms of survival capability given a limited brain capacity.

If some things we experience do not leave a recallable trace, then why
should we say that they were experienced consciously?  Or, why shouldn't we
maintain the position that stones are conscious as well?  That position is
maintainable, but it is not very useful in the sense that the word
"consciousness" looses its meaning; it becomes coextensive with
"existence".  We "loose" our bicameral minds, Freud, and all that jazz.
More useful, then, to use "consciousness" only for experiences that are,
somehow, recallable.  It makes sense that not all, not most of, but some of
the things that go on in our heads are stored away: in order to use for
determining patterns, for better evaluation of the expected outcome of
alternatives, for collecting material that is useful for the construction
or refinement of the model we have of the outside world, and so on.

Being the kind of animal homo is, it also makes sense to store material
that is useful for the refinement of the model we have of our inside world,
that which we think of as "ourselves".  After all, we consult that model to
pre-evaluate the outcome of certain alternatives.  If we don't "know"
ourselves, we are bound to do things (take on a responsibility, marry
someone, etc., things with a long-term commitment) that will lead us unto
suffering.  (We do these things anyway, and one of the causes is that we
don't know ourselves that well.)  So a lot of the things that go on "in the
front of our minds" are stored away, and are recallable.  And it is only
because of this recallability that we can say that these things were "in
the front of our minds", or "in our minds" at all.

Imagine now a machine programmed to "eat" and also to keep up some dinner
conversation.  It has some built-in etiquette rules, such as that it is
impolite to eat too much, but also a parameter varying in time to model
"hunger", and a rule IF hunger THEN eat.  It just happens that the machine
is very, very hungry.  There is a conflict here, but fortunately our
machine is equipped with a conflict-resolution module (CRM) that uses fuzzy
logic to get an outcome for conflicting rules.  The outcome here is that
the machine eats more than is polite.  The dinner-conversation module (DCM)
has no direct interface with the CRM, but it is supplied with the resultant
behaviour as part of its input data and so it concludes (using the rule
base) that it is not behaving too politely.  Speaking anthropomorphically,
we would say that the machine is feeling uneasy about it.  Actually, a flag
"uneasiness" is raised, and the DCM is programmed to do something about it.
Using the rule base, the DCM finds a rule that tells it that uneasiness
about being impolite can be reduced by apologizing about it.  The apology
submodule (ASM) is invoked, which discovers that a casual apology will do
in this case, one form of which is just to state an appropriate cause for
the inappropriate behaviour.  The rule base tells ASM that PROBABLE CAUSE
OF eat IS appetite, (next to tape-worms, but these are measured as less
appropriate under the circumstances), so "<<SELF, having, appetite>;
<goodness, 0.6785>>" is passed back to DCM, which, after invoking
appropriate syntactic transformations, utters the unforgettable words:
"Boy, do I have an appetite today."

How different are we from that machine?  If we keep wolfing down food at a
dinner, knowing that we are misbehaving (or just substitute any behaviour
that you are prone to and that you realize is just not quite right--come
on, there must be something), is the choice made the result of a conscious
process?  I think it is not.  I have no reason to think it is.  Even if we
ponder a question consciously ("Whether 'tis nobler in the mind to suffer
..."), I think the outcome is not the result of the conscious process, but,
rather, that the consciousness is a side-effect of the conflict-resolution
process going on.  I think the same can be said about all "conscious"
processes.  The process is there, anyway; it could (in principle) take
place without leaving a trace in memory, but for functional reasons it does
leave such a trace.  And the word we use for these cognitive processes that
we can recall as having taken place is "conscious".

We can as it were instantly focus our attention on things that we are not
conscious of most of the time (the sensation of sitting on a chair, the
colour of the sky).  This means merely that we can influence which parts of
the processes going on all the time get the preferential treatment of being
stored away for future reference.  The ability to do so is clearly
functional, notwithstanding the fact that we can make a non-functional use
of it.  This is not different from the fact that it is functional that I
can raise my arm by "willing" it to raise, although I can use that ability
to raise it gratuitously.  If the free will here is an illusion (which I
think is primarily a matter of how you choose to define something as
elusive as "free will"), then so is the free will to direct your attention
now to this, then to that.  Rather than to say that free will is an
"illusion", we might say that it is something that features in the model
people have about "themselves".  Similarly, I think it is better to say
that consciousness is not so much an illusion, but rather something to be
found in that model.  A relatively recent acquisition of that model is
known as the "subconscious".  A quite recent addition are "programs",
"sub-programs", "wrong wiring", etc.

A sufficiently "intelligent" machine, able to pass not only the dinner-
conversation test but also a sophisticated Turing test, must have a model
of itself.  Using that model, and observing its own behaviour (including
"internal" behaviour!), it will be led to conclude not only that it has an
appetite, but also volition and awareness, and it will probably attribute
some of its darker sides (about which it comes to conclude that it feels
guilt, from which it deduces that it has a conscience) to lack of affection
in childhood or "wrong wiring".  Is it mistaken then?  Is the machine taken
in by an illusion?

I propose to consider the question, "Can machines have illusions?"

--

Lambert Meertens, CWI, Amsterdam; lambert@mcvax.UUCP

------------------------------

Date: 21 Nov 86 19:08:02 GMT
From: rutgers!princeton!mind!harnad@lll-crg.arpa  (Stevan Harnad)
Subject: Re: Searle, Turing, Symbols, Categories: Reply to Cugini (2)

      [Part I.  See the next digest for the conclusion.  -- KIL]


On mod.ai <8611200632.AA19202@ucbvax.Berkeley.EDU> "CUGINI, JOHN"
<cugini@nbs-vms.ARPA> wrote:

>       I know I have a mind.  In order to determine if X
        [i.e., anyone else but myself]
>       has a mind I've got to look for analogous
>       external things about X which I know are causally connected with mind
>       in *my own* case. I naively know (and *how* do I know this??) that large
>       parts of my performance are an effect of my mind.  I scientifically
>       know that my mind depends on my brain.  I can know this latter
>       correlation even *without* performance correlates, eg, when the dentist
>       puts me under, I can directly experience my own loss of mind which
>       results from loss of whatever brain activity.  (I hope it goes
>       without saying that all this knowledge is just regular old
>       reliable knowledge, but not necessarily certain - ie I am not
>       trying to respond to radical skepticism about our everyday and
>       scientific knowledge, the invocation of deceptive dentists, etc.)

These questions and reflections are astute ones, and very relevant to
the issues under discussion. It is a matter of some ancillary interest
that the people who seem to be keeping their heads more successfully
in the debates about artificial intelligence and (shall we call it)
"artificial consciousness" are the more sceptical ones, as you reveal
yourself to be at the end of this module. The zealous advocates, on
the other hand, seem to be more prone to flights of
over-interpretative fancy, leaving critical judgment by the wayside.
(This is not to say that some of the more dogged critics haven't waxed
irrational in their turn too.)

Now on to the substance of your criticism. I think the crucial points
will turn on the difference between what you call "naively know" and
"scientifically know." It will also involve (like it or not) the issue
of radical scepticism, uncertainty and the intersubjectivity and validity of
inferences and correlations. Now, I am neither an expert in, nor an advocate
of, phenomenological introspection, but if you will indulge me and do
a little of it here, I think you will notice that there is something very
different about "naive knowing" as compared to "scientific knowing."

Scientific knowing is indirect and inferential. It is based on
inference to the best explanation, the weight of the evidence, probability,
Popperian (testability, falsifiability) considerations, etc. It is the
paradigm for all empirical inquiry, and it is open to a kind of
radical scepticism (scepticism about induction) that we all reasonably
agree not to worry about, except insofar as noting that scientific
"knowledge" is not certain, but only highly likely on the evidence,
and is always in principle open to inductive "risk" or falsification
by future evidence. This is normal science, and if that were all there
was to the special case of the mind/body problem (or, more perspicuously,
the other-minds problem) then a lot of the matters we are discussing
here could be settled much more easily.

What you call "naive knowing," on the other hand (and about which you
ask "*how* do I know this?") is the special preserve of 1st-hand,
1st-person subjective experience. It is "privileged" (no one has
access to it but me), direct (I do not INFER from evidence that I am
in pain, I know it directly), and it has been described as
"incorrigible" (can I be wrong that I am feeling pain?). The
inferences we make (about the outside world, about inductive
regularities, about other minds) are open to radical scepticism, but
the phenomenological content of 1st-hand experience is different. This
makes "naive knowing" radically different from "scientific knowing."

(Let me add a quick parenthetical remark, but not pursue it unless
someone brings it up: Even our inferential knowledge depends on our
capacity for phenomenological experience. Put another way: we must
have direct experience in order to make indirect inferences, otherwise
the inferences would have no content, whether right or wrong. I
conjecture that this is significantly connected with what I've called
the "grounding" problem that lies at the root of this discussion. It
is also related to Locke's (inchoate) distinction between primary and
secondary qualities, turning his distinction on its head.)

Now let's go on. You say that I "naively know" that my performance
is caused by my mind and I "scientifically know" that my mind is caused
by my brain. (Let's not quibble about "cause"; the other words, such
as "determined by," "a function of," "supervenient on," or Searle's
notorious "caused-by-and-realized-in" are just vague ways of trying to
finesse a problematic and unique relationship otherwise known as the
mind/body problem. Let's just bite the bullet with "cause" and see
where that gets us.) Let me translate that: I know directly that my
performance is caused by my mind, and I infer that my
mind is caused by my brain. I'll go even further (now that we're
steeped in phenomenology): It is part of my EXPERIENCE of my behavior
that it is caused by my mind. [I happen to believe (inferentially) that
"free will" is an illusion, but I admit it's a phenomenological fact
that free will sure doesn't FEEL like an illusion.] We do not experience our
performance in the passive way that we experience sensory input. We
experience it AS something we (our minds) are CAUSING. (In fact, that's
probably the source of our intuitions about what causation IS. I'll
return to this later.)

So there is a very big difference between my direct knowledge that my
mind causes my behavior and my inference (say, in the dentist's chair)
that my brain causes my mind. [Even my rational inference (at the
metalevel) that my mind doesn't really cause my behavior, that that's
just an illusion, leaves the incorrigible phenomenological fact that I
know directly that that's not the way it FEELS.] So, to put it briefly,
what I've called the "informal component" of the Total Turing Test --
does the candidate act as if it had a mind (i.e., roughly as I would)? --
appeals to precisely those intuitions, and not the inferential kind, about
brains, etc. Note, however, that I'm not claiming we have direct
knowledge of other minds. That's just an inference. But it's not the
same kind of inference as the inference that there are, say, quarks, or
cosmic strings. We are appealing, in the informal TTT, to our
intuitions about subjectivity, not to ordinary, objective scientific
evidence (such as brain-correlates).

As a consequence (and again I invite you to do some introspection), the
intuitive force of the direct knowledge that I have (or am) a mind, and
that that causes my behavior, is of an entirely different order from my
empirical inference that I have a brain and that that causes my mind.
Consider, for example, that there are plenty of people who doubt that
their brains are the true causes of their minds, but very few (like
me) who venture to doubt that their minds cause their behavior; and I
confess that I am not very successful in convincing myself, because my
direct experience keeps contradicting my inference, incorrigibly.

In summary: There is a vast difference between knowing causes directly and
inferring them; subjective phenomena are unique and radically different from
other phenomena in that they confer this direct certainty; and
inferences about other minds (i.e., about subjective phenomena in
others) are parasitic on these direct experiences of causation, rather
than on ordinary causal inference, which carries little or no
intuitive force in the case of mental phenomena, in ourselves or
others. And rightly not, because mind is a private, direct, subjective
matter, not something that can be ascertained -- even in the normal
inductive sense -- by public, indirect, objective correlations.

                        [To be continued ...]

Stevan Harnad                                  (609) - 921 7771
{allegra, bellcore, seismo, rutgers, packard}  !princeton!mind!harnad
harnad%mind@princeton.csnet

------------------------------

End of AIList Digest
********************

From in%@vtcs1 Mon Dec  1 10:58:48 1986
Date: Mon, 1 Dec 86 10:58:19 est
From: vtcs1::in% <LAWS@SRI-STRIPE.ARPA>
To: ailist@sri-stripe.arpa
Subject: AIList Digest   V4 #269
Status: RO


AIList Digest           Wednesday, 26 Nov 1986    Volume 4 : Issue 269

Today's Topics:
  Philosophy - Searle, Turing, Symbols, Categories

----------------------------------------------------------------------

Date: 21 Nov 86 19:08:02 GMT
From: rutgers!princeton!mind!harnad@lll-crg.arpa  (Stevan Harnad)
Subject: Re: Searle, Turing, Symbols, Categories: Reply to Cugini (2)

                              [Part II]


If you want some reasons why the mind/body case is so radically
different from ordinary causal inference in science, here are two:

(1) Generalizations about correlates of having a mind
are, because of the peculiar nature of subjective, 1st-person
experience, always doomed to be based on an N = 1. We can have
intersubjective agreement about a meter-reading, but not about a
subject's experience. This already puts mind-science in a class by
itself. (One can even argue that the intersubjective agreement on
"objective" meter readings is itself parasitic on, or grounded in,
some turing-equivalence assumptions about other people's reports of
their experiences -- of meter readings!)

But, still more important and revealing: (2) Consider ordinary scientific
inferences about "unobservables," say, about quarks (if they should continue
to play an inferred causal role in the future, utopian, "complete"
explanatory/predictive theory in physics): Were you to subtract this
inferred entity from the (complete) theory, the theory would lose its
capacity to account for all the (objective) data. That's the only
reason we infer unobservables in the first place, in ordinary
science: to help predict and causally explain all the observables.
A complete, utopian scientific theory of the "mind," in radical
contrast with this, will always be just as capable of accounting
for all the (objective) data (i.e., all the observable data on what
organisms and brains do) WITH or WITHOUT positing the existence of mind(s)!

In other words, the complete explanatory/predictive theory of organisms
(and devices) WITH minds will be turing-indistinguishable from the
complete explanatory/predictive theory of organisms (and devices)
WITHOUT minds, that simply behave in every observable way AS IF they
had minds.

That kind of inferential indeterminacy is a lot more serious than the
underdetermination of ordinary scientific inferences about
unobservables like quarks, gravitons or strings. And I believe that this
amounts to a demonstration that all ordinary inferential bets (about
brain-correlates, etc.) are off when it comes to the mind.
The mind (subjectivity, consciousness, the capacity to have
qualitative experience) is NEITHER an ordinary, intersubjectively
verifiable objectively observable datum, as in normal science, NOR is
it an ordinary unobservable inferred entity, forced upon us so that we
can give a successful explanatory/predictive account of the objective
data.

Yet the mind is undoubtedly real. We know that, noninferentially, for
one case: our own. It is to THAT direct knowledge that the informal component
of the TTT appeals, and ONLY to that knowledge. Any further indirect
inferences, based on, say, correlations, depend ultimately for their
validation only on that direct knowledge, and are always secondary to
it, in that split inferences are always settled by an appeal to the
TTT criterion, not vice versa (or some third thing), as I shall try to
show below.

(The formal component of the TTT, on the other hand [i.e., the formal
computer-testing of a theory that purports to generate all of our
performance capacities], IS just a case of ordinary scientific
inference; here it is an empirical question whether brain correlates
will be helpful in guiding theory-construction. I happen to
doubt they will be helpful even there; not, at least until we
get much closer to TTT utopia, when we've all but captured
total performance capacity, and the fine-tuning [errors, reaction
times, response style, etc.] may begin to matter. There, as I've
suggested, the boundary between organism-performance and
brain-performance may break down somewhat, and microfunctional and
structural considerations may become relevant to the success and
verisimilitude of the performance modeling itself.)

>       Now then, armed with the reasonably reliable knowledge that in my own
>       case, my brain is a cause of my mind, and my mind is a cause of my
>       performance, I can try to draw appropriate conclusions about others.

As I've tried to argue, these two types of knowledge are so different
as to be virtually incommensurable. In particular, your knowledge that
your mind causes your performance is direct and incorrigible, whereas
your knowledge that your brain causes your mind is indirect,
inferential, and parasitic on the former. Inferences about other minds
are NOT ordinary cases of scientific inference. The mind/body case is
special.

>       X3 has brains, but little/no performance - eg a case of severe
>       retardation.  Well, there doesn't seem much reason to believe that
>       X has intelligence, and so is disqualified from having mind, given
>       our definition.  However, it is still reasonable to believe that
>       X3 might have consciousness, eg can feel pain, see colors, etc.

For the time being, intelligence is as mind does. X3 may not be VERY
intelligent, but if he has any mind-like performance capacity (to pass
some variant of the TTT for some organism or other -- a tricky issue),
that amounts to having some intelligence. As discussed in another
module, intelligence may be a matter of degree, but having a mind
seems to be an all-or-none matter. Also, having a mind seems to be a
sufficient condition for having intelligence; if it's not also a
necessary condition, we have the radical indeterminacy I mentioned
earlier, and we're in trouble.

So the case of severe retardation seems to represent no problem.
Retarded people pass (some variant of) the TTT, and we have no trouble
assigning them minds. This is fine as long as they have some (shall we
call it "intelligible") performance capacity, and hence some
intelligence. Comatose people are another matter. But they may well
not have minds. (I might add that our inclination to assign a mind to
a person who is so retarded that his performance capacity is reduced
to vegetative functions such as blinking, breathing and swallowing,
could conceivably be an overgeneralization, motivated by considerations
of biological origins and humanitarian concerns.) I repeat, though,
that these special cases belong more to the domain of near-utopia
fine-tuning than the basic issue of whether it is performance or brain
correlates that should guide us in inferring minds in others. Certainly
neither TTT-enthusiasts nor brain-enthusiasts have any grounds for
feeling confident about their judgments in such ambiguous cases.

>       X4 has normal human cognitive performance, but no brains, eg the
>       ultimate AI system.  Well, no doubt X4 has intelligence, but the issue
>       is whether X4 has consciousness.  This seems far from obvious to me,
>       since I know in my own case that brain causes consciousness causes
>       performance.  But I already know, in the case of X4, that the causal
>       chain starts out at a different place (non-brain), even if it ends up
>       in the same place (intelligent performance).  So I can certainly
>       question (rationally) whether it gets to performance "via
>       consciousness" or not.
>       If this seems too contentious, ask yourself: given a choice between
>       destroying X3 or X4, is it really obvious that the more moral choice
>       is to destroy X3?

I don't think the moral choice is obvious in either case. However, I
don't think you're imagining this case sufficiently vividly. Let's make
it the one I proposed: A lifelong friend turns out to be a robot, versus
a human born (irremediably) with only vegetative function. These issues
are for the right-to-lifers; the alternatives imposed on us are too
hypothetical and artificial (akin to having to choose between saving
one's mother or father). But I think it's fairly clear which way I'd
go here. And what we know (or don't know) about brains has very little
to do with it.

>       Finally, a gedanken experiment (if ever there was one) - suppose
>       (a la sci-fi stories) they opened you up and showed you that you
>       really didn't have a brain after all, that you really did have
>       electronic circuits - and suppose it transpired that while most
>       humans had brains, a few, like yourself, had electronics.  Now,
>       never doubting your own consciousness, if you *really* found that
>       out, would you not then (rationally) be a lot more inclined to
>       attribute consciousness to electronic entities (after all you know
>       what it feels like to be one of them) than to brained entities (who
>       knows what, if anything, it feels like to be one of them?)?
>       Even given *no* difference in performance between the two sub-types?
>       Showing that "similarity to one's own internal make-up" is always
>       going to be a valid criterion for consciousness, independent of
>       performance.

Frankly, although it might disturb me for other reasons, I think that
discovering I had complex, ill-understood electronic circuits inside my
head instead of complex, ill-understood biochemical ones would not
sway me one way or the other on the basic proposition that it is
performance alone that is responsible for my inferring minds in other
people, not my (or anyone else's) dim knowledge about their inner
structure or function. I agreed in an earlier module, though, that
such a demonstration would be a bit of a blow to the sceptics about robots
(which I am not) if they discovered THEMSELVES to be robots. On the
other hand, it wouldn't move an outside sceptic one bit. For example,
*you* would presumably be uninfluenced in your convictions about the
relevance of brain-correlates over and above performance if *I* turned
out to be X4. And that's just the point! Like it or not, the
1st-person stance retains center stage in the mind/body problem.

>       I make this latter point to show that I am a brain-chauvinist *only
>       insofar* as I know/believe that I *myself* am a brained entity (and
>       that my brain is what causes my consciousness).  This really
>       doesn't depend on my own observation of my own performance at all -
>       I'd still know I had a mind even if I never did any (external) thing
>       clever.

Yes. But the problem for *you* is whether *I* (or some other candidate)
have a mind, not whether *you* do. Moreover, no one suggested that the
turing test was the basis for knowing one has a mind in the 1st person
case. That problem is probably closer to the Cartesian Cogito, solved
directly and incorrigibly. The other-minds problem is the one we're
concerned with here.

Perhaps I should emphasize that in the two "correlations" we are
talking about -- performance/mind and brain/mind -- the basis for the
causal inference is radically different. The causal connection between
my mind and my performance is something I know directly from being the
performer. There is no corresponding intuition about causation from
being the possessor of my brain. That's just a correlation, depending
for its causal interpretation (if any), on what theory or metatheory I
happen to subscribe to. That's why nothing compelling follows from
being told what my insides are made of.

>       To summarize: brainedness is a criterion, not only via the indirect
>       path of: others who have intelligent performance also have brains,
>       ergo brains are a secondary correlate for mind; but also via the
>       much more direct path (which *also* justifies performance as a
>       criterion): I have a mind and in my very own case, my mind is
>       closely causally connected with brains (and with performance).

I would summarize it differently: In the 1st-person case, I know directly
that my performance is caused by my mind. I infer (from the correlation)
that my brain causes my mind. In the other-minds case I know nothing
directly; however, I am intuitively persuaded by performance similarity.
I have no intuitions about brains, but of course every confirmatory
cue helps; so if you also have a brain, my confidence is increased.
But split the ticket, and I'll go with performance every time. That
makes it seem as if performance is still the decisive criterion, and
brainedness is only a secondary correlate.

Putting it yet another way: We have direct knowledge of the causal
connection between our minds and our performance and only indirect
inferences about the causal connection between our brains and our
minds (and performance). This parasitism is hence present in our
inferences about other minds too.

>       I agree that there are some additional epistemological problems,
>               [with subjective/objective causation, as opposed to
>               objective/objective causation, i.e., with the mind/body problem]
>       compared to the usual cases of causation.  But these don't seem
>       all that daunting, absent radical skepticism.

But "radical" scepticism makes an unavoidable, substantive appearance
in the contemporary scientific incarnation of the other-minds problem:
The problem of robot minds.

>       We already know which parts of the brain
>       correlate with visual experience, auditory experience, speech
>       competence, etc. I hardly wish to understate the difficulty of
>       getting a full understanding, but I can't see any problem in
>       principle with finding out as much as we want.  What may be
>       mysterious is that at some level, some constellation of nerve
>       firings may "just" cause visual experience, (even as electric
>       currents "just" generate magnetic fields.)  But we are
>       always faced with brute-force correlation at the end of any scientific
>       explanation, so this cannot count against brain-explanatory theory of
>       mind.

There is not quite as much disagreement here as there may seem. We
agree on (1) the basic mystery in objective/subjective causation -- though I
disagree that it is no more mysterious than objective/objective
causation. Never mind. It's mysterious. I also agree that (2) I would
feel (negligibly) more confident in inferring that a candidate who
passed the TTT had a mind if it had a real brain than if it did not.
(I'd feel even more confident if it was my identical twin.) We agree
that (3) the brain causes the mind, that (4) the brain can be studied,
that (5) there are anatomical and physiological correlations
(objective/subjective), and that (6) these are very probably causal.

Where we may disagree is on the methodology for arriving at a causal theory
of mind. I don't think peeking-and-poking at the brain in search of
correlations is likely to generate a successful causal theory; I think
trial-and-error modeling of performance will, and that it will in fact
guide brain research, suggesting what functions to look for
implementations of, and how they cause performance. What I believe
will fall by the wayside in this brute-force correlative account --
I'm for correlations too, of course, except that I'm for
objective/objective correlations -- is subjectivity itself. For, on
all the observable evidence that will ever be available, the
complete theory of the mind -- whether implemented as a brain or as some
other artificial causal device -- will always be just as true of a
device actually having a mind as of a mindless device merely acting as
if it had a mind. And there will be no way of settling this, short of
actually BEING the device in question (which is no help to the rest of
us). If that's radical scepticism, it's come home to roost, and should
be accepted as a fact of life in mind-science. (I've dubbed this
"methodological epiphenomenalism" in the paper under discussion.)

You may feel more confident in attributing a mind to the
brain-implementation than to a synthetic one (though I can't imagine you'll
have good reasons, since they'll be functionally equivalent in every
observable and ostensibly relevant respect), but that too is a
question we will never be able to settle objectively.

(Let me add, in case it's not apparent, that performances such as
reporting "It hurts now" are perfectly respectable, objective data,
both for the brain-correlation investigator and the mind-modeler. So
whereas we can never investigate subjectivity directly except in our
own case, we can approximate its behavioral manifestations as closely
as the expressive power of introspective reports will allow. What's
not clear is how useful this aspect of performance modeling will be.)

>       Well, I plead guilty to diverting the discussion into philosophy, and as
>       a practical matter, one's attitude in this dispute will hardly affect
>       one's day-to-day work in the AI lab.  One of my purposes is a kind of
>       pre-emptive strike against a too-grandiose interpretation of the
>       results of AI work, particularly with regard to claims about
>       consciousness.  Given a behavioral definition of intelligence, there
>       seems no reason why a machine can't be intelligent.  But if "mind"
>       implies consciousness, it's a different ball-game, when claiming
>       that the machine "has a mind".

I plead no less guilty than you. Neither of us is responsible for the
fact that scepticism looms large in making inferences about other
minds and how they work, which is what cognitive science is about. I
do disagree, though, that these considerations are irrelevant to one's
research strategy. It does matter whether you choose to study the
brain directly, or to model it, or to model performance-equivalent
alternatives. Other issues in this discussion matter too: modeling
toy modules versus the Total Turing Test, symbolic modeling versus
robotic modeling, and the degree of attention focused on modeling
phenomenological reports.

I also agree, of course, about the grandiose over-interpretation of
which AI (and, lately, connectionism too) has been guilty. But in the
papers under discussion I try to propose principled constraints (e.g.,
robotic capacity, groundedness, nonmodularity and the Total Turing
Test) that might restrain such excesses, rather than merely scepticism
about artificial performance. I also try to sort out the empirical
issues from the methodological and metaphysical ones. And, as I've
argued in several iterations, "intelligence" is not just a matter of
definition.

>       My as-yet-unarticulated intuition is that, at least for people, the
>       grounding-of-symbols problem, to which you are acutely and laudably
>       sensitive, inherently involves consciousness, ie at least for us,
>       meaning requires consciousness.  And so the problem of shoehorning
>       "meaning" into a dumb machine at least raises the issue about how
>       this can be done without making them conscious (or, alternatively,
>       how to go ahead and make them conscious).  Hence my interest in your
>       program of research.

Thank you for the kind words. One of course hopes that consciousness
will be captured somewhere along the road to Utopia. But my
methodological epiphenomenalism suggests that this may be an undecidable
metaphysical problem, and that, empirically and objectively, total
performance capacity is the most we can know ("scientifically") that
we have captured.

--

Stevan Harnad                                  (609) - 921 7771
{allegra, bellcore, seismo, rutgers, packard}  !princeton!mind!harnad
harnad%mind@princeton.csnet

------------------------------

End of AIList Digest
********************